An Inference Sharing Architecture for a More Efficient Context Reasoning


Description

Slides for the paper “An Inference Sharing Architecture for a More Efficient Context Reasoning”, presented at SENAMI 2012

Transcript of An Inference Sharing Architecture for a More Efficient Context Reasoning

Page 1: An Inference Sharing Architecture for a More Efficient Context Reasoning

An Inference Sharing Architecture for a More Efficient Context Reasoning

Aitor Almeida, Diego López-de-Ipiña
Deusto Institute of Technology - DeustoTech

University of Deusto
Bilbao, Spain

{aitor.almeida, dipina}@deusto.es

Page 2: An Inference Sharing Architecture for a More Efficient Context Reasoning

Semantic Inference in Context Management

• Ontologies are regarded as one of the best approaches to model context information [Strang04].

• OWL ontologies offer a rich, expressive framework to represent context data and its relations.

• Problem: as the number of triples in the ontology increases, the inference time for environment actions becomes unsustainable.

Page 3: An Inference Sharing Architecture for a More Efficient Context Reasoning

Semantic Inference in Context Management

• Modern smart environments host a diverse ecosystem of computationally enabled devices.

• Smart environments nowadays have more pervasive computing capabilities at their disposal than ever before.

• This large number of devices available in the environment offers several advantages that can help context management systems overcome the inference problem.

Page 4: An Inference Sharing Architecture for a More Efficient Context Reasoning

Semantic Inference in Context Management

• We have decided to split the inference problem among different reasoning engines or context consumers according to the interests stated by each of them.

• Inference is no longer performed by a central reasoning engine.
– It is divided across a peer-to-peer network of context producers and context consumers.

• Dividing the inference problem into smaller sub-units makes it more computationally affordable (see the sketch below).
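
As a loose illustration of this idea (not the paper's implementation; all names are made up), the following Java sketch partitions a set of domain rules among reasoning engines according to the concepts each one has declared interest in:

```java
import java.util.*;

// Hypothetical sketch: assign each domain rule to the reasoning engines
// that declared interest in the concept the rule operates on.
public class RulePartitioner {

    // A domain rule tagged with the ontological concept it reasons about.
    record DomainRule(String name, String concept) {}

    // interests: reasoner id -> set of concepts it cares about.
    static Map<String, List<DomainRule>> partition(List<DomainRule> rules,
                                                   Map<String, Set<String>> interests) {
        Map<String, List<DomainRule>> assignment = new HashMap<>();
        for (DomainRule rule : rules) {
            for (Map.Entry<String, Set<String>> e : interests.entrySet()) {
                if (e.getValue().contains(rule.concept())) {
                    assignment.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(rule);
                }
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        List<DomainRule> rules = List.of(
                new DomainRule("turnLightOn", "Luminosity"),
                new DomainRule("reduceHvacLoad", "AirConditioning"));
        Map<String, Set<String>> interests = Map.of(
                "reasonerA", Set.of("Luminosity"),
                "reasonerB", Set.of("AirConditioning", "ApplianceStatus"));
        // Each reasoner only receives the rules (and hence triples) it needs.
        System.out.println(partition(rules, interests));
    }
}
```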

Page 5: An Inference Sharing Architecture for a More Efficient Context Reasoning

Inference Sharing Process

• Objectives:

1. Attain the temporal decoupling of the different inference units.
• Allow the inference to be done concurrently in various reasoning engines.

2. Attain the spatial decoupling of the inference process.
• Increase the general robustness of the system, making it more fault-tolerant.

3. Reduce the number of triples and rules that each reasoning engine has to manage.
• Allow more computationally constrained devices to carry out the inference process.

4. Compartmentalize the information according to the different interests of the reasoning engines.

5. Allow the dynamic modification of the created organization.
• The created hierarchy must change to adapt itself to these modifications.

Page 6: An Inference Sharing Architecture for a More Efficient Context Reasoning

Inference Sharing Process

• This organization also allows the creation of a reasoning engine hierarchy that provides different abstraction levels in the inferred facts.

Page 7: An Inference Sharing Architecture for a More Efficient Context Reasoning

Inference Sharing Process

Page 8: An Inference Sharing Architecture for a More Efficient Context Reasoning

Inference Sharing Process

• To divide the inference process, three factors are taken into account:

1. The ontological concepts that will be used by the reasoner.
– Concepts are organized in a taxonomy depicting the class relations of the ontology.
– We use the AMBI2ONT ontology.
– The context consumer can search for specific concepts (in our example “Luminosity”) or broader ones (such as “Electrical Consumption”, which encompasses “LightStatus”, “AirConditioning” and “ApplianceStatus”); a minimal subsumption check is sketched below.
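
A minimal Java sketch of this concept check, under the assumption that the class hierarchy can be flattened to a child-to-parent map. The concept names follow the slide example; the real hierarchy would come from the AMBI2ONT ontology:

```java
import java.util.Map;

// Hypothetical sketch: walk a child -> parent map to decide whether an
// offered concept equals, or is a descendant of, the requested one.
public class ConceptTaxonomy {

    // child concept -> direct parent concept (roots are absent).
    private static final Map<String, String> PARENT = Map.of(
            "LightStatus", "ElectricalConsumption",
            "AirConditioning", "ElectricalConsumption",
            "ApplianceStatus", "ElectricalConsumption");

    // True if 'offered' is 'requested' itself or one of its sub-concepts.
    static boolean subsumedBy(String offered, String requested) {
        for (String c = offered; c != null; c = PARENT.get(c)) {
            if (c.equals(requested)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(subsumedBy("LightStatus", "ElectricalConsumption")); // true
        System.out.println(subsumedBy("Luminosity", "ElectricalConsumption"));  // false
    }
}
```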

Page 9: An Inference Sharing Architecture for a More Efficient Context Reasoning

Inference Sharing Process

Page 10: An Inference Sharing Architecture for a More Efficient Context Reasoning

Inference Sharing Process

2. The location where the measures originate from.
– We have a taxonomy of the locations extracted from the domain ontology.
– This taxonomy models the “contains” relations between the different rooms, floors and buildings.
– The context consumer can search for a specific location (the Smartlab laboratory in our example) or for a set of related locations (for example, all the rooms on the first floor of the engineering building); see the containment sketch below.
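
A matching Java sketch for the location factor, assuming the “contains” relations are available as an adjacency map; the location names are illustrative, following the Smartlab example:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: traverse the "contains" relation transitively, so a
// query for a building also matches measures taken in its rooms.
public class LocationTaxonomy {

    // location -> locations it directly contains.
    private static final Map<String, List<String>> CONTAINS = Map.of(
            "EngineeringBuilding", List.of("FirstFloor"),
            "FirstFloor", List.of("Smartlab", "Room101"));

    // True if 'area' is 'location' itself or transitively contains it.
    static boolean covers(String area, String location) {
        if (area.equals(location)) return true;
        for (String child : CONTAINS.getOrDefault(area, List.of())) {
            if (covers(child, location)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(covers("EngineeringBuilding", "Smartlab")); // true
        System.out.println(covers("Smartlab", "EngineeringBuilding")); // false
    }
}
```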

Page 11: An Inference Sharing Architecture for a More Efficient Context Reasoning

Inference Sharing Process

Page 12: An Inference Sharing Architecture for a More Efficient Context Reasoning

Inference Sharing Process

3. The certainty factor (CF) associated with each ontological concept.
– In our ontology we model the uncertain nature of the environments.
– Sensors and devices are not perfect and their measures carry a degree of uncertainty.
– The context consumer can specify a minimum CF in its searches; the sketch below combines the three factors into a single matching check.
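
Putting the three factors together, a hypothetical offer-matching predicate. It reuses the ConceptTaxonomy and LocationTaxonomy sketches from the previous slides; the Offer and Query types are made up for illustration:

```java
// Hypothetical sketch combining the three factors: an offer matches a query
// when the concept is subsumed by the requested one, the measure's location
// lies inside the requested area, and the certainty factor reaches the
// requested minimum.
public class OfferMatching {

    record Offer(String concept, String location, double cf) {}
    record Query(String concept, String area, double minCf) {}

    static boolean matches(Offer offer, Query query) {
        return ConceptTaxonomy.subsumedBy(offer.concept(), query.concept())
                && LocationTaxonomy.covers(query.area(), offer.location())
                && offer.cf() >= query.minCf();
    }

    public static void main(String[] args) {
        Offer offer = new Offer("LightStatus", "Smartlab", 0.8);
        Query query = new Query("ElectricalConsumption", "EngineeringBuilding", 0.7);
        System.out.println(matches(offer, query)); // true
    }
}
```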

Page 13: An Inference Sharing Architecture for a More Efficient Context Reasoning

System Architecture

• We use an agent-based architecture.

• Two types of agents:
– Context Providers: these agents represent those elements in the architecture that can provide any context data.
– Context Consumers: these agents represent those elements that need to consume context data.

• The differentiation between providers and consumers is simply logical.
– The same device can act as a provider and a consumer at the same time, receiving data from sensors and supplying the inferred facts to another device (a minimal sketch of this dual role follows).

– While the resulting inference process somewhat follows a hierarchy, the agent structure is purely a peer-to-peer architecture where there is no central element.
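
A minimal Java sketch of this purely logical split, with hypothetical ContextProvider/ContextConsumer interfaces and a node that plays both roles:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the same node consumes raw context (e.g. sensor
// readings) and provides the facts it infers to its own subscribers.
public class DualRoleDevice {

    interface ContextConsumer { void onContext(String fact, double cf); }
    interface ContextProvider { void subscribe(ContextConsumer consumer); }

    static class ReasonerNode implements ContextProvider, ContextConsumer {
        private final List<ContextConsumer> subscribers = new ArrayList<>();

        @Override
        public void subscribe(ContextConsumer consumer) { subscribers.add(consumer); }

        @Override
        public void onContext(String fact, double cf) {
            // Toy "inference": derive a higher-level fact and forward it,
            // discounting the certainty factor of the derived fact.
            String inferred = "derivedFrom(" + fact + ")";
            subscribers.forEach(s -> s.onContext(inferred, cf * 0.9));
        }
    }

    public static void main(String[] args) {
        ReasonerNode node = new ReasonerNode();
        node.subscribe((fact, cf) -> System.out.println(fact + " @ " + cf));
        node.onContext("luminosity(Smartlab, 620)", 0.8); // prints the derived fact
    }
}
```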

Page 14: An Inference Sharing Architecture for a More Efficient Context Reasoning

System Architecture

• To be able to take part in the negotiation process, all participants have to have information about the three factors described before.

• The negotiation follows these steps (Contract Net Protocol); a sketch of the exchange follows the list:

1. The Context Consumer Agent (CCA) sends a Call For Proposals (CFP) stating the context type and location it is interested in and the minimum certainty factor (CF) expected from the context.

2. The Context Provider Agents (CPAs) reply to the CFP with individual proposals stating what they can offer to the CCA.

3. The CCA checks the received proposals. Some context consumers have a maximum number of context sources they can subscribe to concurrently, dictated by the computational capabilities of the device running the CCA. If the number of received proposals is above this limit, the CCA selects those that it considers better (comparing their CFs).

4. The CCA sends an Accept to the selected CPAs and subscribes to the context updates.
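
A single-round, in-process Java sketch of this Contract Net exchange. The message classes and selection policy are illustrative; a real deployment would exchange FIPA messages between distributed agents:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch: the CCA's CFP is answered with proposals, and the
// CCA accepts the best ones up to its subscription limit, ranked by CF.
public class ContractNet {

    record Cfp(String concept, String location, double minCf) {}
    record Proposal(String providerId, double cf) {}

    // CPA side: answer a CFP with a proposal, or stay silent if the offered
    // certainty factor does not reach the requested minimum.
    static Optional<Proposal> propose(String providerId, double offeredCf, Cfp cfp) {
        return offeredCf >= cfp.minCf()
                ? Optional.of(new Proposal(providerId, offeredCf))
                : Optional.empty();
    }

    // CCA side: accept the best proposals, bounded by the maximum number of
    // context sources the device can subscribe to concurrently.
    static List<Proposal> select(List<Proposal> proposals, int maxSources) {
        return proposals.stream()
                .sorted(Comparator.comparingDouble(Proposal::cf).reversed())
                .limit(maxSources)
                .toList();
    }

    public static void main(String[] args) {
        Cfp cfp = new Cfp("Luminosity", "Smartlab", 0.6);
        List<Proposal> received = new ArrayList<>();
        propose("cpa-1", 0.9, cfp).ifPresent(received::add);
        propose("cpa-2", 0.5, cfp).ifPresent(received::add); // below minCf: no proposal
        propose("cpa-3", 0.7, cfp).ifPresent(received::add);
        System.out.println(select(received, 1)); // cpa-1 wins the single slot
    }
}
```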

Page 15: An Inference Sharing Architecture for a More Efficient Context Reasoning

System Architecture

Page 16: An Inference Sharing Architecture for a More Efficient Context Reasoning

Validation

• Using the OWL-DL reasoner that comes with the Jena Framework (a minimal setup sketch follows the list).

• OWL entailment + domain rules:
– Each measure of the context provider is related to one rule.
– 50% of the measures fire a rule.

• Average network latency of 50 ms.

• 2 scenarios:
1. A centralized reasoning process.
2. Inference sharing among several reasoners.
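
A minimal sketch of such a setup with Apache Jena, assuming a placeholder ontology file name and one example rule in Jena's generic rule syntax; the paper's actual rule set is not shown here:

```java
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.ReasonerRegistry;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

// Hypothetical sketch of the validation setup: OWL entailment over the
// context model, plus a domain rule layered on top.
public class JenaSetup {
    public static void main(String[] args) {
        // Base model holding the context triples (placeholder file name).
        Model base = ModelFactory.createDefaultModel();
        base.read("ambi2ont.owl");

        // OWL entailment with the reasoner bundled with Jena.
        Reasoner owl = ReasonerRegistry.getOWLReasoner();
        InfModel owlInf = ModelFactory.createInfModel(owl, base);

        // One example domain rule in Jena's generic rule syntax: a high
        // luminosity reading implies the light is on (made-up predicate URIs).
        String ruleSrc = "[lightOn: (?r <http://example.org/luminosity> ?l) "
                + "greaterThan(?l, 500) -> (?r <http://example.org/lightStatus> 'on')]";
        GenericRuleReasoner rules = new GenericRuleReasoner(Rule.parseRules(ruleSrc));
        InfModel full = ModelFactory.createInfModel(rules, owlInf);

        full.listStatements().forEachRemaining(System.out::println);
    }
}
```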

Page 17: An Inference Sharing Architecture for a More Efficient Context Reasoning

Validation

Page 18: An Inference Sharing Architecture for a More Efficient Context Reasoning

Validation

Page 19: An Inference Sharing Architecture for a More Efficient Context Reasoning

Validation

Page 20: An Inference Sharing Architecture for a More Efficient Context Reasoning

Validation

Page 21: An Inference Sharing Architecture for a More Efficient Context Reasoning

Validation

Page 22: An Inference Sharing Architecture for a More Efficient Context Reasoning

Validation

Page 23: An Inference Sharing Architecture for a More Efficient Context Reasoning

Future Work

• As future work we intend to create a more complex negotiation process where the computational capabilities of each device are taken into account in order to divide the inference among them more effectively.

• We would like to dynamically reassign the rules from one reasoning engine to another according to their capabilities.
– This will allow us to create the most efficient inference structure for any given domain.

Page 24: An Inference Sharing Architecture for a More Efficient Context Reasoning

Thanks for your attention

Questions?

This work has been supported by project grant TIN2010-20510-C04-03 (TALIS+ENGINE), funded by the Spanish Ministerio de Ciencia e Innovación.