Explanation in the Semantic Web
Explanation in Semantic Web: an overview
Rakebul Hasan
PhD student, INRIA Sophia Antipolis-Méditerranée
• PhD topic: Solving upstream and downstream problems of a distributed query on the Semantic Web
– Task 4: Traces and explanations
– Task 4.2: Opening query-solving mechanisms
• 2009: MSc in Computer Science, University of Trento, Italy
– CliP-MoKi: a collaborative tool for the modelling of clinical guidelines
• Previous employer: Semantic Technology Institute Innsbruck, Austria
– Information diversity in the Web
• Early research in expert systems
• Explanation in the Semantic Web
• Future work
Explanation
“An information processing operation that takes the operation of an information processing system as input and generates a description of that processing operation as an output.”
- Wick and Thompson, 1992
Early research on explanation facilities
• Reasons that first gave rise to explanation facilities:
– Debugging expert systems
– Assuring that the reasoning process was correct
– Understanding the problem domain
– Convincing human users
Understanding
Expert systems should be able to provide information about how answers were obtained if users are expected to understand, trust, and use their conclusions.
First generation of expert systems
• MYCIN and its derivatives (GUIDON, NEOMYCIN)
– Why and how explanations
– Explanations based on the trace of invoked rules
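The rule-trace approach can be sketched with a toy rule base. The rule names and medical facts below are invented for illustration, not MYCIN's actual knowledge base; a "how" question is answered by replaying the rules that fired:

```python
# A minimal sketch of rule-trace "how" explanations, assuming an
# illustrative rule base: each rule is (name, premises, conclusion).
RULES = [
    ("R1", {"gram_negative", "rod_shaped"}, "enterobacteriaceae"),
    ("R2", {"enterobacteriaceae", "blood_culture"}, "bacteremia"),
]

def infer(given, rules):
    """Forward-chain, recording for each derived fact the rule that produced it."""
    facts = set(given)
    trace = {}  # conclusion -> (rule name, premises)
    changed = True
    while changed:
        changed = False
        for name, premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                trace[conclusion] = (name, premises)
                changed = True
    return facts, trace

def explain_how(goal, trace, given):
    """Answer a 'how was this concluded?' question by walking the rule trace."""
    if goal in given:
        return f"{goal} was given as input."
    name, premises = trace[goal]
    lines = [f"{goal} was concluded by rule {name} from: "
             + ", ".join(sorted(premises))]
    for p in sorted(premises):
        lines.append("  " + explain_how(p, trace, given))
    return "\n".join(lines)

given = {"gram_negative", "rod_shaped", "blood_culture"}
facts, trace = infer(given, RULES)
print(explain_how("bacteremia", trace, given))
```

This is exactly the first-generation style criticized later in the talk: the explanation is the trace of invoked rules, nothing more.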
Example of MYCIN Post-consultation explanation
[Figure annotations: useful for knowledgeable users; little justification for less knowledgeable users; suited to the experienced programmer]
• The reasoning strategies employed by programs do not form a good basis for understandable explanations
• Categorization of knowledge and explicit representation of linkages between different types of knowledge are important
Explainable Expert System (EES)
• Explicit representation of “strategic” knowledge
– Relations between goals and plans -> capability descriptions
• Explicit representation of design rationale
– ‘Good’ explanations/justifications
• Abstract explanations of the reasoning process
W. Swartout et al. Explanations in knowledge systems: Design for explainable expert systems. IEEE Expert: Intelligent Systems and Their Applications, 6(3):58–64, 1991.
Reconstructive Explainer (Rex)
• Reasoning and explanation construction are done separately
• Representation of domain knowledge along with domain rule knowledge (causality)
• A causal chain of explanation is constructed
M. R. Wick. Second generation expert system explanation. In Second Generation Expert Systems, pages 614–640. 1993
Reconstructive Explainer (Rex)
The story teller tree
We have a concrete dam under an excessive load. I attempted to find the cause of the excessive load. Not knowing the solution and based on the broken pipes in the foundation of the dam, and the downstream sliding of the dam, and the high uplift pressures acting on the dam, and the slow drainage of water from the upstream side of the dam to the downstream side I was able to make an initial hypothesis. To achieve this I used the strategy of striving to simply determine causal relationships. In attempting to determine causes, I found that the internal erosion of soil from under the dam causes broken pipes causing slow drainage resulting in uplift and in turn sliding. This led me to hypothesize that internal erosion was the cause of the excessive load. Feeling confident in this solution, I concluded that the internal erosion of soil from under the dam was the cause of the excessive load.
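The reconstructive step can be sketched as a search over a causal model kept separate from the problem-solving rules. The causal links below paraphrase the story-teller example; this is an illustrative sketch, not Rex's actual representation:

```python
# Rex-style reconstruction: after the expert system has its answer, build a
# causal chain from a separate store of causal domain knowledge.
CAUSES = {  # cause -> effects, the "domain rule knowledge (causality)"
    "internal erosion": ["broken pipes"],
    "broken pipes": ["slow drainage"],
    "slow drainage": ["uplift pressure"],
    "uplift pressure": ["sliding"],
}

def causal_chain(cause, effect, path=()):
    """Depth-first search for a chain cause -> ... -> effect."""
    path = path + (cause,)
    if cause == effect:
        return path
    for nxt in CAUSES.get(cause, []):
        chain = causal_chain(nxt, effect, path)
        if chain:
            return chain
    return None

chain = causal_chain("internal erosion", "sliding")
print(" causes ".join(chain))
# internal erosion causes broken pipes causes slow drainage causes uplift pressure causes sliding
```

The chain, not the original rule firings, is what gets narrated to the user.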
DesignExpert
• A second knowledge representation
– Communication domain knowledge (CDK): knowledge about the domain knowledge
– Domain communication knowledge (DCK): knowledge about how to communicate in the domain
– Its purpose is to communicate explanations
• This representation is populated by the expert system as it reasons, not in a separate process afterwards
R. Barzilay et al. A new approach to expert system explanations. In 9th International Workshop on Natural Language Generation, pages 78–87. 1998.
DesignExpert
• Categorization of knowledge and explicit representation of problem-solving steps are necessary for generating natural and complete explanations
• Explanations should be able to adapt their content to varying users and contexts
Explanation in Semantic Web
• Query answering:
– The traditional Web: explicitly stored information is retrieved
– The Semantic Web:
• requires more processing steps than database retrieval
• results often require inference capabilities
• mashups, multiple sources, distributed services, etc.
As with expert systems, Semantic Web applications should be able to provide information on how results are obtained if users are expected to understand, trust, and use the conclusions.
“Linking Open Data cloud diagram, by Richard Cyganiak and Anja Jentzsch. http://lod-cloud.net/”
– Distributed
– Openness
Explanations make the process of obtaining a result transparent
“Oh, yeah?” button to support the user in assessing the reliability of information encountered on the Web
- Tim Berners-Lee
Explanation criteria in Semantic Web
• Types of explanations
– Justifications
– Provenance
• Trust
• Consumption of explanations
– Machine consumption
– Human consumption
• User expertise
D. L. McGuinness et al. Explaining Semantic Web Applications. In Semantic Web Engineering in the Knowledge Society. 2008.
Semantic Web Features (an explanation perspective)
• Collaboration
• Autonomy
• Ontologies
Collaboration
• Interaction and sharing of knowledge between agents
• The flow of information should be explained
• Provenance-based explanation will add transparency
Autonomy
• The ability of an agent to act independently
• The reasoning process should be explained
Ontologies
• Interoperable representation of explanation, provenance, and trust
Inference Web (IW)
• A knowledge provenance infrastructure
– Provenance: metadata about sources
– Explanation: manipulation trace information
– Trust: ratings of the sources
• Proof Markup Language (PML) Ontology
– Proof interlingua
– Representation of justifications
– Representation of provenance information
– Representation of trust information
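PML represents each conclusion as a node set justified by one or more inference steps, each recording the rule applied, the engine that applied it, and its antecedents. A minimal plain-Python sketch of that justification shape follows; the class and field names are simplified stand-ins, not the actual PML vocabulary:

```python
# A sketch of PML-style justification structure: conclusions linked to the
# inference steps that derive them, forming a browsable proof.
from dataclasses import dataclass, field

@dataclass
class InferenceStep:
    rule: str          # e.g. an inference rule registered in a rule registry
    engine: str        # the inference engine that performed the step
    antecedents: list  # NodeSets whose conclusions were premises

@dataclass
class NodeSet:
    conclusion: str
    justified_by: list = field(default_factory=list)  # alternative steps

    def explain(self, depth=0):
        """Render one justification branch as an indented proof trace."""
        pad = "  " * depth
        if not self.justified_by:
            return f"{pad}{self.conclusion} (asserted)"
        step = self.justified_by[0]
        lines = [f"{pad}{self.conclusion} [by {step.rule} via {step.engine}]"]
        for a in step.antecedents:
            lines.append(a.explain(depth + 1))
        return "\n".join(lines)

a = NodeSet("ex:Cat rdfs:subClassOf ex:Animal")
b = NodeSet("ex:felix rdf:type ex:Cat")
c = NodeSet("ex:felix rdf:type ex:Animal",
            [InferenceStep("rdfs9", "ex-reasoner", [a, b])])
print(c.explain())
```

Because a node set may carry several alternative inference steps, the same structure supports both the step-by-step views and the abstraction tools mentioned below.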
• IWBase
– Registry of meta-information related to proofs and explanations
• inference rules; ontologies; inference engines
• IW Toolkit
– Tools aimed at human users to browse, debug, explain, and abstract the knowledge encoded in PML
[Figure: step-by-step view focusing on one step with a list of follow-up actions]
[Figure: abstraction of a piece of a proof]
Accountability In RDF (AIR)
A Semantic Web-based rule language focused on the generation and tracking of explanations for inferences and actions.
L. Kagal et al. Gasping for AIR: why we need linked rules and justifications on the Semantic Web. Technical Report MIT-CSAIL-TR-2011-023, Massachusetts Institute of Technology, 2011.
AIR Features
• Coping with logical inconsistencies
• Scoped contextualized reasoning
• Capturing and tracking provenance
– Deduction traces or justifications
• Linked Rules, which allow rules to be linked and re-used
AIR Ontology
• Two independent ontologies
– An ontology for specifying AIR rules
– An ontology for describing justifications
Given as input:
– a set of AIR rules
– an RDF graph
an AIR reasoner produces justifications for the inferences made.
Proof Explanation in Semantic Web
A nonmonotonic rule system based on defeasible logic to extract and represent explanations on the Semantic Web
G. Antoniou et al. Proof Explanation for the Semantic Web Using Defeasible Logic. In Zili Zhang and Jörg Siekmann, editors, Knowledge Science, Engineering and Management, volume 4798 of Lecture Notes in Computer Science, pages 186–197. Springer Berlin / Heidelberg, 2007.
• Extension of RuleML
– Formal representation of explanations of defeasible-logic-based reasoning
• Automatic generation of explanations
– Proof tree represented using the RuleML extension
Remarks on Explanation in Semantic Web
• Justification (rule trace) based explanation
– Abstraction not researched enough
• User adaptation
• Understanding of domain knowledge is difficult
• Representation, computation, combination, and presentation of trust not researched enough in this context
Future work at Edelweiss (Outline)
• Corese 3.0
– implements RDF, RDFS, SPARQL, and inference rules
• SPARQL with RDFS entailment
• SPARQL with Rules
• Justification explanation
– RDFS entailments
– SPARQL Rules
• Abstraction of justification explanation
• User adaptation
– User modelling
• Communication
– Presentation and provision mechanisms for explanation
• Provenance explanation
• Domain understanding
– Explanation based on term definitions
Thank you