
Eindhoven University of Technology

MASTER

Ontology-based interactive user modeling for adaptive web information systems: enhancement of task sequencing

Denaux, R.O.

Award date: 2005


Disclaimer
This document contains a student thesis (bachelor's or master's), as authored by a student at Eindhoven University of Technology. Student theses are made available in the TU/e repository upon obtaining the required degree. The grade received is not published on the document as presented in the repository. The required complexity or quality of research of student theses may vary by program, and the required minimum study period may vary in duration.

General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain


TECHNISCHE UNIVERSITEIT EINDHOVEN
Department of Mathematics and Computing Science

MASTER'S THESIS

Ontology-based Interactive User Modeling

for Adaptive Web Information Systems

Enhancement of Task Sequencing

by R.O. Denaux

Supervisor: Dr. Lora Aroyo
External Adviser: Dr. Vania Dimitrova

Eindhoven, January 2005


I. Abstract

Current Adaptive Web Information Systems suffer from several problems, such as cold start, inaccuracy of the user model, and differences between the semantics of the end-users and the authors. To tackle these problems, we propose an OWL-based framework for interactive user modeling in information systems on the Semantic Web. The interactive user modeling approach is implemented as OWL-OLM and is integrated and demonstrated in the OntoAIMS Learner Content Management System. The user model in OntoAIMS is extracted —in collaboration with the user— with the help of graphical dialog episodes, which discuss the user's domain knowledge based on the domain ontology in OntoAIMS. This user model is further used to enhance the adaptiveness of the course task recommendation in OntoAIMS. We present an analysis of the benefits of such a framework for interactive user modeling integrated in adaptive web information systems. A detailed design and implementation of the OWL-OLM framework, as well as an evaluation with real users, are also presented in this thesis.


II. Acknowledgements

Special thanks to Lora Aroyo and Vania Dimitrova for their guidance throughout this project, sharing their experience in the field and for making this work an enjoyable experience.

Also thanks to all those who contributed to this work. Michael Pye for his implementation of the OWL-OLM graphical communication interface. Vania Dimitrova for her work on the evaluation. All the people who evaluated the system at one time or another.

The SWALE project and the research reported here are supported by the UK-Netherlands Partnership in Science program and are part of the work on work package 1 - Personalized Adaptive Learning - of the EU Network of Excellence PROLEARN.


III. Publications within this Project

The work described in this thesis resulted in the following papers:

• Integrating Open User Modeling and Learning Content Management for the Semantic Web. Presented at the SWEL workshop at AH 2004: 3rd International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems.

• An Approach for Ontology-based Elicitation of User Models to Enable Personalization on the Semantic Web. Submitted to WWW 2005: 14th International World Wide Web Conference.

• Integrating Open User Modeling and Learning Content Management for the Semantic Web. Submitted to UM 2005: 10th International Conference on User Modeling.


IV. Related Web Sites

Demo web site: http://swale.comp.leeds.ac.uk:8080/staims/viewer.html
Project web site: http://wwwis.win.tue.nl/~swale
Basic Linux Ontology: http://wwwis.win.tue.nl/~swale/blo


V. Table of Contents

1 Introduction
  1.1 Motivation
    1.1.1 Scenarios
    1.1.2 Problem definition
  1.2 Research Questions
  1.3 Goals and Deliverables
  1.4 Project Overview

2 Analysis
  2.1 Context
    2.1.1 AIMS: Learning Content Management System
    2.1.2 STyLE-OLM: Interactive Ontology-Based Student Modeling
  2.2 Integrating AIMS and STyLE-OLM
    2.2.1 Ontology Languages
    2.2.2 Tackling the Defined Problems
  2.3 Example of a Dialog to Introduce the Approach
  2.4 Summary

3 Design
  3.1 Integrated System Design
    3.1.1 Motivation for OWL-OLM instead of STyLE-OLM
  3.2 Interactive User Modeling
    3.2.1 Dialog Agent
    3.2.2 Dialog Games
    3.2.3 Utterances, Intentions and Domain Statements
    3.2.4 Game Analyzer
    3.2.5 User Interface
  3.3 User Model Representation
    3.3.1 User's Conceptual State
  3.4 User-Adaptive Task Recommendation

4 Architecture and Implementation
  4.1 Implementation Context
    4.1.1 OntoAIMS Main Packages
  4.2 OWL-OLM Main Packages
  4.3 OWL and Jena
    4.3.1 OWL Building Blocks
    4.3.2 Jena: Framework for Building Semantic Web Applications
  4.4 The Base Package
    4.4.1 Utterance
    4.4.2 Opener
    4.4.3 Intention
  4.5 The Statements Package
    4.5.1 OwlStatement Interface
    4.5.2 Classes Implementing the OwlStatement Interface
  4.6 The Games Package
    4.6.1 DialogGame Interface and Implementing Classes
    4.6.2 DialogGameAgent
    4.6.3 GameAnalyzer
  4.7 The OwlUtils Package
    4.7.1 SubModelMaker
    4.7.2 OwlToNaturalLanguage
    4.7.3 OwlResourceComparator
    4.7.4 OwlModelComparator
    4.7.5 OwlMismatch
  4.8 The UM Package
    4.8.1 ConceptualState
    4.8.2 Linking History Properties to Concepts and Relations
    4.8.3 ConceptualStateMerger
  4.9 The UI Package
  4.10 Task Recommendation
    4.10.1 UserProfileAgent

5 Evaluation
  5.1 Experimental settings
  5.2 Results and Discussion
    5.2.1 Interaction sequence
    5.2.2 Interactive User Modeling
    5.2.3 Task recommendation
    5.2.4 Resource browsing
    5.2.5 Improvements

6 Conclusions
  6.1 Summary
  6.2 Contribution of This Work
  6.3 Future Work
    6.3.1 Improvements to OWL-OLM and OntoAIMS
    6.3.2 Related Future Research
  6.4 Research Questions Results

A Source Code Documentation

B Basic Linux Ontology

C Learning about Linux Course


VI. Abbreviations

Several terms, acronyms and abbreviations are used throughout this document. Below you find an overview of these abbreviations and their meaning.

AIMS
    Adaptive Information Management System.
aimsUM: http://wwwis.win.tue.nl/~swale/aimsUM#
    XML prefix pointing to the RDF definition of the user model as used in OntoAIMS.
blo: http://wwwis.win.tue.nl/~swale/blo#
    XML prefix pointing to the RDF/OWL definition of the Basic Linux Ontology.
LTCS
    Long Term Conceptual State. Model of the user's domain conceptualization as the result of all the interactions with the system.
OntoAIMS
    Ontology-based version of AIMS.
OWL
    Web Ontology Language. A standard proposed by the W3C. As described in [31]: "OWL is intended to be used when the information contained in documents needs to be processed by applications, as opposed to situations where the content only needs to be presented to humans. OWL can be used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms."
owl: http://www.w3.org/2002/07/owl#
    XML prefix pointing to the OWL definition.
OWL-OLM
    OWL-based version of STyLE-OLM.
rdf: http://www.w3.org/1999/02/22-rdf-syntax-ns#
    XML prefix pointing to the RDF definition.
rdfs: http://www.w3.org/2000/01/rdf-schema#
    XML prefix pointing to the RDF Schema definition.
STCS
    Short Term Conceptual State. Model of the user's domain conceptualization as the result of one dialog episode between the user and the system.
STyLE-OLM
    Scientific Terminology Learning Environment - Open Learner Modeling. A dialog-based user modeling system.
SWALE
    Semantic Web and Adaptive Learning Environment.
swaleQ: http://wwwis.win.tue.nl/~swale/swale_question#
    XML prefix pointing to the RDF definition of resources for building questions using OWL.
UM
    User Model or User Modeling, depending on the context.
xmls: http://www.w3.org/2001/XMLSchema#
    XML prefix pointing to the definition of XML Schema, which defines several datatypes used in OWL.


Chapter 1

Introduction

1.1 Motivation

The rapid expansion of semantically enriched services on the web results in an exponential growth in the number of people who use such services. Users differ in many aspects, e.g. capabilities, expectations, goals, requirements, preferences and usage context. To truly fulfill the Semantic Web vision of improving the way computers and people work together [4], the user perspective has to be taken into account [29]. The one-size-fits-all-users approach to developing web applications is becoming outdated. Personalization functionality on the Semantic Web has to be implemented and applied to deal with user diversity [23]. Personalization has been a prime concern of the user-modeling community, which deals with methods for gaining some understanding of users, i.e. a user model [26], and using that understanding to tailor the system's behavior to the needs of individuals, see for example [24]. However, for UM methods to be deployed on the Semantic Web, they should deal with semantics defined with ontologies [22, 5], and should enable interoperability of algorithms that elicit and utilize UMs [23] based on common ontology specification languages, such as OWL [31].

The open-world assumption of the Semantic Web [16] refers to the need to take into account user viewpoints ranging from domain experts to complete novices. By doing this, web systems will provide functionality properly adapted to individual users. Traditional personalization and adaptation architectures were suited to a closed-world assumption, where user modeling methods, such as overlay, bug library, constraint-based modeling, etc. (see review in [24]) marked discrepancies between a user's and an expert's semantics as erroneous, and often called them misconceptions. New approaches for open-world user modeling, able to elicit extended models of users and to deal with the dynamics of a user's conceptualization, are required to effectively personalize the Semantic Web.

This research is part of the SWALE project, a joint research project between TU/e and Leeds University that investigates issues of knowledge, content and adaptation alignment between students and instructors in the context of the application of Semantic Web technologies and standards in adaptive learning systems. The main goal of the SWALE project is to construct and maintain an enhanced user model that integrates different user perspectives, such as knowledge, personal preferences, interests, browsing patterns, and cognitive and physical state. This enables us to utilize the user model for web-based personalized content management and user-adaptive information retrieval. SWALE illustrates this in an integrated environment for personalized learning content management, called OntoAIMS.


The user model in OntoAIMS is based on the collection, interpretation and validation of user data from diverse sources, such as user-defined preferences, diagnostic dialog with the user, and monitoring of the user's behavior with the system (e.g. links followed, resources opened, and searches performed). The enhanced user model is used in OntoAIMS to improve its task recommendation mechanism.

This project focuses mainly on the diagnostic dialog with the user and presents a novel approach based on a series of dialog games where a user and the system discuss and construct together a model of the user's conceptualization of the domain. The approach is illustrated in OWL-OLM —an OWL-based version of the STyLE-OLM [17] tool for open user modeling, which was originally based on conceptual graphs— and shows how interactive user modeling (UM elicitation) and adaptive content management (UM utilization) on the Semantic Web can be integrated in a learning domain to deal with typical adaptation problems, such as cold start, lack of clear semantics of the user's interaction data, inaccuracy of the assumptions drawn on the basis of interpretation of this data, and dynamics of the student's knowledge [15]. This work demonstrates the following novel aspects:

• an ontological approach for the integration of methods for eliciting and utilizing user models

• improved adaptation functionality resulting from that integration, validated in studies with real users

• support for interoperability and reusability on the educational Semantic Web

1.1.1 Scenarios

To make the context of this project more concrete, the following three scenarios illustrate a number of common problems in current adaptive information systems. We consider a situation where a learner has to explore materials from a large repository in order to accomplish a learning goal. Each learning object is linked to a list of concepts that, according to the beliefs of the creator of the object, will be mastered when the learner studies the object content. Throughout this document we use concepts from the topic Learning about Linux, which is based on the Basic Linux Ontology (see Appendix B), which defines the main concepts needed to understand how the Linux operating system functions (domain concepts are shown in typewriter font). The Basic Linux Ontology was developed as part of this project.

Scenario 1: The Cold Start Problem

Bill is a Computing student and has an assignment that requires using the Linux operating system. Bill happens to have previous experience with the Microsoft Windows and MS-DOS operating systems and already knows about operating systems, file systems, and command-line interfaces. Bill has been advised to use a web-based system that provides learning materials on Linux. His goal is to log on to the system and quickly learn how to copy his files from a floppy disk and how to compress them, as required in his assignment.

Some adaptive systems would allow Bill to directly search for concepts of interest or to choose a topic to study from a list of contents [6]. In this case, the system cannot accurately diagnose Bill's goals and prior knowledge based on his choice. The system may provide learning objects that confuse him, as they may contain unfamiliar concepts. Furthermore, Bill's goal of working with learning objects may not necessarily correspond to the goal defined by their creators.


In other adaptive tutoring systems, Bill may have to take a predefined test allowing the system to extract a model of his domain knowledge [6]. This may have a negative effect on Bill's learning motivation because of both the unpopularity of tests and the learning distraction that testing may cause.

Now imagine a content management system which first interacts with Bill in the form of a short dialog to determine his previous knowledge and how it relates to his learning goal. To be domain independent, the interaction should be guided by a generic dialog planning mechanism linked to reasoning about the course domain and structure. As a result, the system extracts an initial learner model which indicates an adequate starting point for Bill and allows the system to further recommend appropriate learning objects.

Scenario 2: Semantics is Defined Solely by a Course Creator

Linus, as opposed to Bill, has quite some experience with Linux, but he also wants to use resources to learn more about Linux. Suppose now that both Bill and Linus want to learn more about user accounts and file security.

Some adaptive systems would offer different courses, e.g. for beginner, intermediate, and advanced learners. Stereotyping in learning systems is prone to problems. It is difficult to define a general meaning for each of the above categories, let alone to correctly classify a learner into one. However, the main drawback is that it is almost impossible for the course creator to consider the specific goals of each learner and to determine the concepts he will refer to when trying to understand a topic within each category.

What one ideally wants is for the system to use a user model in combination with the domain, course and resource knowledge to attend to the needs of each learner as well as possible. Choosing resources will then depend on the learner's semantics of what the goal is and what concepts are relevant, rather than on the semantics pre-encoded by experts unaware of the specific needs of each individual learner.

Scenario 3: Inaccuracy and Dynamics of the Learner Model

Assume that both Bill and Linus have spent considerable time with an adaptive course management system. Based on some user modeling mechanisms, e.g. tests or tracking the user's interactions with learning objects, the system builds a model that represents the user's understanding of the domain [6].

The diagnostic mechanism may produce inaccurate learner models. For instance, the correct answers in tests may be guessed and may not necessarily be based on domain understanding. In addition, the system's judgment about the learner's understanding, based on a pre-encoded ontology and on reading a document, may often be inaccurate. For example, let us suppose that Bill and Linus both have read a document about user accounts that includes other domain concepts, such as file system, root, file permissions, etc. Bill may be able to create a link between user account and file permissions but may be unsure of all relations between these concepts, while Linus, who already has some knowledge of both concepts, will be able to understand deeply what their relations are. Reading the document, Bill may also discover the link between file system and root, although that link is not explicitly discussed in the document on user accounts. Linus, being confident of his knowledge on user accounts, may skim through the document and thus miss some of the more elaborate relations between domain concepts, but the system may not have any means to account for this.


Furthermore, the learner modeling mechanism may not be able to capture the dynamics of the user's knowledge. For example, Bill and Linus may use other resources to learn about Linux. They may test what they have learned in practice, and may compare their experience with that of fellow students. Their learner models will evolve: more beliefs will be acquired and existing ones will be refined. It will be impossible to capture the dynamics of the learner's knowledge with conventional diagnostic mechanisms.

A desired situation would be to have some combination of open learner modeling [35] and guided diagnostic interaction [17]. For example, after a knowledgeable learner (like Linus) finishes with a learning object, a clarification dialog can be initiated to enable the learner to identify learned concepts and discovered domain relations. For less knowledgeable learners (like Bill) the clarification dialog may be triggered by showing them their learner models, as discussed in [17].

1.1.2 Problem definition

The description of the research context presented above outlines a number of problems of current adaptive information systems. The scenarios further illustrate these problems, making them more concrete. Here, we list the problems and give a short explanation.

• Deal with cold start. Adaptive systems struggle to deal with first-time users for whom a user model has not yet been accumulated. Techniques are needed to rapidly extract specific parts of the user model.

• Use of previous knowledge and experience. The same document can have different learning effects on different people depending on their prior knowledge. Methods for diagnosing the user's prior knowledge are needed.

• User models based solely on analysis of learner-system interaction are inaccurate. In real life, teachers use implicit ways to assess the students' results and performance. Learning systems need explicit mechanisms to clarify the obtained learner models.

• Semantics of the course author and the learner may differ. This leads to inaccurate learner diagnosis or inappropriate recommendation of relevant material. Furthermore, such a difference explains why one cannot rely only on the learners' opinion of their domain knowledge (as in open learner models [35]) and why some clarification dialog is needed.

• The course author's goals and the learner's goals may differ. Most educational systems assume these two goals coincide and do not adapt to special goals the learners might have. Mechanisms for clarifying the users' goals are critical for providing effective recommendations and guidance.

• The learner's goals, preferences, and knowledge evolve. The learner model changes not only gradually between the start and end of the course, but also temporarily within a single session. Appropriate mechanisms for capturing these dynamics should be employed.

1.2 Research Questions

To tackle the problems defined above, several research questions need to be answered. Here, we present the questions grouped according to the problem they attempt to tackle. We first present the main questions discussed in this work:


• Dealing with cold start.

– How can we use domain and course knowledge with interactive ontology-based user modeling to rapidly obtain a user model from scratch?

• User models based solely on analysis of learner-system interaction are inaccurate.

– Can we improve user modeling accuracy using interactive ontology-based dialogs for rapid user modeling? If we can, what is the improvement?

• Semantics of the course author and the learner may differ.

– How can we capture semantic differences between course author and learner?

– How can we use this information to align the semantics?

The following research questions based on the defined problems are also related to this project but do not form the main focus of this work and are not discussed in detail:

• Use of previous knowledge and experience.

– How can we involve prior user knowledge for selection of relevant resources?

• The course author’s goals and the learner’s goals may differ.

– How can we express goals in the user model?

– How can we use the tasks model of AIMS to infer the course author’s goals?

– How can we extract the learner's goals?

– How can the system adapt to the learner’s goals?

• Learner’s goals, preferences, and knowledge evolve.

– How can we model episodic and long-term user properties?

– How can we use this information for adaptation purposes?

1.3 Goals and Deliverables

We argue that a fruitful approach to address the problems presented above is the combination of interactive open user modeling and adaptive (educational) content management. We consider ontology-based dialog between the system and the user to rapidly extract a model of the user's conceptual understanding and goals. Furthermore, traditional diagnostic techniques will be improved by employing mechanisms from open and interactively constructed learner models to deal with dynamics and accuracy issues. The extracted user model will be used in an educational information retrieval system to provide personalized learning resources adapted to the needs of each individual learner. To show this in practice, we use two existing systems: AIMS (a learning content management system) and STyLE-OLM (an interactive ontology-based student modeling system), which are described in Sect. 2.1.1 and 2.1.2. Chapters 2 and 3 argue that STyLE-OLM and AIMS are complementary and that their integration allows us to address the problems outlined above.

This project delivered the following results:


• An analysis of the potential of an adaptive information system with interactive open user modeling with regard to solving the problems defined above.

• The design, architecture and implementation of OWL-OLM, an interactive open user modeling system which can be integrated in adaptive information systems.

• A prototype of OntoAIMS, an adaptive learning content management system which uses OWL-OLM for enhancing its task recommendation functionality.

• Master’s Thesis report describing the research and implementation parts of the project.

• Mid-term presentation on the research part of the project.

• Final presentation of the research, implementation and results of the project.

• Evaluation of the OntoAIMS prototype with real users in a realistic environment.

Additional research results were reported in:

• a workshop paper for the 3rd International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems (AH 2004)

• a paper submitted to the 14th International World Wide Web Conference (WWW 2005)

• a paper submitted to the 10th International Conference on User Modeling (UM 2005)

1.4 Project Overview

This section gives a chronological overview of the work performed and relates the work to the chapters of this document.

The introduction phase of the project was spent on discussing and determining the scope of the project. Current research and implementations in User Modeling and the Semantic Web were studied. Introductory papers on Adaptive Information Systems (particularly for education) were also used. Furthermore, both AIMS and STyLE-OLM were studied. Chapter 1 presents the results achieved: a project plan, a set of research questions and goals.

The research and analysis phase focused on achieving the goals and answering the main research questions for this project. Specific research was done on OWL and user modeling of conceptualizations. AIMS and STyLE-OLM were further analyzed, focusing on how both systems could be integrated. Chapter 2 presents the results of the research and analysis phase. Another result of this phase was the Basic Linux Ontology (see Appendix B) and a course definition for a Linux course using AIMS and the Basic Linux Ontology (see Appendix C). Both products were developed in an effort to further learn and analyze AIMS, the OWL language and the possibilities provided by current OWL reasoners. The first two phases of this project further resulted in the production of a paper [15] for the SWEL workshop at the AH'04 conference.

In the design phase, the analysis and research led to an initial design for an integrated system. The architectures of both AIMS and STyLE-OLM were studied to devise the initial design based on these architectures. The design phase resulted in further formalizations of the integrated system. Chapter 3 presents the resulting design for the integrated system.

In the implementation phase, the design and formalizations of the system were implemented based on OntoAIMS. Most of the implementation was done in Eindhoven, although a crucial part of the system, the user interface, was implemented in Leeds by Michael Pye under the supervision of Vania Dimitrova.


Chapter 4 describes the result of the implementation phase.

The evaluation phase tested the integrated system with real users from Leeds and Eindhoven. A first evaluation showed several problems with the implementation and resulted in improvements to both the user interface and the adaptive behavior of the system. The improved implementation was then used in a comprehensive evaluation of the system. Chapter 5 discusses the evaluation phase of this project.

During the documentation phase, the design, implementation and evaluation efforts resulted in the production and submission of two papers, one for the WWW'05 conference [13] and another for the UM'05 conference [14]. Besides the production of the two papers, this phase was dedicated to producing documentation for the source code (see Appendix A) and for the project as a whole (this document).

Summarizing, Chapter 2 presents an analysis of the context and possible solutions to the problems listed in Sect. 1.1.2. Chapter 3 presents a design for integrating interactive user modeling and adaptive information systems based on Semantic Web technologies. Chapter 4 then describes the implementation of this design in the form of OWL-OLM in OntoAIMS. Chapter 5 describes an evaluation of OWL-OLM with real users and, finally, Chapter 6 summarizes the presented work, relates it to relevant research and discusses related future work.


Chapter 2

Analysis

Chapter 1 introduced the context, problems and goals for this project. Chapter 2 expands on the context, explaining in detail the systems and technologies used and proposing a design for combining the systems to achieve the goals of this project.

2.1 Context

Section 1.1 mentions the AIMS and STyLE-OLM systems as a real-life application of the proposed research solutions. AIMS was chosen because it suffers from several of the problems discussed in Sect. 1.1.2. STyLE-OLM implements interactive user modeling and produces user conceptualizations which can be used to improve adaptive information systems. Both systems were further chosen due to the previous experience with them within the SWALE project. This section describes the design of both systems, shedding light on how learning content management systems and interactive ontology-based student modeling systems work. We further describe OWL, an ontology language for the web, which plays an important role in the proposed design explained in Sect. 2.2.

2.1.1 AIMS: Learning Content Management System

AIMS is an information handling support system focused on efficient information provision for task-oriented problem solving. It offers adaptive contextual support (through its Search and Browse Tool, see Figure 2.1) that enables users to identify information necessary for performing a particular learning task [1].

The AIMS general information model —see Fig. 2.2— consists of the following models:

• The domain model conceptualizes the subject domain.

• The resource model gives a semantic annotation of the resources used in AIMS and presents a strategy for their most efficient management. Each resource is described with a set of metadata focused on educational applicability. The resources are further linked to domain model concepts.

• The course task model describes the tasks the users can follow to learn the course. The tasks are described in terms of concepts —prerequisites for the task and concepts learned by performing the task— and learning resources from the domain and resource models. Tasks are furthermore linked to each other in a hierarchical structure defined by the tasks' prerequisites and taught concepts.


Figure 2.1: AIMS search and browse window

• The user model keeps track of several aspects of the user, such as which tasks have been visited and finished. The UM also contains an overlay model of the domain model, where the domain concepts are instantiated with specific user values, indicating his/her knowledge of each of them and the current user's state within the entire instructional cycle.

Figure 2.2: AIMS information model.

As a consequence of the task orientation in AIMS, it focuses on the classification of the knowledge in the application (application semantics). Thus, concept-based representations of the content play a pivotal role in modeling the domain content (domain model) and the user (user model), but also in modeling the application problem-solving strategies (task model) and the adaptive mechanisms employed in the system (sequencing model). The strict separation of concerns between domain-dependent and application-related issues and the resources themselves allows for flexible solutions and reusability. It offers a good content/information basis for adaptive recommendations, where the domain and the sequencing strategies, as well as the user model, can easily be changed in order to achieve various goals for the adaptation.

Adaptive task-based systems like AIMS appear to be especially effective in learning- and training-oriented application areas, where the sequencing of content and the concept-based visualization of search results and domain concepts prove to be good instruments to guide the user through the material.


A key role in achieving this adaptation is played by the user model (e.g. its construction and its use by the system and by the learner). The following section gives an analysis of how the user model relates to the other parts of AIMS, which illustrates the importance of the user model.

Information Related to the User Model in AIMS

Each of the information models (domain, resource library and tasks) of AIMS has some relation to the user model. The domain model provides a point of reference to be used for building an extended overlay learner model. The resource library model is related to the user model by keeping track of which resources have been used by the learner and measuring the learning effects of the resources on the learners. The tasks model in AIMS can be a source of meta-level information about the user and the domain, as it provides the following information:

• Purpose of the current task. Each task in AIMS has a purpose which corresponds to the goal the teacher has in mind for the student when performing the task. In practice, the goals of the student might differ from the intended purpose of the course author. Examples of purposes are: learning new concepts, brushing up current knowledge about some concepts, reviewing some topics which are already known, etc.

• Focus of the task. Each task in AIMS is related to a set of concepts which the learner should learn about. This set can be used to maintain the learner's focus on the current task.

• Ideal or standard learning path. The tasks model contains information about pre-conditions and learning effects of tasks. This information can be used to infer which concepts are basic knowledge for the course (concepts required by most of the tasks) and which are intermediate or advanced knowledge. This information could be used to try to solve the cold start problem by using STyLE-OLM to determine the learner's knowledge about concepts required for tasks.
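To make this last point concrete, the sketch below shows one way such a classification could be computed from task preconditions. The Task record and the 50% threshold are hypothetical simplifications introduced only for illustration; they are not part of the actual AIMS task model.

```java
import java.util.*;

// Hypothetical, simplified view of a course task: the concepts it requires
// (preconditions) and the concepts it teaches (learning effects).
record Task(String name, Set<String> requiredConcepts, Set<String> taughtConcepts) {}

class ConceptLevelEstimator {
    // Labels a concept "basic" when at least half of the tasks require it,
    // and "intermediate/advanced" otherwise (the threshold is arbitrary).
    static Map<String, String> classify(List<Task> tasks) {
        Map<String, Integer> requiredBy = new HashMap<>();
        for (Task t : tasks)
            for (String concept : t.requiredConcepts())
                requiredBy.merge(concept, 1, Integer::sum);

        Map<String, String> levels = new HashMap<>();
        requiredBy.forEach((concept, count) ->
                levels.put(concept, count * 2 >= tasks.size() ? "basic" : "intermediate/advanced"));
        return levels;
    }
}
```

A dialog component could then probe the concepts labelled as basic first when nothing is known about the user yet.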

AIMS is an example of an adaptive web-based system that suffers from most of the problems outlined in Sect. 1.1.2, e.g. the cold start problem, the lack of alignment between the author's and the learner's semantics, and the lack of understanding of (and adaptation to) the user's goals. The main cause of these problems is the very limited user model in AIMS, which restricts the adaptation to the users. A primary goal of this project is to empower AIMS with user modeling approaches that allow for rapid user modeling, accuracy in the user assessment, and alignment of the user's goals and semantics with those of the course author. Some of the aforementioned problems are already addressed in STyLE-OLM.

2.1.2 STyLE-OLM: Interactive Ontology-Based Student Modeling

STyLE-OLM is a learner modeling system where a learner model is constructed with the active participation of the learner, who is allowed to inspect and discuss the content of the model the computer builds of him/her. The system is based on a computational framework for interactive open learner modeling [18] that includes a domain ontology, a discourse model, and a belief mechanism for maintaining a jointly constructed user model (see Fig. 2.4). The framework is fairly general and fully domain independent; two instantiations of STyLE-OLM, in a Computing and a Finance domain, have been demonstrated. STyLE-OLM deals with an extended overlay learner model that captures not only correct and incomplete learner beliefs (i.e. an overlay upon the domain ontology) but also beliefs that are not confirmed by the domain ontology.


Patterns of the learner's reasoning that might cause erroneous beliefs are also modeled.

Figure 2.3: STyLE-OLM Screenshot

Figure 2.4: STyLE-OLM information model.

The interaction in STyLE-OLM is graphical (see Fig. 2.3) and has two modes: DISCUSS, where the learners discuss aspects of their domain knowledge and influence the content of their models, and BROWSE, where the learners inspect the current state of their models [17]. Throughout the discussions, the system makes plausible inferences about what is further believed by the learner on the basis of what is explicitly asserted, and from this a dialog strategy is determined. Important in the context of this research is that STyLE-OLM can be adapted by changing its dialog strategies to tackle knowledge alignment issues, e.g. by following studies about differences that may occur between conceptual models of people [21]. Knowledge probing strategies can also be added to articulate the learner's conceptualization at the beginning of a session.

The major drawback of STyLE-OLM is that the user model it generates is not based on a standard ontology language, which makes reusability of the UM difficult. As we discuss in Chapter 3, this drawback calls for a reimplementation of STyLE-OLM using an accepted standard ontology language.



2.2 Integrating AIMS and STyLE-OLM

Sect. 2.1 introduced learning content management systems, interactive ontology-based student modeling and ontology languages, and described an instantiation of each in the form of AIMS, STyLE-OLM and OWL. This section argues that the combination of learning content management systems and interactive ontology-based student modeling can help tackle the problems described in Sect. 1.1.2.

Figure 2.5 shows what a system integrating AIMS and STyLE-OLM would look like. The most important advantage of integrating the systems is that both the domain model and the user model are shared by the two subsystems. Both STyLE and AIMS have used ontologies developed for their specific needs. The use of ontologies, especially if they are based on standards, provides several advantages: extensibility and flexibility (we can plug in other ontologies or relate our domain ontology to a top ontology), interoperability (we can share ontologies with other applications) and reusability of previous work (we can use existing tools and methods for eliciting, aligning, parsing and visualizing ontologies). A stimulating discussion on these issues in a broader scope is given in [40]. Using a common ontology encoding is critical, as it ensures interoperability of the components of both systems and wide applicability of the integrated system. AIMS is already moving towards using OWL to represent its ontology. Sect. 2.2.1 discusses ontology languages in general and OWL in particular.

Figure 2.5: Integration of AIMS and STyLE-OLM model.

The other key aspect of the integration is the common user model. As Sect. 2.1.1 describes, the user model relates to all the other information models in AIMS and is the provider of information for adaptability. By using a common user model with STyLE-OLM, AIMS will be able to build parts of the user model more efficiently, which will enable better adaptability. Sect. 2.2.2 gives an analysis of how a common user model can be used to tackle the problems described in Sect. 1.1.2.

2.2.1 Ontology Languages

In both AIMS and STyLE-OLM, the domain model has a central role. It describes the domain concepts that are used and managed by the systems. The language used to represent the domain model influences what the systems can do with the information provided by the domain model. In AIMS, the domain model is described by a database of concepts and their relationships. This database suffices for AIMS, allowing the system to relate the resources and course tasks to the concepts. STyLE-OLM needs to do more with the information from the domain ontology: interchange domain-related messages with the user and use these messages to construct a UM.


STyLE-OLM needs to perform reasoning with the domain model information and thus needs a more structured domain model representation which allows for reasoning, message generation and analysis. Ontology languages provide such a structured representation of knowledge.

There are several ontology languages available for domain knowledge representation, such as OWL, topic maps, conceptual graphs and RDF. All of these have different strengths and weaknesses with regard to expressibility and reasoning. Expressibility refers to the type and complexity of the information the language can describe. By reasoning, we mean that computers can infer new information based on the ontology, find specific information relevant to a concept, and determine whether some statement about a concept or a set of concepts is true or not. All this reasoning can be done independently of the actual contents of the ontology.

In this project we have chosen OWL because it provides good expressibility and a good semantic framework for performing reasoning and defining operations on the information described. Furthermore, there is much research on OWL, which provides a large pool of resources and options for implementing OWL-based systems. Another advantage of OWL is that it is XML-based, making it suitable for easy sharing of information between systems.

OWL

The Semantic Web effort has produced OWL, an ontology language for the web. As described in [31]: "OWL is intended to be used when the information contained in documents needs to be processed by applications, as opposed to situations where the content only needs to be presented to humans. OWL can be used to explicitly represent the meaning of terms in vocabularies and the relationships between those terms". This is done by defining the domain in terms of individuals, classes and properties. Individuals represent real-world entities, which can be related to each other via properties. Individuals can be grouped into classes —sets of individuals sharing some common properties. OWL further defines basic properties which enforce a hierarchy of classes and properties. By defining the basic semantics of the OWL building blocks, theories like description logic [3] can be used to perform reasoning about the information described in OWL. Sect. 4.3 discusses OWL in detail.
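To make these building blocks concrete, the following sketch uses the Jena framework (the same framework used for the implementation, see Sect. 4.3.2) to define a tiny Linux-flavored ontology fragment and query it. The namespace and the class and property names are invented for illustration and are not taken from the actual Basic Linux Ontology; newer Jena releases use the org.apache.jena package prefix instead of com.hp.hpl.jena.

```java
import com.hp.hpl.jena.ontology.*;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class OwlBuildingBlocks {
    // Illustrative namespace; not the URI of the actual Basic Linux Ontology.
    static final String NS = "http://example.org/linux#";

    public static void main(String[] args) {
        // Ontology model with a simple built-in OWL rule reasoner attached.
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);

        // Classes: FileOperation and its subclass MoveFile.
        OntClass fileOperation = model.createClass(NS + "FileOperation");
        OntClass moveFile = model.createClass(NS + "MoveFile");
        moveFile.addSuperClass(fileOperation);

        // A property relating files to their owners.
        OntClass file = model.createClass(NS + "File");
        OntClass unixUser = model.createClass(NS + "UnixUser");
        ObjectProperty hasOwner = model.createObjectProperty(NS + "hasOwner");
        hasOwner.addDomain(file);
        hasOwner.addRange(unixUser);

        // Individuals: a concrete file owned by a concrete user.
        Individual readme = model.createIndividual(NS + "readme", file);
        Individual root = model.createIndividual(NS + "root", unixUser);
        readme.addProperty(hasOwner, root);

        // The reasoner infers that any MoveFile individual is also a FileOperation.
        Individual mv = model.createIndividual(NS + "mv1", moveFile);
        System.out.println(mv.hasOntClass(fileOperation));   // true (inferred)
    }
}
```

With the rule reasoner attached, class membership that is only implied by the subclass hierarchy (mv1 being a FileOperation) is reported as if it had been asserted directly.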

2.2.2 Tackling the Defined Problems

Interactive user modeling like that of STyLE-OLM has two main applications when integrated in an adaptive information system: (1) it extracts specific parts of the user model, and (2) it provides a semantic interface between the system and the user. Here, we discuss how these two functionalities can be used to tackle the problems in adaptive information systems as defined in Sect. 1.1.2. In particular, we discuss how the adaptability of AIMS can be improved.

• Validating task and resource effects. AIMS assumes that completing a task or viewing an educational resource results in the same learning effect for every user. STyLE-OLM can be used as a review of the task or resource to find out what the learner's knowledge is after completing the task or viewing the resource. Depending on the desired quality of the UM, this STyLE-OLM interaction could be done after every task, after a series of tasks, or only when a user requests such a review.

• Tackling the cold start problem and aiding with the task sequencing. STyLE-OLM can use the tasks model to determine which concepts to probe for if no information is known about a user. In general, STyLE-OLM could be used to determine whether a user has enough knowledge for recommending a task.



• Adaptive browsing. AIMS can narrow down the number of concepts shown. The current task already roughly defines which concepts should be learned by the user during the task. By combining the user's knowledge, preferences and goals and the specification of the current task, it is possible to determine which concepts and links are more relevant when browsing.

• Visual representation of the open learner model. The idea behind this is to show users what their model is. This has two advantages: (1) the user can disagree with their model and initiate a dialog with STyLE-OLM to discuss the concepts (as a result, the learner model will be refined); (2) the user can see which concepts are little known to him/her and can decide to learn about them.

• STyLE-OLM as an aid for understanding the domain. When browsing the domain, users can use STyLE-OLM to focus on a certain set of concepts and discuss their understanding of those concepts. Students can take the initiative when starting a discussion, and STyLE-OLM will answer their questions or help them find out which wrong assumptions or conclusions they make.

• Using the UM for selecting and sorting resources. When users want to learn more about a certain topic, they can query the resource library to find suitable educational resources. When a query returns several resources, these resources have to be filtered and sorted. This can be done based on the user's current knowledge, goals and preferences to determine the kind of resource most suitable for the user. Examples of preferences are a preference for a kind of material format (multimedia or text only) and a preference for practically oriented resources (tutorials and examples instead of theoretical resources). A simple ranking sketch is given after this list.

• Articulating the learner's goals. To be able to adapt to the user's goals, we need to identify those goals. For this we may use information from the current task, a STyLE-OLM dialog, or a form filled in by the user. In order to provide this kind of adaptation we need sufficient and appropriate annotation of the resource library. Also, because the user's goals, preferences and knowledge change over time, we need a way of modeling users through time in order to maintain continuity and momentum in the learning activities. We consider the use of episodic and long-term user models. During dialog episodes, a short-term, episodic user model is accumulated. It is used as a source for updating the long-term user model at the end of each episode by employing STyLE-OLM algorithms based on non-monotonic reasoning.
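As a simple illustration of the resource filtering and sorting mentioned above, the sketch below ranks candidate resources by how many not-yet-known goal concepts they cover and by a format preference. The Resource and UserProfile types and the scoring weights are hypothetical simplifications; the actual OntoAIMS recommendation mechanism is described in Chapter 4.

```java
import java.util.*;

// Hypothetical, simplified types for illustrating UM-based resource ranking.
record Resource(String title, Set<String> concepts, boolean multimedia) {}
record UserProfile(Set<String> knownConcepts, Set<String> goalConcepts, boolean prefersMultimedia) {}

class ResourceRanker {
    // Scores a resource higher when it covers goal concepts the user does not
    // know yet, and slightly higher when its format matches the user's preference.
    static double score(Resource r, UserProfile u) {
        long newGoalConcepts = r.concepts().stream()
                .filter(u.goalConcepts()::contains)
                .filter(c -> !u.knownConcepts().contains(c))
                .count();
        double formatBonus = (r.multimedia() == u.prefersMultimedia()) ? 0.5 : 0.0;
        return newGoalConcepts + formatBonus;
    }

    // Returns the candidate resources sorted from most to least suitable.
    static List<Resource> rank(List<Resource> candidates, UserProfile u) {
        return candidates.stream()
                .sorted(Comparator.comparingDouble((Resource r) -> -score(r, u)))
                .toList();
    }
}
```

The same idea extends to other preferences, e.g. favoring tutorials over theoretical resources.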

2.3 Example of a Dialog to Introduce the Approach

Before we explain the design of an integrated system in Chapter 3, we give an example of what an interaction with such an integrated system will look like. This example introduces relevant terminology needed for describing and understanding the problems involved in eliciting a user model using a dialog.

This example illustrates the application of an interactive user modeling system to deal with the cold start problem concerning the lack of information about a user who logs on for the first time.


This lack of information results in the system's inability to adapt its behavior to the user's needs. The User is a student who follows an introductory web course on "Linux Operating System". When he logs on to the system, an interactive user modeling system is employed to elicit a user model via a dialog. This dialog is guided by an existing domain ontology and explores aspects of the User's conceptualization of the topic at hand, "Files and operations on files". The communication is presented here in a textual form, but with current computers it would be conducted in a graphical user interface.

(1) DialogAgent: I'll try to suggest a suitable task for you. I will first ask you some questions related to "Files and operations on files".

(2) DialogAgent: Do you think a file has a unix user as its owner?

(3) User: Yes, a file has a unix user as its owner.

(4) DialogAgent: Do you think a file has a path as its full name?

(5) User: I don’t know whether a file has a path as its full name.

(6) DialogAgent: Do you think a file has a readable permission as its group readable right?

The agent opens the dialog episode and tells what the goal is. It then starts a probing dialog game to elicit the part of the user's conceptualization related to the current task. It begins with the concept file, which is a main concept in this topic, and checks whether the user knows concepts linked to file — utterances (2)-(6). A dialog utterance consists of a domain statement, a statement about domain-related concepts, and a dialog intention, the intention of the utterance in the dialog context.
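Chapter 4 presents the actual Utterance, Intention and OwlStatement classes used in OWL-OLM; the fragment below is only a simplified, illustrative sketch of how an utterance pairing a domain statement with a dialog intention might be represented (the intention names and field names are invented for this example).

```java
// Simplified, illustrative representation of a dialog utterance: a domain
// statement (about concepts from the ontology) paired with a dialog intention.
// The real OWL-OLM classes (Chapter 4) are richer than this sketch.
enum Intention { PROBE, INFORM, AGREE, DISAGREE, QUESTION, SKIP }

record DomainStatement(String subjectConcept, String property, String objectConcept) {}

record Utterance(Intention intention, DomainStatement statement) {
    // Utterance (2) from the example: the agent probes whether a file
    // has a unix user as its owner.
    static Utterance exampleProbe() {
        return new Utterance(Intention.PROBE,
                new DomainStatement("file", "hasOwner", "unix user"));
    }
}
```

Utterance (3), for instance, would pair the same domain statement with an agreeing intention.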

(7) User: Can you tell me more about who or what can be the group readable right of a file?

(8) DialogAgent: As far as I know, a file has a readable permission as its group readable right.

(9) User : [Skip]

(10) DialogAgent: If you don’t have more questions, let’s now return to discussing files.

The user interrupts the dialog, asking a question. A dialog-based interactive user modeling component needs to be able to handle questions asked by the user. In this example the agent suspends the current dialog game and initiates a new game to answer the user's question. It sends a query to the domain ontology and then composes the answer. The agent then waits to see whether the user has more questions. Since the user's questions have been answered, the agent returns to the dialog game that probes what the user knows about file. It checks for properties of file.

(11) DialogAgent: Do you think a file has a filename?

(12) User: Yes, a file has a filename.

(13) DialogAgent: We have finished with files. Let’s now discuss moving files.

When the goal of a dialog game has been achieved, the interactive user modeling component should be able to choose a new goal and a new dialog game to continue the user model elicitation. In this example the agent closes the game that probes for files. It then opens a dialog game that probes for another main concept —moving files— and starts with probing for the parent concept.


(14) DialogAgent: Do you know what moving a file is?

(15) User: I think that moving a file is a command.

During a dialog, the user may produce an utterance which is not supported by the domain ontology; we call this a mismatch. In this example a mismatch is detected: according to the domain ontology used by the agent, moving a file is a file operation, not a command. The agent again suspends the probing game and starts a clarification game.

(16) DialogAgent: I’d like to check why you think moving a file is a command.

(17) DialogAgent: Do you think moving a file is also a file operation?

(18) User: Yes, I think that moving a file is a file operation.

(19) DialogAgent: Do you think moving a file has a program as its executing program?

(20) User: Yes, moving a file has a program as its executing program.

(21) DialogAgent: Do you think moving a file has a version?

(22) User: I don’t think moving a file has a version.

Clarifying a mismatch requires some strategy for discovering more information as to why the mismatch has occurred. In this example, the agent first checks for an alignment between the user's conceptualization and the domain ontology (utterance (17)), and then checks for inherited properties between moving file and file operation (utterance (19)). Finally, the agent checks for an inherited property between moving file and command (utterance (21)). Since the user does not confirm the property, the agent checks the property for command to highlight the inheritance link the user makes between moving files and command.

(23) DialogAgent: Do you think a command has a version?

(24) User: Yes, a command has a version.

(25) DialogAgent: Do you think moving a file is also a command?

(26) User: I don’t think that moving a file is a command.

The user realizes that the property is not inherited and the agent re-checks the link from the user's conceptualization. The agent has now finished probing the current topic. It finishes the game and sums up the result of the dialog episode based on the extracted user model.

(27) DialogAgent: We have finished with moving files.

(28) DialogAgent: We have now finished discussing "Files and operations on files". I think that you have to read more about this topic as it contains information you should learn. You can now go to the browser to read recommended resources that are appropriate for your level.

(29) DialogAgent: Thank you for your time. Good bye.

The dialog episode is closed by the agent because it has elicited sufficient aspects of the user's conceptualization to make the adaptation decision, which in this case is related to recommending appropriate resources. The user can also decide to close the dialog at any time.


2.4 Summary

This chapter presented how learning content management systems can be made more adaptive by integrating these systems with interactive ontology-based student modeling. In particular, we introduced the AIMS and STyLE-OLM systems and analyzed the information models used by these systems, resulting in an overview of how an integrated system has the potential to tackle several of the problems introduced in Sect. 1.1.2.

This chapter also presented an example of how an integrated system would interact with the user to extract a user model. This example introduced several problems which should be solved by a student modeling system. STyLE-OLM provides a framework for solving those problems.

This chapter further introduced ontology languages and gave an overview of OWL as an example of such a language. We argued that OWL is a suitable language to use for a system which would integrate AIMS and STyLE-OLM. AIMS is already moving towards supporting standard ontology languages like OWL, which will result in a new instantiation of the system called OntoAIMS. Chapter 3 gives a design for OntoAIMS, focusing on an integrated OWL-based version of the STyLE-OLM system, which we call OWL-OLM.


Chapter 3

Design

This chapter explains the design of an adaptive information system with an integrated interactive user modeling component. We have implemented this design in the OntoAIMS system, as we describe in Chapter 4. Throughout this chapter we continually refer to the OntoAIMS system, but the design we describe is generic and should apply to most adaptive information systems. Sect. 3.1 describes the design of the entire system. The focus of this project is the interactive user modeling component; Sect. 3.2 describes its design. Related to the user modeling is the user model itself and how information is represented in it; Sect. 3.3 describes the design of the user model in the system. Finally, the interactive user modeling component has to be actually used inside the adaptive information system; Sect. 3.4 describes how it can be used to provide adaptivity when recommending tasks to users.

3.1 Integrated System Design

OntoAIMS¹ is an ontology-based version of the AIMS Adaptive Information Management System —described in Sect. 2.1.1— providing an information searching and browsing environment that recommends to learners the most appropriate (for their current knowledge) task to work on and aids them in exploring domain concepts and reading resources related to the task.

¹ The system is available at http://swale.comp.leeds.ac.uk:8080/staims/viewer.html with username visitor, password visitor.

Figure 3.1: OntoAIMS Integrated Design

Sect. 2.1.1 already explains the role of the different information models inside the AIMS system; this explanation is also valid for OntoAIMS. Fig. 3.1 shows the design of OntoAIMS, which is based on the integrated architecture proposed in Fig. 2.5. The "Concept & Content Browser" in OntoAIMS comes directly from AIMS. The "Activity User Profile" is partly based on the user modeling that was done in AIMS. New in OntoAIMS is the OWL-OLM Dialog, which is strongly based on STyLE-OLM. OntoAIMS differs from AIMS in several ways as it tries to improve the system. We explain the most important differences:

• OntoAIMS uses ontologies (see Fig. 3.1) to represent the aspects of the application semantics, to allow a strict separation of domain-dependent data, application-related data and resources, and to further enable reusability and sharing of data on the Semantic Web.

• To enable adaptivity, OntoAIMS utilizes a more comprehensive User Model than the UM used in AIMS. The UM in OntoAIMS covers learner preferences, personal characteristics, goals and domain understanding. Sect. 3.3 describes the design of the UM in detail.

• OntoAIMS has an integrated interactive user modeling component which is used to build a model of the user's conceptual state. The design of this component is described in Sect. 3.2.

3.1.1 Motivation for OWL-OLM instead of STyLE-OLM

As Fig. 3.1 shows, OntoAIMS uses OWL-OLM instead of STyLE-OLM. Here, we motivate the decision to use OWL-OLM instead of directly integrating STyLE-OLM in OntoAIMS. We first present some implementation details of STyLE-OLM relevant for this motivation.

STyLE-OLM Implementation

STyLE-OLM has an implementation which combines Java and Prolog. The reasoning needed for performing the tasks of the Dialog Agent, Dialog Games and Dialog Analyzer (see Sect. 3.2) is done in Prolog. Java is only used for the user interface and uses a Java-to-Prolog binding mechanism to communicate with the Prolog components.

The domain ontology and the user models in STyLE-OLM are based on Conceptual Graphs [41]. A Prolog library was developed to handle input, output and reasoning over the conceptual graph models used by the system.

Keeping the STyLE-OLM implementation in mind, the main reasons for not favoring STyLE-OLM are the following:

• STyLE-OLM was originally designed to extract a detailed user model. This resulted in relatively lengthy interactions with the users. For OntoAIMS, we aimed at a system which would keep the interactions as short as possible.

• STyLE-OLM uses conceptual graphs while OntoAIMS uses OWL as its ontology language. We could use a program to convert between OWL and conceptual graphs, but due to fundamental differences in the design of these ontology languages, this would introduce cumbersome incompatibilities.

• STyLE-OLM uses Prolog for everything but the user interface. The use of Prolog was furthermore strongly related to reasoning on the conceptual graph models. Since OntoAIMS uses its own reasoning mechanism based on OWL, we prefer to reuse this reasoning mechanism and avoid introducing Prolog solely for the interactive user modeling component.


• The user models produced by STyLE-OLM are also based on conceptual graphs. Since the domain ontology in OntoAIMS is based on OWL, we prefer to also describe the conceptual models in OWL.

The above shows that the design of STyLE-OLM can be largely reused, but the system needs to be implemented from scratch using OWL instead of Conceptual Graphs. This OWL-based implementation is called OWL-OLM and its design is discussed next. The actual implementation of OWL-OLM is discussed in Chapter 4.

3.2 Interactive User Modeling

This section describes the design of OWL-OLM, the interactive user modeling component used in OntoAIMS. OWL-OLM uses the STyLE-OLM framework for interactive open user modeling [17]. The design of OWL-OLM is presented in Figure 3.2.

Figure 3.2: The design of OWL-OLM.

The Dialog Agent is the main OWL-OLM component which maintains the user modeling dialog. The whole dialog is geared towards achieving the goal of the user, according to which the agent defines its dialog goals.

A Domain Ontology is used to maintain the dialog and to update the user's Short-Term Conceptual State, which provides a model of the user's conceptualization gathered throughout the dialog. Conceptual states are described in Sect. 3.3.1.

The dialog agent maintains a dialog episode goal, which is divided into sub-goals that trigger Dialog Games —sequences of utterances to achieve a specific sub-goal, see Section 3.2.2.

The agent also has a Game Analyzer that analyzes each user utterance to decide the agent's response and to update the user's short-term conceptual state, see Section 3.2.4. When a dialog episode finishes, the short-term conceptual state is used to update the user's Long-Term Conceptual State, which is part of the User Model defined by OntoAIMS. For this, the simplified belief revision framework from [18] is employed.

The user can interact with the system by using a graphical user interface described in Sect. 3.2.5.

3.2.1 Dialog Agent

The job of the dialog agent is to use Dialog Games and the Game Analyzer to manage a coherent dialog with the user. A dialog in this context consists of sending and receiving utterances to and from the user. This section describes the problems the dialog agent should deal with and their solutions. The problems and solutions are based on the STyLE-OLM system design and are more comprehensively described and analyzed in [18].

To achieve effective dialogs, which feel natural and efficient to the user, several problems have to be tackled. Firstly, the dialog focus has to be maintained, so that the interaction is structured and organized. In terms of utterances, this means that consecutive utterances should have a concept in common whenever possible. For this reason, a scope of relevant concepts is maintained.

Secondly, the dialog continuity has to be ensured to provide a logical exchange of utterances. This means that each utterance needs to be a logical response to the previous utterance. Cue phrases are used when a game is opened, closed, or re-initiated, to show the logical flow and to communicate the dialog goal, as in the example in Section 2.3.

Thirdly, to avoid repetition, a history of the dialog is maintained to exclude utterances that have already been said.

Furthermore, mixed initiative should be enabled, so that both participants in the dialog can introduce a new topic and ask clarification questions. A simple mechanism for maintaining the initiative is used in OWL-OLM: the initiative lies with the agent unless the user makes an interruption (which can be either a question or a statement that is not on focus), in which case the current game is suspended and a new game is initiated to answer the user's question or to clarify the topic. When this game finishes, the suspended game continues.

Finally, a dialog episode goal should be achieved. The dialog goals we take into account in a learning situation are:

• Learning resource recommendation. The user may want some advice on what he should read on a topic. The goal of the dialog agent is to assess the user's knowledge according to what is needed for performing a task in the course. Dialog subgoals probe the user's knowledge of concepts related to the task, as shown in the example above.

• Question answering or domain clarification. The user may want to clarify some part of the domain, so he asks questions. Dialog subgoals answer the user's questions and help him clarify the particular domain aspects, e.g. see utterances (7)-(8) in Sect. 2.3.

• Knowledge assessment. The user may want to assess his knowledge about a part of the domain (a set of concepts). The goal of the dialog agent is to help with the assessment. Dialog subgoals probe the user's knowledge about concepts and relationships between them.

To achieve the subgoals encountered, the DialogAgent triggers Dialog Games, which are described next.

3.2.2 Dialog Games

A dialog game DG in OWL-OLM is defined as:

DG = (C, P, R, U, S)

where: C is a set of domain concepts and relations targeted in the game, used to maintain a global focus [27]; P is a set of preconditions which define conditions on the state of the system (e.g. a history of previous utterances or a user's current conceptual state) that are needed to trigger the game; R is a set of postconditions that define conditions on the state of the system (e.g. changes in the user's conceptual state) which become valid when the game ends; U is a set of rules to generate dialog utterances; S defines the scope, that is, a set of concepts used to maintain a dialog focus space [27]. The definition follows the formalization in [17], which is derived from Levin and Moore's dialog games model [28].

To illustrate, we use the example in Section 2.3. The probing DG that starts with utterance (1) has:


C = [file, moving a file]
P = [there is not enough information in the user's conceptual state]
R = [update the user's conceptual state]
U = [ask questions about concepts in S]
S = [file, unix user, path, group readable permission, moving a file, file operation]

The game is suspended when a question answering DG and a clarification DG are triggered, and re-initiated after each of these games ends. The probing game ends with utterance (28).

The question answering DG, utterances (7) - (10), has:
C = [file, group readable right]
P = [the user has asked a question]
R = [update the user's conceptual state]
U = [answer the user's question based on a query to the domain ontology]
S = [file, group readable right, readable permission]

The clarification DG, utterances (16) - (27), has:
C = [moving a file, command]
P = [a mismatch is detected]
R = [update the user's conceptual state]
U = [ask questions to confirm inherited properties]
S = [moving a file, command, file operation, executable program, version]
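To make this structure concrete, the following is a minimal Java sketch of the DG = (C, P, R, U, S) tuple. All class and field names are hypothetical illustrations and do not correspond to the actual OWL-OLM classes described in Chapter 4.

    import java.util.List;
    import java.util.Set;
    import java.util.function.Consumer;
    import java.util.function.Function;
    import java.util.function.Predicate;

    // Illustrative sketch of the DG = (C, P, R, U, S) tuple; all names are hypothetical
    // and do not correspond to the actual OWL-OLM classes described in Chapter 4.
    public class DialogGameTupleSketch {
        Set<String> mainTopics;                              // C: concepts and relations targeted by the game
        Predicate<DialogState> preconditions;                // P: when may this game be triggered?
        Consumer<DialogState> postconditions;                // R: effects on the system state when the game ends
        Function<DialogState, List<String>> utteranceRules;  // U: rules that generate the next utterances
        Set<String> scope;                                   // S: concepts maintaining the dialog focus space

        // Minimal stand-in for the system state that the pre- and postconditions inspect.
        public static class DialogState {
            public Set<String> conceptsDiscussed;
            public List<String> utteranceHistory;
        }
    }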

3.2.3 Utterances, Intentions and Domain Statements

The building blocks of every dialog are utterances: sentences exchanged between the dialog agent and the user. Each utterance consists of three parts: an originator, an intention, and a domain statement. The originator is the producer of the utterance, which can be the dialog agent or the user.

A domain statement is the domain-related proposition of an utterance. Each domain statement is a model defining a set of concepts and relations. The key restriction on this model is that it should describe only one semantic relation, as it represents the domain-related semantics of a single utterance.

We use domain statements (a) to extract a section of a domain ontology and only focus on this section; (b) to generate a representation (i.e. a natural language sentence or a diagram) to communicate the semantics enclosed by the domain statement to the user; and (c) to exchange very focused information about the exact relationship between a small number of domain concepts, which is critical for diagnosing users and identifying mismatches between the user's conceptual model and the domain ontology, as shown later in Sect. 3.2.4.

The intention states the dialog purpose of the utterance, i.e. the intention of the originator of the utterance. Sample intention operators are:

• Give Answer: The originator is answering a previous utterance which was a question with the semantics enclosed in the domain statement.

• Inform Ignorance: The originator informs that he does not know whether the semantics described by the domain statement are true.


• Agree: The originator states that she agrees with the semantics enclosed in the domain statement.

• Disagree: The originator states that she disagrees with the semantics in the domain statement.

• Ask: The originator asks whether the semantics described in the domain statement are true.

3.2.4 Game Analyzer

This section explains how the utterances received from the user can be analyzed and how the results of this analysis can be used to decide the next move of the Dialog Agent and to update the user's conceptual state. The following features need to be analyzed:

• The scope and intention of two consecutive utterances need to be compared. If the scopes do not match, the agent has to decide whether to follow the user and change the dialog focus or to stay in the current scope.

• The incoming utterance should be analyzed to determine whether it asks the system a question, which should trigger a question answering game.

• The domain statement of the incoming utterance needs to be compared to the domain ontology to determine whether the semantic relation described by the domain statement is supported by the domain ontology.

• If a domain statement is not supported by the domain ontology, the agent needs to check for a recognizable mismatch between that statement and the domain ontology.

Answering Questions

Two different types of questions should be catered for: those which can be answered with a yes or no (yes/no questions) and those where the answer contains an element missing in the question (open questions). To answer the questions, the domain statement needs to be compared to the domain ontology. In the case of a yes/no question it suffices to check whether the semantics expressed by the domain statement are also present in the domain ontology. For open questions, the domain ontology must be searched (queried) in order to find an answer which fits the missing element in the domain statement of the question.
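As a hedged illustration of these two checks, the sketch below uses the Jena API (shown with the modern org.apache.jena package names; the thesis-era releases used the com.hp.hpl.jena packages). The namespace and concept names are invented for illustration.

    import org.apache.jena.rdf.model.*;
    import org.apache.jena.vocabulary.RDFS;

    // Illustration only: the namespace and concept names are made up.
    public class QuestionAnsweringSketch {
        static final String NS = "http://example.org/linux#";

        public static void main(String[] args) {
            Model domain = ModelFactory.createDefaultModel();
            Resource movingFile = domain.createResource(NS + "MovingFile");
            Resource fileOperation = domain.createResource(NS + "FileOperation");
            domain.add(movingFile, RDFS.subClassOf, fileOperation);

            // Yes/no question: is the triple asserted by the domain statement present in the ontology?
            boolean yes = domain.contains(movingFile, RDFS.subClassOf, fileOperation);
            System.out.println("Is moving a file a file operation? " + yes);

            // Open question: find the missing element by querying the ontology
            // (here: "what is moving a file a kind of?").
            StmtIterator it = domain.listStatements(movingFile, RDFS.subClassOf, (RDFNode) null);
            while (it.hasNext()) {
                System.out.println("Moving a file is a kind of " + it.next().getObject());
            }
        }
    }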

Mismatches

When the user submits an utterance that cannot be verified by the domain ontology, this is considered a mismatch. Mismatches are discrepancies between the semantics of a domain statement and the domain ontology. The GameAnalyzer component should be able to detect and classify mismatches into mismatch categories. By classifying the types of mismatches, the dialog agent is able to define dialog games to clarify each of the recognized mismatches. For instance, in the example in Section 2.3, utterance (15) has a mismatch of type "wrong inheritance". This triggers a clarification dialog game (see utterances (17) - (26)).


3.2.5 User Interface

The main responsibility of the user interface is to present system-generated utterances to the user and provide an interface for the user to create and send utterances back to the system. Ideally, this could be done by an interface presenting a life-like character [37] or using state-of-the-art natural language generation. The design and implementation of an intuitive user interface is beyond the scope of this project —which focuses on the user modeling and the maintenance of the user model. In order to demonstrate the system, we chose a pragmatic approach and present a UI design that combines basic natural language generation with a graphical representation of the utterances, modeled on the user interface implemented in STyLE-OLM (see Fig. 2.3). The design and implementation of the user interface was mainly done by Michael Pye as part of the SWALE project and was funded by the University of Leeds.

The task of the user interface is to translate utterances into a natural language sentence and a diagram. The natural language sentence can be generated by using templates and filling in the names of the concepts and relations described by the utterance. Diagrams of an utterance can be generated by depicting concepts as nodes and relations as edges of a graph.

To allow users to generate their own utterances, the user interface allows them to create and edit diagrams related to domain concepts and relations. These diagrams can be mapped to the domain statement of the utterance. The user can set the intention of the utterance by choosing from a list of possible intentions, presented as sentence openers.

Extra functionality in the user interface is desirable to improve usability. Users should be able to preview their utterances before sending them to the system. A history of the dialog episode should be presented to the user. The system should present clarifications for its behavior. In communication between people such clarifications are inferred —among other things— from gestures and changes in tone. The user interface clarifies its behavior by explicitly communicating the subgoals and goals (its reasoning) used to generate an utterance. Refer to Sect. 4.9 for implementation details about the user interface.

3.3 User Model Representation

Throughout the user interaction, information about concepts visited, searches performed, task status (current, started, finished, not started yet), resources opened, and bookmarks saved is stored in the Activity User Profile. It is useful for deducing user preferences and characteristics, and for making initial assumptions about the user's domain understanding. To have an in-depth representation of aspects of the user's domain understanding, OntoAIMS defines a User's Conceptual State, which is the main improvement to the UM in AIMS. This section outlines how it is represented and maintained.

3.3.1 User’s Conceptual State

The main reason for maintaining a conceptual state is to have an intermediate model that links a user's conceptualization to an existing domain ontology. The conceptual state is a model of the user's conceptualization inferred during interactions with the user.

To distinguish between a temporary, short-term state that gives a snapshot of a user's knowledge extracted during an interaction session and a long-term state that is built as a result of many interactions with the system, we consider short-term conceptual states (STCS) and a long-term conceptual state (LTCS), respectively. The former is a partial representation of some aspects of a user's conceptualization and is used as the basis for extracting the latter, which in turn serves as the basis for the User Model in OntoAIMS.

STCS is defined as a triple pointing to a Conceptual model, a set of Domain ontologies and an LTCS. The Conceptual model defines a conceptualization of the domain and is suitable for reasoning over. The Conceptual model resembles an ontology specification defining concepts and relationships. Most of the concepts and relations defined in the conceptual model describe the same semantics described by the domain ontologies related to the STCS. The conceptualization described by the conceptual model has direct links to concepts and relationships defined in the domain ontologies.

STCS History Properties

Since the STCS is built by interacting with the user, extra information can be added to the STCS to indicate how the conceptual model has been arrived at and how it differs from the original domain ontologies it is linked to. This STCS history information can be used to determine whether a conceptualization is accurate. This is relevant because users' conceptualizations change over time as they learn more about the domain. The STCS history is useful when building the LTCS from the STCS. We have identified a set of history properties which are valuable to maintain:

• correct use: the user has used a concept or relationship correctly. By correct we mean that the described concept or relationship is supported by the domain ontology.

• wrong use: the user has used a concept or relationship in a way that contradicts the domain ontology.

• total use: the user has used a concept or relationship, either in a correct or wrong way.

• affirmation: the user has stated he knows about this concept or relation.

• negation: the user has stated he has no knowledge about the concept or relation.

The history properties are linked to concepts and relationships defined in the STCS. The affirmation and negation properties describe the user's opinion about whether he knows the concept or not. The correct and wrong use properties keep track of mismatches between the conceptual model and the domain ontologies. This can be viewed as the system's opinion about the user's knowledge, because the system's knowledge is encoded in the domain ontologies. To understand why this is an opinion and not truly a fact, note that a mismatch only indicates that there is a discrepancy between the conceptual state and the domain ontology; it does not indicate that a concept or relationship is not true. See Sect. 4.8.1 for an example of an implemented STCS.
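A minimal sketch of how such history properties could be stored per concept or relationship is given below; the class and field names are assumptions for illustration and do not necessarily match the OWL-OLM implementation described in Chapter 4.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical per-resource history record for an STCS; names are illustrative.
    public class HistorySketch {
        static class ResourceHistory {
            int correctUse;   // used in agreement with the domain ontology
            int wrongUse;     // used in contradiction with the domain ontology
            int affirmations; // user stated he knows the concept/relation
            int negations;    // user stated he does not know it
            int totalUse() { return correctUse + wrongUse; }
        }

        // Keyed by the URI of the concept or relationship in the conceptual model.
        private final Map<String, ResourceHistory> history = new HashMap<>();

        void recordCorrectUse(String uri) { get(uri).correctUse++; }
        void recordWrongUse(String uri)   { get(uri).wrongUse++; }

        private ResourceHistory get(String uri) {
            return history.computeIfAbsent(uri, u -> new ResourceHistory());
        }
    }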

LTCS

LTCS is a conceptual state like the STCS. It is also defined as a triple, but now pointing to a Conceptual model, a set of Domain ontologies and the rest of the User Model (which contains more information besides the user's conceptualization of the domain). LTCS associates a belief value with each domain concept and relation. The belief value is calculated from the conceptual model and the history properties of the STCS and indicates how well a user knows a concept or relationship based on the STCS, and thus based on the dialog interaction with the system. This belief value offers a simple interface through which the rest of the system can use the elicited user conceptualization. The actual calculation of the belief value can be implemented in different ways; refer to Chapter 4 for an example implementation.
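Purely as an illustration (the thesis leaves the exact formula to the implementation), a belief value could be derived from the history counters as a simple ratio of positive evidence to total evidence. The formula and names below are assumptions, not the actual OntoAIMS calculation.

    // Hypothetical belief value in [0, 1] derived from STCS history counters.
    // This is an illustrative heuristic, not the calculation used in OntoAIMS.
    public class BeliefValueSketch {
        static double beliefValue(int correctUse, int wrongUse, int affirmations, int negations) {
            int evidence = correctUse + wrongUse + affirmations + negations;
            if (evidence == 0) {
                return 0.0; // nothing is known about this concept or relation yet
            }
            // Correct uses and affirmations count as positive evidence.
            double positive = correctUse + affirmations;
            return positive / evidence;
        }

        public static void main(String[] args) {
            System.out.println(beliefValue(3, 1, 1, 0)); // prints 0.8
        }
    }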

3.4 User-Adaptive Task Recommendation

Sect. 2.2.2 presents a list of possible applications of the integrated interactive user modeling in OntoAIMS. In this project we use OWL-OLM and the improved user model for enhancing the adaptive task recommendation in OntoAIMS. See Sect. 2.1.1 for a description of (Onto)AIMS and how the Course Task Model relates to the rest of the system. The Course Task Model consists of a hierarchy of tasks. Each course task T is represented as:

(T_ID, T_in, T_out, T_concepts, T_pre)

where T_in is the input from the user's Activity Profile, T_out is the output for the user's Activity Profile and for the STCS based on the user's work in T, T_concepts is a set of domain concepts studied in T, and T_pre indicates the prerequisites for T (e.g. the knowledge level for each concept in T_concepts, other tasks and resources required).

To recommend a certain task from the set (AT) of all tasks defined in a course, OntoAIMS first needs to calculate a set of potential tasks (PT) by checking whether T_in and T_pre are supported by the Activity Profile and by the belief values for the concepts in the LTCS. The interactive user modeling provided by OWL-OLM can be used to extract the required belief values for the concepts relevant to the potential task. OntoAIMS can then check the concept knowledge threshold for the concepts in T_concepts and recommend either to follow the task, if the knowledge is not sufficient, or to skip the task otherwise.
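The sketch below illustrates the filtering and follow/skip decision described above. The CourseTask fields and the threshold value are hypothetical stand-ins for the actual OntoAIMS structures; see Sect. 4.10 for the real implementation.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch of selecting potential tasks and deciding follow/skip; names are hypothetical.
    public class TaskRecommendationSketch {
        static class CourseTask {
            String id;
            Set<String> concepts = new HashSet<>();          // T_concepts: concepts studied in the task
            Set<String> prerequisiteTasks = new HashSet<>(); // part of T_pre
            double requiredBelief = 0.6;                     // hypothetical knowledge threshold per concept
        }

        static List<CourseTask> potentialTasks(List<CourseTask> allTasks, Set<String> finishedTasks) {
            List<CourseTask> potential = new ArrayList<>();
            for (CourseTask t : allTasks) {
                if (finishedTasks.containsAll(t.prerequisiteTasks)) { // T_pre supported by the Activity Profile
                    potential.add(t);
                }
            }
            return potential;
        }

        // Recommend following the task if belief values are below the threshold, skipping otherwise.
        static boolean recommendFollowing(CourseTask t, Map<String, Double> beliefValues) {
            for (String concept : t.concepts) {
                double belief = beliefValues.getOrDefault(concept, 0.0);
                if (belief < t.requiredBelief) {
                    return true; // knowledge not sufficient: the user should work on this task
                }
            }
            return false; // all concepts known well enough: the task can be skipped
        }
    }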

Sect. 4.10 describes the implementation of the task sequencing. See [2] for another description of the OntoAIMS task sequencing in the context of the Semantic Web.


Chapter 4

Architecture and Implementation

This chapter presents a detailed view of OWL-OLM in OntoAIMS, focusing on how the design issues described in Chapter 3 are implemented. We begin by briefly describing the implementation context: how OntoAIMS is implemented and how this influences the implementation of OWL-OLM. We then describe the implementation of the major components of OWL-OLM. For an even more detailed description of the implementation we refer to the source code documentation in Appendix A.

4.1 Implementation Context

OntoAIMS is being implemented in a number of projects besides this project, each updating a part of AIMS. Sect. 3.1 discusses the design of OntoAIMS. Here, we discuss the implementation of that design. One of the main improvements in OntoAIMS is the use of an ontology language instead of a custom domain model to represent domain knowledge. OWL has been chosen as the ontology language used in OntoAIMS (see Sect. 2.2.1 for a discussion of the advantages OWL provides).

OntoAIMS has a client-server architecture, where the information models are kept on the server and most course-related calculations are performed by the server. The client is a web-based application which provides a user interface to the server and can be used to browse the domain concepts as well as to search and view learning resources related to a learning task. Both client and server implementations use Java¹, with Apache Tomcat² as the web server and Java Applets in the client.

¹ See http://java.sun.com
² See http://jakarta.apache.org/tomcat

4.1.1 OntoAIMS Main Packages

Figure 4.1 shows the main packages which implement OntoAIMS. The Base package defines basic utilities (logging and input/output methods) used by the application. Common defines the basic objects (concepts, links, documents, users, courses, etc.) and communication primitives used by both the client and the server packages. The Client package defines the user interface and communicates via the communication primitives with the Server package, which reads and writes from and to the OntoAIMS information models. The Server package also implements the course sequencing, learning resource querying and recommending in OntoAIMS. The OWL-OLM package implements the interactive user modeling and is the main focus of this chapter. Parts of the Base, Common, Client and Server packages are also described when they are relevant to OWL-OLM.

Figure 4.1: Main packages of OntoAIMS

We decided to implement OWL-OLM as a standalone package instead of including it in the Common, Client or Server packages, to allow other systems to easily reuse the OWL-OLM implementation.

4.2 OWL-OLM Main Packages

Figure 4.2 shows the main packages of OWL-OLM. Each of these packages is explained in a section below (except Testing, as it does not add functionality to the system and is mentioned here for completeness). First we give a short description of each of these packages:


Figure 4.2: Main packages of OWL-OLM

• The Base package implements utterances and provides methods for generating a set of question utterances given an OWL ontology, a relation type and a concept.

• The UM package defines conceptual states and provides methods for creating and updating an STCS or LTCS.

• OwlUtils defines methods for managing information in OWL models and ontologies, such as OWL-to-natural-language conversion, comparing OWL resources, comparing OWL models, defining OWL mismatches, etc.

• The UI package provides a graphical user interface that the user can use to interact with the system.

• Games implements the Dialog Agent and defines different types of games which can be used to achieve the dialog episode goal.


• The Statements package defines the types of domain statements the system understands and provides methods for classifying incoming utterances into one of the known types and generating statements of a certain type.

• Finally, the Testing package provides methods for testing OWL-OLM.

Before we continue describing each of these packages, we mention that OWL-OLM uses two external libraries: Jena —which is used by most of the packages in OWL-OLM and is explained below— and JGraph³ —which is used only by the UI package, as described in Sect. 4.9.

³ See http://www.jgraph.com

4.3 OWL and Jena

Section 2.2.1 introduced OWL and ontology languages. This section describes OWL in detail as well as Jena, a Java library for managing OWL (and RDF) documents and models (ontologies). We first give a detailed view of the building blocks of OWL and then describe how Jena represents these building blocks. We also describe how Jena provides a reasoner for OWL models.

4.3.1 OWL Building Blocks

A thorough description of the OWL building blocks is out of the scope of this document; we refer to [31] for this. Here, however, we give a brief description of some basic OWL-related terminology for the sake of clarity, as we use this terminology extensively in the sections below. As OWL is based on RDF, some RDF terminology is also used and explained:

• RDF is “a language for representing information about resources in the World Wide Web.” [34].

• An RDF or Web resource —often simply referred to as 'resource'— was originally meant to represent a Web page or document found in the World Wide Web. This concept, however, has been extended to "represent information about things that can be identified on the Web, even when they cannot be directly retrieved on the Web" [34]. In other words, a resource identifies some type, any type, of information. Note that OntoAIMS defines a different type of resource: learning resources, which are documents the user can view in order to learn about the domain.

• A URI is the address of an RDF resource and uniquely identifies the resource in the World Wide Web.

• RDF properties —often simply referred to as 'properties'— are a special type of RDF resource. They represent a link between two RDF resources.

• RDF triples or RDF statements are composed of three parts: a subject, a predicate and an object, where the subject can be any RDF resource, the predicate can be any RDF property and the object can be represented either by an RDF resource or by data of a certain type, such as a string, integer or float value. Note that we prefer to use the term RDF triples to avoid any confusion between RDF statements and domain statements (see Sect. 3.2.3) or OWL statements (see Sect. 4.5.1). The difference is that RDF statements are always triples, while domain statements and OWL statements can be any description of information which can be translated to a natural language sentence.


• A reified statement is a resource which describes an RDF triple. Reified statements are the only way in RDF to link a whole RDF triple with a resource. This is needed when you need to link the information represented by an RDF triple as a whole to some other information.

• An OWL individual is a resource which represents information which can be verified in the real world by all observers who care to verify the information.

• An OWL class is a set of individuals.

• An OWL object property is a property which represents a link between two classes or individuals.

• An OWL datatype property is a property which represents a link between a class or individual and data of a certain type (string, integer, etc.).

• An OWL property is either an object property or a datatype property.

• An OWL model is a set of RDF triples which define a set of classes, individuals and OWL properties. An OWL model provides no guarantee that the information described is sound (i.e. that it can be verified by an observer following the rules of logic and observations of the world).

• An OWL ontology is an OWL model which has been agreed upon by several observers; this means the information described by the ontology might be sound. The more observers there are (and the more talented the observers are at finding inconsistencies), the more you can trust the ontology to be sound.

4.3.2 Jena: Framework for Building Semantic Web Applications

Jena was originally a Java API for RDF which now also supports OWL. It provides methods for input and output of OWL models from and to files, databases or the World Wide Web. Jena "contains interfaces for representing models, resources, properties, literals, statements and all the other key concepts of RDF" [30] and OWL.

Furthermore, Jena provides methods for navigating and querying OWL models. It also defines some basic operations on models, such as the union, intersection and difference of two models. Finally, Jena provides a set of reasoners for making inferences from and validating OWL models.
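As a brief illustration of the Jena facilities mentioned above, the sketch below builds a small OWL model, combines it with a second model and attaches the built-in OWL reasoner. It uses the modern org.apache.jena package names (the releases available at the time of the thesis used com.hp.hpl.jena instead), and the namespace is invented for illustration.

    import org.apache.jena.ontology.OntClass;
    import org.apache.jena.ontology.OntModel;
    import org.apache.jena.ontology.OntModelSpec;
    import org.apache.jena.rdf.model.InfModel;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.reasoner.ReasonerRegistry;

    public class JenaSketch {
        static final String NS = "http://example.org/linux#"; // hypothetical namespace

        public static void main(String[] args) {
            // Create an OWL model and add two classes plus a subclass link.
            OntModel m1 = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
            OntClass fileOperation = m1.createClass(NS + "FileOperation");
            OntClass movingFile = m1.createClass(NS + "MovingFile");
            movingFile.addSuperClass(fileOperation);

            // A second model; union/intersection/difference are plain Model operations.
            OntModel m2 = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
            m2.createClass(NS + "Command");
            Model union = m1.union(m2);
            System.out.println("Triples in the union: " + union.size());

            // Attach an OWL reasoner to derive inferred triples and validate the model.
            InfModel inferred = ModelFactory.createInfModel(ReasonerRegistry.getOWLReasoner(), m1);
            System.out.println("Valid: " + inferred.validate().isValid());
        }
    }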

4.4 The Base Package

Figure 4.3 shows the Java classes defined by the Base package.

4.4.1 Utterance

As stated in Sect. 3.2.3, utterances consist of an originator, an intention and a domain statement. The originator is defined as an Object which in practice can refer to a DialogAgent (see Sect. 4.6.2) or a User (defined in OntoAIMS). Intentions are described in the following section. Domain statements are implemented as OwlStatement objects.

The statementModel is the raw OWL model provided by Jena. OWL-OLM "understands" the information described in this OWL model if it can convert the model into an OwlStatement (see Sect. 4.5.1). If statementModel cannot be converted, statementModelInvalid is set to true to indicate that the utterance cannot be processed by OWL-OLM.

Figure 4.3: Class diagram for the Base package

This class provides three methods. toString() returns a natural language approximation of the utterance. isEquivalent() returns true if another utterance has the same meaning as this utterance (both the intention and the OWL statement are equivalent). isQuestion() returns true if the utterance asks a question.
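Putting the description above together, a skeleton of the Utterance structure could look as follows. This is a hedged reconstruction from the prose, not the actual OWL-OLM source; the nested Intention and OwlStatement stand-ins only exist to keep the sketch self-contained.

    // Hedged reconstruction of the Utterance structure described above; names and details
    // are assumptions based on the text, not the actual OWL-OLM source.
    public class UtteranceSketch {
        // Minimal stand-ins for the Intention and OwlStatement types described in this chapter.
        enum Intention { ASK, GIVE_ANSWER, AGREE, DISAGREE, SKIP;
            boolean isQuestion() { return this == ASK; }
        }
        interface OwlStatement {
            boolean isEquivalent(OwlStatement other);
        }

        Object originator;             // a DialogAgent or a User
        Intention intention;           // the dialog intention of the utterance
        OwlStatement statement;        // the domain statement (null if the model could not be converted)
        boolean statementModelInvalid; // true if the raw Jena model could not be converted

        @Override
        public String toString() {
            // Natural language approximation of the utterance.
            return intention + ": " + (statement == null ? "..." : statement.toString());
        }

        public boolean isEquivalent(UtteranceSketch other) {
            // Equivalent when both the intention and the OWL statement match.
            return intention == other.intention && statement.isEquivalent(other.statement);
        }

        public boolean isQuestion() {
            return intention.isQuestion();
        }
    }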

4.4.2 Opener

In interactive dialog systems, the intention is expressed with a sentence opener [39]. The Opener class defines a set of sentence openers which are supported by OWL-OLM. This set is shown in Table 4.1. Method toString() returns a string representation of the opener and is used to generate the natural language string of the utterance. This class further defines methods for determining whether the opener indicates a question, suggestion, skip of turn or end of dialog. getAllOpenerTypes() returns a list of the opener types as shown in Table 4.1.

Openers set constraints on the domain statement which can be used together with the opener. For instance, an open question opener must be used together with an OWL statement missing some resource (see Sect. 3.2.4). Method supports() returns whether a given OWL statement can be used together with an opener.

Openers also put restrictions on which types of sentences can follow each other. For instance, after asking a question, you would not expect an agreement as an answer. Method canFollow() returns whether the current opener can follow the previous opener.

Table 4.1: Types of opener and corresponding string representation

Type                              String Representation
INFORM_WHAT_I_THINK_YOU_THINK     You think that
INFORM_WHAT_I_THINK               I think that
INFORM_DISAGREE                   As far as I know, it's not true that
INFORM_AGREE                      Yes,
INFORM_IGNORANCE                  I don't know
INFORM_IGNORANCE_BOUNDARY         Sorry, I can't tell you more about
INQUIRE_BELIEF                    Do you think
INQUIRE_BELIEF_OPEN               Do you know
INQUIRE_MORE_INFO                 Can you tell me more about
INQUIRE_CONFIRMATION              Is it true that
CHALLENGE_REASON                  Why do you think that
CHALLENGE_DOUBT                   I doubt
SUGGEST_TOPIC                     Let us talk about
SKIP_TURN                         ...
END_DIALOG                        Thank you for your time

4.4.3 Intention

This class defines a set of intentions which can be used in utterances. Intentions are coupled to a set of opener objects (the members attribute) which convey the semantics of the intention, as shown in Table 4.2. Since not all combinations of openers and OWL statements are allowed, each intention can check which opener is allowed in combination with a given OWL statement (getSupportedOpener()). Method getDescription() returns a description of the semantics of the intention (used by the user interface). If more than one opener can be chosen in combination with an OWL statement, getDefaultOpener() returns a default opener.

4.5 The Statements Package

Figure 4.4 shows the Java classes defined by the Statements package. This package defines OWL statements, which are the OWL implementation of domain statements (see Sect. 3.2.3). OWLStatement is an interface defining the methods all OWL statements should implement. The package further defines a set of classes which implement the OWLStatement interface (not depicted in Fig. 4.4). Sect. 4.5.1 explains more about OWL statements.

The package further defines the StatementAgent class, which is a utility class that hides the set of OWL statement implementations from the rest of the system. The StatementAgent knows about all OWL statement implementations and uses this knowledge to classify any OWL model (Jena's OntModel object class) into one of the known OWL statement implementations. This works by creating an instance of a StatementAgent passing an OWL model m, then calling the init() method and, finally, calling the getOwlStatement() method to get an OwlStatement based on m. If m cannot be classified into one of the known OWL statements, an OwlStatementNotRecognized exception is thrown.

Table 4.2: Types of intentions and corresponding sentence openers

Type                 Sentence Opener(s)
REPORT               INFORM_WHAT_I_THINK_YOU_THINK
GIVE_ANSWER          INFORM_WHAT_I_THINK
GIVE_OPINION         INFORM_WHAT_I_THINK
INFORM_IGNORANCE     INFORM_IGNORANCE, INFORM_IGNORANCE_BOUNDARY
AGREE                INFORM_I_AGREE
DISAGREE             INFORM_I_DISAGREE
ASK                  INQUIRE_BELIEF, INQUIRE_BELIEF_OPEN, INQUIRE_MORE_INFO, INQUIRE_CONFIRMATION
CHALLENGE            CHALLENGE_REASON, CHALLENGE_DOUBT
SUGGEST_TOPIC        SUGGEST_TOPIC
SKIP                 SKIP_TURN
END_DIALOG           END_DIALOG

4.5.1 OwlStatement Interface

Recall that OWL statements are OWL models describing the domain-related semantics of an utterance. Each OWL statement is linked to one OWL model which describes its semantics. The reason we do not use OWL models directly is that OWL models can be very complex. For instance, every OWL ontology is an OWL model, but it would be almost impossible to state all of the semantics of an ontology in one sentence or utterance. OWL-OLM manages this complexity by defining a set of semantic relations which are supported by the system. This set corresponds to the set of classes implementing the OwlStatement interface and is presented in Sect. 4.5.2. Because each type of OWL statement focuses on one type of semantics, it can easily analyze and provide information needed by OWL-OLM. This is done through the following methods (a sketch of the interface follows the list):

• setAgent() sets the StatementAgent which is managing this OwlStatement. Note that the StatementAgent contains the OWL model which defines the semantics of the OWL statement.

• modelIsValid() returns true if the OWL model provided by the StatementAgent has the semantics corresponding to this type of OwlStatement. If this method returns false, none of the other methods will return valid values for the OWL model.

• toString() returns a natural language representation of this OwlStatement. See Sect. 4.5.1 for more information.

• getMainRDFStatements() returns a set of RDF triples which uniquely identifies the semantics of the OWL statement. See Sect. 4.5.1 for more information.

• isEquivalent() compares an OWL statement with another OWL statement and returns whether both represent the same semantics.

• containsQuestion() returns true if the OWL model contains a question resource indicating an open question (see Sect. 4.6.2).
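The following is a hedged Java sketch of how the interface could look, reconstructed from the method list above. The parameter and return types are assumptions (the agent parameter is typed loosely as Object to keep the sketch self-contained), and RDF triples are represented with Jena's Statement class.

    import java.util.Set;
    import org.apache.jena.rdf.model.Statement;

    // Hedged reconstruction of the OwlStatement interface from the description above;
    // parameter and return types are assumptions, not the actual OWL-OLM signatures.
    public interface OwlStatement {
        void setAgent(Object agent);              // a StatementAgent in the real system; it holds the OWL model
        boolean modelIsValid();                   // does the model match this statement type?
        String toString();                        // natural language representation
        Set<Statement> getMainRDFStatements();    // the main RDF triples identifying the semantics
        boolean isEquivalent(OwlStatement other); // do both statements represent the same semantics?
        boolean containsQuestion();               // does the model contain a question resource?
    }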


Figure 4.4: Statement package class diagram

Full, Single and Empty OWL Statements

The interface provides methods for determining the complexity of the information described by the OWL statement. OWL-OLM discerns three types of complexity. Empty OWL statements do not describe anything and are based on empty OWL models. Single OWL statements describe one single concept (one class, one individual or one property). Full OWL statements describe more than one concept, and all the concepts described are related to each other by a set of relations.

The reason we need to make this distinction is that Intentions and OWL statements need to support each other. For instance, the SKIP intention can only be used in combination with an empty OWL statement, because the idea of SKIP is that you do not have anything domain-related to say, so you allow the other side of the dialog to come up with something to say. The same is true for the END_DIALOG intention. At the other extreme, it is impossible to ask a question without providing some domain-related information, so the ASK intention can only be combined with single or full OWL statements.

Classifying OWL Models into OWLStatement types

StatementAgent objects accept an OWL model and classify the model into an OWL statement type which exactly supports the semantics in the OWL model. Instead of letting the StatementAgent perform a complex analysis on each incoming OWL model, each OWL statement implementation defines a method (modelIsValid()) which analyzes the OWL model and returns whether it contains exactly the semantics supported by the OWL statement type.

The main restriction on OWL models imposed by every OWL statement is that all resources defined in the OWL model are interconnected in some way. Thus, if you represent the OWL model as a graph, you should get only one graph and not two or more disconnected subgraphs.

Recall that each OWL model consists of a set of RDF triples. During the classification we take advantage of the fact that not all RDF triples are equally important. Each OWL statement type defines a small set of RDF triples which define the semantics of the OWL statement type; let us call these triples the main RDF triples. The main RDF triples are composed of different building blocks (classes, individuals and properties) which are defined in the rest of the RDF triples in the OWL model. Let us call these triples the supporting RDF triples. The supporting RDF triples in the OWL model are there only to define resources which are needed to be able to state the main RDF triples. Indeed, the problem the OWL statement implementations solve is that there are several ways to define supporting RDF triples which support the same type of main RDF triples.

In conclusion, what modelIsValid() does is check whether the OWL model contains the set of main RDF triples corresponding to the OWL statement type and whether all other RDF triples are supporting RDF triples. See Sect. 4.5.2 for examples of how this is done in OWL-OLM.
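As an illustration of this check, the following is a hedged sketch of a modelIsValid()-style test for the SubClass statement type described in Sect. 4.5.2, written against the Jena API (modern package names). It is a sketch of the idea, not the actual OWL-OLM code.

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.Statement;
    import org.apache.jena.rdf.model.StmtIterator;
    import org.apache.jena.vocabulary.OWL;
    import org.apache.jena.vocabulary.RDF;
    import org.apache.jena.vocabulary.RDFS;

    // Sketch of a SubClass-style validity check: exactly one main triple
    // (c1 rdfs:subClassOf c2) and only "supporting" triples otherwise.
    public class SubClassCheckSketch {
        static boolean modelIsValid(Model m) {
            int mainTriples = 0;
            StmtIterator it = m.listStatements();
            while (it.hasNext()) {
                Statement s = it.next();
                if (RDFS.subClassOf.equals(s.getPredicate())) {
                    mainTriples++;
                } else if (!isSupporting(s)) {
                    return false; // a triple that is neither main nor supporting
                }
            }
            return mainTriples == 1; // exactly one main RDF triple
        }

        // Supporting triples: class declarations and annotations such as labels and comments.
        static boolean isSupporting(Statement s) {
            if (RDF.type.equals(s.getPredicate()) && OWL.Class.equals(s.getObject())) {
                return true;
            }
            return RDFS.label.equals(s.getPredicate()) || RDFS.comment.equals(s.getPredicate());
        }
    }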

Transforming OWLStatement Objects to Natural Language

Because each OWL statement type defines a separate type of semantics it can represent (see Sect. 4.5.2), each OWL statement type can define a template where the names of the classes and properties related to the OWL statement can be filled in. Furthermore, the templates can take into account information provided by the supporting RDF triples, which introduce small differences in what is being said. For instance, consider the case where the main message of the OWL statement is the relation between a class c and an object property p (c has property p). Supporting RDF triples may state that the property p also has a range of a certain type, in which case a template that mentions the range type can be used. Also, p may be functional, so that only one instance of the range type can be related to c at any one time instead of multiple instances. In this example, the main message is always that c has property p, but the exact relation varies depending on the supporting RDF triples.

4.5.2 Classes Implementing the OwlStatement Interface

Empty

Used in combination with intentions which do not require an OWL statement. The OWL model does not contain any (main or supporting) RDF triples.

SingleClass

Defines a single class c. The OWL model contains a main RDF triple of the form

• c rdf:type owl:Class

Supporting RDF triples only give labels and comments about c, if present at all. The natural language representation is a label (or the class ID if the label is missing).

SingleDatatypeProperty

Defines a single datatype property p without a domain or range. The main RDF triple is of the form

• p rdf:type owl:DatatypeProperty

Supporting RDF triples only give labels and comments about p, if present at all. The natural language representation is a label (or the property ID if the label is missing).


SingleObjectProperty

Defines a single object property p without a domain or range. The main RDF triple is

• p rdf:type owl:ObjectProperty

Supporting RDF triples and the natural language representation are the same as for SingleDatatypeProperty.

InstanceOf

Declares that i is an individual of class c. The main RDF triple is of the form

• i rdf:type c

Supporting RDF triples are labels, comments or other annotations for both i and c. The template we use for the natural language representation is "i is an example of c".

SubClass

States that class c1 is a subclass of class c2. The main RDF triple is of the form

• c1 rdfs:subClassOf c2

Supporting RDF triples are annotation information about both c1 and c2. The template for the NL representation is "every c1 is also a c2". Note that the semantics of this OWL statement type are equivalent to the semantics of the SuperClass OWL statement (described below) when c1 and c2 are swapped.

SuperClass

States that class c1 is a superclass of class c2. The main RDF triple is of the form

• c1 rdfs:superClassOf c2

Supporting RDF triples and NL representation are analogous to the SubClass OWL statement.

DatatypeProperty

Defines that resource r has a datatype property p. The main RDF triple is of the form:

• p rdf:domain r

Supporting RDF triples define and annotate property p and resource r, which can be a class or individual. The basic template for the NL representation is "r has p". The range of p is considered a supporting RDF triple and may not be defined at all. The range is not used for natural language generation as it describes a datatype and we choose not to present this information to users (who tacitly know that a name is represented as a string, or a numerical value by an integer or float, but may not know about the formalizations used in computer systems for these datatypes).


ObjectProperty

Defines that resource c has an object property p. The main RDF triples are:

• p rdf:domain c

• p rdf:range r

Note that in this case the range of p is very important, as it describes the exact relation between c, p and r. Both c and r can be classes or individuals, so supporting RDF triples define and annotate property p and the classes or individuals c and r. The template for the NL representation is "c has r as its p". This template may change depending on whether the supporting RDF triples indicate p is functional or inverse functional.

QueryInstance

Declares that i is an individual of class c and that i is a question. The main RDF triples are:

• i rdf:type c

• i rdf:type swaleQ:QuestionResource

Supporting RDF triples define and annotate c and annotate i. The NL template is "an example of c".
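To make the question encoding concrete, the sketch below builds a QueryInstance-style model with Jena: a question individual typed both as the class being asked about and as swaleQ:QuestionResource. The namespace URIs are invented for illustration; only the swaleQ prefix and the main triples come from the description above.

    import org.apache.jena.ontology.OntModel;
    import org.apache.jena.ontology.OntModelSpec;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Resource;
    import org.apache.jena.vocabulary.RDF;

    // Sketch of building a QueryInstance-style question model; namespace URIs are assumptions.
    public class QueryInstanceSketch {
        static final String DOMAIN_NS = "http://example.org/linux#";  // hypothetical domain namespace
        static final String SWALEQ_NS = "http://example.org/swaleQ#"; // hypothetical swaleQ namespace

        public static void main(String[] args) {
            OntModel m = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
            Resource fileClass = m.createClass(DOMAIN_NS + "File");
            Resource questionResource = m.createResource(SWALEQ_NS + "QuestionResource");

            // The question individual i: "an example of File" (main triples from Sect. 4.5.2).
            Resource i = m.createResource(DOMAIN_NS + "question1");
            m.add(i, RDF.type, fileClass);        // i rdf:type c
            m.add(i, RDF.type, questionResource); // i rdf:type swaleQ:QuestionResource

            m.write(System.out, "TURTLE");
        }
    }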

QueryClassOfInd

Describes that i is an instance of c and c is a question. The main RDF triples are:

• i rdf:type c

• c rdf:type swaleQ:QuestionResource

Supporting triples annotate i and c.

QueryDtPropDomain

Defines that class c has a datatype property p and c is a question. The main RDF triples are

• p rdf:domain c

• c rdf:type swaleQ:QuestionResource

Supporting triples annotate p and c and may define range and properties for p.

QueryDtPropValue

Individual i has value v for its property p and v is a question. The main RDF triples are

• i p v

• v rdf:type swaleQ:QuestionResource


4.6 The Games Package

Fig. 4.5 shows the Games package. This package defines dialog games (Sect. 3.2.2) and implements four dialog games used in OWL-OLM. Games also implements the dialog agent (Sect. 3.2.1) as DialogGameAgent for managing the dialogs with the user. DialogGameAgent uses a GameAnalyzer for performing analysis on incoming utterances and deciding the next utterance of the system. Systems integrating OWL-OLM as an interactive user modeling component need to implement the DialogGameEnvironment, which defines the interface between the DialogGameAgent and the rest of the system. The rest of this section describes the DialogGame interface and the classes which implement it, the DialogGameAgent and the GameAnalyzer.

Figure 4.5: Games package class diagram


4.6.1 DialogGame Interface and Implementing Classes

The DialogGame interface and the BaseDialogGame, SingleStatementDG, EndingDG, RapidProbeDG and ExtensiveProbeDG classes implement the dialog game concept explained in Sect. 3.2.2.

The DialogGame interface defines the methods all dialog games should provide. We give a short explanation of the most important methods; a sketch of the interface follows the list:

• getUtterances() returns a set of utterances which achieve the postconditions R of the dialog game. These utterances are generated by the set of rules U, which are defined by the implementing classes.

• nextUtterance() and hasNextUtterance() can be used as an iterator over the set of utterances generated by the dialog game.

• hasMainTopic() indicates whether a given concept or relation—represented by a resource—is a member of the set C of domain concepts and relations targeted by the game.

• getScope() returns an OwlScope (see Sect. 4.7) which corresponds with set S of the dialog game. This is the set of all concepts and relations which appear in all the utterances generated by this dialog game.

• gerundExplanation() returns a string representation explaining what the goal of the dialog game is. This is used for explaining to users what the dialog agent is doing.
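The following is a minimal sketch of how the DialogGame interface could be declared, based on the methods listed above. The parameter and return types (the project's own Utterance and OwlScope classes, Jena's OntResource) and the use of a raw Set are assumptions; the actual signatures are in the source code.

import java.util.Set;

import com.hp.hpl.jena.ontology.OntResource;

public interface DialogGame {

    /** All utterances which achieve the postconditions R of this game. */
    Set getUtterances();

    /** Iterator-style access to the generated utterances. */
    Utterance nextUtterance();
    boolean hasNextUtterance();

    /** True if the given resource belongs to the set C of main topics. */
    boolean hasMainTopic(OntResource resource);

    /** The scope S: all concepts and relations appearing in the utterances. */
    OwlScope getScope();

    /** Human-readable explanation of what this dialog game is doing. */
    String gerundExplanation();
}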

BaseDialogGame

BaseDialogGame is an abstract class which gives a partial implementation of the DialogGame interface. This partial implementation is reused by the classes which extend BaseDialogGame. It defines the following attributes:

• domainModel is a link to the domain ontology. This model is needed for generating the utterances.

• uts is the Vector which contains the generated utterances.

• utIt is an iterator over uts; BaseDialogGame implements the nextUtterance() and hasNextUtterance() methods using this iterator.

• scope contains the OwlScope for the dialog game.

• mainTopics contains the main topics (set C) for this dialog game.

• o2nl defines an OwlToNaturalLanguage object which can be used to generate natural language sentences for the generated utterances.

EndingDG

EndingDG is a very simple dialog game which generates only one utterance, with intention END_DIALOG.


SingleStatementDG

SingleStatementDG is another very simple dialog game. It generates a single utterance by accepting the components of the utterance: an originator, an intention and an OWL statement. This dialog game type is used in OWL-OLM for answering questions.

ExtensiveProbeDG

This dialog game takes a resource (an OWL class or individual) and generates a set of utterances which ask questions about that class or individual. For classes, it generates questions about the following categories of relations:

• class-specific properties, that is, properties which are defined for this class but not for a parent class.

• the parent classes of this class.

• the properties inherited from parents of this class.

• children classes of this class.

• instances of this class (individuals).

For each of these categories, open questions and yes/no questions can be asked. Open questions are always generated. Yes/no questions are composed by searching the domain ontology for resources which are related to the class.

For individuals, a similar set of categories is used:

• classes the individual is a direct instance of. If the individual is declared to be an instance of a class, we call this a direct class. The individual is also an instance of the parent classes of this direct class, but these are not taken into account here.

• properties specific to the individual or a direct class.

• properties not specific to the individual or a direct class, that is, properties defined for parent classes of the direct class.

RapidProbeDG

This dialog game is a variation on ExtensiveProbeDG. The generation process is the same, but RapidProbeDG provides methods for setting a maximum number of generated utterances per category of questions asked. This results in a smaller number of generated utterances.

4.6.2 DialogGameAgent

The DialogGameAgent class implements the concept described in Sect. 3.2.1.


Attributes

We first explain the most important attributes defined in this class:

• cg is the current dialog game. At any time during a dialog, a dialog game is being played; this attribute indicates which one.

• activeGames is a set of dialog games which need to be played. It functions as a stack: when a dialog episode starts, the dialog agent determines which dialog games need to be played in order to achieve the dialog episode goal and adds these games to the activeGames set. Also, when a dialog game is interrupted by another dialog game, the interrupted game is added to this set. When the current dialog game finishes, a game from this set is selected to become the new current dialog game.

• finishedGames is the set of games which have been finished. A dialog game finishes when all the utterances generated by the dialog game have been sent to the user.

• utHistory is the set of all utterances which have been sent or received during the dialog episode. This set is used to avoid repetition of utterances and questions which have been stated previously.

• gAnalyzer is an instance of the GameAnalyzer (see Sect. 4.6.3), used to analyze incoming utterances and decide the next move of the dialog agent.

• dm is the domain model or domain ontology used by OWL-OLM. This model is needed for generating utterances and checking whether utterances are supported by the domain ontology.

• fullDm is the domain ontology, but filtered through the Jena reasoner. This results in the same ontology extended with the inferences which can be made from the original ontology. Reasoning on ontologies is computationally expensive, and most reasoners cannot provide fast reasoning about a small subset of the model, as needed for checking whether an utterance is supported by the domain ontology. To make this reasoning faster, we choose to calculate a full domain model at the beginning of the dialog episode and reuse this model to determine domain support of an utterance.

• env is the DialogGameEnvironment interface. To allow for easy integration of OWL-OLM inside Semantic Web systems, this interface determines which methods those systems need to implement to integrate OWL-OLM. Among other things, dialog game environments should provide the domain ontology in use and the LTCS of the user.

• stcs is the short term conceptual state maintained during the dialog episode.

• updatedLtcs is the LTCS which results from combining the original LTCS with the extracted STCS.

• dgUI is the main user interface component defined by the UI package.

• o2nl is an instance of OwlToNaturalLanguage to generate natural language representations of utterances.


Methods

Next, we explain the most important methods defined by the DialogGameAgent class:

• init() initializes the dialog agent, generating the full domain ontology and calculating which dialog games should be created to achieve the dialog episode goal.

• receiveUtterance() receives an utterance produced by the user, analyzes it and returns an utterance produced by the dialog agent.

• endDialog() ends the dialog episode.

• getNextUtterance() returns the next utterance from the current dialog game.

• getName() returns the name of the dialog agent. This name is used in the user interface as the name of the system that is taking part in the dialog with the user.

• handleAnalysisResult() takes an analysis result and decides whether the corresponding utterance should result in an update to the STCS or a change of the current dialog game. See Sect. 4.6.3 for a list of analysis results.

• updateSTCS() updates the STCS using the given utterance.

• getGameOnTopic() returns a dialog game which has the same scope as the given topic. If no active dialog game can be found, a default dialog game is created.

• getActiveGameOn() returns a dialog game, from the set of active dialog games, which has the given resource as one of its main topics.

• createDefaultGameOnTopic() creates a default dialog game for a given resource; every dialog agent should be able to do this. This dialog game is used when all active dialog games have been played but the dialog episode goal has not been achieved yet.

• getDefaultGame() is analogous to createDefaultGameOnTopic(), except that this method first searches for a suitable topic in the domain ontology.

• answerQuestionGame() takes an utterance from the user asking a question (which has already been analyzed and verified as a question) and looks for an answer to the question. If an answer can be found, it is presented in a new dialog game.

• updateLtcs() is called at the end of the dialog episode. This method uses the built STCS to update the LTCS as described in Sect. 4.8.3.

As the explanations of the attributes and methods in DialogGameAgent show, this class performs several tasks. For easier reference, the rest of this section explains how the attributes and methods are combined to perform the main tasks.


Receiving and Sending Utterances

Fig. 4.6 shows how incoming utterances are handled. The actual analysis of the incoming utterance is performed by a GameAnalyzer (see Sect. 4.6.3). The analysis result is handled as follows (a sketch of this handling is given after the list):

• OK: The STCS is updated with the contents of the utterance. The current game is not changed.

• IS_END_REQUEST: An EndingDG instance is set as the current game. This ends the dialog episode.

• IS_A_SKIP_TURN: nothing is done. The current game provides a new utterance.

• IS_A_SUGGESTION: a dialog game related to the suggested concept is set as the current game.

• IS_A_QUESTION: A dialog game is created which answers the asked question.

• NOT_IN_GAME_SCOPE: The STCS is updated with the contents of the utterance. A message is shown to the user to indicate that OWL-OLM was not expecting this utterance, as it is not in the scope of the current dialog game.

• NOT_IN_REPLY_SCOPE: The STCS is updated with the contents of the utterance. A message is shown to the user to indicate that the utterance is not related to the previous utterance (although it is related to the current game).

• WRONG_REPLY_INTENTION: The STCS is updated with the contents of the utterance. A message is shown to indicate that the intention of the received utterance cannot follow the intention of the previous utterance.

• CONTAINS_MISMATCH: The STCS is updated with the mismatch found in the utterance. The agent decides whether to create a dialog game for clarifying the found mismatch or to continue with the current dialog game.
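The following is a minimal sketch of how handleAnalysisResult() could map these analysis results to the behavior described above. The integer constants on GameAnalyzer, the EndingDG constructor, the way the suggested topic is obtained and the dgUI.showMessage() call are assumptions; only the overall control flow follows the list above.

private void handleAnalysisResult(int result, Utterance ut) {
    switch (result) {
        case GameAnalyzer.OK:
            updateSTCS(ut);                       // keep the current game
            break;
        case GameAnalyzer.IS_END_REQUEST:
            cg = new EndingDG(this);              // assumed constructor; ends the dialog episode
            break;
        case GameAnalyzer.IS_A_SKIP_TURN:
            break;                                // the current game provides the next utterance
        case GameAnalyzer.IS_A_SUGGESTION:
            cg = getGameOnTopic(ut);              // assumed signature: switch to a game on the suggested topic
            break;
        case GameAnalyzer.IS_A_QUESTION:
            cg = answerQuestionGame(ut);          // answer the question in a new dialog game
            break;
        case GameAnalyzer.NOT_IN_GAME_SCOPE:
        case GameAnalyzer.NOT_IN_REPLY_SCOPE:
        case GameAnalyzer.WRONG_REPLY_INTENTION:
            updateSTCS(ut);
            dgUI.showMessage("Unexpected utterance");  // hypothetical UI call: warn the user
            break;
        case GameAnalyzer.CONTAINS_MISMATCH:
            updateSTCS(ut);                       // record the mismatch
            // possibly switch to a clarification game about the mismatch
            break;
    }
}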

Updating the STCS

Updating the STCS is done by calling the method updateSTCS(). This method is implemented as shown in Listing 4.1. Depending on the intention of the received utterance, the OWL statement is added to or removed from the STCS. See Sect. 4.8.1 for information about the conceptual state methods for adding or removing OWL statements.

Listing 4.1: updateSTCS() method implementation

/**
 * Use <code>Utterance</code> <i>ut</i> to update the user's
 * short term conceptual state.
 */
private void updateSTCS(Utterance ut) {
    switch (ut.getIntention()) {
        case Intention.INFORM_WHAT_I_THINK_YOU_THINK:
            // don't update
            break;
        case Intention.INFORM_WHAT_I_THINK:
            // add whatever statements the user makes
            stcs.add(ut.getStatementModel());
            break;
        case Intention.INFORM_I_DISAGREE:
            // remove whatever statements the user makes
            stcs.rem(ut.getStatementModel());
            break;
        case Intention.INFORM_I_AGREE:
            // add/reinforce user statements
            stcs.add(ut.getStatementModel());
            break;
        case Intention.INFORM_IGNORANCE:
            // remove statements
            stcs.rem(ut.getStatementModel());
            break;
        case Intention.CHALLENGE_DOUBT:
            // do nothing?
            stcs.rem(ut.getStatementModel());
            break;
        case Intention.SUGGEST_TOPIC:
            // do nothing
            break;
        case Intention.INFORM_IGNORANCE_BOUNDARY:
            // do nothing
            break;
        case Intention.END_DIALOG:
            // do nothing
            break;
        case Intention.INQUIRE_BELIEF:
            // do nothing
            break;
        case Intention.INQUIRE_BELIEF_OPEN:
            // do nothing
            break;
        case Intention.INQUIRE_MORE_INFO:
            // do nothing
            break;
        case Intention.INQUIRE_CONFIRMATION:
            // do nothing
            break;
        case Intention.CHALLENGE_REASON:
            // do nothing
            break;
        default:
            Log.log("Cannot update STCS with this intention!!");
    }
}


Maintaining the Dialog Focus

The dialog focus is maintained by combining dialog games and scopes of concepts. Each dialog game defines a small set of main topics (concepts) and a larger set of all concepts related to the dialog game. All utterances in the same dialog game already maintain the focus because they are all related to the main topics of the dialog game.

The dialog agent still needs to try to maintain focus when there is a transition from one dialog game to the next. Whenever cg, the current dialog game, ends, a new one needs to be selected. Usually there are enough dialog games in the stack of active dialog games. When choosing a dialog game from this stack, the order of the dialog games (first-in first-out) already maintains some coherence. To be absolutely sure, the scopes of the dialog games in the stack can be compared to the dialog game which just finished. When the stack of active dialog games is empty, a default active game can be created using a concept related to the previous cg.

Ensuring Dialog Continuity

This is achieved by the correct handling of incoming utterances as described above. Furthermore, the dialog agent sends messages to users informing them about changes in dialog games, mismatches found, utterances which are out of scope, etc.

Avoiding Repetition of Utterances

The utHistory attribute is used to avoid repeating questions or asking questions that the user has already answered. When the method nextUtterance() is called, the utterances in the dialog game are compared to the utterances in utHistory, using the isEquivalent() method of the Utterance class. If an equivalent utterance is found, the question is skipped and the next utterance in the dialog game is tried.
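A minimal sketch of this duplicate check, assuming utHistory exposes a standard Java iterator and that getNextUtterance() is where the check takes place; the exact control flow in the source code may differ.

private Utterance getNextUtterance() {
    while (cg.hasNextUtterance()) {
        Utterance candidate = cg.nextUtterance();
        boolean alreadyCovered = false;
        for (java.util.Iterator it = utHistory.iterator(); it.hasNext(); ) {
            Utterance previous = (Utterance) it.next();
            if (candidate.isEquivalent(previous)) {
                alreadyCovered = true;   // an equivalent utterance was already sent or received
                break;
            }
        }
        if (!alreadyCovered) {
            return candidate;            // first utterance not yet covered in the dialog
        }
    }
    return null;                         // the current game has no new utterances left
}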

Answering Questions

Questions are implemented as utterances with an ASK intention (see Sect. 4.4.3). There are two types of questions: open questions and yes/no questions. The amount of information provided by the OWL statement in the utterance determines what type of question is being asked. In yes/no questions, all the needed information is provided in the OWL statement, and the question is whether the OWL statement is true or not according to the system. In open questions, the model is missing one or more resource definitions or relations. The OWL statement does not contain a meaningful OWL model, in the sense that you cannot determine whether the OWL model depicts something which can be verified in the real world. The question then is whether the receiver can fill in a concept or relationship in the model for which the OWL statement becomes meaningful.

To implement open questions, we define them as utterances which contain OWL statements with a special resource type: a question resource. A question resource is any resource in an OWL model which is marked as being the same as the resource swaleQ:Question_Resource. This means we can mark any resource R in an OWL statement as a question resource by adding the following RDF triple to the OWL model:

(R owl:sameAs swaleQ:Question_Resource)

(This resource has been defined within the SWALE project and can be found at http://wwwis.win.tue.nl/~swale/swale_question.)
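A minimal sketch of marking a resource as a question resource with the Jena API of that period (com.hp.hpl.jena packages). The full URI of swaleQ:Question_Resource is an assumption based on the URL above; the real constant lives in the QuestionResource class.

import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.vocabulary.OWL;

public class QuestionMarkerSketch {

    // assumed expansion of the swaleQ:Question_Resource URI
    static final String QUESTION_RESOURCE_URI =
        "http://wwwis.win.tue.nl/~swale/swale_question#Question_Resource";

    /** Adds the triple (r owl:sameAs swaleQ:Question_Resource) to the statement model. */
    public static void markAsQuestion(OntModel statementModel, Resource r) {
        Resource questionResource =
            statementModel.createResource(QUESTION_RESOURCE_URI);
        statementModel.add(r, OWL.sameAs, questionResource);
    }
}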


Questions can be answered by transforming them to queries and feeding these to any OWL query answering mechanism. OWL-OLM uses Jena [9] to find triples in the domain ontology which match the query. Listing 4.2 shows the implementation of the answerQuestionGame() method.

Listing 4.2: answerQuestionGame() method implementation

/**
 * Returns a <code>DialogGame</code> which contains a reply to the question
 * stated in <code>Utterance</code> <i>ut</i>. The reply can be either an
 * answer to the question or an <code>Utterance</code> stating that the
 * system cannot answer this question.
 */
private DialogGame answerQuestionGame(Utterance ut) {
    if (QuestionResource.hasQuestionResource(ut.getStatementModel())) {
        // make a query based on the statement model, get an answer from
        // the domain model and reply with that answer
        OntModel answer = QuestionResource.answerQuestion(dm,
                ut.getStatementModel());
        if (answer == null) {
            Log.log("\t No answer could be found for this question");
            return new SingleStatementDG(this,
                    Intention.INFORM_IGNORANCE,
                    ut.getStatementModel());
        } else {
            Log.log("\t An answer was found for this question...");
            return new SingleStatementDG(this,
                    Intention.INFORM_WHAT_I_THINK,
                    answer);
        }
    } else {
        // check whether the statement model is supported by the domain model
        OwlModelComparator omc = new OwlModelComparator(fullDm);
        if (omc.isSupported(ut.getStatementModel())) {
            Log.log("\t question is supported by domain model...");
            return new SingleStatementDG(this,
                    Intention.INFORM_WHAT_I_THINK,
                    ut.getStatementModel());
        } else {
            Log.log("\t question is not supported by domain model...");
            return new SingleStatementDG(this,
                    Intention.INFORM_IGNORANCE,
                    ut.getStatementModel());
        }
    }
}

If the utterance is a yes/no question, the agent checks whether the utterance is supported by the domain ontology. This is done using the isSupported() method of the OwlModelComparator class described in Sect. 4.7.4. If the model is supported, the agent answers that the OWL statement of the utterance is correct; otherwise, the user is told that his question cannot be answered.


In the case of an open question, the same is attempted, with the difference that the system first has to find a suitable resource to take the place of the question resource. If no resource can be found, the user's question cannot be answered. The job of finding an answer is performed by the answerQuestion() method of the QuestionResource class described in Sect. 4.6.2.

4.6.3 GameAnalyzer

The dialog agent needs to analyze incoming utterances to decide the next move in the dialog. This analysis is implemented by the GameAnalyzer class.

Analysis Results

GameAnalyzer defines a set of possible analysis results. These results form the interface between the GameAnalyzer and the DialogGameAgent. The defined analysis results are:

• OK: nothing problematic was found in the utterance. This is the default analysis result if none of the other analysis results apply.

• NOT_IN_GAME_SCOPE: the scope of the utterance does not correspond with the scope of the current dialog game. This means the dialog agent may need to change the current dialog game.

• NOT_IN_REPLY_SCOPE: the scope of the utterance does not correspond with the scope of the previous utterance. In coherent dialogs, each utterance is a continuation of the previous utterance. This analysis result indicates a change in topic.

• WRONG_REPLY_INTENTION: the intention of the utterance does not correspond with the intention of the previous utterance. Again, this indicates incoherence in the dialog which the dialog agent may need to address.

• IS_A_SUGGESTION: the utterance suggests a new topic (scope) for the rest of the dialog. This differs from NOT_IN_REPLY_SCOPE in that the intention of the utterance already states that it is a suggestion, so it does not matter if the domain-related content of the utterance is not in the same scope as the previous utterance.

• IS_A_QUESTION: the utterance asks a question. The dialog agent should probably try to answer the question.

• IS_END_REQUEST: the utterance indicates that the originator of the utterance wants to end the dialog episode.

• IS_A_SKIP_TURN: the originator of the utterance wants to skip his/her/its turn to produce an utterance. The dialog agent can take the initiative in the dialog.

• CONTAINS_MISMATCH: the analyzed utterance is not supported by the domain ontology and contains a mismatch.


Attributes

The GameAnalyzer defines three attributes needed for performing the analysis. The currentGame is needed for determining whether the analyzed utterance respects the scope of the current dialog game. The previousUtterance is used to determine whether the analyzed utterance respects the dialog coherence. The omc is an OwlModelComparator (described in Sect. 4.7.4) which determines whether the utterance is supported by the domain ontology; if this is not the case, the omc can return a set of mismatches between the analyzed utterance and the domain ontology.

Methods

The main methods defined by the GameAnalyzer are:

• analyze(ut, dm): analyzes utterance ut and compares it with the domain ontology dm. It returns an analysis result.

• analyzeMismatch(): returns the set of mismatches found between the analyzed utterance and the domain ontology.

Most other methods displayed in Fig. 4.5 perform analysis and comparisons between the analyzed utterance and the previous utterance or the current dialog game. Features analyzed and compared are the intention and scope of the utterance. We refer to the source code documentation or the source code itself for more details. A method which performs more complex analysis is isDomainSupported(). This method calls the isSupported() method of the omc attribute and is explained in Sect. 4.7.4.

4.7 The OwlUtils Package

The OwlUtils package contains classes which perform OWL-related computations that are not necessarily dialog-specific and that could be useful to other OWL-based applications. This package can be seen as an extension of the Jena library. Figure 4.7 shows the classes defined by this package. First we give a brief overview of the classes; the rest of this section expands on the implementation of the most important ones.

• OwlUtils (the class, not the package) contains methods often needed in the rest of OWL-OLM which did not belong in any of the other classes defined in this package and which were also not already provided by Jena. Examples of methods provided by this class are:

– copyLabels() copies all rdfs:label values from one OWL resource to another.

– subClassRelationDeclared(Vector classes) determines whether there is an rdfs:subClassOf relationship between two of the classes in a set of classes.

– getInstanceOf(OntClass oc) returns an individual of the class oc.

• SubModelMaker contains methods for creating an OWL model based on a small set of RDF triples. This class is used to create OWL statements from the domain ontology.

• OwlToNaturalLanguage provides some basic natural language generation for resources represented in OWL.


• OwlScope defines an OWL scope, a set of OWL resources. It defines methods for creating such a set and for determining whether a certain resource is a member of the scope. The implementation is trivial, so we refer to the source code for implementation details.

• OwlResourceComparator provides methods for testing whether two OWL resources are equivalent to each other.

• OwlModelComparator provides methods for comparing two OWL models. The main comparison performed is checking whether one OWL model is supported by the other. When this is not the case, it provides methods for listing the set of differences (mismatches) between the models.

• OwlMismatch defines the set of possible differences there can be between two OWL models. It further provides methods for classifying an RDF triple into one of the defined categories of mismatches.

4.7.1 SubModelMaker

Sect. 4.5.1 describes how OWL statements are implemented and how OWL statements are linked to OWL models with specific properties. That section further explains that those OWL models are composed of main RDF triples (those which state the main semantics of the OWL model) and supporting RDF triples (triples that define the resources used in the main RDF triples). Sect. 4.5.1 does not mention how these OWL models are created. When users create utterances, they use a graphical user interface to create these OWL models (as Sect. 4.9 explains). When OWL-OLM creates utterances, the OWL models need to be created by specifying the meaning of the utterance (the main RDF triples) based on the domain ontology. The SubModelMaker class provides methods for building OWL models suitable for OWL statements based on RDF triples from the domain ontology.

We define a submodel as an OWL model suitable for an OWL statement, based on a larger model. In order to build such a submodel, OWL-OLM chooses a set of main RDF triples from the domain ontology. These triples are added to SubModelMaker using the addTriple() method. Once all the main RDF triples have been added to the SubModelMaker, OWL-OLM calls the getSubModel() method, which returns the required OWL model.

In terms of main and supporting RDF triples, SubModelMaker accepts a set of main RDF triples and calculates the supporting RDF triples by determining which RDF triples are needed and copying these from the domain ontology. The resulting OWL model fully describes the semantics of the main RDF triples because it declares all the resources it uses.
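A minimal usage sketch of SubModelMaker, assuming a constructor that takes the domain ontology, that main triples are Jena Statement objects, and that getSubModel() returns an OntModel; the method names addTriple() and getSubModel() are the ones described above.

import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.Statement;

public class SubModelMakerUsageSketch {

    /** Builds the OWL model for one OWL statement from a main RDF triple. */
    public static OntModel buildStatementModel(OntModel domainOntology,
                                               Statement mainTriple) {
        SubModelMaker maker = new SubModelMaker(domainOntology); // assumed constructor
        maker.addTriple(mainTriple);    // the main RDF triple(s) chosen from the domain ontology
        return maker.getSubModel();     // supporting triples are copied in automatically
    }
}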

4.7.2 OwlToNaturalLanguage

The OwlToNaturalLanguage class defines methods for generating (very basic) natural language representations of OWL resources.

OwlToNaturalLanguage uses the rdfs:label (part of the RDF Schema vocabulary that OWL builds on) to get the preferred representation of a given resource. Some resources do not declare a label, in which case the local ID of the resource node is used. This is not ideal, as the value of the ID may not reflect the semantics of the resource itself.

This class provides three kinds of representation for each resource. The basic representation is returned by the resourceToString() method, which just returns the label (or ID) for the node.


Figure 4.6: Receiving and sending utterances

Figure 4.7: Class diagram for the OwlUtils package


Method a() returns a string of the form “a resourceName”, thus adding the literal “a” before the name of the resource. Method plural() returns the plural of the resource name; this method adds an “s” at the end of the label and takes exceptions into account.

Note that this class is a very simple implementation and very often does not produce adequate natural language representations. It is also based on the assumption that OWL ontologies provide rdfs:label values for the defined resources, which may not always be the case. Unfortunately, the OWL standard does not impose any restrictions on the semantics of the rdfs:label property. We assume the rdfs:label value contains the singular (not plural) name of the resource. Also, if a resource defines more than one label, OwlToNaturalLanguage chooses one of the labels. See the source code documentation or the source code itself for more implementation details.
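A minimal sketch of the three representations, assuming Jena's OntResource.getLabel() is used for the label lookup; the real class additionally keeps a list of irregular plural forms, which is only hinted at here.

import com.hp.hpl.jena.ontology.OntResource;

public class OwlToNaturalLanguageSketch {

    /** Label of the resource, or its local ID when no rdfs:label is declared. */
    public String resourceToString(OntResource r) {
        String label = r.getLabel(null);   // picks one of the declared labels, any language
        return (label != null) ? label : r.getLocalName();
    }

    /** "a" + resource name, e.g. "a file". */
    public String a(OntResource r) {
        return "a " + resourceToString(r);
    }

    /** Naive plural: append "s"; irregular forms would be handled as exceptions. */
    public String plural(OntResource r) {
        return resourceToString(r) + "s";
    }
}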

4.7.3 OwlResourceComparator

In OWL, the same concept can be defined multiple times at different places (different URIs). A disadvantage of this is that deciding whether two resources are the same is not trivial, since it is not enough to just check the URIs of the resources. Deciding whether two resources refer to the same concept involves looking for owl:sameAs and owl:equivalentClass relations between resources. Jena does not provide any methods for doing this quickly and automatically. We can and do use the reasoner in Jena, since it can follow long chains of owl:sameAs or owl:equivalentClass relations, but, as we stated earlier, this reasoner also makes several other inferences which are not relevant and is not very efficient. OwlResourceComparator defines the method isEquivalent(OntResource r1, OntResource r2), which returns whether two resources are equivalent.
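A minimal sketch of such an equivalence test: identical resources, or a directly declared owl:sameAs or owl:equivalentClass link in either direction. Chains of such links (which only the reasoner would follow) are deliberately not handled, and the actual implementation may differ.

import com.hp.hpl.jena.ontology.OntResource;
import com.hp.hpl.jena.vocabulary.OWL;

public class OwlResourceComparatorSketch {

    public boolean isEquivalent(OntResource r1, OntResource r2) {
        if (r1.equals(r2)) {
            return true;   // same URI or same node
        }
        // direct declarations in either direction
        return r1.getModel().contains(r1, OWL.sameAs, r2)
            || r1.getModel().contains(r2, OWL.sameAs, r1)
            || r1.getModel().contains(r1, OWL.equivalentClass, r2)
            || r1.getModel().contains(r2, OWL.equivalentClass, r1);
    }
}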

4.7.4 OwlModelComparator

The OwlModelComparator class defines methods for comparing two OWL models. When you create an instance of this class, you need to pass it an OWL model, which we call the base model of the OwlModelComparator; in OWL-OLM this is usually the domain ontology. You can then use the isSupported(OntModel om) method to determine whether the base model supports om. If the model is not supported, there must have been one or more mismatches. These mismatches can be retrieved by calling the getMismatches() method.

The isSupported(OntModel om) method checks whether each RDF triple in om is supported by the OWL model linked to the OwlModelComparator. An RDF triple is supported by OWL model M if M defines the RDF triple or if the RDF triple can be inferred from the triples in M. Since making inferences is computationally expensive, OwlModelComparator does not perform any inference and only checks whether the RDF triple is present in the base model. In OWL-OLM we always use the domain ontology which has already been filtered through the Jena reasoner, to avoid making the same inferences multiple times (see attribute fullDm in Sect. 4.6.2).
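A minimal sketch of this support check, assuming the base model is the pre-inferred domain ontology and that mismatches are simply collected as the unsupported triples (the real class wraps each of them in an OwlMismatch, described below).

import java.util.Vector;

import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.rdf.model.Statement;
import com.hp.hpl.jena.rdf.model.StmtIterator;

public class OwlModelComparatorSketch {

    private final OntModel baseModel;              // usually the inferred domain ontology
    private final Vector mismatches = new Vector();

    public OwlModelComparatorSketch(OntModel baseModel) {
        this.baseModel = baseModel;
    }

    /** True when every RDF triple of om occurs literally in the base model. */
    public boolean isSupported(OntModel om) {
        mismatches.clear();
        for (StmtIterator it = om.listStatements(); it.hasNext(); ) {
            Statement triple = it.nextStatement();
            if (!baseModel.contains(triple)) {
                mismatches.add(triple);            // unsupported triple = mismatch
            }
        }
        return mismatches.isEmpty();
    }

    public Vector getMismatches() {
        return mismatches;
    }
}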

Whenever an RDF triple is not supported by the base model of the OwlModelComparator, this represents a mismatch between the models. The RDF triple and the base model are used to determine the type of the OWL mismatch, as explained below.


4.7.5 OwlMismatch

We define an OWL mismatch as any RDF triple in an OWL model M1 which cannot be found in another OWL model M2. The OwlMismatch class can be used to represent such a mismatch between two OWL models. OwlMismatch defines methods for defining and classifying mismatches given an RDF triple and a model (M2 from the definition).

By defining mismatches at the RDF triple level we can determine the type of mismatch by using the basic building blocks of OWL. The semantics of a mismatch is that a resource or relationship in the OWL statement is not supported by the domain ontology. Table 4.3 shows the types of mismatches which are defined in OwlMismatch.

Table 4.3: Types of mismatches defined in OwlMismatch

Type                          Comment
UNKNOWN_MISMATCH              None of the other types apply
ONTCLASS_UNSUPPORTED          A class cannot be found
INDIVIDUAL_UNSUPPORTED        An individual cannot be found
OBPROP_UNSUPPORTED            An object property cannot be found
DTPROP_UNSUPPORTED            A datatype property cannot be found
SUBCLASS_UNSUPPORTED          A subclass relationship cannot be found
SUPERCLASS_UNSUPPORTED        A superclass relationship cannot be found
INSTANCEOF_UNSUPPORTED        The link between an individual and its class cannot be found
OBPROP_REL_UNSUPPORTED        The domain ontology does not suggest that two resources are related by this object property
DTPROP_REL_UNSUPPORTED        The domain ontology does not suggest that two resources are related by this datatype property
SAME_AS_UNSUPPORTED           The domain ontology does not suggest two resources are the same
LABEL_UNSUPPORTED             The same resource in the domain ontology is not associated with this label
COMMENT_UNSUPPORTED           The same resource in the domain ontology is not associated with this comment
DOMAIN_UNSUPPORTED            The domain ontology does not suggest that a resource is the domain of a property
RANGE_UNSUPPORTED             The domain ontology does not support that a resource is the range of a property
OBPROP_VALUE_UNSUPPORTED      The domain ontology does not support that a resource has this value for this object property
DTPROP_VALUE_UNSUPPORTED      The domain ontology does not support that a resource has this value for this datatype property

4.8 The UM Package

The UM package implements the user model representation or conceptual state described in Sect. 3.3.1. Fig. 4.8 shows that the UM package consists of two classes: ConceptualStateMerger and


ConceptualState. Both classes are discussed below.

4.8.1 ConceptualState

The ConceptualState class implements both the STCS and the LTCS described in Sect. 3.3.1. The model attribute contains the conceptual model linking the user's conceptualization to the domain ontology in attribute domainModel. Attribute aimsUserModel contains a link to the user model maintained by OntoAIMS (which represents more information besides the user's conceptualization).

ConceptualState defines several methods for adding new concepts to and removing them from the conceptual model. It further contains methods for determining whether a certain concept or relationship between concepts is known to the user. We briefly describe the most important methods; a sketch of the knowledge score calculation follows the list:

• compatibleWith() can be used to determine whether two conceptual states are compatible with each other. This is the case when both conceptual states refer to the same user (equal aimsUserModel) and are based on the same domain ontology (equal domainModel).

• add() adds an OntModel to the conceptual state. In OntoAIMS, the OntModel comes from Utterances generated by the user; they contain domain-related concepts and relationships which can be added to the conceptual state either because they are supported by the domain ontology or because the user has specified his belief on the semantics expressed by the OntModel.

• rem() removes an OntModel from the conceptual state. This removes the concepts or relationships in the OntModel from the conceptual state or decreases the belief_value of the concepts.

• resourceIsKnown() indicates whether a given resource is known by the user according to the conceptual state. Currently this is implemented using the belief_value related to the given resource.

• increaseBeliefValue() increases the belief_value related to a given resource, indicating the user has shown she knows about the concept or relationship represented by the resource.

• decreaseBeliefValue() decreases the belief_value related to a given resource, indicating the user has shown he lacks some knowledge about the concept or relationship represented by the resource.

• getKnowledgeScore() returns a knowledge score for a set of resources, which indicates whether the user knows about the related concepts or relationships according to the data from the conceptual state.
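A minimal sketch of a knowledge score as the average belief value over the given resources. The thesis does not document the second parameter (Listing 4.6 calls the method with 60); here it is assumed to be a default score for concepts without recorded evidence, and getBeliefValue() is a hypothetical helper returning the stored belief value or -1 when none is recorded.

public int getKnowledgeScore(Vector conceptURIs, int defaultScore) {
    if (conceptURIs.isEmpty()) {
        return defaultScore;
    }
    int sum = 0;
    for (Iterator it = conceptURIs.iterator(); it.hasNext(); ) {
        String uri = (String) it.next();
        int belief = getBeliefValue(uri);          // hypothetical helper, -1 when unknown
        sum += (belief >= 0) ? belief : defaultScore;
    }
    return sum / conceptURIs.size();               // 0..100, like the belief values
}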

Adding and Removing Resources to the Conceptual State

The add() and rem() methods take care of adding concepts to and removing them from the ConceptualState. These methods accept an OntModel parameter (see Listing 4.3), which is more practical than having to add each of the concepts from the model one at a time. In OWL-OLM these methods are called with the OntModel corresponding to a given Utterance.


As Listing 4.3 shows, we follow the OWL primitives to add concepts and relations to the user's conceptual state: we add classes, object properties, datatype properties and individuals, and we add possible relations between these resources like domains, ranges, subclass relations and relations between individuals and classes.

Listing 4.3: add(OntModel om) method implementation

/**
 * Add or reinforce the statements in <code>OntModel</code> <i>om</i>.
 */
public void add(OntModel om) {
    Iterator it;
    // add all classes in om
    for (it = om.listClasses(); it.hasNext(); ) {
        OntClass oc = (OntClass) it.next();
        addClass(oc);
        addSubClasses(oc);
        addSuperClasses(oc);
    }
    // add all obProps
    for (it = om.listObjectProperties(); it.hasNext(); ) {
        ObjectProperty op = (ObjectProperty) it.next();
        addObjectProperty(op);
        addObPropDomain(op);
        addObPropRange(op);
    }
    // add all dtProps
    for (it = om.listDatatypeProperties(); it.hasNext(); ) {
        DatatypeProperty dp = (DatatypeProperty) it.next();
        addDatatypeProperty(dp);
        addDtPropDomain(dp);
        addDtPropRange(dp);
    }
    // add all individuals
    for (it = om.listIndividuals(); it.hasNext(); ) {
        Individual ind = (Individual) it.next();
        addIndividual(ind);
        addClassOfInstance(ind);
        addIndividualObPropValues(ind, om);
        addIndividualDtPropValues(ind, om);
    }
}

4.8.2 Linking History Properties to Concepts and Relations

As described in Sect. 3.3.1, we keep track of a set of history properties about the conceptual state. The RDF specification of these properties can be found at http://wwwis.win.tue.nl/~swale/aimsUM.


The properties can be related to classes, individuals, and properties in the conceptual model. Listing 4.4 shows an excerpt of a conceptual state annotating the class Filesystem_node.

Listing 4.4: Excerpt from a STCS annotating a class

<rdf:Description rdf:about="blo:Filesystem_node">
  <rdfs:comment rdf:datatype="xmls:string">
    Any set of data that has a pathname on the filesystem.
  </rdfs:comment>
  <rdfs:label>file</rdfs:label>
  <rdf:type rdf:resource="owl:Class"/>
  <aimsUM:times_used rdf:datatype="xmls:long">12</aimsUM:times_used>
  <aimsUM:times_used_correctly rdf:datatype="xmls:long">10</aimsUM:times_used_correctly>
  <aimsUM:times_used_wrongly rdf:datatype="xmls:long">2</aimsUM:times_used_wrongly>
  <aimsUM:times_affirmed rdf:datatype="xmls:long">3</aimsUM:times_affirmed>
  <aimsUM:times_denied rdf:datatype="xmls:long">1</aimsUM:times_denied>
</rdf:Description>

The example in Listing 4.4 shows that the student has used the class Filesystem_node a total of 12 times: 10 times supported by the domain ontology and twice not supported. He has stated 3 times that he knows the concept Filesystem_node and once that he does not. Classes, individuals, object properties, and datatype properties are annotated in the same way.

We also need to capture relations the user builds between concepts. In order to associate the history properties with these relations, we use reified statements (defined in Sect. 4.3). For instance, if a user states that class Move_file_operation is a subclass of Command, we can create a reified statement referring to this relationship and add the aimsUM:times_used_wrongly property to this reified statement, as shown in Listing 4.5.

Listing 4.5: Excerpt from a STCS showing reified statements

<rdf:Description rdf:nodeID="A273">
  <rdf:type rdf:resource="rdf:Statement"/>
  <rdf:subject rdf:resource="blo:Move_file_operation"/>
  <rdf:predicate rdf:resource="rdfs:subClassOf"/>
  <rdf:object rdf:resource="blo:Command"/>
  <aimsUM:times_used rdf:datatype="xmls:long">1</aimsUM:times_used>
  <aimsUM:times_used_wrongly rdf:datatype="xmls:long">1</aimsUM:times_used_wrongly>
</rdf:Description>


The above effectively annotates that the student has used the rdfs:subClassOf relationship between Move_file_operation and Command. This is more detailed than only associating properties with the individual classes, as in the first excerpt.
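A minimal sketch of producing such an annotation with Jena's reification support: the triple is reified and the history property is attached to the reified statement. The exact aimsUM namespace URI (assumed here to end in #) is taken from the URL above; only times_used is shown.

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.ReifiedStatement;
import com.hp.hpl.jena.rdf.model.Statement;

public class HistoryAnnotationSketch {

    // assumed namespace for the history properties
    static final String AIMS_UM = "http://wwwis.win.tue.nl/~swale/aimsUM#";

    /** Attaches aimsUM:times_used = 1 to the reification of the given triple. */
    public static void annotateRelation(Model conceptualState, Statement triple) {
        ReifiedStatement reified = conceptualState.createReifiedStatement(triple);
        Property timesUsed = conceptualState.createProperty(AIMS_UM, "times_used");
        reified.addProperty(timesUsed, conceptualState.createTypedLiteral(1L));
    }
}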

Determining the belief_value of concepts

As indicated in Sect. 3.3.1, conceptual states keep track of a set of history properties, like the number of times a resource is used correctly or is affirmed by the user. History properties are then interpreted to calculate the belief value of concepts in the conceptual state, which indicates whether a user knows the concept or not according to the data in the conceptual state.

The calculation of the belief value can be implemented in several ways; in OWL-OLM we use the following algorithm. The first time a concept is used, its belief value is set to 50 (out of 100). From then on, Belief_value(c) = x + (100 − x)/2 if there is evidence for knowing c, and Belief_value(c) = x/2 if there is evidence for not knowing c, where x is the current belief value of c and evidence is determined depending on the situation. The advantage of this implementation is that the resulting belief value is strongly influenced by the last couple of times the user has shown (lack of) knowledge. Since OWL-OLM aims at rapidly extracting conceptual states, this approach is preferred over other implementations which may be more accurate but would need longer interactions with users.
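The update rule above translates directly into code; a minimal sketch (the constant and method names are not taken from the source):

public class BeliefValueSketch {

    public static final int INITIAL_BELIEF = 50;   // first time a concept is used

    /** Evidence of knowing the concept: move halfway towards 100. */
    public static int increase(int current) {
        return current + (100 - current) / 2;
    }

    /** Evidence of not knowing the concept: halve the current value. */
    public static int decrease(int current) {
        return current / 2;
    }
}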

4.8.3 ConceptualStateMerger

The ConceptualStateMerger class provides methods for updating the LTCS using a STCS related to a recent user interaction with OWL-OLM. After each dialog episode, an instance of the ConceptualStateMerger is invoked to update the LTCS in the user model. The algorithm for merging the conceptual states is based on the idea of keeping the LTCS sound. Before and after each update, the LTCS should be sound. The STCS resulting from a dialog episode may contain conflicting data which need to be sorted out before merging its data with the LTCS. As long as the data introduced does not generate conflicts (affecting the soundness of the conceptual state), it is merged with the LTCS.

The merging algorithm first checks the soundness of the STCS by calling the validate() method provided by Jena. If the model is sound, all updates from the STCS are merged into the LTCS. If the model is not sound, OWL-OLM attempts to determine the reason for the conflict and partially merges the updates into the LTCS while maintaining the soundness of the LTCS. Jena does not provide easy methods for pinpointing the cause of conflicts, so OWL-OLM often discards parts of the STCS which could otherwise be integrated. More research is needed to improve this algorithm. On the other hand, current dialog episodes seldom result in an STCS which is not sound, so this problem usually does not affect the OWL-OLM approach.

4.9 The UI Package

The UI package implements the user interface of OWL-OLM. The design of this package was discussed in Sect. 3.2.5. Fig. 4.9 shows the class diagram for this package. A short list of the main classes and what they do follows:

• OwlOlmPanel defines the main window which integrates the other user interface classes. Applications integrating OWL-OLM need to call this class to provide users with an interface to OWL-OLM.


Figure 4.8: Class diagram for the UM package

Figure 4.9: Class diagram for the OWL-OLM UI package



• OwlOlmPanelEnvironment defines which methods should be defined by the environment where OwlOlmPanel is used. Applications integrating OWL-OLM should implement this interface.

• OLMPanel defines the main part of the user interface: the area where the user can view, create and edit a Utterance in a graphical manner.

• OLMPanelListener defines the interface between OLMPanel and OwlOlmPanel.

• DialogHistoryTextPane implements a text area for showing the history of the dialog. It allows presenting text in different styles for easier separation and discovery of different types of messages (depending on who produced a certain utterance or the importance of a certain message).

• IntentionChoiceDialog implements a window which allows users to choose between several sentence openers, given an OwlStatement and an Intention.

Figure 4.10: Screenshot of the OwlOlmPanel showing a Utterance asking a yes/no-question

As stated in Sect. 3.2.5, the design and implementation of the user interface is beyond the scope of this project and was mainly done by Michael Pye of the University of Leeds. The user interface is a re-implementation of the STyLE-OLM user interface (see Fig. 2.3) based on OWL instead of conceptual graphs. Michael Pye developed the uk.ac.leeds.swale.olm package, which implements a user interface component for showing OWL models in a graphical way (see


Figures 4.10 and 4.11 for screenshots of the user interface). OWL models are presented as diagrams using the JGraph package (http://www.jgraph.com/). Refer to the source code (documentation) for implementation details of the user interface.

4.10 Task Recommendation

This section describes how the OWL-OLM implementation is used to enhance the adaptive task recommendation in OntoAIMS, as described in Sect. 3.4. An important problem with the task recommendation is that the course model in OntoAIMS contains descriptions of tasks, but these descriptions are a partial implementation of the design described in Sect. 3.4 and in [2]. As a result of this partial implementation, the actual adaptation of OntoAIMS is determined at design time by the course designer by specifying a hierarchy of tasks (see Appendix C for an example). We use this hierarchy to enhance the adaptation of task recommendation as we describe in this section.

Fig. 4.12 shows the classes involved in the task recommendation adaptation. DialogGameAgent was described in Sect. 4.6.2 and ConceptualState was described in Sect. 4.8.1.

4.10.1 UserProfileAgent

OntoAIMS defines the UserProfileAgent class, which defines an agent for handling all user-profile-related data. Among other things, the UserProfileAgent keeps track of which tasks the user has visited and finished and which tasks the user still needs to perform. For this project, we extended UserProfileAgent with conceptual states and a DialogGameAgent. We discuss the attributes and methods of UserProfileAgent:

Attributes

• courseAgent points to the agent in OntoAIMS which handles the course model. This agent provides information about the set of tasks available in the course.

• visitedTasks is the set of tasks the user has already visited but not finished yet.

• finishedTasks is the set of tasks the user has already finished.

• curTask is the task the user is currently performing.

• userLtcs is the Long Term Conceptual State of this user related to the domain ontology for this course.

• userStcs is the Short Term Conceptual State of the current dialog episode between the user and a DialogGameAgent (dga).

• dga is a DialogGameAgent which can be used to extract a STCS for determining the user's knowledge about the domain.

• nextProposedTask is the task the system recommends the user to perform next. The value for this attribute is calculated by comparing the userLtcs with the specification of the tasks in the course. See the explanation of the getUnfinishedTask() method for more information.



Figure 4.11: Screenshot of the OwlOlmPanel handling a mismatch

Figure 4.12: Classes involved in adaptive task recommendation


Methods

• initDialogGameAgent() initializes the dga by specifying which domain ontology is being used. Furthermore, the dga is informed about which user is taking part in the dialog episode.

• isFinished() indicates whether a task has already been finished by the user.

• isCurrent() indicates whether a given task is currently being performed by the user.

• isVisited() indicates whether a given task has already been visited by the user.

• getNextProposedTask() returns the task proposed next by the UserProfileAgent. See method getUnfinishedTask() for an explanation of how this task is calculated.

• userCanSkip() indicates whether a user can skip a given task. This is done by calculating a knowledge score the user has about the concepts related to the given task. Listing 4.6 shows the implementation of this method. The method getKnowledgeScore() of the ConceptualState class is used to calculate the knowledge score of the user on all concepts related to task t. If the score is higher than a certain threshold, the user is allowed to skip task t.

Listing 4.6: userCanSkip() method implementation

/**
 * Returns <code>true</code> if the user knows enough about
 * the concepts in <i>t</i> to skip <code>Task</code> <i>t</i>.
 */
private boolean userCanSkip(Task t) {
    Vector concepts = t.getConceptVector();
    int score = 0;
    Log.log("Domain is ontology based, consulting stcs for " +
            "task knowledge score");
    Vector conceptURIs = new Vector();
    for (Iterator it = concepts.iterator(); it.hasNext(); ) {
        ConceptStruct cs = (ConceptStruct) it.next();
        String resURI = cs.getBasedOnResURI();
        if ((resURI != null) &&
                (!resURI.equals(""))) {
            conceptURIs.add(resURI);
        }
    }
    score = dga.getLTCS().getKnowledgeScore(conceptURIs, 60);
    Log.debug("Knowledge Score of task " + t.getName() +
            " is " + score);
    taskProposalText += "Your knowledge score for task \"" +
            t.getName() + "\" is " + score + " out of 100.\n";
    if (score >= 75) {
        Log.log("User can skip task " + t.getName());
        taskProposalText += "\tYou may skip this task.\n";
        try {
            finish(t);
        } catch (GeneralException ge) {
            Log.log("Couldn't mark task " + t.getName() +
                    " as finished. " + ge.toString());
            ge.printStackTrace();
        }
        return true;
    } else {
        Log.log("User cannot skip task " + t.getName());
        taskProposalText += "\tYou should follow this task " +
                "as it contains information you should " +
                "learn.\n";
        return false;
    }
}

• getUnfinishedTask() returns the next task the user has not finished yet. This is done by following the task hierarchy as defined in the course model and checking whether each task has already been visited or finished. This method further checks whether the user may skip a task, as described for userCanSkip(). The task returned by this method has not been viewed or finished, and the user lacks some knowledge which he may learn by performing the task. A sketch of this selection follows.
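A minimal sketch of that selection rule. The iteration over the course model (courseAgent.getTasksInOrder()) is a hypothetical helper standing in for the actual traversal of the task hierarchy; isFinished(), isVisited() and userCanSkip() are the methods described above.

private Task getUnfinishedTask() {
    for (Iterator it = courseAgent.getTasksInOrder().iterator(); it.hasNext(); ) {
        Task t = (Task) it.next();
        if (isFinished(t) || isVisited(t)) {
            continue;                  // already dealt with
        }
        if (userCanSkip(t)) {
            continue;                  // the user knows enough; userCanSkip() marks t finished
        }
        return t;                      // the user still lacks knowledge covered by t
    }
    return null;                       // no suitable task left
}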


Chapter 5

Evaluation

The evaluation of user-adapted systems is usually conducted in several stages to examine the functionality of system components and evaluate the overall interaction [43]. Empirical studies are needed to estimate the potential and discover pitfalls of user-adapted approaches [11]. In this line, we have conducted initial evaluation studies with OntoAIMS to: (a) verify the functionality of its components (user modeling, task recommendation, and resource browser); (b) examine how users accept the integrated environment and its adaptive behavior; (c) identify how the system can be improved with additional adaptive features.

5.1 Experimental settings

Two user studies were conducted with an instantiation of OntoAIMS in the domain of Linux. Initially, six users, postgraduate students and staff from the universities of Leeds and Eindhoven, took part. An improved version of the system was used in a second evaluative study with ten first-year Computing undergraduates at Leeds University. The study followed a two-week introductory Linux course. All participants had passed the course, but their expertise in Linux varied. Half of them had significant practical experience and had worked with resources on their own, while the others did not practice Linux outside the course and had gaps in their knowledge. The students' participation was voluntary and was offered as an opportunity to gain practical experience of the user testing taught in their Human-Computer Interaction module.

Some of the resources from the Linux course were used to populate OntoAIMS. A variety of resources were provided as URLs, including locally stored files and external links. The resources were then annotated and linked to a Linux domain ontology that was built and verified by two Linux experts. A course task ontology and resource ontology were also built by one of the experts.

Both studies followed the same procedure. Each user attended an individual session, which lasted about an hour and was video recorded and monitored by an observer. OntoAIMS did not have data about the users prior to their logon to the system, to ensure realistic cold start conditions. The users were asked to study resources on Linux, which would be recommended by the system. At times, the observer interrupted the users to clarify their behavior with the system. At the end, the users were given a questionnaire related to their satisfaction and suggestions for system improvements.


5.2 Results and Discussion

Extensive data from both studies, including user log files, video recordings, observer's notes, and user questionnaires, is being analyzed. Initial results show that OntoAIMS was regarded as helpful both for tuning one's knowledge of Linux and for learning more about domain concepts.

5.2.1 Interaction sequence

Every user appreciated the integrated functionality and worked with all components. They could go through course tasks on their own or follow tasks recommended by the system. The latter was chosen by all participants. Because the system did not have any information about the users (cold start), it directed them to OWL-OLM, where users spent on average about half an hour. OWL-OLM probed their domain knowledge following the topics defined in the task ontology. Based on the dialog, the users were suggested tasks suitable for their level (as described in Section 3.4). They then spent about half an hour with the OntoAIMS resource browser, exploring the conceptual space and reading resources offered by the system.

5.2.2 Interactive User Modeling

The interaction behavior with OWL-OLM differed. Expert users followed the dialog, answered the system's questions quickly, and only asked questions to confirm a domain fact they already knew. These users were pleased that the system discovered their familiarity with topics and recommended them to go to more advanced tasks. Less knowledgeable users struggled to answer the system's questions and often passed the question back to the system. These users explored a variety of dialog moves, e.g. they disagreed with the system's statements, composed new concepts and links, and asked several types of questions. There were occasions when discrepancies with the domain ontology were shown, e.g. similar concepts were mixed up (e.g. file operation, command, and program), more general concepts were used instead of specific ones (e.g. program instead of system program), or relationships were placed wrongly. These discrepancies were marked in the user models as mismatches, see Section 3.2.4.

The evaluation showed the strong potential of OWL-OLM to deal with the cold start problem, as a user commented: “it judged my level of expertise and gave me a starting point when it came to finding out further information”. In addition, less knowledgeable users used the dialog to learn about the domain, as one of them commented: “the dialog makes me think about my knowledge and helps me improve my understanding”. Indeed, as discussed in [17], reflection is one of the potential benefits of such interactions. The users felt that the OWL-OLM interaction was part of their learning process, and it was seen both as unobtrusive and as complying with the overall goal.

The study pointed at further improvements, such as maintaining dialog coherence (e.g. the tested system did not follow new topics suggested by users), improving the interface (which was cumbersome and idiosyncratic at times), and polishing the textual form of the sentences (some of the text generation templates did not produce good English). Although most users liked the graphical representation as it was “simpler” and helped them “see the main concepts and the links between them”, four users said they would prefer to interact in a textual way, which would require sophisticated language processing algorithms.


5.2.3 Task recommendation

The user models generated with OWL-OLM were used to inform the task proposal. Some users were offered the option to skip tasks, as OWL-OLM found that they already knew quite a bit about the topic, while less knowledgeable users were directed to introductory topics. Most users agreed that the task recommended by the system was appropriate for their level; two students disagreed, giving as a reason that the resources for the recommended topics were insufficient. All students were pleased that the system could recommend them a task, followed the recommended tasks, and regarded them as compliant with their learning goals. All students except one said that they were aware of why the task recommendation was made. The study confirmed that task recommendation is a highly desirable adaptive feature, which users appreciate and follow.

Although it is natural to integrate the task proposal within the dialog (as a description in the dialog history), there were occasions when the users would skip through the history and could miss the suggested task. This was regarded as an interface problem that needs to be addressed in a future implementation of OntoAIMS.

5.2.4 Resource browsing

The resource browser was regarded as “a flexible way of looking at resources”. The users found it intuitive and easy to use.

They spent considerable time using the star diagram (see Figure 2.1). All users agreed that the graphical representation gave them an overview of the conceptual space. Some comments include: “it allows to map your path through sequences of topics” and “demonstrated exactly where I was in relation to the suggested topic”. From the diagram, the users checked the definitions of concepts or explored the space for related concepts. All experts and some novices tried to look for concepts they did not know, while most novices preferred to start from concepts they already knew. Most resources were found appropriate, with a few exceptions where experts thought that more advanced material should have been provided.

When searching for a concept, the users were offered a set of resources ranked according to their applicability to the task. All users were pleased with the ranking option, but all of them also suggested that the ranking should be based not on the appropriateness to the task (as it is at the moment) but on the user's preferences, knowledge, and current goal. The goals of users when opening a resource differed, e.g. to learn more about a concept (both experts and novices), to check the syntax of a command (mostly experts), or to confirm their knowledge (mostly novices). The users wanted to be able to clarify their goals when searching for a concept, which could be done via menus or a confirmation dialog and will be addressed in the next version of OntoAIMS.

5.2.5 Improvements

The first evaluation round revealed several user interface problems, which were addressed for the second evaluation round. Among others, improvements were made to the layout of the user interface: the dialog history was moved from the bottom of the screen to the top. The way users build utterances was simplified by showing intentions instead of just sentence openers; sentence openers alone did not correctly reflect the users' intentions and were a cause of confusion. The output generated by the dialog agent was expanded with sentences that indicate the dialog agent's reasons. The dialog history now also shows an explanation of why the dialog agent changes to a new dialog game, using the goal description of each dialog game. Without this type of explanation, users were often confused as to why the system reacted in some way.

Table 5.1: Additional adaptive features in OntoAIMS as pointed out by the users in the second study (the numbers are out of 10 and show how many students support the feature).

• I want the resources ranked according to my preferences and knowledge: 10
• I want the resources ordered according to my preferences and knowledge: 8
• I want the resources filtered according to my preferences and knowledge: 4
• I would like to be able to choose when the system should be adaptive: 10
• I would like to know what the system thinks about my knowledge: 10
• I would like to know how my interaction with the system is used to form the system's opinion about my knowledge: 8
• I would like to know how the system's behavior is affected by its opinion about me: 9
• I would like to be able to inspect and change what the system thinks of me: 10

The second evaluation round pointed at improvements needed with regard to the integration between OWL-OLM and the resource browser. All students wanted to be able to switch between both modes in a flexible manner. They stressed that this should be the user's choice, not something imposed by the system, and pointed at ways to implement this in OntoAIMS, e.g. enabling the users to go to a resource on a concept when asking a question in OWL-OLM, or to discuss domain concepts after reading resources in the browser. These suggestions will be taken into account in the next version of OntoAIMS.

The users in the second study were asked about additional adaptive features, see Table 5.1. The users strongly supported future extension of the adaptive features and wanted them combined with learner control [12].


Chapter 6

Conclusions

This chapter gives a short summary of the presented work (Sect. 6.1), relates our work to the relevant research (Sect. 6.2), and states questions for further research which emerged from this project together with future work based on OntoAIMS (Sect. 6.3). Finally, Sect. 6.4 discusses the results of this research project with respect to the research questions proposed in Sect. 1.2.

6.1 Summary

We have proposed an ontology-based approach for integrating interactive user modeling in adaptive web information systems to enable them to deal with typical adaptation problems on the Semantic Web such as cold start, unreliability of user interaction for building conceptual UMs, and dynamics of a user's knowledge. In Sect. 1.1.2 we have listed the most common research problems. In Chapter 2 we have presented an analysis of the potential of our approach to tackle those problems in the context of two existing systems: AIMS and STyLE-OLM.

In Chapter 3 we presented the design of OntoAIMS, an adaptive information system with integrated interactive user modeling. In the same chapter we also discussed in detail the design of the interactive user modeling component OWL-OLM and the OWL-based representation of the user's conceptual state.

We exemplified the approach in the integrated learning environment OntoAIMS for adaptive task recommendation and resource browsing on the Semantic Web. The implementation details of OntoAIMS and OWL-OLM were discussed in Chapter 4. Initial results from the two user studies were discussed in Chapter 5. Overall, we have shown a promising approach for dealing with adaptation on the Educational Semantic Web that contributes to this newly emerging strand of research.

6.2 Contribution of This Work

In order to highlight the contribution of this work, we compare it with relevant approaches. Different perspectives on enabling personalization on the Semantic Web have recently been addressed. Our work on OntoAIMS is situated within the field of User-adaptive Web-based Information Systems [8], which capitalize on adaptation techniques to provide user-oriented services and to meet the high demand to supply individual users with the right service at the right time and in the right way.

Our interactive approach for eliciting user models is similar to the interactive sessions used in HUMOS, a user modeling tool in an adaptive web-based information filtering system [32].


However, while HUMOS uses a very simple probing dialog to elicit initial, fairly basic information about the user, which is then combined with machine learning techniques for stereotype assignments, OWL-OLM considers more in-depth interactions that extract enhanced user models (as opposed to stereotypes, which fail to capture individual user models [7]).

By using Semantic Web technologies, enriched semantics for the user data can be achieved, allowing for more efficient reasoning and thus more accurate user modeling results. An example of an ontological approach to user profiling is given in [33] with the Quickstep and Foxtrot systems for recommending on-line academic research papers. User profiling is used to identify topics a user is interested in. Ontological knowledge is used to improve the user profiling and to successfully bootstrap the recommender system. The OWL-OLM user model elicitation approach suggests another perspective for using an ontology to deal with the cold start problem, namely to maintain a diagnostic dialog that extracts a deeper user model. The open user modeling approach we presented is somewhat similar to the idea of using visualizations to tune a user's profile, as suggested in [33].

Relevant to the educational context illustrated in this work is the work on EducaNext [38], an educational mediator that supports the collaborative creation, exchange, and reuse of learning and knowledge resources. EducaNext highlights the need to provide personalization and adaptation in the recommendation of educational materials and services for the Semantic Web. Our work shows an example of such personalization. Moreover, EducaNext points out the benefit of an unobtrusive alignment, discussed in our work too. In EducaNext, the alignment is between resource metadata and is used for semantic and user-oriented enrichment of resource annotations.

A strong argument has recently been forming that stresses the importance of sharable and reusable user models, as well as of personalization methods, on the Semantic Web [23]. Web systems are moving speedily towards the adoption of Semantic Web standards, and it is crucial that user modeling approaches are also Semantic Web compliant. Our research on OWL-OLM and OntoAIMS contributes to ongoing work on dealing with reusability as a key capability for successful personalization functionality on the Semantic Web [5, 19].

The work we present on eliciting a user's conceptualization based on an existing domain ontology relates to the research on aligning and reconciling ontologies, reviewed in [25, 20]. However, there is a crucial difference between the user-expert alignment considered in OWL-OLM and the alignment of two (or more) expert ontologies considered in existing, widely used tools, such as PROMPT [36]. As pointed out earlier in this thesis, the user's conceptual model shows some part of a user's conceptualization that is not necessarily complete or consistent. The approach we propose is a rather simplified version of ontology alignment, because of the limited scope captured within an OWL statement (see Sect. 4.5.1) and the fairly simplified model of mismatches we consider (see Sect. 3.2.4). We believe, though, that empirical studies on aligning experts' ontologies, e.g. [36, 21], can be very useful to extend the OWL-OLM mismatch patterns. Furthermore, existing, robust ontology alignment algorithms, e.g. using word concordances and synonyms [36], can be employed to significantly enhance the fairly simplified detection of mismatches in OWL-OLM.

From another point of view, the research on aligning expert ontologies shows the importance of a dialog to enable the clarification of misalignment points. Empirical studies with PROMPT, for example, have shown that when mismatches occur, discussions are needed to clarify the misalignment between different experts (indeed, some knowledge elicitation tools, e.g. APECKS [42], already provide such facilities). Furthermore, most of the methodologies applied for building shared ontologies include a dialog at some stage to enable experts to clarify aspects of their conceptualizations. This once again confirms that approaches like OWL-OLM are viable for capturing a user's conceptualization. This claim is supported by the empirical results presented in Chapter 5.

Finally, the user modeling dialog in OWL-OLM is similar, to an extent, to ontology negotiation between agents that share knowledge and clarify meaning. For instance, [10] have developed a protocol that allows agents to discover ontology conflicts and establish a common basis for communicating with each other. It is important to stress, though, that the clarification dialog in OWL-OLM includes two agents, one of whom is a human. Hence, in-advance dialog planning is not applicable in this case. We found that the fairly simple dialog games model we adopt adequately handles the initiative in user modeling dialogs.

6.3 Future Work

6.3.1 Improvements to OWL-OLM and OntoAIMS

This section first discusses points of improvement for both OWL-OLM and OntoAIMS. Some of these were pointed out in the evaluation of OntoAIMS with users. Other improvements came up during the research and analysis phases of this project but fell outside the scope of this work.

Chapter 5 proposed some OntoAIMS improvements for:

• dialog coherence: handling the user's suggestions and changing the topic of the dialog accordingly.

• user interface: improving the user interface, which is now useful, but cumbersome and idiosyncratic at times. Also, the user interface does not fully support OWL individuals.

• natural language generation: more robust algorithms could be researched and implemented.

• better integration of OWL-OLM and OntoAIMS: to allow users to jump freely between the OWL-OLM dialog and the OntoAIMS resource browser. Also, a more uniform interface seems to be desirable.

Besides the desired improvements with respect to the dialog (the user interface and the interaction integration), we suggest the following enhancements of the adaptation in OntoAIMS as future work. Some of those have already been mentioned in Sect. ref:tackling-open-issues, but their implementation is outside the scope of this work.

• Validating task and resource effects: This is very similar to the current use of OWL-OLM. The difference is that OWL-OLM would try to measure and compare the user's conceptualization before and after performing a task or viewing a learning resource. While this may give a very accurate view of the user's conceptual state, it is questionable whether users would be willing to take part in the dialog after each task performed or resource viewed.

• Extracting other parts of the user model besides the conceptual state; for instance, it could probe the user's goals and preferences. OWL-OLM would need to be updated with a model of user goals and preferences, which would be used instead of the domain model. Further, a goals-and-preferences model for each user would need to be generated. So instead of the dialogs being domain-centered, they would become goal- and preferences-centered. A big difference is that the domain ontology contains concepts which are accepted by most people (the definition of an ontology), while goals and preferences are different for every individual. In OWL terminology, domain-centered dialogs focus on concepts and relationships and the truth value of the utterances; dialogs about goals and preferences focus on the particular instance values of preference and goal concepts for each user.


• Answering simple queries about the domain. Related to this, OWL-OLM could be used to help formulate searches for learning resources. Searches in OntoAIMS are now performed using keywords. In large ontologies, keywords may be ambiguous and a dialog game in OWL-OLM may be used to clarify this.

6.3.2 Related Future Research

During the course of this project, we came across several problems which could be further researched (some in the near future, some in the far future). This list is intended as a suggestion of topics which could benefit from the results of this work.

Natural language generation using OWL

During the research phase of this project we could not find easily reusable research or libraries for generating natural language. A reason for this may be that OWL is a relatively new ontology language. Cooperation with researchers from the computational linguistics community is a possibility.

Mismatch analysis and categorization

This project resulted in a set of mismatches which can be detected by OWL-OLM. This categorization is based on OWL primitives. Further research is needed to detect patterns of mismatches and to use the semantics of these mismatches to determine the reasons why such a mismatch can happen. This will allow better definitions of clarification dialogs and may result in better user models, as the system may find out that some users are more prone to certain types of mismatches.

OWL-OLM as Web-based User Modeling Service

The design and implementation of OWL-OLM make it suitable to function as a web-based user modeling service. Information systems needing to extract a user model based on a domain ontology could use such a service by passing it the domain ontology and some specification of the type and depth of the user model desired. Research could be done on the best way to implement such a service, defining a suitable protocol which could be used by adaptive information systems; a possible shape of such an interface is sketched below. Related to this research would be how to integrate the conceptual states extracted by OWL-OLM with other user modeling sources and services.
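To make this idea more concrete, the following is a purely illustrative sketch of what such a service interface could look like in Java. All names used here (UserModelingService, startSession(), getConceptualState()) are hypothetical and are not part of the current OWL-OLM implementation; an actual protocol would have to be defined as part of the proposed research.

public interface UserModelingService {

    /**
     * Starts a user modeling session for the given user, based on the given
     * domain ontology and a rough indication of the desired depth of the
     * user model. Returns an identifier for the session.
     */
    String startSession(String userId, String domainOntologyURI, int desiredDepth);

    /**
     * Returns the conceptual state extracted so far in the given session,
     * for example as an OWL/RDF document that the calling information
     * system can integrate with its own user model.
     */
    String getConceptualState(String sessionId);
}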

Improved domain independent reasoning

OWL already provides a large number of primitives for describing the world. Each primitive makes the language more expressive but also imposes new conditions on the inferences a reasoner can make from a given ontology. Much research can be done to determine which primitives can be added to OWL and what effect this has on the amount of reasoning possible on the resulting ontology. Luckily, OWL is well suited to be the basis for plug-in extensions. Each extension can then define a set of primitives and a reasoner. For instance, a plug-in for OWL could be a basic definition of primitives for agents with goals, capabilities and behaviors. A reasoner could be defined which provides sound inferences from these primitives to determine how agents can change the world around them given a certain amount of time.


Autonomous agents

This is related to the reasoning with OWL. The dialog agent in OntoAIMS is independent in some ways but not truly autonomous. The dialog agent is domain independent: the current implementation can “talk” about Linux, but it can also talk about any other domain given the appropriate domain ontology. The dialog agent is not autonomous in the sense that its behavior has been completely specified in Java and the dialog agent has no possibility to change this behavior. Comparing the dialog agent to animals, one could say the dialog agent is run completely by its “instincts”, which are defined in Java; the difference is that animals got their instincts through evolution and the dialog agent through the people who designed it.

Currently, to achieve more autonomy in agent behavior there are options available like Bayesian networks or neural networks, which would allow agents to synthesize input from their environment and take a decision different from one specified a priori. These solutions have the disadvantage of being hard to inspect, design and implement: it is very difficult to understand why a certain input results in a corresponding output. This lack of inspectability makes them unsuitable for implementation in complex environments by humans, since we cannot easily determine how to change these networks to achieve a desired output.

As an extension to our approach, to achieve more autonomy in agents we could endow agents with a model of their environment and of themselves (what options do they have to change and affect their environment). OWL is particularly well suited for defining such a model, and reasoners can be used to allow agents to take decisions. Special (agent-based) reasoning mechanisms need to be researched for this to work. Research also needs to be done on how to allow agents to incorporate real-world information into their environment models. Environment information from the real world needs to be incorporated as individuals, which form the basis of every ontology. Using these individuals and reasoning about the concepts defined in the environment model, the agents could take decisions about the actions they should take.

Building OWL models

Considering the autonomous agents mentioned in the previous paragraph, one could rightly argue that the agents are still not autonomous because they still need a model of their environment provided by some designer, and the design of this model influences the possible behavior of the agent. The solution is allowing agents to build OWL models themselves. Research to make this possible is still very early in its development. Good primitives and reasoners for OWL are needed, as well as methods for translating the agent's environment into OWL models.

An upside of the type of reasoning needed for the autonomous building of OWL models is that we may not need reasoners which can verify the soundness of large ontologies. Humans are usually only capable of soundly reasoning about small chunks of reality given a small amount of time. The reason we can build large models of reality is that we are able to divide these large models into smaller models which we can verify (much like OWL-OLM extracts an OWL statement from the domain ontology). Even then, most of us are prone to overlook some flaw in the models we compose. Luckily for us, there are usually other people who point out these flaws (mismatches) and allow us to build a better model of reality. If you think about it, the whole of science is nothing else than people building models of reality and testing these models both through peer review and through experimental tests. This brings us to research on suitable environments for agents. It is possible that environments where agents can (semantically) communicate with each other may influence the complexity of the needed reasoners.


Concluding, to allow automatic building of OWL models, research has to be done on reasoning mechanisms and on improving communication (between agents, and between agents and humans) to discover and clarify mismatches, resulting in better models. OWL-OLM itself could be used to build OWL models in collaboration with humans (in fact, building a conceptual model already is an example of this). The biggest obstacle for automatic building of OWL models is that computers cannot relate the real world to their ontologies. Without this input, it is not possible to verify the ontology or to verify conclusions based on the ontology. The reasoner may be able to build some hypotheses about the world, but it will not be able to determine whether they are true or not.

6.4 Research Questions Results

This section briefly discusses the results of this project related to the research questions proposed in Sect. 1.2. We first look at the main research questions.

• Dealing with cold start.

– How can we use domain and course knowledge with interactive ontology-based user modeling to rapidly obtain a user model from scratch?

Chapters 3 and 4 describe the design and implementation of a system which can be used to extract a user model based on a domain ontology. This research focuses on modeling the domain knowledge of the user, but the proposed framework could be extended to handle other modeling information about users. The course knowledge is used as a guide to determine which parts of the domain ontology are important to extract.

• User models based solely on analysis of learner-system interaction are inaccurate.

– Can we improve user modeling accuracy using interactive ontology-based dialogs for rapid user modeling? If we can, what is the improvement?

OWL-OLM can extract conceptual states based on a domain ontology. The extracted conceptual state appears to be accurate according to the evaluations we have performed so far (Chapter 5). OWL-OLM has the potential to extract very accurate user models, but this comes at the price of longer dialogs, which may not be desirable. The capability of OWL-OLM to discover and clarify mismatches is an improvement over other methods for user modeling. Another advantage of OWL-OLM is the capability to extract a targeted section of the user's conceptualization. Evaluations comparing OWL-OLM with other user modeling methods should be performed to find quantitative proof of improvements in the accuracy of the UM.

• Semantics of the course author and the learner may differ.

– How can we capture semantic differences between course author and learner?

This work uses the concept of mismatches (see Sect. 3.2.4) to annotate differences between the conceptual states and the domain ontology. While mismatches as implemented in OWL-OLM can warn about semantic differences between authors and users, mismatches are defined at the level of OWL primitives, and this makes it very hard to do something useful with the detected mismatches. Further research is needed to analyze and categorize the mismatches that are found. The resulting categories of mismatches can then be used to determine the reason for a mismatch, which can lead to better adaptivity of the system.

– How can we use this information to align the semantics?

OWL-OLM does this by defining clarifying dialog games for each type of mismatch. If the mismatch cannot be clarified, OWL-OLM concludes that the user's conceptualization needs to be improved by referring to learning resources which explain the relevant part of the domain. In OntoAIMS this is implemented by advising the user to perform a certain task or to view a certain learning resource.

Note that in OWL-OLM all alignment is done by trying to convince the user that the domain ontology is the right model of reality. Since users of OntoAIMS are usually still learning about the domain and the domain ontology is usually provided by domain experts, this assumption is expected to be correct. In other systems besides OntoAIMS, this assumption will not always be true, and OWL-OLM needs to be improved to have a more flexible domain ontology: if the user is right, she should not have to change her conceptualization, but the domain ontology should be changed to become aligned with the user's conceptualization. This requires OWL-OLM to be able to verify that the user's reasoning is correct and is considered future research.

Further, we discuss other research questions which are not considered main research questions for this project but are related to this work.

• Use of previous knowledge and experience.

– How can we involve prior user knowledge for selection of relevant resources?

In OntoAIMS this is solved indirectly by first assessing prior user knowledge with OWL-OLM and then recommending a suitable task for the user to perform. The task is then used by OntoAIMS to recommend relevant resources to the user. An improvement to OntoAIMS could be to take the extracted conceptual state into account when recommending learning resources to users.

• The course author’s goals and the learner’s goals may differ.

– How can we express goals in the user model?

– How can we use the tasks model of AIMS to infer the course author’s goals?

– How can we extract the learner's goals?

– How can the system adapt to the learner’s goals?

All these goal-related questions were largely left unanswered by this project, as they were not relevant to the design and implementation of OWL-OLM and of adaptive task recommendation in OntoAIMS. Goals are only relevant to OWL-OLM when defining dialog episode goals and subgoals for the different dialog games. In all of these cases, though, the goals were defined at design time and the strategy to achieve them was hard-coded in Java. As we mentioned in the section about future research, to achieve more adaptive and independent agents, models of the agent's environment are needed. In the case of OntoAIMS, models of users (learners) and their possible goals should be provided, as well as methods to reason about these models. These models and methods can then be used by the system to adapt to the users' specific goals, in the same way the system now adapts to the user's specific knowledge of the domain.

• Learner’s goals, preferences and knowledge evolve.

– How can we model episodic and long-term user properties?

OWL-OLM uses the STCS and LTCS to model episodic differences. The STCS models the user's conceptualization based on the current dialog episode. OWL-OLM then updates the LTCS using the STCS, which contains the most recent user conceptualization. This framework for dealing with episodic differences requires rules for updating the LTCS from the STCS, as discussed in Sect. 3.3.1; a minimal sketch of such an update rule is given at the end of this section.

– How can we use this information for adaptation purposes?

OWL-OLM does not directly use episodic conceptualizations for adaptation. It could be possible to use this type of information by keeping track of which conceptualizations changed due to a dialog episode. Depending on the change, the system could recommend some further reading, either to reinforce previous knowledge or to give an overview of the domain and of how the newly achieved conceptualization fits therein.
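To illustrate the episodic update mentioned above, the following is a minimal sketch of an STCS-to-LTCS update rule, not the actual OWL-OLM implementation (the actual rules are described in Sect. 3.3.1). The ConceptualState accessors used here (getStatements(), contains(), reinforce(), addStatement()) are hypothetical and only serve to make the idea concrete.

private void updateLTCS(ConceptualState stcs, ConceptualState ltcs) {
    // Copy every belief observed in the latest dialog episode into the
    // long-term conceptual state, strengthening beliefs that were seen before.
    for (Iterator it = stcs.getStatements().iterator(); it.hasNext();) {
        Object statement = it.next();
        if (ltcs.contains(statement)) {
            ltcs.reinforce(statement);    // confirmed again in this episode
        } else {
            ltcs.addStatement(statement); // newly observed belief
        }
    }
}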


Bibliography

[1] L. Aroyo and D. Dicheva. AIMS: Learning and teaching support for WWW-based education. Int. Journal for Continuing Engineering Education and Life-long Learning (IJCEELL), 11(1/2):152–164, 2001.

[2] Lora Aroyo, Stanislav Pokraev, and Rogier Brussee. Preparing SCORM for the Semantic Web. In International Conference on Ontologies, Databases and Applications of Semantics, 2003.

[3] Franz Baader, Diego Calvanese, Deborah L. McGuinness, Daniele Nardi, and Peter F. Patel-Schneider, editors. The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, 2003.

[4] Tim Berners-Lee, James Hendler, and Ora Lassila. The Semantic Web. Scientific American, 2001.

[5] P. De Bra, L. Aroyo, and V. Chepegin. The next big thing: Adaptive web-based systems. Journal of Digital Information, 5(1), 2004.

[6] P. Brusilovsky and C. Peylo. Adaptive and intelligent web-based educational systems. Int. Journal of Artificial Intelligence in Education, 13(2):159–172, 2003.

[7] P. Brusilovsky and C. Tasso. Users are individuals: individualizing user models. International Journal of Human-Computer Studies, 51:323–338, 1999.

[8] P. Brusilovsky and C. Tasso. Special issue on user modelling for web information retrieval. User Modeling and User-Adapted Interaction, 14(2-3), 2004.

[9] Jeremy J. Carroll, Ian Dickinson, C. Dollin, Dave Reynolds, Andy Seaborne, and K. Wilkinson. Jena: Implementing the semantic web recommendations. In Int. World Wide Web Conference, WWW'04 (Alternate track papers and posters), pages 74–83, 2004.

[10] Sidney C. Bailin and Walt Truszkowski. Ontology negotiation between intelligent information agents. The Knowledge Engineering Review, 17:7–19, 2002.

[11] David Chin. Empirical evaluation of user models and user-adapted systems. User Modeling and User-Adapted Interaction, 11:181–194, 2001.

[12] L. Cimolino and J. Kay. Verified concept mapping for eliciting conceptual understanding. In L. Aroyo and D. Dicheva, editors, ICCE Workshop on Concepts and Ontologies in Web-Based Educational Systems, pages 11–16, 2002.


[13] R. Denaux, L. Aroyo, and V. Dimitrova. An approach for ontology-based elicitation of user models to enable personalization on the semantic web. Technical report, SWALE Project (Submitted to the WWW 2005 conference), 2004.

[14] R. Denaux, L. Aroyo, and V. Dimitrova. Integrating open user modeling and learning content management for the semantic web. Technical report, SWALE Project (Submitted to the UM 2005 conference), 2004.

[15] R. Denaux, V. Dimitrova, and L. Aroyo. Interactive ontology-based user modeling for personalized learning content management. In AH 2004: Workshop Proceedings Part II, pages 338–347, 2004.

[16] Ian Dickinson. The semantic web and software agents. AgentLink News, 15:3–6, Sept. 2004.

[17] V. Dimitrova. STyLE-OLM: Interactive open learner modelling. Int. Journal of Artificial Intelligence in Education, 13(1):35–78, 2003.

[18] Vania Dimitrova. Interactive Open Learner Modelling. PhD thesis, Computer Based Learning Unit, Leeds University, 2001.

[19] Peter Dolog. Identifying relevant fragments of learner profile on the semantic web. In SWEL 2004 Workshop @ ISWC Conference, pages 37–42, 2004.

[20] Marc Ehrig and York Sure. Ontology mapping – an integrated approach. In 1st European Semantic Web Symposium, 2004.

[21] Adil Hameed, Derek Sleeman, and Alun Preece. Detecting mismatches among experts' ontologies acquired through knowledge elicitation. Knowledge-Based Systems, 15:265–273, 2002.

[22] D. Heckmann and A. Krueger. A user modeling markup language (UserML) for ubiquitous computing. In P. Brusilovsky, A. Corbett, and F. de Rosis, editors, Int. Conference on User Modeling, pages 393–397, Berlin, Heidelberg, 2003. Springer.

[23] Nicola Henze. Personalization functionality for the semantic web: Identification and description of techniques. Technical report, REWERSE project: Reasoning on the Web with Rules and Semantics, 2004.

[24] Anthony Jameson. User-adaptive systems. Technical report, UM03 Tutorial, 2003.

[25] Michel Klein. Combining and relating ontologies: an analysis of problems and solutions. In Asuncion Gomez-Perez, Michael Gruninger, Heiner Stuckenschmidt, and Michael Uschold, editors, Workshop on Ontologies and Information Sharing, IJCAI'01, 2001.

[26] Alfred Kobsa. User modeling in dialog systems: Potentials and hazards. Artificial Intelligence and Society, 1:214–240, 1990.

[27] R. Lecoeuche, C. Mellish, C. Barry, and D. Robertson. User-system dialogues and the notion of focus. The Knowledge Engineering Review, 13(4), 1998.

[28] J.A. Levin and J.A. Moore. Dialogue games: Meta-communication structures for natural language interaction. Cognitive Science, 1978.


[29] Massimo Marchiori. Metalog - towards the semantic web. Technical report, W3C Semantic Web Activity, 2004.

[30] Brian McBride, Daniel Boothby, and Chris Dollin. An introduction to RDF and the Jena RDF API. Technical report, Jena Documentation, 2004.

[31] Deborah L. McGuinness and Frank van Harmelen. OWL Web Ontology Language Overview. Technical report, W3C Recommendation, 2004.

[32] A. Micarelli and F. Sciarrone. Anatomy and empirical evaluation of an adaptive web-based information filtering system. User Modeling and User-Adapted Interaction, 14:159–200, 2004.

[33] S. E. Middleton, N. R. Shadbolt, and D. C. De Roure. Ontological user profiling in recommender systems. ACM Transactions on Information Systems, 22(1):54–88, 2004.

[34] E. Miller and F. Manola. RDF Primer. http://www.w3c.org/TR/, 2004.

[35] R. Morales, H. Pain, S. Bull, and J. Kay, editors. Workshop on Open, Interactive, and Other Overt Approaches to Learner Modelling, at AIED'99, 1999.

[36] Natalya Fridman Noy and Mark A. Musen. PROMPT: Algorithm and tool for automated ontology merging and alignment. In IJCAI'01 Workshop on Ontologies and Information Sharing, pages 63–70, 2000.

[37] Helmut Prendinger and Mitsuru Ishizuka, editors. Life-Like Characters: Tools, Affective Functions and Applications. Cognitive Technologies. Springer, 2004.

[38] J. Quemada, G. Huecas, T. Miguel, J. Salvachúa, B. Rodríguez, B. Simon, K. Maillet, and E. Law. EducaNext: A framework for sharing live educational resources with Isabel. In Int. World Wide Web Conference, pages 11–18, 2004.

[39] J.R. Searle and D. Vanderveken. Foundations of Illocutionary Logic. Cambridge University Press, 1985.

[40] Bernd Simon, Peter Dolog, Zoltán Miklós, Daniel Olmedilla, and Michael Sintek. Conceptualising smart spaces for learning. Journal of Interactive Media in Education, 2004(9).

[41] J. F. Sowa. Relating diagrams to logic. In G. W. Mineau, B. Moulin, and J. F. Sowa, editors, Conceptual Graphs for Knowledge Representation: Proc. of the First International Conference on Conceptual Structures ICCS'93, pages 1–35. Springer, Berlin, Heidelberg, 1993.

[42] Jeni Tennison, Kieron O'Hara, and Nigel Shadbolt. APECKS: using and evaluating a tool for ontology construction with internal and external KA support. Int. J. of Human-Computer Studies, 56:372–422, 2002.

[43] Stephan Weibelzahl. Evaluation of adaptive systems. In Mathias Bauer, Piotr J. Gmytrasiewicz, and Julita Vassileva, editors, Int. Conference on User Modeling, pages 292–294, 2001.


Appendix A

Source Code Documentation

All the source code developed for OWL-OLM has been documented using Javadoc. The documentation can be found at http://swale.comp.leeds.ac.uk:8080/javadoc/index.html.


Appendix B

Basic Linux Ontology

To demonstrate the implementation of OWL-OLM in OntoAIMS, the Basic Linux Ontology (BLO) was developed as part of this project. BLO describes basic Linux concepts like users and files. The ontology was designed with the Learning about Linux course (see Appendix C) in mind, so it covers most of the concepts treated in that course. BLO was created using Protégé¹ and can be downloaded from http://wwwis.win.tue.nl/˜swale/blo.

¹ See http://protege.stanford.edu


Appendix C

Learning about Linux Course

The Learning about Linux course used in OntoAIMS to demonstrate the implementation of OWL-OLM is loosely based on the introductory Linux course given to first-year students at the University of Leeds. Most of the resources used in the course were suggested by a Linux expert from the University of Leeds. The course model describing the task hierarchy was designed as part of this project. As part of the preparation of this course, the course model and all the resources were linked to the concepts described in the Basic Linux Ontology (see Appendix B) using the authoring interface provided by OntoAIMS. This interface can be used to view the set of used documents and to view the designed course model. The OntoAIMS course authoring interface can be accessed at http://swale.comp.leeds.ac.uk:8080/staims/editor.html.
