
A Methodology for the Development and Verification of Expressive Ontologies

by

Megan Katsumi

A thesis submitted in conformity with the requirements
for the degree of Masters of Applied Science

Graduate Department of Mechanical and Industrial Engineering
University of Toronto

Copyright © 2011 by Megan Katsumi

Abstract

A Methodology for the Development and Verification of Expressive Ontologies

Megan Katsumi

Masters of Applied Science

Graduate Department of Mechanical and Industrial Engineering

University of Toronto

2011

This work focuses on the presentation of a methodology for the development and ver-

ification of expressive ontologies. Motivated by experiences with the development of

first-order logic ontologies, we call attention to the inadequacies of existing develop-

ment methodologies for expressive ontologies. We attempt to incorporate pragmatic

considerations inspired by our experiences while maintaining the rigorous definition and

verification of requirements necessary for the development of expressive ontologies. We

leverage automated reasoning tools to enable semiautomatic verification of requirements,

and to assist other aspects of development where possible. In addition, we discuss the

related issue of ontology quality, and formulate a set of requirements for MACLEOD - a

proposed development tool that would support our lifecycle.


Acknowledgements

I would like to thank my supervisor, Professor Michael Gruninger, to whom I owe my

decision to pursue graduate studies. The interest and the excitement that I have found

for research is a direct result of his mentorship. Our meetings were and continue to be

a great source of inspiration and insight not only for my thesis, but for future research.

The encouragement and guidance that he has shown me have been invaluable to my

academic growth. I would also like to extend my gratitude to the other members of my

thesis committee - Dr. Li Shu and Dr. Tamer El-Diraby, for taking the time to review

my work and for all of the feedback they provided.

I would like to thank Siemens Research, in particular Dr. Sonja Zillner, for providing

financial support for my research.

In addition, I would like to thank my colleagues at the Semantic Technologies Lab

for the helpful feedback they have provided over the past two years.

Finally, I would like to thank my family for their confidence in me, and the values

that they have instilled in me. I am extremely grateful for the tolerance that they have

demonstrated with me, and I feel so fortunate for their continued support of my academic

endeavours.


Contents

1 Introduction
  1.1 Ontologies and their Applications
  1.2 Expressive Ontologies
  1.3 Ontology Development in Literature
  1.4 The Ontology Lifecycle

2 Case Studies
  2.1 Case Study: An Ontology for Flow Modelling
    2.1.1 The PSL Ontology
    2.1.2 Subactivity Occurrence Orderings
  2.2 Case Study: An Ontology for Modelling 3-Dimensional Shapes

3 Requirements
  3.1 Semantic Requirements
  3.2 Related Work
  3.3 Intended Models as Semantic Requirements
    3.3.1 Representation Theorems
  3.4 Complete Characterization
    3.4.1 Relative Interpretation
  3.5 Partial Characterization
    3.5.1 Competency Questions
    3.5.2 Ontological Stance
  3.6 Discussion

4 Design
  4.1 Related Work
  4.2 Prototype Design
    4.2.1 Iterative Refinement
    4.2.2 Model Exploration for Refinement
    4.2.3 Reuse with Repositories
  4.3 Post-Verification Design
    4.3.1 Modularity
    4.3.2 Model Exploration for Diagnosis
  4.4 Discussion

5 Verification
  5.1 Related Work
  5.2 Consistency and Inconsistency Checking
  5.3 Verification of Semantic Requirements
    5.3.1 Case 1: Unintended proof found
    5.3.2 Case 2: No proof found
    5.3.3 Case 3: All requirements met
  5.4 Verification Assistance with Model Generation
  5.5 Discussion

6 Tuning
  6.1 Subset Development
  6.2 Lemma Generation
  6.3 Goal Simplification
  6.4 Related Work
  6.5 Discussion

7 Quality
  7.1 Related Work
    7.1.1 Quality as Correctness
    7.1.2 Quality as User Opinion
    7.1.3 Quality as a Numerical Approximation
  7.2 Evaluation
    7.2.1 Usability
  7.3 Discussion

8 An Environment for Expressive Ontology Development
  8.1 System Requirements
    8.1.1 Phase-Specific Requirements
    8.1.2 System Metadata
  8.2 Related Work
  8.3 Discussion

9 Conclusion
  9.1 Open Issues
  9.2 Future Work

A Glossary

B Axioms for Subactivity Occurrence Orderings

C Axioms for BoxWorld Ontology
  C.1 Axioms for sorts and part
  C.2 Axioms for solids and surfaces
  C.3 Axioms for edges, surfaces, and solids
  C.4 Axioms for points, edges, surfaces, and solids
  C.5 Axioms for the cyclic ordering of the edges in a surface
  C.6 Axioms for the cyclic ordering of the border edges in a solid
  C.7 Axioms for the cyclic ordering of concurrent ridges
  C.8 Axioms for the linear ordering of concurrent edges
  C.9 Axioms for connectedness of surfaces
  C.10 Axiom for T_conn-smo
  C.11 Axiom for T_closed-smo
  C.12 Definitions for above axioms

D MACLEOD Use Cases
  D.1 Use Case Outline
  D.2 Use Cases

Bibliography

Chapter 1

Introduction

The development of an ontology is a complicated process. It is quite difficult to construct

an ontology that is consistent, and it is even more difficult to construct one that we can

be certain accurately captures the semantics intended by the designer. Given the current

state of ontology development methodologies, the necessary rigor for ensuring such accu-

racy may often be impractical. These challenges are further aggravated when designing

expressive ontologies. The lifecycle we propose is motivated by the challenges of ontology

development; we present an approach to the development of expressive ontologies that is

driven by the semiautomatic verification of requirements, the central task of the lifecycle.

We briefly introduce the topic of ontologies, and more specifically the notion of expres-

sive ontologies. We then discuss some existing methodologies for ontology development,

and provide motivation for the methodology presented here. This work is a result of

experiences in the development of an extension of the PSL ontology [6, 28] and initial

efforts toward the development of the BoxWorld ontology; we will use these ontologies

to provide specific examples of some of the concepts and techniques presented here. The

PSL ontology is sufficient to represent flow modelling notations such as UML, BPMN,

and IDEF3, and the BoxWorld ontology was intended to represent 3-dimensional shapes

for use in manufacturing and assembly domains. We provide a brief overview of each



ontology in the following chapter, which will provide context for the examples provided in

later chapters. A glossary is supplied in Appendix A for some terms from mathematical

logic; the first appearance of such terms will be indicated in boldface. For additional

background in this subject, see [12, 4].

1.1 Ontologies and their Applications

The definition of an ontology has been presented numerous times in the literature; one

of the most well-known from [23] is:

An ontology is an explicit specification of a conceptualization.

This definition has justifiably been criticized for lack of clarity [35], however it is not

the focus of this work to dispute this, or to argue for or against any other definitions.

Therefore, we present the following slightly augmented definition that we find sufficient

for our use of the term:

Definition 1 An ontology is a theory that formally defines the semantics of a collection

of concepts associated with a particular domain of interest.

The “formal” stipulation of the definition is key in that ontologies should be able to be

written in machine-readable languages, thereby creating the opportunity for a variety

of useful applications. Two prominent examples of such applications are semantic inte-

gration and decision support systems. In semantic integration an ontology defines the

agreed-upon semantics of a domain (some software application) and translations are writ-

ten between two or more such ontologies. This allows for intercommunication between

systems that may use different sets of terms, or attribute different meanings to the same

terms. When applied in decision support systems, the ontology answers queries posed by

the user regarding the domain it defines, with the use of an automated theorem prover.

These queries can not only provide information about the domain in general but also


about particular instances of the domain. For example, an ontology for manufacturing

processes could provide information regarding a specific process that is occurring on a

particular day. The formal nature of ontologies allows them to be used with automated

reasoning procedures in order to provide answers to such queries.

1.2 Expressive Ontologies

Ontologies are written in a variety of logical languages. The expressive potential when

creating an ontology is a result of the language it is written in. In general, when selecting

a language there is a tradeoff between expressivity and performance. Ontologies written

in languages with very strict limitations on the concepts that can be expressed often

exhibit much better performance than those written in more expressive languages. For

example, an ontology written in RDF¹ can only express properties as binary relations,

but would potentially return the answer to a query much faster than an ontology written

in a language able to express n-ary relations. The potential performance issues can be

seen as a barrier to the use and development of more expressive ontologies. However by

restricting the development of ontologies to less-expressive languages, we also restrict the

potential application of ontologies in areas where we lack the ability to express certain

concepts. For this reason, the development of expressive ontologies such as those in first-

order logic is an important issue to consider. Although the ontology lifecycle described

here was developed to address issues that arise as a result of the semidecidable nature

of first-order logic, it is also applicable to ontology development with less expressive

languages.

¹ http://www.w3.org/RDF/


1.3 Ontology Development in Literature

Lifecycle methodologies for ontology development tend to cover the breadth of the de-

velopment process, however they do not provide techniques at the more detailed level.

The On-To-Knowledge Management (OTKM) [59] methodology covers the full lifecycle

of ontology development. It provides useful insights into the steps required both pre-

and post- application, but it does not provide exact details on how activities like testing

should be performed, or what is to be done if an ontology fails to satisfy a requirement.

METHONTOLOGY [16] covers areas of the lifecycle similar to what is presented in the

OTKM methodology, with a focus on the explanation of the concepts at each stage in

development. DILIGENT [53] provides guidance to support the distributed (geographi-

cally or organizationally) development of ontologies; in particular, the authors present a

method to enable the adaptation of an ontology from different parties.

The evaluation of an ontology is generally accepted as a key process in its development.

The main efforts in this area focus on taxonomy evaluation [20], with little focus on

methodologies that are semantically deeper than this. In addition, lifecycle methodologies

typically do not specify the requirements in a verifiable form. This is possibly due to

the issue of the semidecidability of first-order logic and the inherent intractability of theorem

provers that presents a challenge for the evaluation of test results. In any case, the existing

methodologies do not sufficiently account for the issues of ontology development for first-

order logic ontologies. Our goal is to address the shortcomings of existing methodologies

by presenting a novel ontology development lifecycle for expressive ontologies.

1.4 The Ontology Lifecycle

In this thesis, the scope of the lifecycle that we consider is restricted to the development

and verification of the ontology; it does not include other tasks typically associated

with development lifecycles, such as planning, analysis, application, or maintenance. The


lifecycle we propose (see Figure 1.1) is intended to serve as a structured framework that

provides guidance during the ontology development process. The development phases

identified for ontologies parallel those commonly found in development lifecycles for soft-

ware engineering [10]. We draw an analogy between software engineering and ontology

development to motivate the purpose of each phase. The development lifecycle we present

addresses the limitations of existing methodologies by providing development phases that

rigorously specify and verify the semantic requirements for the ontology, accompanied by

guidance regarding pragmatic considerations for each phase. It has also been designed

to employ the use of automated reasoners (theorem provers and model generators) to

assist the tasks required in each phase, most notably allowing for the semiautomatic ver-

ification of semantic requirements. The semantic requirements are based on the notion

of intended models for an ontology. We emphasize that in this work, the term model

is used in its logical sense (as defined in the glossary provided in Appendix A) - this is

distinct from its other uses such as in the natural or applied sciences. Our experiences are

primarily restricted to the use of Prover9 and Mace4² for automated theorem proving

and model generation, respectively; however our lifecycle methodology does not make

considerations specific to these tools, so its use is not restricted to them.

We define five phases in the ontology lifecycle:

• The Design phase produces an ontology to axiomatize the class of models that cap-

ture the requirements. The feedback loop that is shown in Figure 1.1 between the

Design phase and the Requirements phase occurs with the process of Iterative Re-

finement in the early stages of development. The requirements cannot be specified

formally until the design of the ontology and the understanding of its requirements

has matured sufficiently. Beginning development with the Design phase rather than

the Requirements phase is useful when a deep understanding of the requirements

has not yet been reached.

² http://www.cs.unm.edu/~mccune/prover9/


Figure 1.1: The Ontology Lifecycle (Requirements, Design, Verification, Tuning, and Application phases).

• The Requirements phase produces a specification of the intended models for the

ontology.

• The Verification phase guarantees that the intended models for the ontology which

are specified in the Requirements phase are equivalent to the models of the axioms

which are produced in the Design phase.

• The Tuning phase addresses the pragmatic issue of dealing with cases in which the

theorem provers used in the Verification phase fail to return a definitive answer.

• The Application phase covers the different ways in which the ontology is used,

such as decision support systems, semantic integration, and search. This phase

is included only for contextual purposes; as noted earlier, the tasks involved in

ontology application are outside of the scope of the lifecycle presented here.

Each phase in the ontology lifecycle is associated in some way with the set of reasoning

problems that are defined in the Requirements phase to specify the intended models for

the ontology. In general, these reasoning problems are entailment problems of the form:

Tonto ∪ Σdomain |= Φ


Figure 1.2: Reasoning problems and stages of the ontology lifecycle. The Design phase supplies the antecedent Tonto ∪ Σdomain, the Requirements phase supplies the consequent Φ, and establishing the entailment as a whole is the task of the Verification phase.

defined with respect to the axioms of the ontology Tonto, a domain theory Σdomain that

uses the ontology, and a query Φ that formalizes some aspect of the intended models,

specified in the Requirements phase. This allows for the use of automated theorem

provers to verify that the axiomatization of the ontology satisfies its requirements. The

relationships between the stages of the ontology lifecycle and the different aspects of the

reasoning problems are shown in Figure 1.2.

The Design phase provides the axioms Tonto ∪ Σdomain that form the antecedent of

the reasoning problem. In the Verification phase, we use theorem provers to determine

whether or not the sentences that capture the requirements are indeed entailed by the

axioms of the ontology.
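
To make this concrete, the following sketch shows how such an entailment problem could be posed to Prover9 (the theorem prover used in this work). The axioms and relation names below are illustrative placeholders only, not axioms of any ontology discussed in this thesis; in an actual run the assumptions list would contain Tonto and Σdomain, and the goals list would contain the requirement Φ.

    % Hypothetical Prover9 input for a reasoning problem of the form
    % T_onto u Sigma_domain |= Phi.  The formulas below are toy placeholders.
    formulas(assumptions).
      % standing in for T_onto: a toy axiom asserting that precedes is transitive
      all x all y all z (precedes(x,y) & precedes(y,z) -> precedes(x,z)).
      % standing in for Sigma_domain: a toy domain theory with three occurrences
      precedes(o1,o2).
      precedes(o2,o3).
    end_of_list.

    formulas(goals).
      % standing in for Phi: the requirement to be verified
      precedes(o1,o3).
    end_of_list.

Prover9 reports a proof exactly when the goal is entailed by the assumptions; the interpretation of the possible outcomes of such runs is discussed in Chapter 5.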

In the chapters that follow we will discuss each phase of the lifecycle in more detail,

presenting not only the methodology but pragmatic considerations associated with it, as

well as some examples from our development experiences with the PSL and the BoxWorld

ontologies. Following this we provide a definition for ontology quality, and discuss how

it might be evaluated in practice. Finally, we discuss the need for a Common Logic

development environment to support this lifecycle - A Common Logic Environment for

Ontology Development (MACLEOD) - and propose some initial requirements.

Some of the results presented here have been previously published in [42, 43, 33].

Chapter 2

Case Studies

As mentioned in Chapter 1, this work is a result of our experiences with the develop-

ment of two ontologies - an extension to the PSL ontology, and the BoxWorld ontology.

The PSL ontology was already well-established at this point, and we possessed a good

understanding of the requirements for the extension. The BoxWorld ontology, on the

other hand, was approached from scratch with only an intuitive idea of what the require-

ments might be. In this way the two case studies are examples of development at very

different levels of maturity, an important factor in the lifecycle. Experience with both

cases provided greater insight into the necessary lifecycle tasks - how the maturity of the

ontology being developed determines the confidence in the correctness of its axioms and

affects how certain phases should be performed. Both case studies are introduced here;

we refer to them to provide specific examples to illustrate the methodology presented in

the following chapters.

2.1 Case Study: An Ontology for Flow Modelling

The first case study results from an effort to provide a process ontology for flow modelling

formalisms such as UML activity diagrams, BPMN, and IDEF3. For example, consider

the UML activity diagram in Figure 2.1; any process ontology that captures the intended



Figure 2.1: Example of a UML activity diagram (with subactivities A, B, C, D, E, and F).

semantics of such a diagram needs to be able to represent the constraints that are implicit in

the notions of split/merge and fork/join nodes. Intuitively, there are three possible ways

in which the process described by the activity diagram can occur; in all three occurrences

of the process, there is an initial occurrence of the subactivity A and a final occurrence of

the subactivity E. One occurrence of the process contains an occurrence of the subactivity

F , while the other two contain occurrences of the subactivities B, C, and D. In order

to develop such an ontology, we began with the Process Specification Language (PSL),

but in order to capture the intended semantics of the UML constructs, we also needed

to develop an extension to the ontology that explicitly captured the associated ordering

and occurrence constraints.

In this section, we give a brief introduction to the PSL Ontology and an overview of

the relations for the extension to the PSL Ontology.


2.1.1 The PSL Ontology

The Process Specification Language (PSL) [6, 28, 25] has been designed to facilitate

correct and complete exchange of process information.¹ The primary purpose of PSL

is to enable the semantic interoperability of manufacturing process descriptions between

manufacturing engineering and business software applications such as process planning,

scheduling, workflow, and project management. Additional applications of PSL include

business process design and analysis, and enterprise modelling.

The PSL Ontology is a modular set of theories in the language of first-order logic.

All core theories within the ontology are consistent extensions of a theory referred to as

PSL-Core, which introduces the basic ontological commitment to a domain of activities,

activity occurrences, timepoints, and objects that participate in activities. Additional

core theories capture the basic intuitions for the composition of activities, and the rela-

tionship between the occurrence of a complex activity and occurrences of its subactivities.

In order to formally specify a broad variety of properties and constraints on complex

activities, we need to explicitly describe and quantify over (in the logical sense of variable

binding) complex activities and their occurrences. Within the PSL Ontology, complex

activities and occurrences of activities are elements of the domain and the occurrence of

relation is used to capture the relationship between different occurrences of the same

activity.

A second requirement is to specify composition of activities and occurrences. The PSL

Ontology uses the subactivity relation to capture the basic intuitions for the composition

of activities. For example, the activity of making coffee might be composed of a number

of subactivities such as: remove the old coffee grounds, add new coffee grounds, add

water, and so on. Complex activities are composed of sets of atomic activities, which in

¹ PSL has been published as an International Standard (ISO 18629) within the International Organisation for Standardisation. The full set of axioms (which we call Tpsl) in the Common Logic Interchange Format is available at http://www.mel.nist.gov/psl/ontology.html.


turn are either primitive (i.e., they have no proper subactivities) or they are concurrent

combinations of primitive activities.

Corresponding to the composition relation over activities, there is subactivityoccurrence,

the composition relation over activity occurrences. Given an occurrence of a complex ac-

tivity, subactivity occurrences are occurrences of subactivities of the complex activity.

Finally, to specify ordering constraints over the subactivity occurrences of a complex

activity, the PSL Ontology uses the min precedes(s1, s2, a) relation to denote that subac-

tivity occurrence s1 precedes the subactivity occurrence s2 in occurrences of the complex

activity a. Note that there could be other subactivity occurrences between s1 and s2.

We use next subocc(s1, s2, a) to denote that s2 is the next subactivity occurrence after

s1 in occurrences of the complex activity a.
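
To illustrate this intuition (and only as a sketch, not a quotation of the axioms of Tpsl), one would expect next subocc to be definable from min precedes as an immediate-successor relation, roughly as follows in Prover9-style syntax (relation names are written with underscores, as they would appear in a machine-readable axiom file):

    % Illustrative sketch only -- not the official axiom from T_psl.
    % next_subocc(s1,s2,a): s2 follows s1 in an activity tree for a with no
    % subactivity occurrence strictly between them.
    all s1 all s2 all a (next_subocc(s1,s2,a) <->
        (min_precedes(s1,s2,a) &
         -(exists s3 (min_precedes(s1,s3,a) & min_precedes(s3,s2,a))))).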

The basic structure that characterises occurrences of complex activities within models

of the ontology is the activity tree; the relation root(s, a) denotes that the subactivity

occurrence s is the root of an activity tree for a. Elements of the tree are ordered by

the min precedes relation; each branch of an activity tree is a linearly ordered set of

occurrences of subactivities of the complex activity. The relation same tree(s1, s2, a) is used to

denote that s1 and s2 are both elements of the same activity tree for a. Finally, there is

a one-to-one correspondence between occurrences of complex activities and branches of

the associated activity trees.

In a sense, an activity tree is a microcosm of the occurrence tree, in which we consider

all of the ways in which the world unfolds in the context of an occurrence of the complex

activity. For example, the activity of making coffee described earlier could be performed

by emptying the old coffee before or after adding water. Different subactivities may

occur on different branches of the activity tree — different occurrences of an activity

may have different subactivity occurrences or different orderings on the same subactivity

occurrences. The relation mono(s1, s2, a) indicates that s1 and s2 are occurrences of the

same subactivity on different branches of the activity tree for a.


2.1.2 Subactivity Occurrence Orderings

A partial ordering over a set of subactivity occurrences is used to represent flow models

in this extension of PSL. This partial ordering is embedded into an activity tree such that

the branches of the activity tree correspond to a set of linear extensions for suborderings

of the partial ordering. For example, the subactivity occurrence ordering corresponding

to the UML activity diagram in Figure 2.1 can be found in Figure 2.2, while the activity

tree into which this partial ordering is embedded can be found in Figure 2.3.

Figure 2.2: Example of a subactivity occurrence ordering.

Figure 2.3: An activity tree corresponding to the subactivity occurrence ordering in Figure 2.2.

Three relations were introduced to specify the relationship between the partial order-

ing (referred to as the subactivity occurrence ordering) and the activity tree:

• soo(s, a): denotes that the activity occurrence s is an element of the subactivity

occurrence ordering for the activity a.


• soo precedes(s1, s2, a): captures the ordering over the elements.

• preserves(s1, s2, a): holds whenever the mono relation preserves the min precedes

relation; that is, if s1 min precedes s2, then there is no element of the activity tree

that is mono to s2 that min precedes an element of the activity tree that is mono

to s1.

For example², the partial ordering in Figure 2.2 can be defined by:

soo(o^A_1, a) ∧ soo(o^B_2, a) ∧ soo(o^C_3, a) ∧ soo(o^D_4, a)

∧ soo(o^E_5, a) ∧ soo(o^F_6, a) ∧ soo precedes(o^A_1, o^B_2, a)

∧ soo precedes(o^B_2, o^C_3, a) ∧ soo precedes(o^C_3, o^E_5, a)

∧ soo precedes(o^B_2, o^D_4, a) ∧ soo precedes(o^D_4, o^E_5, a)

∧ soo precedes(o^A_1, o^F_6, a) ∧ soo precedes(o^F_6, o^E_5, a) (2.1)

and the activity tree in Figure 2.3 can be specified by

min precedes(o^A_1, o^B_2, a) (2.2)

∧ min precedes(o^B_2, o^C_3, a) ∧ min precedes(o^C_3, o^D_4, a)

∧ min precedes(o^D_4, o^E_5, a) ∧ min precedes(o^B_2, o^D_6, a)

∧ min precedes(o^D_6, o^C_7, a) ∧ min precedes(o^C_7, o^E_8, a)

∧ min precedes(o^A_1, o^F_9, a) ∧ min precedes(o^F_9, o^E_10, a)

Two branches of the activity tree correspond to linear extensions of the subordering of

the partial ordering in Figure 2.2 consisting of the elements {o^A_1, o^B_2, o^C_3, o^D_4, o^E_5}, while the

remaining branch corresponds to the subordering consisting of the elements {o^A_1, o^F_6, o^E_5}.

The complete set of axioms for the subactivity occurrence ordering extension to the

PSL Ontology can be found in Appendix B.

² Each activity occurrence is an occurrence of a unique activity, but activities can have multiple occurrences, so we label activity occurrences using the following convention: for activity occurrence o^j_i, i is a unique label for the occurrence and j denotes the activity of which it is an occurrence. For example, o^C_3 and o^C_7 are two occurrences of the activity C.


2.2 Case Study: An Ontology for Modelling 3-Dimensional Shapes

The second case study is an ontology that is still under development - the BoxWorld on-

tology is being developed to represent classes of 3-dimensional shapes. It is motivated

by earlier work that used an ontology of 2-dimensional shapes to develop an ontology for

cutting processes by describing the associated transformations of the shapes [29]. Once

developed, the BoxWorld ontology could be extended to represent a wider range of man-

ufacturing and assembly processes, including both folding and cutting. The result could

have many useful applications in industry, such as for sheet metal processes.

The following primitive classes of objects have been identified; they are represented

as fluents in the ontology:

• point

• edge

• surface

• solid

Points are part of edges, edges are part of surfaces, and surfaces are part of solids. We

acknowledge that the name “solid” is somewhat counterintuitive; it is in reference to the

object having three dimensions, not the property of not being hollow.

The ontology includes the following relationships between objects:

• part(x, y)

• meet(x, y, z)

• connected(x, y)
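
As an illustration of how these primitives interact, one plausible reading of the part-of hierarchy described above is sketched below in Prover9-style syntax. This is a sketch only; the axioms actually developed for the ontology are those of Appendix C.

    % Illustrative sketch only -- the developed axioms are given in Appendix C.
    formulas(assumptions).
      % one possible reading of "points are part of edges, edges are part of
      % surfaces, and surfaces are part of solids"
      all x all y (point(x) & part(x,y) -> edge(y) | surface(y) | solid(y)).
      all x all y (edge(x) & part(x,y) -> surface(y) | solid(y)).
      all x all y (surface(x) & part(x,y) -> solid(y)).
    end_of_list.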


Figure 2.4: A model of the BoxWorld ontology (surfaces f1 and f2, edges e1-e7, vertices v1-v6).

For example, the solid shown in Figure 2.4 would be specified as follows:


point(p1) ∧ point(p2) ∧ point(p3) ∧ point(p4) ∧ point(p5) ∧ point(p6)

∧ edge(e1) ∧ edge(e2) ∧ edge(e3) ∧ edge(e4) ∧ edge(e5) ∧ edge(e6) ∧ edge(e7)

∧ surface(f1) ∧ surface(f2) ∧ meets(f1, f2, e4)

∧ part(p1, e1) ∧ part(p2, e1) ∧ part(p1, e2) ∧ part(p3, e2) ∧ part(p3, e3) ∧ part(p4, e3)

∧ part(p2, e4) ∧ part(p4, e4) ∧ part(p2, e5) ∧ part(p5, e5) ∧ part(p4, e6) ∧ part(p6, e6)

∧ part(p5, e7) ∧ part(p6, e7) ∧ part(e1, f1) ∧ part(e2, f1) ∧ part(e3, f1) ∧ part(e4, f1)

∧ part(e4, f2) ∧ part(e5, f2) ∧ part(e6, f2) ∧ part(e7, f2) ∧ meets(e1, e2, p1)

∧ meets(e2, e3, p3) ∧ meets(e3, e4, p4) ∧ meets(e4, e1, p2) ∧ meets(e4, e5, p2)

∧ meets(e5, e7, p5) ∧ meets(e7, e6, p6) ∧ meets(e6, e4, p4) (2.3)

The current version of the axiomatization of BoxWorld is included in Appendix C.

Chapter 3

Requirements

Returning to our earlier analogy to the software lifecycle, let us consider the Requirements

phase in more detail. In general, the purpose of this phase is to produce some form of

software specification document [41] detailing what the software must do, and how it

must do it, (i.e. functional and non-functional requirements [10]). In the context of

our lifecycle, we aim to define the requirements for a semantically correct ontology with

the relationship between the intended models for the ontology, and the actual models

of its axiomatization. An axiomatization is semantically correct if and only if it does

not include any unintended models, and it does not omit any intended models. This is

well-illustrated by the popular depiction from [37], that has been reproduced in Figure

3.1.

In the remainder of this chapter, we introduce the notion of semantic requirements,

and briefly discuss existing methods for specifying requirements. We then present our

approach to specifying requirements with intended models, and show how they may be

expressed as reasoning problems. Later, this formalization of semantic requirements will

allow for semi-automatic verification with an automated theorem prover.



Figure 3.1: The relationship between intended models for an ontology and the models of the ontology's axioms (adapted from [37]). Within the set of models M(L) of the logical language L, the intended models I(L) determined by the conceptualization C overlap the ontology's models, leaving regions of unintended models and omitted models. We refer to the ontology's models as Mod(Tonto), and we refer to the intended models I(L) as Mintended.


3.1 Semantic Requirements

Formally, we refer to the set of intended models for an ontology as Monto, while Mod(Tonto)

refers to the models of the axiomatization of the ontology, which is denoted Tonto. These

notions of intended and actual models are key to the definition of semantic requirements:

Definition 2 Semantic requirements specify the conditions for semantic correctness on

the intended models for the ontology, and/or models of the ontology’s axioms. There are

two types of such conditions:

M ∈ Mod(Tonto) ⇒ M ∈ Monto

and

M ∈ Monto ⇒ M ∈ Mod(Tonto)

The violation of these conditions is a semantic error; there are two types of such errors,

defined formally as follows:

Definition 3 An error of unintended models is present in the ontology if and only if

there exists any model such that

∃M ∈ Mod(Tonto), M ∉ Monto

Definition 4 An error of omitted models is present in the ontology if and only if there

exists any model such that

∃M ∈ Monto, M ∉ Mod(Tonto)

Note that the notions of intended, unintended, and omitted models represented in the

areas in Figure 3.1 from [37] are equivalent to Monto and the definitions of unintended

models and omitted models (respectively) presented here.


3.2 Related Work

In general, requirements serve to scope a project’s development and provide a means for

its verification. It is important that they are specified in as much detail as possible. As

noted in Chapter 1, we find that the level of guidance is generally not satisfactory for

the development of such rigorous requirements.

The methodology that was used in the design of the TOVE Ontology for enterprise

modelling [24] introduced the notion of competency questions to define the requirements

of an ontology and to guide the formal definition of a set of axioms for the ontology. This

allowed for a more precise, potentially testable means for requirements specification;

however it lacked guidance in terms of integration with a complete development method-

ology. Although lifecycle methodologies, most notably METHONTOLOGY [16], would

later prescribe the use of competency questions for the specification of requirements, de-

tails regarding the implementation within the lifecycle are still lacking. The NeOn project¹

attempted to address this lack of detail with their Ontology Requirements Specifica-

tion Document (ORSD) methodology [57]. The authors provide thoughtful guidelines,

templates and examples for support. Nevertheless, we find that the prescribed method

fails to take advantage of the model-theoretic properties related to competency ques-

tions, consequently it relies too heavily on human decision-making for the development

of requirements.

Another major issue we find with existing work is that the type of requirements

necessary for expressive ontologies has generally been neglected. This is an issue because

the type of requirements that might be specified for a less expressive, taxonomy-style

ontology (e.g. OntoClean [38]) are not semantically deep enough to fully express the

requirements of a more complex, first-order logic ontology; in other words, they do not

completely specify the requirements for semantic correctness. Complex and expressive

¹ http://www.neon-project.org


ontologies are naturally susceptible to deeper semantic errors. In order to detect and

prevent such flaws, we need the requirements to provide a complete characterization of all

models we require the ontology to have, up to elementary equivalence. Although such

a characterization is possible, to the best of our knowledge it has not been incorporated

in any development methodologies to date.

3.3 Intended Models as Semantic Requirements

In current ontology research, the languages for formal ontologies (such as RDFS, OWL,

and Common Logic) are closely related to mathematical logic, in which the semantics

are based on the notion of an interpretation. If a sentence is true in an interpretation, we

say that the sentence is satisfied by the interpretation. If every axiom in the ontology is

satisfied by the interpretation, then the interpretation is called a model of the ontology.

With a formal ontology, the content of the ontology is specified as a theory, so that a

sentence is consistent with that theory if and only if there exists a model of the theory

that satisfies the sentence; a sentence can be deduced if and only if it is satisfied by

all models of the theory. Therefore, the semantics of the ontology’s terminology can be

characterized by this implicit set of models.

Figure 3.2: A model of the BoxWorld ontology (surfaces f1, f2, f3, edges e1-e9, vertices v1-v7).


Example 1 Consider the following sentence, regarding the BoxWorld ontology:

(∀x) edge(x) ⊃ (∃y, z) surface(y) ∧ surface(z) ∧ (y ≠ z) ∧ part(x, y) ∧ part(x, z)

This states that every edge is a part of at least two different surfaces. The solid shown

in Figure 3.2 represents a polygon that is a model of the BoxWorld ontology. This model

satisfies the above sentence, therefore we can say that the sentence is consistent with the

ontology. However if we consider the solid shown in Figure 2.4 (which we know is also

a model of the ontology) we see that it does not satisfy the above sentence, since some of

the edges are only part of one surface; therefore, this sentence cannot be entailed by the

BoxWorld ontology.

We build on this observation and propose that the requirements necessary for the

development of an expressive ontology can be specified by defining the set of models that

we intend the ontology being developed to have, and expressing this intention in the form

of the semantic requirements defined in Section 3.1. In other words, we specify a set of

intended models and require that the models of the axioms we develop be elementarily

equivalent to them. Intended models are specified with respect to some well-understood

theories (typically for classes of mathematical structures such as partial orderings, graph

theory, and geometry). The extensions of the relations in the model are then specified

with respect to properties of these well-understood theories.

Definition 5 We consider a theory to be well-understood if it has a representation the-

orem, meaning that its models have been characterized up to elementary equivalence.

Example 2 For example, the intended models for the subactivity occurrence ordering

extension to the PSL Ontology are intuitively specified by two properties:

• the partial ordering over the subactivity occurrences;

• the mapping that embeds the partial ordering into the activity tree.


Formally, the intended models Msoo, for the subactivity occurrence ordering extension are

defined as follows:

Let Msoo be the following class of structures such that for any M ∈ Msoo,

1. M is an extension of a model of Tpsl (i.e. the PSL Ontology);

2. for each activity tree τi, there exists a unique partial ordering ϱi = (Pi, ≺) and a mapping θ : τi → ϱi such that

(a) ⟨s1, s2, a⟩ ∈ min precedes ⇒ θ(s2) ⊀ θ(s1);

(b) ⟨θ(s), s, a⟩ ∈ mono;

(c) comparable elements in ϱi are the image of comparable elements in τi;

3. ⟨s, a⟩ ∈ soo iff s ∈ Pi;

4. ⟨s1, s2, a⟩ ∈ soo precedes iff s1 ≺ s2.

From a mathematical perspective these semantic requirements are formalized by the

notion of representation theorems.

3.3.1 Representation Theorems

Representation theorems specify the (intended) models of an axiomatization up to ele-

mentary equivalence. Representation theorems are proven in two parts – we first prove

every structure in the class is a model of the ontology and then prove that every model

of the ontology is elementarily equivalent to some structure in the class.

Example 3 For the new extension Tsoo to the PSL Ontology, the representation theorem

is stated as follows:

Any structure M ∈ Msoo is isomorphic to a model of Tsoo ∪ Tpsl.

Any model of Tsoo ∪ Tpsl is isomorphic to a structure in Msoo.


The characterization up to elementary equivalence of the models of an ontology through a

representation theorem corresponds to the conditions for semantic correctness presented

in Definition 2; it has several distinct advantages. First, unintended models are more eas-

ily identified, since the representation theorems characterize all models of the ontology.

We also gain insight into any implicit assumptions within the axiomatization which may

actually eliminate models that were intended. Second, any decidability and complexity

results that have been established for the classes of mathematical structures in the repre-

sentation theorems can be extended to the ontology itself. Finally, the characterization

of models supports the specification of semantic mappings to other ontologies, since such

mappings between ontologies preserve substructures of their models.

Representation theorems are distinct from the notion of the completeness of an on-

tology. A theory T is complete if and only if for any sentence Φ, either T |= Φ or T |= ¬Φ.

The ontologies that we develop are almost never complete in this sense. Nevertheless,

we can consider representation theorems to be demonstration that the ontology Tonto is

complete with respect to its requirements (i.e. set of intended models Monto). This allows

us to say that Tonto |= Φ if and only if Monto |= Φ (that is, the ontology entails a sentence

if and only if the class of intended models entails the sentence).

The typical way to prove the Representation Theorem for an ontology is to explicitly

construct the models of the ontology in the metatheory and then show that these models

are equivalent to the specification of the intended models of the ontology using classes of

mathematical structures.

3.4 Complete Characterization

To achieve the greatest possible detail and accuracy in development, we must completely

characterize the intended models with our requirements. The previous section discussed

how representation theorems could provide such a characterization of models; however, a


primary challenge for someone attempting to prove representation theorems is that it can

be quite difficult to characterize the models of an ontology up to elementary equivalence.

In what follows, we show how a theorem about the relationship between the class of

the ontology’s models and the class of intended models can be replaced by a theorem

about the relationship between the ontology (a theory) and the theory axiomatizing the

intended models (assuming that such axiomatization is known). Later in the lifecycle, we

can use automated reasoners to prove the latter relationship and thus verify an ontology

in a (semi-)automated way.

3.4.1 Relative Interpretation

We will adopt the following definition from [12]:

Definition 6 An interpretation π of a theory TA with language LA into a theory TB with

language LB is a function on the set of parameters of LA s.t.

1. π assigns to ∀ a formula π∀ of LB in which at most the variable v1 occurs free, such

that TB |= (∃v1) π∀

2. π assigns to each n-place relation symbol P a formula πP of LB in which at most

the variables v1, ..., vn occur free.

3. For any sentence σ in LA, TA |= σ ⇒ TB |= π(σ)

Example 4 Consider the theory of linear ordering (see Figure 3.3) and the theory of

linear time presented in [40] (see Figure 3.4). The theory of linear ordering can be

interpreted into the theory of linear time, with the following formulae:

(π∀) (∀x)timepoint(x)

(πlt) lt(x, y) ≡ before(x, y)


Any sentence that is entailed by the theory of linear ordering will have a corresponding

sentence, constructed with the interpretation (∀ as π∀, and lt as πlt), that will be entailed

by the theory of linear time.

(∀x)¬lt(x, x) (3.1)

(∀x, y, z)lt(x, y) ∧ lt(y, z) ⊃ lt(x, z) (3.2)

(∀x, y)lt(x, y) ∨ lt(y, x) ∨ (x = y) (3.3)

Figure 3.3: The Theory of Linear Ordering.

(∀x)timepoint(x) ⊃ ¬before(x, x) (3.4)

(∀x, y, z) timepoint(x) ∧ timepoint(y) ∧ timepoint(z) ∧ before(x, y) ∧ before(y, z) ⊃ before(x, z) (3.5)

(∀x, y) timepoint(x) ∧ timepoint(y) ⊃ before(x, y) ∨ before(y, x) ∨ (x = y) (3.6)

Figure 3.4: The Theory of Linear Time [40].

Thus, the mapping π is an interpretation of TA if it preserves the theorems of TA.

We will say that two theories TA and TB are definably equivalent iff they are mutually

interpretable, i.e. TA is interpretable in TB and TB is interpretable in TA. Interpretability

requires translations to exist:

Definition 7 Let π be an interpretation of a theory TA with language LA into a theory

TB with language LB, such that LA and LB have disjoint nonlogical lexicons.


Translation definitions for π are sentences in the language LA ∪ LB of the form

(∀x) p_i(x) ≡ ϕ(x)

where p_i(x) is a relation symbol in LA and ϕ(x) is a formula in LB.

Intuitively, translation definitions can be considered to be an axiomatization of the

interpretation of TA into TB.

Reducibility

The key to representing the requirements for a relative interpretation as entailment prob-

lems is the following theorem of reducibility from [31]:

Theorem 1 A theory T is definably equivalent with a set of theories T1, ..., Tn iff the

class of models Mod(T) can be represented by Mod(T1) ∪ ... ∪ Mod(Tn).

The necessary direction of a representation theorem (i.e. if a structure is intended,

then it is a model of the ontology’s axiomatization) can be stated as

M ∈ Monto ⇒ M ∈ Mod(Tonto)

If we suppose that the theory that axiomatizes Monto is the union of some previously

known theories T1, ..., Tn, then by Theorem 1 we need to show that Tonto interprets

T1 ∪ ... ∪ Tn. If ∆ is the set of translation definitions for this relative interpretation,

then the necessary direction of the representation theorem is equivalent to the following

reasoning task:

Tonto ∪ ∆ |= T1 ∪ ... ∪ Tn (Rep-1)

The sufficient direction of a representation theorem (any model of the ontology’s axiom-

atization is also an intended model) can be stated as

M ∈ Mod(Tonto) ⇒ M ∈ Monto


In this case, we need to show that T1 ∪ ... ∪ Tn interprets Tonto. If Π is the set of translation

definitions for this relative interpretation, the sufficient direction of the representation

theorem is equivalent to the following reasoning task:

T1 ∪ ... ∪ Tn ∪ Π |= Tonto (Rep-2)

By Theorem 1, Mod(Tonto) is representable by Monto iff T1 ∪ · · · ∪ Tn is definably

equivalent to Tonto, which we can show by proving both of the above reasoning tasks.

Therefore, to completely characterize the semantic requirements for an ontology we must

prove both directions (Rep-1 and Rep-2) of the relative interpretation between the

ontology and a well-understood theory.
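
As a small illustration of a task of the form (Rep-1), consider the interpretation of the theory of linear ordering into the theory of linear time from Example 4. The sketch below (in Prover9-style syntax) lists the linear time axioms of Figure 3.4 together with the translation definition πlt as assumptions, and the translation of axiom (3.1), relativized to timepoint by π∀, as the goal; the remaining translated axioms would be checked in the same way.

    % Sketch: one goal of a (Rep-1)-style task for the interpretation in Example 4.
    formulas(assumptions).
      % the theory of linear time (Figure 3.4)
      all x (timepoint(x) -> -before(x,x)).
      all x all y all z (timepoint(x) & timepoint(y) & timepoint(z)
          & before(x,y) & before(y,z) -> before(x,z)).
      all x all y (timepoint(x) & timepoint(y)
          -> before(x,y) | before(y,x) | x = y).
      % translation definition (pi_lt)
      all x all y (lt(x,y) <-> before(x,y)).
    end_of_list.

    formulas(goals).
      % translated axiom (3.1) of the theory of linear ordering, relativized to timepoint
      all x (timepoint(x) -> -lt(x,x)).
    end_of_list.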

3.5 Partial Characterization

Often, we lack a complete understanding of the intended models necessary to define a

representation theorem; this is especially the case with less mature ontologies. In this

case, we cannot directly derive the requirements, as presented in the previous section.

Rather than proving the Representation Theorem directly, we can specify the semantic

requirements for the ontology as competency questions and Ontological Stance scenarios,

as they implicitly specify properties of the intended models. Note that these semantic

requirements are also formulated for evaluation by automated reasoners so that regardless

of the maturity of the ontology, the requirements that we specify will be implemented in

the same way in the lifecycle.

3.5.1 Competency Questions

Following [24, 30, 61], competency questions are queries that impose demands on the

expressiveness of the underlying ontology. Intuitively, the ontology must be able to rep-

resent these questions and characterize the answers using the terminology. The relation-

ship between competency questions and the semantic requirements is that the associated


query must be provable from the axioms of the ontology alone. Since a sentence is prov-

able if and only if it is satisfied by all models, competency questions implicitly specify

properties of the ontology’s intended models.

Example 5 Competency questions for the subactivity occurrence ordering extension of

PSL include the following:

Which subactivities can possibly occur next after an occurrence of the activity a1?

(∀o, s1) occurrence(s1, a1) ∧ occurrence(o, a) (3.7)

∧ subactivity occurrence(s1, o)

⊃ (∃a2, s2) (occurrence(s2, a2) ∧ next subocc(s1, s2, a))

Does there exist a point in an activity tree for a after which the same subactivities

occur?

(∃a, a1, s1) subactivity(a1, a) ∧ occurrence of(s1, a1)

∧ ((∀o1, o2) occurrence of(o1, a) ∧ occurrence of(o2, a)

∧ subactivity occurrence(s1, o1)

∧ subactivity occurrence(s1, o2)

∧ min precedes(s1, s2, a)

⊃ (∃s3) subactivity occurrence(s3, o2)

∧ min precedes(s1, s3, a) ∧ mono(s2, s3, a) (3.8)

In addition to the traditional method of identifying competency questions (use cases),

a direct approach to specifying a partial characterization of the intended models is to

develop competency questions that describe known properties of the required intended

models. For example, the subactivity occurrence ordering % introduced in the definition

of the intended models Msoo is a partial ordering, and this corresponds to the competency


question which asserts that the soo precedes relation is also a partial ordering, and hence

is a transitive relation:

(∀s1, s2, s3, a) soo precedes(s1, s2, a) (3.9)

∧ soo precedes(s2, s3, a) ⊃ soo precedes(s1, s3, a)

Recall that sentences such as these constitute the consequent Φ of a reasoning problem

(see Figure 1.2), and that they are supposed to be entailed by the axioms of the ontology

Tonto together with a domain theory Tdomain (which in this case is a process description

that formalizes a specific UML activity diagram). It is in this sense that competency

questions are semantic requirements – recall the second condition for semantic correctness

presented in Definition 2:

M ∈ Monto ⇒ M ∈ Mod(Tonto)

The competency question Φ is a partial characterization of the intended models such

that Monto |= Φ. Therefore by Definition 2 we should require that Mod(Tonto) |= Φ; we

can verify this by proving Tonto |= Φ.
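
In the Verification phase, such a competency question is simply placed in the goals list of the theorem prover input, with the axioms of the ontology (and, where needed, a domain theory) listed as assumptions. A minimal sketch using the transitivity question (3.9) as the goal is shown below; the assumptions list is elided here and would contain Tpsl together with the Appendix B axioms.

    % Sketch: competency question (3.9) posed as a Prover9 goal.
    formulas(assumptions).
      % ... the axioms of T_psl and the subactivity occurrence ordering
      % extension (Appendix B) would be listed here ...
    end_of_list.

    formulas(goals).
      all s1 all s2 all s3 all a (soo_precedes(s1,s2,a) & soo_precedes(s2,s3,a)
          -> soo_precedes(s1,s3,a)).
    end_of_list.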

3.5.2 Ontological Stance

When developing an ontology for semantic integration, in addition to competency ques-

tions, the Ontological Stance can be used as a source of requirements. We can say that

two software systems are semantically integrated if their sets of intended models are

equivalent. However, systems cannot exchange the models themselves – they can only

exchange sentences in the formal language that they use to represent their knowledge.

We must be able to guarantee that the inferences made with sentences exchanged in this

way are equivalent to the inferences made with respect to the system’s intended models –

given some input the application uses these intended models to infer the correct output.

This leads to the notion of the Ontological Stance [26] in which a software application is


described as if it were an inference system with an axiomatized ontology which is used

to predict the set of sentences that the inference system decides to be satisfiable, as

illustrated in Figure 3.5 taken from [26].

Figure 3.5: The Ontological Stance [26]: Tdomain ∪ Tonto |= Φ

Therefore the antecedent for the reasoning problem that corresponds to the Onto-

logical Stance is slightly different – Tonto is the axiomatization of the ontology for the

software application while Tdomain is the axiomatization of the input to the software ap-

plication and Φ is the axiomatization of the output. As we saw earlier with competency

questions, these sentences will constitute the reasoning problems that define the semantic

requirements of the ontology with respect to its intended models; they also correspond

to the conditions in Definition 2 in a similar way. Given some input to the application,

Tdomain, the resulting output, Φ should be entailed by the application’s intended models,

in other words Monto∪domain |= Φ. According to Definition 2, if the axiomatization is

semantically correct then we must also have Mod(Tdomain ∪ Tonto) |= Φ. This is true if


the semantic requirement Tdomain ∪ Tonto |= Φ is satisfied.

3.6 Discussion

Our methodology revolves around the application of model-theoretic notions to the spec-

ification of an ontology’s requirements. With this approach we achieve a specification of

requirements that is both semantically deep, and semiautomatically verifiable.

One of the biggest challenges in the Requirements phase is the level of understanding

required for complete or even partial characterization. We attempt to address this diffi-

culty with the process of Refinement presented in the following chapter. Admittedly, the

investment required to develop a representation theorem to achieve a complete character-

ization of the intended models may be a deterrent to its use in practice. Well-structured

ontology repositories may be a solution to aid in the search for useful mathematical

theories, however this remains to be investigated.

Chapter 4

Design

In the Design phase of the development lifecycle, we are concerned with producing an axiom-

atization of the ontology to satisfy the semantic requirements presented in the previous

chapter. Recall that as illustrated in Figure 1.1 the Design phase is visited from both the

Requirements and the Verification phases. Each path to the Design phase corresponds

to a different type of axiom design process; this is because the source of the resulting

axiomatization is dependent on the maturity of the development process. Early on in

ontology development the Prototype Design process produces the axiomatization of the

ontology as the semantic requirements are refined. This process is shown in the feed-

back loop between Requirements and Design in Figure 1.1 and it results in an initial

prototype of the ontology that will be evaluated with respect to its semantic require-

ments in the Verification phase. Later in ontology development we are often required to

perform Post-Verification Design, where the failure to satisfy a semantic requirement in

the Verification phase necessitates some redesign of the ontology. This process requires

a diagnosis of the verification results to determine what changes must be made to the

current axiomatization in order to satisfy the semantic requirement in question. In this

chapter we present both types of design and discuss how each type of design process may

be assisted with model exploration techniques (originally presented in [34]).



4.1 Related Work

Ontology design is traditionally a relatively ad-hoc process driven by the intuitions of

the designer. Consequently, it is difficult to prescribe a specific method for this aspect

of development. This can be recognized in the lack of detail provided by existing life-

cycle methodologies, as discussed in Chapter 1. A variety of design-specific techniques

for ontologies have been presented to address this lack of detail over the years, most

notably earlier work such as the Enterprise Ontology and more recently, work towards

design by reuse and by ontology learning. These existing techniques aim to assist in

the development of what we consider to be the initial prototype of the ontology, as they

generally lack consideration for the evolutionary nature of ontology development. With-

out consideration for the semantic requirements presented in the previous chapter, these

techniques are unable to take advantage of the tightly coupled relationship between the

Requirements and Design phases that exists in our lifecycle.

The Enterprise Ontology [63] presents an approach for ontology capture that empha-

sizes the notion of “middle-out” design, giving equal weight to top-down issues (such as

requirements) and bottom-up issues (such as reuse). Ontology learning refers to tech-

niques that apply data extraction and analysis techniques to construct an ontology of

a target domain; see [44] for an example. Currently we observe that the application of

ontology learning is restricted to lightweight ontology construction, however even this

case requires human interaction to filter and interpret results; it should not be viewed

as a silver bullet for ontology construction. To the best of our knowledge there has not

been any work done towards ontology learning for the more expressive ontologies that are

the focus of this work, however we speculate that the same techniques could be applied

within our lifecycle to assist in the design process. Reuse has been acknowledged as an

important means of ontology design [62]. However to the best of our knowledge, the

only efforts towards methodologies for ontology reuse that exist are specific to ontology

design patterns [54] [5] [1], and are relatively preliminary. Inspired by software engi-


neering, ontology design patterns are concerned with identifying reoccurring aspects of

ontology implementation; existing work focuses primarily on design patterns for com-

mon concepts. For example a participation pattern would provide a general pattern for

the axiomatization of the concept; this pattern could then be applied in the ontology’s

design whenever the notion of participation was required. There has been considerable

interest in this area, and yet design patterns are only one area of reuse. We find that

no methodologies exist to guide the process of ontology design with the reuse of entire

ontologies or ontology modules.

4.2 Prototype Design

Prototype design refers to the design process that occurs at the early stages of ontol-

ogy development. Here it is the goal of the Design phase to produce an axiomatization

of the ontology that is mature enough that it can be evaluated with respect to its se-

mantic requirements in the Verification phase. Implicit in this is that to proceed to the

Verification phase following Design, the semantic requirements must also be sufficiently

developed such that they provide an accurate specification of a semantically correct goal

ontology.

Initially the semantic requirements may be implicit in the goal application, however

they are not formally specified. This is most often the case because at the onset of devel-

opment our understanding of the semantic requirements is not sufficient to characterise

them precisely, as described in the previous chapter. In addition, when initially consider-

ing the semantic requirements, (prior to any design work) a version of the ontology does

not exist yet so there is no language with which we might formally specify the semantic

requirements. If these issues arise it is best to begin the process of ontology development

not with the Requirements phase, but with the Design phase. In this approach, we first

develop some “baseline”, an initial version of the axiomatization. This version will form


the starting point for the Iterative Refinement process; it may be produced in a some-

what ad-hoc fashion, or using any existing design methods such as those mentioned in

the previous section. Iterative Refinement involves the incremental development of both

the semantic requirements and design of the ontology by way of a feedback loop between

the Requirements and Design phases (see Figure 4.1).

Figure 4.1: Prototype Design

4.2.1 Iterative Refinement

Iterative Refinement refers to the feedback loop between the Requirements and Design

phases that typically must take place before we are able to develop both the semantic

requirements and the axiomatization to a degree that we can proceed to Verification.

At the earliest stages of development, even a partial characterization of the ontology’s

semantic requirements might not be known. As with any development project, we assume

that we possess some understanding of the intended application and the domain, however

it is often the case that our understanding is not deep enough to be able to explicitly

specify the semantic requirements and the axioms of the ontology.

To address this, Iterative Refinement employs model exploration techniques as a

means of highlighting design decisions that will motivate changes to either (or both) the


specification of semantic requirements and the axiomatization of the ontology. Model

exploration provides insight into the types of models that are entailed by the current

design. Reviewing the results of model exploration, we must determine if some revision

to the design and/or requirements is necessary. This can be accomplished by posing the

following types of questions:

1. Is this a type of model that we intend for the axiomatization of our ontology to

have?

2. What general features can be used to describe this type of model (as opposed to

the particular instance of this type)?

3. If it is a desired model, is this represented in the current version of our semantic

requirements?

4. If it is not a desired model, how might the axiomatization be revised to exclude

this type of model? And is this exception specified in the current version of our

semantic requirements?

Example 6 In the development of the BoxWorld ontology, model exploration led us to the

identification of a “fan”-type solid that was entailed by the axioms (illustrated in Figure

4.2). This led to the realization that there was an entire class of “fan” solids where

more than two surfaces shared a single edge. Given the intended eventual application

with folding processes, it was determined that such a solid was not an intended model of

the ontology. Consequently, the design of the axioms was modified to prevent more than

two surfaces from sharing the same edge. Had the focus been on other manufacturing

processes, such as welding, the same model may have provoked an addition to the semantic

requirements instead.

The idea is that this continued process of refinement will eventually result in not only

an axiomatization that we are confident in, but an improved understanding of what the


Figure 4.2: An unintended model of the BoxWorld ontology.

intended models are so that we are better able to provide a clear and thorough partial

(at minimum) characterization of the semantic requirements. The process of refinement

continues until we are sufficiently confident in both the design of the axioms, and the

accuracy of the semantic requirements that we proceed to the Verification phase. In

the following section we introduce the model exploration techniques that support this

process.

4.2.2 Model Exploration for Refinement

In the following, we describe two model exploration techniques that employ an automated

model generator and can be used to generate models of the ontology that will motivate


the process of refinement, as discussed previously. Initially these techniques may be

employed in a somewhat undirected fashion, however as we gain more insight into both

the intended structures and the axiomatization that we are designing, model exploration

will be performed with a more specific purpose: to examine particular types of models

in order to make decisions to include or exclude them from the design.

Trivial Consistency Checking

Any time that changes are made to the design of the ontology it is important to ensure

that we have not introduced any inconsistencies into the axiomatization. This may be

determined with the use of an automated model generator - by generating at least one

model we demonstrate the consistency of the axioms. This type of undirected model

generation can also be a helpful starting point if we are unsure of how to proceed with

Iterative Refinement.
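As an illustration only, the following sketch shows how such a check might be run with an automated model generator. It assumes Mace4 is available and that the axioms are given as Prover9/Mace4 sentences; the flags and the output string tested for are assumptions and may differ between versions.

    # A sketch of trivial consistency checking with a model generator.
    import subprocess

    def has_model(axioms, timeout=60, max_domain=10):
        """Return True if Mace4 reports at least one model of the axioms."""
        problem = ("formulas(assumptions).\n" + "\n".join(axioms) +
                   "\nend_of_list.\n")
        result = subprocess.run(["mace4", "-t", str(timeout), "-N", str(max_domain)],
                                input=problem, capture_output=True, text=True)
        # Mace4 prints each model it finds as an interpretation(...) block.
        return "interpretation(" in result.stdout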

Non-trivial Consistency Checking

To achieve improved understanding of an axiomatization, it is important to not just

generate any model showing consistency, but to generate models with specific properties

– so-called ‘non-trivial models’. In its simplest form, this includes models in which the

extensions of sortals and relations are non-empty as well as models of certain domain sizes.

We can help facilitate this by partially instantiating a model through an existentially

quantified sentence, specifying a set of domain elements and a set of relations which hold

amongst them. A model finder and a theorem prover can then be employed to complete

the model or to show that the model cannot be completed. This technique can show that

a theory is not only consistent, but is ‘non-trivially consistent’; thus being the natural

next step after verifying general consistency of an ontology. It can be used to test whether

some specific (intended) model is or is not entailed by the theory. We adopt the following

definition of non-trivial consistency, from [34]:


Definition 8 Let Tonto be the satisfiable axiomatization of an ontology with some n-ary

predicate P (a1, a2, . . . , an) with finite n ≥ 1 in the language of Tonto.

If T∃ is satisfiable, then P is non-trivially consistent in Tonto.

T∃ = Tonto ∪ {∃x1, x2, . . . , xn [all xk distinct ∧ P(x1, x2, . . . , xn)]}

In some cases, we might need to weaken or completely remove the ‘all distinct’ re-

quirement when checking for non-trivial consistency of a particular predicate.
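The sentence added in Definition 8 can be generated mechanically for any predicate. The sketch below does so in the (assumed) Prover9/Mace4 syntax of the earlier helpers; passing distinct=False weakens the ‘all distinct’ requirement as just discussed, and all names are illustrative.

    # Build the existential sentence of Definition 8 for an n-ary predicate.
    def nontrivial_existence(predicate, arity, distinct=True):
        xs = [f"x{i}" for i in range(1, arity + 1)]
        conjuncts = ([f"{a} != {b}" for i, a in enumerate(xs) for b in xs[i + 1:]]
                     if distinct else [])
        conjuncts.append(f"{predicate}({','.join(xs)})")
        prefix = " ".join(f"exists {x}" for x in xs)
        return f"{prefix} ({' & '.join(conjuncts)})."

    # T_exists is then checked with the model finder from the previous sketch:
    #   has_model(t_onto_sentences + [nontrivial_existence("same_tree", 3)])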

The aim of showing non-trivial consistency is to generate models that serve to illus-

trate the semantics of the ontology’s axioms. This task plays an especially important

role when the ontology’s axioms are less refined, and when its semantic requirements

(intended models) are less understood. Instead of searching for any satisfying model,

we generate models with specific sizes and/or properties. In this search for non-trivial

models, we specify properties of interest by asserting the existence of elements satisfying

certain relations in the ontology. These models must be evaluated, and design decisions

must be made as to whether they are intended or unintended - often resulting in additions

or modifications to the semantic requirements and/or the design of the ontology.

4.2.3 Reuse with Repositories

Another source for the axioms developed in Prototype Design is ontology repositories.

One of the major benefits of ontologies is that they are reusable. However as noted

earlier, guidance for ontology design via reuse is relatively limited. We propose that this

should be addressed by focusing on the task of search in ontology repositories. Even if the

ontology is completely reused from existing sources, development must be performed in

the same phases in order to ensure that semantic requirements are met; what distinguishes

reuse from traditional ontology design is the process of obtaining the appropriate theories.

Here we propose methods of search that should be enabled in ontology repositories in

order to facilitate reuse in the design of ontologies: keyword search, model-based search,


and a partial characterization search. We do not consider search based on the complete

characterization of semantic requirements to be useful, as axiomatization of the ontology

would most likely be completed before these requirements were identified.

The type of search that can be performed is dependent on the maturity of the se-

mantic requirements. With the most basic understanding of semantic requirements, a

keyword search to locate theories with matching terms in their vocabulary would provide

the developer with a selection of potential modules that represent one or many of the con-

cepts identified in the semantic requirements. With a more mature understanding of the

intended models for the ontology, the repository search procedure presented by [39] could

identify useful theories. Here, the authors presented a procedure to search a repository

for axioms that satisfy particular models specified by the user. This procedure would be

especially useful to facilitate reuse when no characterization of the intended models is

known. If the understanding of the semantic requirements is even further developed, a

search based on any identified partial characterization of semantic requirements may be

more effective. We saw in the previous chapter that these requirements could be specified

in the form of entailment problems. Ideally then, this type of search would allow

the developer to search the repository with the consequence of an entailment problem,

for example a competency question, and return any modules that entail the particular

sentence. Potential barriers affecting the success of this type of search are efficiency (the time required to search all modules may not be practical) and vocabulary bias (there may be multiple modules in the repository that satisfy a particular semantic requirement, but if their axioms are written with a different vocabulary, this approach to search will not be able to identify them without translation axioms).


4.3 Post-Verification Design

At later stages in development, the design process is more restricted and directly guided

because it is driven by a set of explicitly stated semantic requirements and the results

of the Verification phase. This Post-Verification Design process refers to a redesign of

the axiomatization necessitated by its failure to satisfy some semantic requirement, as

illustrated in Figure 4.3. We will see more detail in the following chapter regarding the

results of the Verification phase and how some outcomes may require the Design phase to

be revisited. In this section we introduce the notion of modularity and its relevance for

Post-Verification Design. We also revisit model exploration techniques, as they provide

a means of diagnosing the Verification results.

Figure 4.3: Post-Verification Design

4.3.1 Modularity

In this section we introduce the concept of modularity and its associated notation. Mod-

ularity is a well-known technique for managing large artefacts such as software. Although

the modularity of ontologies is still a relatively young area of research, its potential value

is not disputed [22] [51] [55]. In the context of this development lifecycle, modularity


may be exploited for use with model exploration techniques (presented in the following

section), as well as ontology tuning techniques (presented in Chapter 6).

In order to discuss the uses of modularity, we adopt the following notation, definitions,

and results from [33]:

A module consists of a set of axioms and a set of imports which in turn

define a transitive import relation <, similar to modules in Common Logic

[9]. Given that an ontology must be consistent, we interpret the transitive

import relation < to indicate that one module is the consistent extension of

another.

Definition 9 A module M = (SM , IM) is a set of axioms SM together with

a set of imported modules IM .

We say module M = (SM , IM) imports module N = (SN , IN) and write

N < M if there is a chain of modules M0,M1, . . . ,Mk with k ≥ 1, M = M0,

and N = Mk so that Mi ∈ IMi−1 for all 1 ≤ i ≤ k.

If a module has an acyclic transitive import closure, it is a modular ontology:

Definition 10 Let M be the set of all modules reachable from a module M ,

that is, N ∈M iff N = M or N < M .

We call the structure (M, <) a modular ontology iff M is a finite set and <

is irreflexive, i.e., N ≮ N for all N ∈ M. We say the module M defines the ontology (M, <). The theory TM = ⋃N∈M SN axiomatizes (M, <).

Each module in a modular ontology itself defines a modular ontology.

For the modular structure of an ontology to be effectively exploited in our development

lifecycle we assume there is little coupling between modules of the ontology, and that

each module should be a relatively small axiomatization of one or a few concepts.
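The following sketch records these notions as a small data structure that later sketches can reuse; it assumes axioms are kept as sets of sentence strings, and all names are illustrative rather than prescribed here.

    # Modules (Definition 9), reachability, T_M, and the acyclicity test
    # required for a modular ontology (Definition 10).
    from dataclasses import dataclass, field

    @dataclass
    class Module:
        name: str
        axioms: set = field(default_factory=set)     # S_M
        imports: list = field(default_factory=list)  # I_M

    def reachable(module):
        """Yield every module N with N = M or N < M (the set M of Definition 10)."""
        seen, stack = set(), [module]
        while stack:
            m = stack.pop()
            if m.name not in seen:
                seen.add(m.name)
                stack.extend(m.imports)
                yield m

    def axiomatization(module):
        """T_M: the union of the axiom sets of all reachable modules."""
        theory = set()
        for m in reachable(module):
            theory |= m.axioms
        return theory

    def is_modular_ontology(module, path=frozenset()):
        """True iff the import relation below 'module' is acyclic (irreflexive <)."""
        if module.name in path:
            return False
        return all(is_modular_ontology(n, path | {module.name})
                   for n in module.imports)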


4.3.2 Model Exploration for Diagnosis

The model exploration techniques presented for Iterative Refinement may also be useful

for Post-Verification Design. However as discussed earlier, this design process is different

- it is driven by specific verification results. In this case the purpose of model exploration

is to determine what design changes to the ontology will fix the error found in the

Verification phase. To determine what design changes must be made, we must diagnose

the cause of the error.

Although we do not prescribe a method for designing modular ontologies, we have

noted their benefits and advised towards the development of a modular structure dur-

ing design. Once development has reached Post-Verification Design, the axiomatization

should be relatively mature; at this point we assume the axioms will be organized into

a modular structure if it is the end intent to do so. Here, we introduce some additional

considerations that leverage modularity to assist the model exploration techniques.

Trivial Consistency Checking

Determining consistency of the axioms is also a necessary step at this stage of develop-

ment, as we are still making revisions to the axiomatization. In this case, a straight-

forward technique taking advantage of the ontology’s modular structure can be applied

to assist the task of determining the satisfiability of its axiomatization. If we are able

to prove that a module is consistent, all its imported modules are also consistent. Con-

versely, if a module is inconsistent, all modules importing this specific module are incon-

sistent as well. Formally, this is stated as follows:

Let (M, <) be a modular ontology with N,K ∈ M as two modules so that K < N . If

TK is inconsistent, then TN is inconsistent. On the other hand, if TN is consistent, TK is

consistent.

This leads to a straightforward approach of theory weakening to show consistency

or inconsistency of the axioms. We begin by attempting to demonstrate consistency or


inconsistency for the weakest (atomic) modules in an ontology (M, <), i.e. the modules

N ∈ M so that for all K ∈ M, K ≮ N . If no inconsistency is found, we can then

proceed with consistency checking for each module that is successively greater in the

partial order. In parallel, to potentially accelerate this process we may also attempt

consistency checking beginning with the most restricted theory TM (the theory defining

the ontology) and then moving down the partial order until a model can be generated.

Even if we cannot show consistency of the whole ontology, consistency of a weaker module

establishes that all weaker modules thereof are also consistent. If either of these parallel

tasks demonstrates an inconsistency, then the ontology is inconsistent and we can examine

the module and its imports to find the problem in their axiomatization.
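A sketch of the weakest-first half of this strategy is given below, reusing the hypothetical Module helpers sketched in Section 4.3.1; the parallel, strongest-first check described above would be run alongside it. The check function is a stand-in that is assumed to return "model" when a model generator succeeds, "inconsistent" when a refutation of the axioms is found, and "unknown" when resource limits are reached; only the first two outcomes are conclusive.

    # Weakest-first consistency checking over the modular structure.
    def weakest_first_check(root, check):
        modules = sorted(reachable(root),
                         key=lambda m: len(list(reachable(m))))  # atomic modules first
        for m in modules:
            outcome = check(axiomatization(m))
            if outcome == "inconsistent":
                # every module importing m, including the root, is inconsistent
                return ("inconsistent", m)
            # "model" settles consistency of m and of everything it imports;
            # "unknown" is inconclusive, so we continue up the partial order
        return ("no inconsistency detected", None)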

Non-trivial Consistency Checking

Using the same definition for non-trivial consistency presented in Section 4.2.2, we can

extend the module-based technique for consistency checking presented above. Recall

that a semantic error indicates the presence of an unintended or omitted model; we

can employ non-trivial consistency checking to investigate whether or not any suspected

models or classes of models, unintended or intended, are in fact consistent or inconsistent

(respectively) with the axioms.

4.4 Discussion

In this chapter, we focused on the Design phase as a means of both developing a prototype

of the ontology for verification and redesigning the ontology as a result of verification. We

used model exploration techniques to assist in the process of refinement - to develop both

the prototype and our understanding of the semantic requirements. These techniques may

also be of use in later stages of development if the Verification phase results require the

Design phase to be revisited; this case will be discussed in more detail in the following


chapter.

One important issue that has not been addressed is that of documentation. Docu-

mentation is an important task throughout the development process; specifically in the

Design phase the rationale for the addition and modification of axioms should be doc-

umented not only to facilitate reuse of the ontology, but to aid in decision making that

may be required later in the Verification phase. This will be discussed further in Chapter

8 where we propose an environment for the development of ontologies.

As seen in this chapter, modularity is a property of ontologies that can be leveraged for

techniques in both the Design and (later) Tuning phases. A key aspect of design that we

have neglected in this chapter is the task of designing a modular ontology. Modularization

of ontologies is still a young area of research lacking agreed-upon best practices on how to

modularize an ontology. For our module-based techniques, we require modularization of

the ontology into fairly coherent modules with little coupling between modules. Recent

work towards this [32] is promising in that it presents a semi-automated procedure for

the decomposition of a theory into modules. However, we should also consider whether

or not it would be more effective to initially design the axioms in a modular structure

(rather than later decomposing the complete axiomatization into modules), and if it is

feasible for a methodology of this nature to be developed.

Chapter 5

Verification

As in software development, the purpose of the Verification phase in ontology develop-

ment is to evaluate whether or not the system satisfies the requirements we have defined.

In the context of our lifecycle, the Verification phase is concerned with evaluating the

relationship between the intended models of an ontology and the actual models of the

axiomatization of the ontology. In this sense, Verification is similar to the process of

Iterative Refinement (presented in Chapter 4), however at this stage in development we

possess a mature set of axioms and requirements. Recall that from a mathematical per-

spective, the relationship between the intended and actual models is formalized by the

notion of representation theorems. From a reasoning problem perspective, verifying this

relationship amounts to evaluating the entailment problems developed in the Require-

ments stage; we are therefore able to perform verification semiautomatically with the use

of a theorem prover.

In this chapter we present a brief review of existing techniques for verification, followed

by the tasks required to perform verification in our lifecycle framework. We focus on the

evaluation of entailment problems as semantic requirement verification and discuss the

possible outcomes, accompanied by examples and pragmatic guidelines.



5.1 Related Work

Appearing in numerous methodologies, the most widely used techniques for ontology ver-

ification are consistency checking, OntoClean [38], and competency question evaluation

[30]. As discussed earlier, first-order logic ontologies are susceptible to deeper, semantic

errors than their less expressive counterparts. While consistency checking is a useful and

necessary technique, it cannot stand alone to verify an ontology. Strictly speaking, we

only need to show that a model exists in order to demonstrate that a first-order theory

is consistent. However, constructing a single model runs the risk of having demonstrated

satisfiability for a restricted case; for example, one could construct a model of a pro-

cess ontology in which no processes occur, without realizing that the axiomatization

might mistakenly be inconsistent with any theory in which processes do occur. Simi-

larly, taxonomy-oriented methods of verification such as OntoClean may be sufficient for

less expressive ontologies, but they are unable to verify the semantic correctness of an

ontology, as presented in Chapter 3.

Competency questions, as noted in Chapter 3, are a means of specifying a partial

characterization of the semantic requirements. However, they are traditionally employed

to focus on scoping the ontology and verifying its expressiveness, rather than the cor-

rectness of its semantics. In addition, when competency question evaluation is used as a

verification technique, no guidance is provided regarding results interpretation. Answers

to questions such as: What if an incorrect answer is returned? What if the ontology

does not return an answer to the competency question? are necessary details for this

approach to be used effectively.

We propose a more rigorous and detailed approach to verification that is based on

the semantic requirements presented earlier.


5.2 Consistency and Inconsistency Checking

Due to our lack of complete confidence in any ontology, we should never assume con-

sistency. Therefore, even when proving properties we should always test in parallel for

consistency to avoid producing ‘unintended proofs’ of properties that are in reality in-

consistency proofs. It is advisable that checks for trivial and non-trivial consistency

(as in Chapter 4) be performed prior to each instance of requirement verification. The

frequency of such checks may decrease as we develop greater confidence in the axioma-

tization, however it is advantageous to perform them as often as is practical in a given

development situation. The effort required is minimal when considering the potential

consequences of an undetected inconsistency.

5.3 Verification of Semantic Requirements

Recall that in the Requirements phase we characterized the ontology’s requirements in

the form of reasoning problems, allowing for the use of automated reasoners to verify the

requirements in a semiautomated procedure. To verify complete and/or partial require-

ments, each entailment problem is evaluated with an automated theorem prover. The

results of this are then interpreted as described by the cases in the following sections.

These results may lead to revisions of either the design or the requirements of the ontol-

ogy. The Design phase is revisited in the case that an error is detected or suspected in

the Verification phase. If an error in the design is identified, it is noted that the devel-

oper must be cautious and consider the entire design when making the correction so that

further errors are not created. Additionally, when a correction is made to the design,

all previous test results should be reviewed, and any tests (proofs) that were related to

the error should be rerun; an error in the design has the potential to positively or nega-

tively impact the results of any tests run prior to its identification and correction. The

Requirements phase is revisited in the case that an error in the requirements is found


Figure 5.1: Outcomes of Semantic Requirement Verification

or suspected during the Verification phase. It may also be revisited if revisions to the

requirements are necessary because of corrections to the ontology that were implemented

in the Design phase. Note that due to the structure of the entailment problem Rep-2,

verification results may require a slightly different interpretation. When attempting to

diagnose an error, we should take into account that we are likely reasoning with a well-

established axiomatization of some mathematical theory. Similarly, when interpreting

results for both Rep-1 and Rep-2, we must consider the potential for error in the trans-

lation definitions. In both cases, the source of the translation definitions and the theory

axiomatization should be taken into account when determining our confidence in their

correctness.

When verifying any semantic requirement there are three possible outcomes, as illus-

trated in Figure 5.1. Each outcome is discussed in more detail below.
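As a rough illustration, the three outcomes might be dispatched as follows, reusing the hypothetical entails() helper sketched in Chapter 3. The input() prompts stand in for the human inspection and heuristic judgements described in the remainder of this section; they are not automated.

    # Dispatching the outcomes of semantic requirement verification.
    def verify_requirement(axioms, phi, timeout=300):
        if entails(axioms, phi, timeout):
            ok = input("Is the proof consistent with the intended semantics? [y/n] ")
            if ok == "y":
                return "Case 3: proceed to the next entailment problem"
            return "Case 1: return to Design (1a) or Requirements (1b)"
        # No proof within the time limit: Case 2
        suspect = input("Is an error in the requirement or the design suspected? [y/n] ")
        if suspect == "y":
            return "Case 2a: revisit Requirements first, then Design"
        return "Case 2b: proceed to the Tuning phase"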


5.3.1 Case 1: Unintended proof found

In this case, a proof was found of a sentence that contradicts the intended semantics of

the ontology. This is often encountered when the theorem prover finds a proof without

using all, or any clauses from the proposition or query. Given this possibility, a thorough

inspection of all proofs found must be performed; if this case is detected, it is indicative

of an error in the axiomatization or the requirement in question, as described below:

Case 1a An examination of the proof may lead to the identification of some axiom in

the ontology which is not entailed by the intended models; in this case we must

return to the Design phase.

Example 7 One such result arose during the design of the subactivity occurrence ordering extension Tsoo to the PSL ontology, when testing its consistency with the first addition to Tsoo. A proof was found, and normally this would indicate an error in

the set of definitions that was being tested. However, upon examination we realized

that the definition of the same tree relation made it inconsistent for any model of

the ontology to have an occurrence in the same activity tree as the root of the tree:

Tsoo ∪ Tpsl |= (∀s1, a) root(s1, a)

⊃ ¬(∃s2) same tree(s1, s2, a) (5.1)

This sentence should not be entailed by the intended models of the PSL Ontology

itself, yet it was only identified when using the axioms of Tsoo; it was a hidden

consequence of the PSL Ontology. In particular, it was the axiom below from Tsoo

that played the critical role in deriving the unintended sentence:

(∀s1, a)root(s1, a) ⊃ (∃s2) soo(s2, a)

∧mono(s1, s2, a) ∧ same tree(s1, s2, a) (5.2)

As a result of this discovery, the axiom for same tree was modified.


It is interesting to see the relationship between this case and the failure of a potential

representation theorem for the ontology. Part of the representation theorem shows

that a sentence that is entailed by axioms is also entailed by the set of intended

models. Hidden consequences such as we have just considered are counterexamples

to this part of the representation, since it is a sentence that is provable from the

axioms, yet it is not entailed by the intended models. One can either weaken the

axioms (so that the sentence is no longer provable), or one can strengthen the

requirements by restricting the class of intended models (so that the sentence is

entailed by all intended models).

Case 1b An examination of the proof may lead to the detection of some error in the

definition of the requirements; in this case we must return to the Requirements

phase. It is important to devote considerable attention to the detection of this

possibility so that the ontology is not revised to satisfy incorrect requirements.

We did not encounter an example of this in our experiences, however it would be

possible for an error in the requirements specification to lead to an unintended

proof. As discussed above, this type of error could be addressed by strengthening

the requirements.

5.3.2 Case 2: No proof found

As a result of the semi-decidability of first-order logic and the inherent intractability of

automated theorem proving, if no proof is found when testing for a particular requirement

then a heuristic decision regarding the state of the ontology must be made. It could be

the case that no proof is found because the sentence really is not provable; there may be

a mistake in the definition of the requirement that we are testing, or there may be an

error in the axiomatization of the ontology, (i.e. we cannot prove that the requirement is

met because it is not met). However, it could be the case that due to the intractability


of automated theorem provers, a proof exists but the theorem prover is unable to find it

(at least within some time limit). To avoid unnecessary work, some effort must be made

to ensure that we are as informed as possible when making this decision; in particular,

previously encountered errors and the nature of the requirement that we are attempting to

verify must be taken into account. In Section 5.4 we present the use of model generation

to assist decision-making and potentially resolve the uncertainty in this case.

Case 2a If we believe there may be some error in the requirements or the design of

the ontology, then we must revisit the Requirements phase or the Design phase,

respectively. It is recommended that the Requirements phase is revisited first, as

generally a much smaller investment is required to investigate the correctness of a

requirement, rather than that of the ontology.

Example 8 In the course of proving the representation theorem for Tsoo, the theo-

rem prover failed to prove a property regarding the mono relation, namely, that the

mono relation should only hold between different occurrences of the same subactiv-

ity. This requirement was initially expressed as the following proposition:

(∀s1, s2, o, a)mono(s1, s2, a) ∧ occurrence of(o, a)

∧ subactivity occurrence(s1, o)

⊃ ¬subactivity occurrence(s2, o) (5.3)

It seemed clear that the axiomatizations of the mono relation should have restricted

all satisfying models to instances where s1 and s2 were not in the same activity

tree. Returning to the Requirements phase, an examination of the above sentence

led to the realization that the axiomatization of the proposition had been incorrect.

We had neglected to specify the condition that s1 was not the same occurrence as

s2, (in which case the subactivity occurrence relation clearly holds for s2 if it holds

for s1). Once this issue was addressed, we continued to the Verification phase (no


revisions were required to the ontology’s design) and the theorem prover was able

to show that the ontology entailed the corrected property, shown below.

(∀s1, s2, o, a)mono(s1, s2, a) ∧ occurrence of(o, a)

∧ subactivity occurrence(s1, o) ∧ (s1 ≠ s2)

⊃ ¬subactivity occurrence(s2, o) (5.4)

The previous example was a case where the error that was corrected resulted from

a misrepresentation of the intended semantics of the requirements. Another in-

teresting situation occurs when we identify the need for revisions to the semantic

requirements.

Example 9 Originally, Tsoo was to be a conservative extension of PSL1. In part,

this meant that the axiomatization of Tsoo had to account for all of the kinds of

activity trees that were represented in PSL. One particular class of activity trees

was represented by the zigzag relation, defined below.

(∀s1, s3, a) zigzag(s1, s3, a) (5.5)

≡ (∃s2)preserve(s1, s2, a)

∧ preserve(s2, s3, a) ∧ ¬preserve(s1, s3, a)

With the inclusion of the zigzag class in Tsoo we were unable to entail the transitivity

of the preserve relation. Review of the inconclusive test results led to the belief that

with the inclusion of the zigzag class of activity trees:

Tsoo ∪ Tpsl 6|= (∀s1, s2, s3, a) preserve(s1, s2, a)

∧ preserve(s2, s3, a) ⊃ preserve(s1, s3, a) (5.6)

1 A detailed discussion of conservative extensions and the role that they play in ontology development can be found in [21].


Careful consideration of the situation led to the decision that the ability of Tsoo to

entail the transitivity of the preserve relation was more important than developing

it as a conservative extension of PSL. This decision resulted in a change in

the axiomatization of Tsoo, however this change represented a change in the require-

ments. We were no longer considering the zigzag class, because we had revised the

requirements such that Tsoo did not have to be a conservative extension of PSL. Af-

ter this change was implemented, we were able to successfully prove that the axioms

of the ontology entailed the transitivity of the preserve relation.

Case 2b If we are strongly confident about the correctness of both the requirements and

the design of the ontology, then we consider the possibility that it is the intractable

nature of the theorem prover that is preventing a proof from being found. In this

case, we proceed to the Tuning phase. The purpose of the Tuning phase is to

improve the theorem prover performance with the axioms, without altering the

semantics of the ontology. The techniques applied to achieve this will be presented

in more detail in the following chapter. If our hypothesis regarding the failure

to find a proof is correct and the Tuning phase is successful, the theorem prover

performance is improved to a level where the requirement in question can be verified.

This case illustrates a phenomenon that distinguishes theorem proving with ontologies

from more traditional theorem proving – we are not certain that a particular sentence is

actually provable. Effectively, every theorem proving task with an ontology is an open

problem.

5.3.3 Case 3: All requirements met

If we obtain a proof that a requirement is satisfied, and it is consistent with the intended

semantics of the ontology, then we may proceed with testing the remaining requirements.

Once we obtain such proofs for each requirement, the ontology we have developed satisfies


our semantic requirements and we can proceed to the Application phase.

5.4 Verification Assistance with Model Generation

We have seen that the core of the Verification phase is the use of theorem provers to eval-

uate the reasoning problems that correspond to our requirements specification. However,

this process is most effective when complemented by model generation. If an ontology’s

axiomatization has unintended models, then it is possible to find sentences ϕ that are

entailed by the intended structures, but which are not provable from the axioms of the

ontology. In particular, this applies to Case 2 of the verification outcomes discussed ear-

lier, which indicates a possible error in the requirements or the design of the ontology.

We can address this ambiguity with a special case of model exploration (see Chapter 4)

by evaluating the satisfiability of:

Tonto ∪ {¬ϕ}

In other words, if we are unable to find a proof for a particular requirement, we search

for a model that is a counterexample of the requirement. The (non-)existence of such a

model will tell us the cause of Case 2 with certainty. If a counterexample is found, it can

then be examined (and evaluated) to determine the course of corrective action required.

In other words if the counterexample is a desired model then the requirements are too

strong should be corrected accordingly, otherwise this indicates that the axiomatization

is too weak and must be corrected to exclude the counterexample. In the latter case, we

may attempt to generate multiple counterexamples to improve our understanding of the

error. In this way, model generation may be used not only to resolve the uncertainty of

Case 2, but to assist in the decision-making process required if a proof does not exist.
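In terms of the earlier sketches, the counterexample search amounts to handing the model finder the ontology together with the negation of the requirement; the negation syntax, flags, and helper names below are assumptions carried over from those sketches.

    # Search for a model of T_onto together with the negation of phi.
    import subprocess

    def find_counterexample(axioms, phi, timeout=60, max_domain=12):
        negation = "-(" + phi.rstrip(". \n") + ").\n"
        problem = ("formulas(assumptions).\n" + "\n".join(axioms) + "\n" +
                   negation + "end_of_list.\n")
        result = subprocess.run(["mace4", "-t", str(timeout), "-N", str(max_domain)],
                                input=problem, capture_output=True, text=True)
        # Any interpretation found is a model of the ontology that falsifies phi.
        return result.stdout if "interpretation(" in result.stdout else None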


5.5 Discussion

In Section 5.3 we discussed the heuristic decisions in the methodology that result from

the semidecidability of first-order logic, and the intractability of theorem proving in this

case. If we do not obtain a proof when testing a requirement, then we may not be certain

if this is because a proof does not exist (if this is the case, then we know that our current

ontology does not satisfy the requirement, unless it was incorrectly specified) or if a proof

does exist but the theorem prover reaches a specified time limit before it is able to find

it (this would present us with either Case 1 or Case 3, as described above). We suspect

that it is this issue of uncertain paths resulting from Case 2 that will stimulate some criticism

of our verification methodology and the lifecycle as it is proposed here. We address this

concern with the following remarks:

• In two of the three possible cases that we have identified, we can be certain of the

direction we must proceed in (the cases when a proof is found).

• In Case 2, when a proof is not found and there is uncertainty about the cause, we

can be certain that the requirements for the ontology’s application have not been

met. Applications of the ontology that utilize a theorem prover must be able to

answer a query (competency questions) or entail a proposition (infer a property).

In other words a theorem prover should be able to find a proof of the requirements

in some reasonable amount of time. Therefore, we can say that in all cases the

verification methodology is capable of testing if the requirements are met; the

uncertainty exists in how to proceed in development when a requirement is not

met.

• The uncertainty of which path should be followed when a requirement is not met

may be mitigated. If thorough documentation practices are followed in the devel-

opment process, we may seek out trends to indicate the most likely source of the

error. We also presented an application of model generation to search for coun-


terexamples; this has the potential to completely resolve any uncertainty for this

case.

• Furthermore, we have demonstrated the feasibility and effectiveness of our method-

ology in practice with examples of each possible case in the Verification phase.

Chapter 6

Tuning

The Tuning phase focuses on the mitigation of theorem prover intractability. As discussed

in Chapter 5, the task of tuning may be required to address inconclusive results in

the Verification phase. However, Tuning may also be performed post-verification as we

may also wish to improve an automated reasoner’s performance with the ontology for

some application. Even when a proof exists, automated theorem provers sometimes have

difficulty finding it.

Typically, automated theorem provers are not designed for use with ontologies, but

rather with mathematical theories. In general, mathematical theories have evolved over

long periods of time, while ontologies are constructed somewhat intuitively in a compa-

rably much shorter time. Mathematical theories are also much more mature and tend

to be much smaller (in terms of the number of axioms) than ontologies. We specu-

late that these differences between ontologies and mathematical theories detract from

theorem prover performance, making reasoning with ontologies especially susceptible to

the problem of intractability. The aim of the Tuning phase is to apply techniques to

streamline the ontology in such a way that the automated theorem prover’s performance

is improved for a specific query or queries. In particular, we look at the techniques of

subset development, lemma generation, and goal simplification. These techniques may



be applied individually or in combination with one another.

6.1 Subset Development

To develop a subset of the ontology, we remove some of the axioms that are not relevant

to the particular reasoning problem we are considering. The idea behind this technique is

that by reducing the number of axioms input to the theorem prover, there is the potential

to reduce the time required to find a solution, if one exists; this is because the theorem

prover will not be processing and reasoning with an excess of irrelevant axioms. As long

as the theory is consistent, we can guarantee that any conclusions drawn from the subset

can also be drawn from the entire ontology.

Example 10 When working with the PSL ontology, we excluded axioms from PSL-Core

that had to do with timepoints when testing reasoning problems that were related to the

composition of activities. This was possible because of our understanding of the concepts

of the ontology. The size of the PSL ontology makes the use of subsets necessary for most

reasoning problems, however we have observed that these subsets are often successfully

reused for other related reasoning problems. While testing the ontology, we were able to

use the same subset1 to successfully entail 14 of 16 propositions related to the ordering of

subactivity occurrences.

In Chapter 4 we introduced the subject of modularity and discussed its use in the

context of our development lifecycle. In the Tuning phase, modularity has the potential

to play a role in the development of subsets. We can leverage the modularity of an

ontology to guide us in the identification of the appropriate set of axioms for a subset

in the following way: Suppose (M, <) is an ontology and that with the theory TN we

are not able to prove a sentence ϕ. We can hypothesize that the theory TK (a subset)

1This subset can be found at http://stl.mie.utoronto.ca/colore/process/psl-subset-519.clif.


defined by a module K < N might still be able to prove ϕ. If we are able to obtain this

proof with our automated theorem prover, we can extend the result to say that we have

TN |= ϕ. This is possible due to the monotonicity of first-order logic.

The challenge, of course, is to select good candidate modules, K, whose theory is

likely strong enough to prove the property but also small enough (in terms of the axiom-

atization length) to prove the property automatically. We want to avoid excessive theory

weakening towards achieving a specific proof; ideally (although this may not always be

possible), we should develop useful subsets that can be reused to obtain proofs for mul-

tiple requirements. Further, we must take care to ensure that all necessary axioms have

been included. If any related axiom is excluded, this could result in an inconclusive test

result (we might not prove a sentence because some of the axioms required to prove it

have been excluded). This is a concern especially if we do not use modules to develop

the subsets, as in this case the relationships between the axioms may be less clear. We

do not provide any techniques to address this; a clear and complete understanding of the

ontology is required to be certain of what axioms (or modules) must be selected to create

a subset for a particular reasoning problem.
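Where a modular structure is available, subset selection can at least be mechanized in outline: try successively stronger modules, weakest first, and stop at the first one from which the property is provable; by monotonicity the result then transfers to the full ontology. The sketch below reuses the hypothetical Module helpers from Chapter 4 and assumes prove(theory, phi) wraps a theorem prover call such as the one sketched in Chapter 3.

    # Try to prove phi from successively stronger subsets of the ontology.
    def prove_with_subsets(root, phi, prove):
        candidates = sorted(reachable(root),
                            key=lambda m: len(axiomatization(m)))  # smallest theories first
        for m in candidates:
            if prove(axiomatization(m), phi):
                return m        # by monotonicity, T_root |= phi as well
        return None             # inconclusive for every subset tried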

6.2 Lemma Generation

We use the term lemma in its traditional, mathematical sense – [56] defines a lemma as:

a preliminary proposition that is used in the proof of a theorem

In other words, a lemma is a result that is used as a kind of “stepping stone” in achieving

a proof of the goal; this is a commonly accepted technique to improve theorem prover

performance. In the Tuning phase, we can apply this technique to assist in obtaining

conclusive test results. Lemmas may be used to improve performance as a means of

reducing the number of steps required to obtain a proof. When adding a lemma, the

aim is to provide the theorem prover with a sentence that will be used in the proof -


an intermediate conclusion which must be deduced in order to arrive at the goal. The

addition of such a lemma reduces the number of steps required to obtain the proof since

the theorem prover no longer needs to deduce the sentence before using it.

Lemmas should be developed intelligently, with some idea of how the addition of the

sentence to the ontology will assist in finding a proof. Another point to consider is that

of reusability; some effort should be made to design lemmas that are general enough to

be applied for other reasoning problems. In the event that we have already developed

a lemma for one reasoning problem, we should consider its potential use in the Tuning

phase for related reasoning problems.

Example 11 During the Verification phase in the development of an extension to the PSL

ontology, we were unsuccessful in proving a particular property about the min precedes

and the preserve relations. Based on the intended semantics of the two relations, it would

appear straightforward that in any model where min precedes holds for two occurrences,

preserve must hold as well. We attempted to verify this by proving that we could entail

the following proposition from the ontology with the theorem prover, however the results

were inconclusive:

(∀s1, s2, a)min precedes(s1, s2, a)

⊃ preserve(s1, s2, a) (6.1)

Being fairly certain about the correctness of the ontology and specification of the prop-

erty, we moved to the Tuning phase. In consideration of the definition of the preserve

relation and the proposition we were attempting to verify, the reflexivity of the preserve

relation was an intuitively important property. Two lemmas regarding the reflexivity of

the mono relation that had already been shown to hold in the models of the ontology were:

(∀s1, s2, a)min precedes(s1, s2, a)

⊃ mono(s2, s2, a) (6.2)


and

(∀s, o, a)subactivity occurrence(s, o) ∧ legal(s)

∧ occurrence of(o, a) ⊃ mono(s, s, a) (6.3)

The addition of these lemmas to the original reasoning problem aided the theorem prover

sufficiently so that the property was proved and we could continue testing the other re-

quirements.
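In terms of the entailment sketch from Chapter 3, applying the lemmas of this example simply amounts to appending the two already-verified sentences to the prover's assumptions; the variable names below are illustrative only.

    # lemmas (6.2) and (6.3), once verified, are appended to the assumptions:
    #   entails(t_soo_sentences + t_psl_sentences + [lemma_6_2, lemma_6_3],
    #           proposition_6_1)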

As discussed earlier, ontology verification sometimes requires finding proofs for both

tasks Rep-1 and Rep-2. We now discuss how the nature of these tasks and the modu-

larity of an ontology can be leveraged to develop potentially useful lemmas.

Consider an ontology, Tonto that we are attempting to verify by proving that it is

definably equivalent to a set of well-understood theories T1, ..., Tn. In other words, we are

attempting to verify its complete characterization, as presented in Chapter 3. Because

these theories are well-understood and established, we can identify results that have been

documented for them in the literature as potentially useful lemmas. Suppose that for

some sentence ϕ, we have Ti |= ϕ, and that ϕonto is the sentence expressed in the language

of Tonto, using the translation definitions. If Tonto |= ϕonto we can now store and use ϕonto

as a potentially useful lemma.

With a similar intuition, we can consider weaker theories as sources for potential lem-

mas; any additional results of theories weaker than some Ti can be translated into the

language of the ontology. A similar approach of reusing lemmas has been implemented by

the Interactive Mathematical Proof System (IMPS) [14]. If we can prove the translated

result is also a result of the ontology, then this sentence may also be stored as a poten-

tially useful lemma. More generally, if we can prove or already know that the ontology

interprets some theory TA, we can assist in proving that the ontology also interprets a

theory TB stronger than TA by translating all axioms of TA as well as any other results


thereof into the language of the ontology to be used as lemmas. For example, if we prove:

Tonto ∪ ∆ |= TA

then the translations of the axioms and lemmas in TA into the language Lonto of Tonto

can be used as lemmas to show:

Tonto ∪ ∆ |= TB

This lends itself to an approach that begins with showing that Tonto interprets a very weak

theory and then retaining the translated axioms as lemmas for proving interpretations

of successively stronger theories. This is in line with the intuitive, though somewhat

ad-hoc approach to generating lemmas that sometimes occurs in practice. In search of

useful lemmas we often first attempt proofs of basic properties of relations, (for example,

reflexivity, symmetry, anti-symmetry, or transitivity).

Independent of the lemma’s source, for every lemma of an ontology there is some

minimal module of the ontology whose theory proves the lemma, such that the lemma

is not provable from any weaker modules. Storing lemmas with their weakest module

allows us to reuse the lemmas every time a module is used. This reuse is an important

consideration outside of individual development projects. The identification of useful

lemmas also has a great potential value if the ontology is reused.

6.3 Goal Simplification

The complexity of particular semantic requirements may prevent theorem provers from

finding proofs. Anecdotal evidence from work with Prover9 has led to the identification

of the technique of goal simplification to address this. In these cases, a property can be

split into a set of properties that might be easier to prove. For example, any biconditional

can be split into two implications.
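In terms of the earlier entailment sketch, splitting a biconditional goal simply produces two separate prover runs; the predicate names p and q below are illustrative placeholders.

    # Instead of proving  all x (p(x) <-> q(x))  in one run, prove the two
    # implications separately; both must succeed for the original goal to hold.
    def split_biconditional_goal(axioms, timeout=300):
        forward = entails(axioms, "all x (p(x) -> q(x)).", timeout)
        backward = entails(axioms, "all x (q(x) -> p(x)).", timeout)
        return forward and backward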


This technique will most likely be restricted to addressing performance issues for the

Verification phase (as opposed to in its end application), unless the particular application

allows for the reasoning task to be performed in several steps. This technique is presented

and discussed in detail in [34] - in particular, the simplification of the theorem proving

goals may be used to allow for the use of weaker subsets in an attempt to further increase

reasoner efficiency.

6.4 Related Work

The techniques of subsets, lemmas, and goal-simplification are not particularly novel to

the automated reasoning community; however, to the best of our knowledge the applica-

tion of these techniques is not suggested in any existing ontology development method-

ologies, nor do the Description Logics and Semantic Web communities offer anything

similar to this approach of ontology tuning. Within the ontology community, perfor-

mance improvements are considered in [52] where the potential usefulness of a lemma

is illustrated. They present a specific example where asserting the transitivity of

a relation achieves improvements, however the technique is not developed further. Of

potential use for the techniques presented here, [32] proposes a semiautomated procedure

for the decomposition of ontologies into modules; this technique could be applied as a

means of developing subsets.

In the knowledge base community, the work in [2] presents an approach to partitioning

and reasoning with large knowledge bases in order to improve reasoner efficiency. They

present message passing algorithms that reason within the knowledge base partitions,

and then pass the results between the partitions. Similar to our motivation for the use

of subsets, this approach is a means of reducing the search space, as reasoning is confined to smaller sets of axioms rather than the entire knowledge base. Since

the effectiveness of this approach is dependent on the partitioning of the knowledge base,


the authors also present guidelines for how an effective partitioning may be achieved,

accompanied by algorithms for creating the knowledge base partitions according to these

guidelines. Although the ontologies we focus on are generally much less structured than

knowledge bases, this approach may still be an effective means of achieving performance

improvements. An alternative option, more closely related to the techniques presented

here, would be to investigate the results of the partitioning algorithm alone as a potential

method of subset development. The effectiveness of both approaches remains to be

investigated.

6.5 Discussion

In order to fully exploit the techniques presented here, their successful application should

be recorded in such a way that they can be easily reused. We noted that lemmas should

be stored with their weakest module to facilitate reuse; however, further effort should

be made towards facilitating the reuse of results of the Tuning phase. For example, by

recording information about when a particular lemma or subset was used successfully

or unsuccessfully, we can provide future users with insight about how useful it might

be for their application. Documentation regarding the use of goal simplification for

reasoning problems could also be valuable if the ontology is reused. We work towards

facilitating this sort of documentation in Chapter 8 where we propose the requirements

for an ontology development environment.
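As a rough sketch of the kind of record we have in mind (the fields below are our own suggestion, not a fixed format), each application of a tuning artefact could be logged as follows:

```python
# Hypothetical record of how a tuning artefact (lemma, subset, or simplified goal)
# was used, so that future reasoning attempts can reuse what worked before.
from dataclasses import dataclass
from typing import List

@dataclass
class TuningRecord:
    artefact: str      # e.g. "lemma: transitivity of leq" or "subset: ordering module only"
    requirement: str   # identifier of the semantic requirement being verified
    reasoner: str      # e.g. "Prover9"
    successful: bool   # did the reasoner succeed with this artefact?
    notes: str = ""    # free-text observations or rationale

tuning_log: List[TuningRecord] = []
tuning_log.append(TuningRecord(
    artefact="subset: ordering module only",
    requirement="PC-3 (an example partial characterization)",
    reasoner="Prover9",
    successful=True,
    notes="subset succeeded where the full ontology timed out",
))
```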

For this phase of ontology development, we have presented three techniques that may

be employed in an effort to improve theorem prover performance. We make no claims

regarding the exhaustiveness of these techniques. In fact, while we have focused on tuning

the ontology, there are also opportunities to achieve performance improvements by tuning

the theorem prover. This direction is explored briefly in [34]; neither ontology tuning

nor theorem prover tuning has been deeply explored. This is likely due to the challenge


posed by the wide variety of automated theorem provers available, with each tool having

its own processes and heuristics. A good understanding of these tools is required to

further explore potential means of tuning the ontology or the theorem prover.

Chapter 7

Quality

When producing anything, from material goods to software, quality is an important issue,

and the development of an ontology is no different. As a consumer or a producer, we

need some understanding of how good a product is. The wide spectrum of quality in existing ontologies, and the variety of interpretations of quality itself, are problematic for the promotion of ontology use and reuse [49].

The fact that quality is a commonsense term is problematic because it is employed

in different senses with little care taken to provide a precise definition. The term is

used frequently in the context of ontologies, while a commonly accepted definition does

not exist; there are a variety of approaches to its evaluation in the literature, and the

relationship between them is unclear. This makes it nearly impossible to discuss the

quality of ontologies in any meaningful way, or to know how best to evaluate an ontology’s

quality as a developer or as a prospective user. Though of obvious relevance to our

development lifecycle, this is an issue that extends beyond the development of first-order

logic ontologies, to the ontological community as a whole.

We propose a common definition of quality, adapted from the ISO 9000 definition

[17]:

Definition 11 Ontology quality is a measure of the degree to which the ontology meets


its requirements.

With this definition, quality must be evaluated implicitly through its requirements.

Recall the definition of semantic requirements presented in Chapter 3:

Definition 12 Semantic requirements specify the conditions for semantic correctness on

the intended models for the ontology, and/or models of the ontology’s axioms

However, it is not difficult to argue that the quality of an ontology should account for

more than the correctness of the models it entails. For example, consider the often cited

design criteria presented in [23]: clarity, coherence, extendibility, minimal encoding bias,

and minimal ontological commitment; surely such criteria must also be accounted for in

our definition of quality. To address this, we define a second type of requirements as

follows:

Definition 13 Pragmatic requirements specify properties of the way in which the axioms

are expressed. They are perceivable by a human user, or in the ontology’s implementation,

but do not pertain to the models the axioms entail.

In what follows, we analyze and categorize existing approaches to quality evaluation,

illustrating why no existing methods of evaluation are satisfactory. We also explore the

idea of requirements necessary for our definition of quality, to better understand the way

in which it may be evaluated; we work towards achieving an accurate, testable way of

defining ontology quality.

7.1 Related Work

Based on existing approaches to quality evaluation in the literature, we have generalized the major perspectives of ontology quality. In the following, we discuss specific examples and the merits and limitations of each perspective.


7.1.1 Quality as Correctness

One school of thought is that the quality of an ontology directly corresponds to how ac-

curately it expresses its concepts. [36] presents this approach in terms of the relationship

between the intended models of an ontology, and the actual models that are entailed by

the axioms in the ontology. [13] take the same view of evaluating quality based on the accuracy of an ontology; however, their approach uses principles from psychology. They

determine the correctness of the ontology’s structure with respect to a person’s cognitive

structure.

The correctness of an ontology certainly contributes to its quality, so these approaches

may be valid means of evaluating the quality of an ontology. However, accuracy alone

does not completely constitute the quality of an ontology; methods that take this view do

not present a complete evaluation of ontology quality. In fact, this perspective of quality

is aligned with the semantic requirements presented in Chapter 3, which we have already

noted are valid but insufficient to characterize quality.

7.1.2 Quality as User Opinion

The view of ontology quality as the outcome of user evaluations is argued for in [47][48].

The focus of this work is on defining the appropriate reviews for prospective ontology

users. The selection of appropriate reviews is important because of the subjective nature

of the measures of quality resulting from users’ opinions. User reviews can capture

“soft” information that is difficult to formally evaluate, and they appear to be useful

for consumers in other areas (for example, Amazon.com¹). However, to be effective, this

approach requires a large number of reviews, or “critical mass” as discussed in [48]. The

statistics presented as possible indications of interest in ontologies do not correlate to the

volume of ontologies being reused; as noted earlier, reuse is still a problematic area for

¹ http://www.amazon.com


ontologies. In other words, the “critical mass” required to support such an approach has

not yet been reached, therefore some other form of consumer guidance will be required

to promote reuse before user reviews become a viable option.

A second concern regarding this approach is that it excludes formal views of quality

such as those discussed in Section 7.1.1. There is no guarantee that a user’s reviews will

accurately represent the correctness of an ontology, and this property may vary drastically

between applications. In addition, even with sufficient and informative user feedback this

approach does not address the developers’ needs for ontology quality evaluation, as it

can only be generated after the ontology has been completed. Some measure of quality

should be available during the development lifecycle to help facilitate the production of

high-quality ontologies.

7.1.3 Quality as a Numerical Approximation

Several attempts have been made towards the development of a specific formula that

can be employed to evaluate the quality of an ontology. For example, using different

underlying theories, both [7] and [60] present a series of formulae that attempt to aggre-

gate some numerical assessments of attributes of the ontology. The Ontology Auditor

presented in [7] implements an evaluation based on metrics from semiotics. The over-

all quality of an ontology is evaluated as the weighted sum of these metrics. While

the metrics appear sensibly selected, the measurements presented for each attribute are

somewhat unfounded. For example, the “interpretability” of an ontology is defined as the ratio of the number of terms in the ontology to the number of those terms that are defined in WordNet². OntoQA, presented in [60], employs a similar approach in that it develops formulae to calculate values for various attributes that represent the quality of an ontology. In this case, the attributes and formulae are derived from perceptions of what indicates the potential for “rich” knowledge representation in an

² http://wordnet.princeton.edu/


ontology’s structure, and what indicates the effective use of such a structure. In general,

the problem with these approaches is that they attempt to define equations that yield numerical interpretations of soft characteristics whose very nature resists measurement in this way.
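To be concrete about what is being criticized, the general shape of these aggregations is the following (our paraphrase; the specific attributes and weights are defined by each proposal, e.g. the semiotic metrics of [7]):

```latex
% General form of the aggregation used by these approaches; the attributes m_i and
% weights w_i are specific to each proposal (e.g. the semiotic metrics of [7]).
\[
  Q(O) \;=\; \sum_{i=1}^{n} w_i \, m_i(O)
\]
```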

7.2 Evaluation

Our approach to quality evaluation relies on our ability to evaluate the ontology’s require-

ments. We have already seen how the ontology’s semantic requirements can be defined

and evaluated; recall that these requirements correspond to the perspective of quality as

correctness. However, as recognized by the perspectives of quality as user opinion and

as numerical approximation, there are pragmatic requirements that also factor into how

we perceive the quality of an ontology. To evaluate quality in an accurate and complete

way, we need to consider these pragmatic requirements in conjunction with the seman-

tic requirements presented earlier; further still, these pragmatic requirements must be

testable in order to contribute to the definition and evaluation of quality.

To achieve this, first let us consider the semantic requirements defined earlier. At its core, independent of any particular application, the function of the ontology is representation. In this way, the semantic requirements are easily

comparable to the functional requirements (FRs) in software engineering as they precisely

specify the models that the ontology is required to represent. In software engineering,

non-functional requirements (NFRs) are generally described as behavioural requirements;

they specify the manner in which a system must perform its function [10]. In the same way, a strong analogy can be drawn between the pragmatic requirements of

ontologies and the NFRs in software engineering. The pragmatic requirements contribute

to the quality of the ontology, regardless of a particular application; to achieve a complete

representation of quality, they must describe how we require the ontology to perform its


function (entail the intended models). Existing work [57] employs NFRs as part of an ontology requirement specification document; however, it differentiates these NFRs from those in the software community. We feel that this distinction is generally misguided;

we can directly leverage work from the software engineering community on NFRs to help

supplement the need for the identification of testable attributes.

Unfortunately, the area of NFRs is not completely resolved in software engineering

[19]. In other words, there is no solution to identifying and evaluating these requirements

that can be reimplemented for ontologies. However, we can gain from the progress

that has been made in the software community; many NFRs have been defined and

implemented in practice, and they can provide valuable guidance. An NFR could be reinterpreted for the field of ontologies, where a method for its evaluation might already exist or could be developed. In addition, some NFRs and their evaluation methods may

have the potential to directly translate for use with ontologies.

7.2.1 Usability

One interesting example of a very common, and critical NFR is usability. Usability can

be defined by the following components, as they appear in [46]:

Learnability How easy is it for users to accomplish basic tasks the first time they

encounter the design?

Efficiency Once users have learned the design, how quickly can they perform tasks?

Memorability When users return to the design after a period of not using it, how easily

can they re-establish proficiency?

Errors How many errors do users make, how severe are these errors, and how easily can

they recover from the errors?

Satisfaction How pleasant is it to use the design?


With the definition of some relevant tasks for ontology use, these components can be

interpreted to assess the usability of ontologies. Then, as with software, usability tests

can be developed and performed. Usability tests are performed with test subjects, or

directly evaluated with the use of some predefined criteria; templates and a discussion of both approaches are readily available (see [50] for an example geared towards website

design). As with software, the types of tasks tested may differ between ontologies, and

the tasks evaluated are not expected to be exhaustive, but representative. For example,

we could provide the test subject with the task of extending a particular concept in the

ontology as one way of representing how easily the ontology could be reused.
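As a hypothetical illustration (the task wording and measures are our own and not a prescribed protocol), a small battery of ontology usability tasks might be organized as follows:

```python
# Hypothetical mapping of ontology usability tasks to the components listed in [46];
# task wording and measures are illustrative only.
usability_tasks = [
    {"component": "Learnability",
     "task": "Locate the axioms that define a given relation using only the documentation.",
     "measure": "time to completion on first exposure"},
    {"component": "Efficiency",
     "task": "Extend a particular concept in the ontology with a new subclass and its axioms.",
     "measure": "time to completion after training"},
    {"component": "Errors",
     "task": "Formulate a simple query in the ontology's vocabulary.",
     "measure": "number and severity of mistakes, and ease of recovery"},
    {"component": "Satisfaction",
     "task": "Complete a short post-session questionnaire.",
     "measure": "subjective rating"},
]

for entry in usability_tasks:
    print(f"{entry['component']}: {entry['task']} ({entry['measure']})")
```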

7.3 Discussion

The evaluation of quality would help to address some of the existing barriers to ontology

reuse. It would also be useful for benchmarking, or during development by providing

guidance for design decisions. There is a clear direction for future work in this case: to

continue the development of a definition of ontology quality that we can evaluate. To

accomplish this we need to continue to identify testable pragmatic requirements that rep-

resent ontology quality. Future work should continue to investigate the potential gains

from NFRs, and also from within the ontology community. The more pragmatic require-

ments that are defined, the better our understanding of quality will be. In addition, the

role of each requirement in the development lifecycle should be considered and clearly

described.

We provided a review of the current state of the definition and evaluation of ontology

quality, and used this to motivate a different view of quality. We presented clear direction

on how to proceed to work towards the eventual goal of achieving a complete definition

of quality and a means for its evaluation. In particular, we presented existing work in

the area of software engineering as a potential aid; certainly there will be challenges in


defining ontology quality, but there is existing work that can be leveraged. Something defined as an NFR will not necessarily translate to an appropriate pragmatic requirement, but the idea is that we can take advantage of existing work to identify useful requirements, at which point we might also be able to leverage any existing evaluation methods. The case where a method for a pragmatic requirement's evaluation is not readily available

points to a need for a technique that has not yet been developed. These are cases

that need to be identified, and should be presented as open problems for the ontology

community.

Chapter 8

An Environment for Expressive

Ontology Development

Recall that this lifecycle was a result of experiences developing the PSL and BoxWorld

ontologies. Due to the tight, cyclic nature of the development process, one of the major

challenges that was encountered was that of maintaining effective documentation of the

verification results and the design changes that occurred. This type of information is crucial to our understanding of the state of the development project, to our ability to diagnose the outcomes of the Verification phase, and to our ability to make sound design decisions. In practice, given the value of this information, we manually kept records of the tests performed, their results, the changes made to the ontology, and the subsets and lemmas used in the tests. Unfor-

tunately, this led to a great deal of overhead work during the development process, and

it was still difficult to conceptually aggregate the relationships between all of the infor-

mation being tracked. With revisions continuously being made to axioms and sometimes

to requirements, and different subsets used for different tests, it was nearly impossible

to ascertain the impact of a redesign on previously verified test results. Often, it

was easier to simply re-run all of the tests than to determine whether or not their results

were still valid.


We consider these experiences here as we discuss the functionality that would be

required for MACLEOD (A Common Logic Environment for Ontology De-

velopment), a system that could effectively support the development of an ontology in

accordance with our ontology development lifecycle.

8.1 System Requirements

We discuss the functionality that is required to support each phase of development,

addressing the issues identified in practice. We then specify the concepts and metadata

associated with this functionality, and we formalize these requirements with a set of use cases (included in Appendix D). The purpose of MACLEOD would be not only to track changes to individual objects related to ontology development (e.g. axioms, semantic requirements), but also to evaluate the impact of design revisions on the entire

development project. Although we have made an effort not to restrict our lifecycle

methodology to development with first-order logic, we commit to its use here (specifically

the Common Logic language [9]) in order to be more specific in the discussion of functional

requirements.

8.1.1 Phase-Specific Requirements

Design The work that occurs in the design phase requires the ability to add, modify,

and remove axioms in the ontology. For documentation purposes, the system must

keep a record of all axioms and their revisions, accompanied by a specification of

the (re-)design rationale where possible; as noted in Chapter 4, design rationale is an important aspect of documentation. The system should facilitate versioning of

the ontology, each axiom, and any associated modules. The system should also

support the techniques of model exploration suggested in the Design phase; in particular, it must support the use of any first-order logic model generator. The system should


store any models generated, providing relevant information such as an association

to the version of axioms that were used in the input file. In accordance with

our development methodology, the majority of design decisions result from model

exploration or verification results; therefore the system should allow the user not only to annotate each axiom revision but also to associate it with the model or test result that motivated it.

Requirements Two types of semantic requirements may be produced in the Require-

ments phase - partial characterizations and complete characterizations of the in-

tended models for the ontology. Both types may be formulated as entailment problems (tests that will be evaluated in the Verification phase); however, the system must distinguish between the types of requirements because their entailment

problems are formulated differently. Requirements contributing to a partial char-

acterization need only be specified as sentences in the vocabulary of the ontology,

because it is implicit that the antecedent of the entailment problem is the axiomati-

zation of the ontology. On the other hand, a complete characterization is formulated with two types of reasoning problems (recall Rep-1 and Rep-2 from Chapter 3).

These semantic requirements are also constructed with the use of a well-understood

theory that characterizes the intended models, and two sets of translation axioms

between the ontology and the well-understood theory. The system must allow for

the creation, modification, and removal of both types of semantic requirements. As

with the design of the axioms, modifications to the semantic requirements generally

result from model exploration or verification results. Here too the system should

allow for a revision to be both annotated and associated with the generated model

or test result that led to it.

Verification We saw that the Verification phase focuses on the use of an automated

theorem prover to evaluate the semantic requirements; the system must support


the use of a first-order logic automated theorem prover to accomplish this. A more

subtle aspect of Verification is the translation of the semantic requirements into a

set of tests for the theorem prover. The system must generate the appropriate input

file for the theorem prover, depending on the semantic requirement being tested;

it should allow the user to select subsets to be used instead of the axiomatization

of the entire ontology; it should allow for the inclusion of lemmas; and it should

document all of these choices with the test itself. In the development process, the

analysis of the test (proof) output is of extreme importance as it determines what

must occur next. The system should document this aspect too, allowing the user

to indicate which of the three cases occurred. Finally, model generation plays a role here too; we require the same functionality as for the Design phase, discussed above.

Tuning Recall that we discussed the use of subsets and lemmas in the Tuning phase

to improve theorem prover performance. The system should allow for the creation

and removal of subsets; subsets are constructed from a collection of axioms and/or

modules and should be updated whenever their associated axioms are revised. If

a subset has been created in error, the system should allow for its modification or

removal. The system must also support the addition, modification, and removal

of lemmas. Recall that a lemma is a result of the theory we are reasoning with (usually the ontology); therefore, any design changes may require a modification of the lemma so that it remains valid.
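To illustrate the Verification- and Tuning-phase functionality just described, the following is a minimal sketch assuming a generic first-order prover and our own, hypothetical naming; it is not a committed MACLEOD design:

```python
# Hypothetical sketch: assemble a theorem-proving test from a subset, optional
# lemmas, and the requirement (goal), and record which choices were made.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Test:
    requirement_id: str
    premises: List[str]                      # axioms of the chosen subset (or whole ontology)
    lemmas: List[str] = field(default_factory=list)
    status: str = "No Proof"                 # later "Proof" or "Unintended Proof"

def build_test(requirement_id: str, goal: str,
               subset_axioms: List[str], lemmas: List[str]) -> Tuple[Test, str]:
    """Assemble a schematic prover input from a subset, optional lemmas, and a goal.

    A real implementation would emit the concrete syntax of whichever prover is
    used (e.g. Prover9 or a TPTP-based system); the format here is only indicative."""
    test = Test(requirement_id, premises=list(subset_axioms), lemmas=list(lemmas))
    lines = ["% premises: subset axioms and lemmas"]
    lines += subset_axioms + lemmas
    lines += ["% goal: the semantic requirement under test", goal]
    return test, "\n".join(lines)

test, prover_input = build_test(
    requirement_id="PC-1",
    goal="(forall (x) (if (phi x) (psi x)))",
    subset_axioms=["(forall (x y z) (if (and (r x y) (r y z)) (r x z)))"],
    lemmas=[],
)
```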

8.1.2 System Metadata

Based on the requirements identified above, we recognize the following concepts and the

associated metadata that must be maintained by the system (described in Tables 8.1

- 8.10). This information is summarized in Figure 8.1. In this section we abbreviate


the two types of semantic requirements: partial characterizations of the intended models (referred to as PCs) and complete characterizations of the intended models (referred to as CCs).

Table 8.1: Axiom Metadata

Property         | Cardinality (min:max) | Range
Version          | 1:1 | {N}
Revision Cause   | 1:n | {Test, Model, String}
Contained in     | 1:n | {Module, Subset}
Used in          | 0:n | {Test}

Table 8.2: Module Metadata

Property         | Cardinality (min:max) | Range
Version          | 1:1 | {N}
Revision Cause   | 1:n | {Test, Model, String}
Contains         | 1:n | {Axiom}
Contained in     | 0:n | {Subset}
Characterized by | 0:n | {Semantic Requirement (PC), Semantic Requirement (CC)}

Table 8.3: Semantic Requirement (PC) Metadata

Property         | Cardinality (min:max) | Range
Version          | 1:n | {N}
Revision Cause   | 1:n | {Test, Model, String}
Characterizes    | 1:n | {Module}
Evaluated in     | 0:n | {Test}
Status           | 1:1 | {“Verified”, “Unverified”}


Table 8.4: Semantic Requirement (CC) Metadata

Property                          | Cardinality (min:max) | Range
Version                           | 1:n | {N}
Revision Cause                    | 1:n | {Test, Model, String}
Relative Interpretation Direction | 1:1 | {“rep-1”, “rep-2”}
Characterizes                     | 1:n | {Module}
Characterized by                  | 1:1 | {Well-understood Theory}
Uses                              | 2:2 | {Translation Axioms}
Evaluated in                      | 0:n | {Test}
Status                            | 1:1 | {“Verified”, “Unverified”}

Table 8.5: Well-understood Theory Metadata

Property         | Cardinality (min:max) | Range
Version          | 1:n | {N}
Revision Cause   | 1:n | {Test, Model, String}
Characterizes    | 1:n | {Semantic Requirement (CC)}

Table 8.6: Translation Axioms Metadata

Property         | Cardinality (min:max) | Range
Version          | 1:n | {N}
Revision Cause   | 1:n | {Test, Model, String}
Used By          | 1:n | {Semantic Requirement (CC)}

Table 8.7: Subset Metadata

Property         | Cardinality (min:max) | Range
Version          | 1:n | {N}
Revision Cause   | 1:n | {Test, Model, String}
Contains         | 1:n | {Axiom, Module}
Included in      | 0:n | {Test}


Table 8.8: Lemma Metadata

Property         | Cardinality (min:max) | Range
Version          | 1:n | {N}
Revision Cause   | 1:n | {Test, Model, String}
Provable From    | 1:n | {Module, Subset}
Included in      | 0:n | {Test}
Used in          | 0:n | {Test}

Table 8.9: Test Metadata

Property         | Cardinality (min:max) | Range
Version          | 1:n | {N}
Status           | 1:1 | {“Proof”, “Unintended Proof”, “No Proof”}
Evaluates        | 1:1 | {Semantic Requirement (PC), Semantic Requirement (CC)}
Includes         | 1:n | {Subset, Lemma}
Uses             | 0:n | {Subset, Lemma}
Causes Revision  | 0:n | {Axiom, Module, Subset, Lemma, Semantic Requirement (PC), Semantic Requirement (CC), Well-understood Theory, Translation Axioms}

Table 8.10: Model Metadata

Property         | Cardinality (min:max) | Range
Version          | 1:n | {N}
Includes         | 1:n | {Subset}
Causes Revision  | 0:n | {Axiom, Module, Subset, Lemma, Semantic Requirement (PC), Semantic Requirement (CC), Well-understood Theory, Translation Axioms}


[Figure 8.1: Concept Metadata. The figure summarizes the concepts above (Axiom, Module, Subset, Lemma, Semantic Requirement (Partial Characterization), Semantic Requirement (Complete Characterization), Well-Understood Theory, Translation Axioms, Test, and Model), their version and status attributes, and the relationships between them (Contained in, Contains, Includes, Uses, Proof Uses, Characterizes, Characterized By, Evaluated In, and Causes Revision).]
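As a minimal sketch of how a fragment of this metadata (Tables 8.1 and 8.9) might be represented so that revision causes and test dependencies can be traversed programmatically (the field names follow the tables, but the representation itself is our own, hypothetical choice):

```python
# Hypothetical fragment of the metadata model in Tables 8.1 and 8.9: enough
# structure to record versions, revision causes, and which tests used which axioms.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Axiom:
    text: str
    version: int = 1
    revision_cause: List[str] = field(default_factory=list)   # test/model identifiers or free text
    contained_in: List[str] = field(default_factory=list)     # module or subset names
    used_in: List[str] = field(default_factory=list)          # identifiers of tests whose proofs used this axiom

@dataclass
class TestRecord:
    test_id: str
    evaluates: str                                             # semantic requirement (PC or CC) identifier
    status: str = "No Proof"                                   # "Proof", "Unintended Proof", or "No Proof"
    includes: List[str] = field(default_factory=list)          # subsets and lemmas supplied to the prover
    uses: List[str] = field(default_factory=list)              # subsets and lemmas the proof actually used
    causes_revision: List[str] = field(default_factory=list)   # objects revised as a result of this test
```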

8.2 Related Work

A recent survey found approximately 150 different ontology development tools available

on the web [45]. Instead of attempting a large-scale review of existing tools for ontology

development, we discuss the development support provided by four of the most prevalent tools: Ontolingua, OntoEdit, WebODE, and Protege.

cited and repeatedly appears in ontology development tool surveys over the past decade

[11][8][45].

Although no longer active, Ontolingua [15] was one of the earliest efforts towards

an ontology development tool. The web-based system contained a repository for storing

and sharing ontologies, and also provided functionality for the creation of new ontologies

and the modification of existing ontologies. It also provided support for collaborative

development; however, activities that are crucial to our lifecycle, such as requirements specification and validation, were not addressed.

Protege [18] was originally developed as a tool to assist the process of building knowl-

edge bases in a medical domain. However, later versions have extended far beyond this

and it has become a popular tool for ontology design. Protege is not associated with any

particular development methodology, and possibly as a consequence of this it does not

directly address lifecycle activities such as requirements specification or verification in its

design. That being said, Protege is open source, and a wide variety of add-on functionalities continues to be developed by the community.

OntoEdit [58] and WebODE [3] are development tools based on existing development

methodologies – the On-To-Knowledge Methodology [59] and METHONTOLOGY [16],

respectively. Both tools aim to address a gap between development methodologies and

environments that arises when tools do not account for development activities beyond

design and implementation. Both tools offer functionality that is in line with what we

propose here, in that they aim to support the lifecycle activities of design, requirements

specification and evaluation. Since these tools are designed to support development

methodologies that we have deemed insufficient for the development of expressive on-

tologies, it is only natural that their resulting functionality also falls short of what we

require. Substantial aspects of our lifecycle, such as complete characterizations, subsets, and lemmas, are not supported; however, there may be some potential to build on these efforts to achieve the required functionality.

Admittedly, it is unreasonable to expect that any existing environment will satisfy our requirements, given that the development methodology such an environment would need to support was designed to address shortcomings of existing lifecycles; many of the tasks we require

the system to support do not exist elsewhere, so there is no reason to expect that existing

systems would have incorporated support for them.


8.3 Discussion

The development of a tool to provide the functionality described in this chapter would

address the pragmatic challenges of our lifecycle methodology, which we acknowledge

may be a barrier to its widespread use. Removing this barrier would no doubt encourage

the adoption of our more rigorous verification practices for expressive ontologies, and

hopefully encourage the development of more expressive ontologies in general. With

the functionality discussed here, MACLEOD would be capable of identifying affected

test results when an axiom is modified, or removed. Ideally, in its implementation a

routine could re-run all of the affected tests automatically when design changes were

made, providing the user with a report of the results. As long as the axiomatization is

still consistent following the design change (consistency checking should be performed

at regular intervals during development in any case), the potentially affected test results

can be found by identifying all of the tests that used the particular axiom to generate

their proof. In addition, any unresolved tests (cases where the theorem prover failed to

find a proof) might also be re-run in this case, as the revision might cause a proof to be

easily found.
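A sketch of that impact analysis, reusing the hypothetical Axiom and TestRecord classes sketched in Section 8.1.2 (again an assumption, not a committed design):

```python
# Hypothetical impact analysis: when an axiom is revised, find (1) tests whose
# proofs used that axiom, so their results may be invalidated, and (2) tests
# that previously found no proof, which may now succeed after the revision.
def affected_tests(axiom, all_tests):
    invalidated = [t for t in all_tests
                   if t.status == "Proof" and t.test_id in axiom.used_in]
    worth_retrying = [t for t in all_tests if t.status == "No Proof"]
    return invalidated, worth_retrying
```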

Although potentially convenient, it is important to consider how this type of automation might affect the usability of MACLEOD: automatically running too many proofs at once might have an adverse effect, overwhelming the user, who will need to review all of the results. We have not thoroughly considered the non-functional requirements of

MACLEOD at this point, however we note that this is a crucial task for future work. An

additional point for future work includes a deeper examination of existing development

tools with the goal of determining the feasibility of extending an existing tool such that

it would effectively support our lifecycle, as discussed here. This examination should

potentially extend beyond the ontological community; since we require a great deal of versioning functionality, there may be some existing version control system that could be tailored to meet our needs.

Chapter 9

Conclusion

The lifecycle methodology presented here was motivated by the challenges encountered

in the development of first-order logic ontologies. Each phase in the lifecycle was de-

signed not only to address these challenges, but to address the general lack of guidance

and consideration for expressive ontologies in existing development methodologies. The

fact that there is no widely accepted definition of ontology quality is a barrier to the

consideration of quality in the development of ontologies, and presents a problem to the

ontology community as a whole in terms of comparing and evaluating existing ontolo-

gies. To address this, we proposed a definition of ontology quality, and an approach to

evaluating it. In addition, we discussed the requirements for MACLEOD, a system to

assist the development of ontologies according to our lifecycle methodology.

To the best of our knowledge, ours is the first lifecycle methodology to prescribe such

rigorous specification and verification of requirements. Further, we provide detailed in-

struction where possible; in particular, the case-by-case guidelines for the outcomes of

verification presented in Chapter 5 are both novel and valuable for the ontology lifecycle.

Although we incorporated pragmatic insights and specific examples from our experi-

ences with two first-order logic ontologies, it is important to note that the methodology

presented here is not restricted to use for first-order logic ontologies. We make special


consideration for the semidecidability of first-order logic and its associated challenges,

however the use of intended models as semantic requirements is valuable to any ex-

pressive ontology, as is the use of automated reasoning tools for model exploration and

semiautomated verification. This is true regardless of the logical language the ontology is

written in. We conclude with a discussion of open issues and directions for future work.

9.1 Open Issues

Although we attempted to address all of the challenges that arise with the use of first-

order logic for ontology development, the issue of semidecidability is not completely

resolved in the Verification phase. In Case 2 (see Section 5.3), we discuss the situation

where no proof is returned in the verification of a semantic requirement. Recall that in

this case we are uncertain as to whether a proof does not exist, or the theorem prover

is simply unable to find the proof in the allotted time limit. Although the generation

of a counterexample is a possible means of resolving this uncertainty, it is still possible

that neither the theorem prover nor the model generator will return any result. In

this situation, further manual investigation by the user and an eventual intuition-based

diagnosis may be unavoidable.

We present model generation as a tool to assist in both Design and Verification phases,

however this assumes that the ontology may be satisfied by finite models. If the ontology

has only infinite models, model generators cannot be used directly with the ontology’s

axioms; although the lifecycle may still be applied effectively, it will be more challenging

to do so without the ability to employ a model generator.

One potential barrier to the adoption of this lifecycle is the difficulty associated with

specifying a complete characterization of the semantic requirements (see Section 3.4).

The development of a complete characterization of an ontology’s intended models is a

challenging process; we do not propose any resolutions for this issue and we expect some


apprehension as a result of this. However, it should be emphasized that although complete

characterizations are advantageous, these rigorous requirements are not necessary for the

development of an ontology with this lifecycle. In fact, we have found that it is often the

case that a complete characterization of semantic requirements is not specified until after

the ontology has been correctly and completely developed, with respect to its partial

characterization of semantic requirements. In addition, there are potential advantages

to achieving a complete characterization beyond the obvious, rigorous specification and

verification of requirements. Although not yet investigated, we suspect that complete

characterizations with well-understood theories could also be approached as a means of

performance improvement. If the well-understood theory is axiomatized more clearly

and concisely (as we have found is often the case with mathematical theories), then it

is possible that queries to the ontology would exhibit better performance if they were

translated and attempted with the well-understood theory rather than the ontology itself.

In addition, as ontology repositories become more structured and more populated, they

have the potential to provide assistance with the task of identifying well-understood

theories that may characterize the semantic requirements of a particular ontology. We

acknowledge that the difficulty of achieving a complete characterization is an open issue for our lifecycle; however, for the aforementioned reasons, we feel that it should not detract

from the value of our development methodology.

9.2 Future Work

Future work should include considerations for the non-functional requirements of MACLEOD

prior to its development. In particular, attention should be paid to the system’s usabil-

ity, as it will potentially be storing and displaying large amounts of information to the

user. In addition, the design of modular ontologies was not tackled in our methodol-

ogy, as modularity is still a relatively young research area. Future work should extend


the lifecycle to incorporate some techniques for the design of modular ontologies, or the

modularization of ontologies.

Other directions for future work that we have identified have potential value not

only for our lifecycle methodology, but to the ontology community in general. The

tuning techniques presented in Chapter 6 are certainly not an exhaustive set. Further

development of tuning techniques, along with an effective means of storing and reusing

them would be of great value - not only for our lifecycle, but to improve the performance

of ontologies with automated reasoners in general. This point leads to a related area

for future work in ontology repositories: additional repository functionality should be pursued, for example, search capabilities to facilitate reuse and the storage of useful tuning artefacts (e.g. lemmas and subsets) to assist with performance improvements. We

proposed a direction for the evaluation of ontology quality in Chapter 7; however, further

work in the development of pragmatic requirements is needed before this definition can

be applied in practice. Maintenance is an important aspect of the ontology lifecycle

that was not considered here. The role of each development phase in the context of

ontology maintenance should be investigated. This should consider different types of

maintenance, from “bug-fixes” resulting from some error that was not detected prior to

the application of the ontology, to major maintenance resulting from some change to the

ontology’s domain. Another challenging task in this area is the definition of an ontology

revision. Future work should provide criteria to differentiate between a new revision of

an existing ontology, and an entirely new theory. It might also consider the similarities

between the process of maintenance and tasks associated with ontology reuse, potentially

yielding interesting results for both areas in the ontology community.

The lifecycle methodology for expressive ontologies was used effectively with the PSL

and BoxWorld ontologies, and we propose that it may be applied successfully with more

ontologies in the future. Furthermore, the pursuit of this future work will facilitate the

continued growth of both the methodology and the ontology community as a whole.

Appendix A

Glossary

Common Logic Common Logic is a logic framework intended for information exchange

and transmission. It defines an abstract syntax and an associated model-theoretic

semantics for a specific extension of first-order logic. The intent is that the con-

tent of any system using first-order logic can be represented in this International

Standard. (adapted from [9])

domain theory A domain theory Σ of an ontology Tonto consists of a set of axioms that

is constructed using Tonto to apply the semantics of the ontology in a particular

domain. [27]

elementary equivalence Elementary equivalence exists between two or more models

when they possess the same set of first-order sentences as consequences. [12]

first-order logic First-order logic is a sound and complete logical system that allows

for the use of quantifiers.

model We refer to a model of an ontology in the sense of a satisfying interpretation of

the ontology’s axiomatization; a Tarskian model as in [12].

theory We define a theory as a set of sentences, written in some logical language, that

is closed under logical implication. [12]


Appendix B

Axioms for Subactivity Occurrence

Orderings

Elements of the subactivity occurrence ordering are elements of an activity tree.

(∀a, s) soo(s, a) ⊃ root(s, a) ∨ (∃s1) min_precedes(s1, s, a)   (B.1)

The root of an activity tree is mapped to an element of the subactivity occurrence

ordering by an order homomorphism.

(∀s1, a) root(s1, a) ⊃ (∃s2) soo(s2, a) ∧ mono(s1, s2, a) ∧ same_tree(s1, s2, a)   (B.2)

Every element of the activity tree is mapped to an element of the subactivity occur-

rence ordering by an order homomorphism.

(∀s1, s2, a) min_precedes(s1, s2, a) ⊃ (∃s3) soo(s3, a) ∧ mono(s2, s3, a) ∧ same_tree(s3, s2, a)   (B.3)


There is no order homomorphism between distinct elements of the subactivity occur-

rence ordering.

(∀s1, s2, a) mono(s1, s2, a) ∧ soo(s1, a) ∧ soo(s2, a) ∧ same_tree(s1, s2, a) ⊃ (s1 = s2)   (B.4)

The relation soo_precedes is transitive.

(∀a, s1, s2, s3) soo_precedes(s1, s2, a) ∧ soo_precedes(s2, s3, a) ⊃ soo_precedes(s1, s3, a)   (B.5)

The relation soo_precedes orders elements in the subactivity occurrence ordering.

(∀a, s1, s2) soo_precedes(s1, s2, a) ≡ (soo(s1, a) ∧ soo(s2, a) ∧ preserve(s1, s2, a) ∧ ¬preserve(s2, s1, a))   (B.6)

(∀s1, s2, a) preserve(s1, s2, a) ≡ (∃s3, s4) mono(s1, s3, a) ∧ mono(s2, s4, a) ∧ min_precedes(s3, s4, a) ∧ same_tree(s1, s2, a) ∧ same_tree(s1, s3, a)   (B.7)

Appendix C

Axioms for BoxWorld Ontology

C.1 Axioms for sorts and part.

(∀x) point(x) ∨ edge(x) ∨ surface(x) ∨ solid(x) (C.1)

(∀x) point(x) ⊃ (¬edge(x) ∧ ¬surface(x) ∧ ¬solid(x)) (C.2)

(∀x) edge(x) ⊃ (¬surface(x) ∧ ¬solid(x)) (C.3)

(∀x) surface(x) ⊃ ¬solid(x) (C.4)

(∀x, y, z) part(x, y) ∧ part(y, z) ⊃ part(x, z) (C.5)

C.2 Axioms for solids and surfaces.

Every solid contains a surface.

(∀x) solid(x) ⊃ (∃s) surface(s) ∧ part(s, x) (C.6)


Every surface is contained in a solid.

(∀s) surface(s) ⊃ (∃x) solid(x) ∧ part(s, x) (C.7)

Surfaces can only be part of solids.

(∀s, x) surface(s) ∧ part(s, x) ⊃ solid(x) (C.8)

A surface is a part of a unique solid.

(∀s, x1, x2) surface(s) ∧ part(s, x1) ∧ part(s, x2) ⊃ (x1 = x2) (C.9)

Solids are not part of anything.

(∀x) solid(x) ⊃ ¬(∃y) part(x, y) (C.10)

C.3 Axioms for edges, surfaces, and solids.

Every edge contains at least two vertices.

(∀e1) edge(e1) ⊃ (∃e2, e3, v1, v2) meet(e1, e2, v1) ∧ meet(e1, e3, v2) ∧ (v1 ≠ v2) ∧ (e1 ≠ e2)   (C.11)

An edge contains at most two vertices.

(∀e, x1, x2, x3) edge(e) ∧ vertex(x1) ∧ vertex(x2) ∧ vertex(x3)

∧part(x1, e) ∧ part(x2, e) ∧ part(x3, e)

⊃ ((x1 = x2) ∨ (x2 = x3) ∨ (x1 = x3)) (C.12)

Every edge is part of a surface.

(∀e) edge(e) ⊃ (∃s) surface(s) ∧ part(e, s) (C.13)


Every surface contains an edge.

(∀s) surface(s) ⊃ (∃e) edge(e) ∧ part(e, s) (C.14)

Every edge in a surface meets another edge in that surface.

(∀e1, s, p) edge(e1) ∧ part(e1, s) ∧ surface(s) ∧ vertex(p) ∧ part(p, e1)

⊃ (∃e2) part(e2, s) ∧meet(e1, e2, p) (C.15)

(∀e1, e2, e3, v, s) (meet(e1, e2, v) ∧ meet(e1, e3, v) ∧ (e2 ≠ e3) ∧ surface(s) ∧ part(e1, s) ∧ part(e2, s)) ⊃ ¬part(e3, s).   (C.16)

An edge is part of at most two surfaces.

(∀e, x1, x2, x3) edge(e) ∧ surface(x1) ∧ surface(x2) ∧ surface(x3) ∧ part(e, x1) ∧ part(e, x2) ∧ part(e, x3) ⊃ ((x1 = x2) ∨ (x2 = x3) ∨ (x1 = x3))   (C.17)

If surfaces share more than one edge, then the edges are disjoint.

(∀e1, e2, s1, s2) edge(e1) ∧ edge(e2) ∧ surface(s1) ∧ surface(s2) ∧ part(e1, s1) ∧ part(e1, s2) ∧ part(e2, s1) ∧ part(e2, s2) ∧ (e1 ≠ e2) ∨ (s1 ≠ s2) ⊃ ¬(∃p) part(p, e1) ∧ part(p, e2)   (C.18)

A surface that is part of a solid containing other surfaces also contains a ridge.

(∀x, s1, s2) solid(x) ∧ surface(s1) ∧ surface(s2) ∧ (s1 ≠ s2) ∧ part(s1, x) ∧ part(s2, x) ⊃ (∃e) ridge(e) ∧ part(e, s1)   (C.19)


(∀e1) border(e1) ⊃ (∃e2, e3, v1, v2) border(e2) ∧ border(e3) ∧ (e2 ≠ e3) ∧ meet(e1, e2, v1) ∧ meet(e1, e3, v2)   (C.20)

Edges are part of a surface or a solid.

(∀e, x) edge(e) ∧ part(e, x) ⊃ (surface(x) ∨ solid(x)) (C.21)

An edge is part of a unique solid.

(∀e, x1, x2) edge(e)∧ part(e, x1)∧ part(e, x2)∧ solid(x1)∧ solid(x2) ⊃ (x1 = x2) (C.22)

C.4 Axioms for points, edges, surfaces, and solids.

A point is part of an edge.

(∀p) point(p) ⊃ (∃e) edge(e) ∧ part(p, e) (C.23)

Points are part of an edge, surface, or solid.

(∀p, x) point(p) ∧ part(p, x) ⊃ (edge(x) ∨ surface(x) ∨ solid(x)) (C.24)

A point is part of a unique solid.

(∀p, x1, x2) point(p) ∧ solid(x1) ∧ solid(x2)

∧part(p, x1) ∧ part(p, x2) ⊃ (x1 = x2) (C.25)

Nothing is part of a point.

(∀p) point(p) ⊃ ¬(∃x) part(x, p) (C.26)


C.5 Axioms for the cyclic ordering of the edges in a

surface.

(∀e1, e2, e3, s) sbetween(e1, e2, e3, s) ⊃ surface(s) ∧ edge(e1) ∧ edge(e2) ∧ edge(e3)

∧part(e1, s) ∧ part(e2, s) ∧ part(e3, s) (C.27)

(∀e1, e2, e3, s) sbetween(e1, e2, e3, s) ⊃ sbetween(e2, e3, e1, s) (C.28)

(∀e1, e2, s) ¬sbetween(e1, e2, e2, s) (C.29)

(∀e1, e2, e3, e4, s) sbetween(e1, e2, e3, s) ∧ sbetween(e1, e3, e4, s) ⊃ sbetween(e1, e2, e4, s)

(C.30)

(∀e1, e2, e3, s) sbetween(e1, e2, e3, s) ⊃ (((∃e4) sbetween(e1, e4, e2, s) ∧ (e4 ≠ e2)) ∨ ((∃p) meet(e1, e2, p)))   (C.31)

(∀e1, s) edge(e1) ∧ part(e1, s) ∧ surface(s) (C.32)

⊃ (∃e2, e3) sbetween(e1, e2, e3, s)

C.6 Axioms for the cyclic ordering of the border

edges in a solid.

(∀e1, e2, e3, t) bbetween(e1, e2, e3, t) ⊃ solid(t) ∧ border(e1) ∧ border(e2) ∧ border(e3)

∧part(e1, t) ∧ part(e2, t) ∧ part(e3, t) (C.33)


(∀e1, e2, e3, t) bbetween(e1, e2, e3, t) ⊃ bbetween(e2, e3, e1, t) (C.34)

(∀e1, e2, t) ¬bbetween(e1, e2, e2, t) (C.35)

(∀e1, e2, e3, e4, t) bbetween(e1, e2, e3, t) ∧ bbetween(e1, e3, e4, t) ⊃ bbetween(e1, e2, e4, t)

(C.36)

(∀e1, e2, e3, t) bbetween(e1, e2, e3, t) ⊃ (((∃e4) bbetween(e1, e4, e2, t) ∧ (e4 ≠ e2)) ∨ ((∃p) meet(e1, e2, p)))   (C.37)

(∀e1, t) border(e1) ∧ part(e1, t) ∧ solid(t) (C.38)

⊃ (∃e2, e3) bbetween(e1, e2, e3, t)

C.7 Axioms for the cyclic ordering of concurrent

ridges.

(∀e1, e2, e3, t) rbetween(e1, e2, e3, t) ⊃ solid(t) ∧ ridge(e1) ∧ ridge(e2) ∧ ridge(e3)

∧part(e1, t) ∧ part(e2, t) ∧ part(e3, t) (C.39)

(∀e1, e2, e3, t) rbetween(e1, e2, e3, t) ⊃ rbetween(e2, e3, e1, t) (C.40)

(∀e1, e2, t) ¬rbetween(e1, e2, e2, t) (C.41)


(∀e1, e2, e3, e4, t) rbetween(e1, e2, e3, t) ∧ rbetween(e1, e3, e4, t) ⊃ rbetween(e1, e2, e4, t)

(C.42)

(∀e1, e2, e3, t) rbetween(e1, e2, e3, t) ⊃ (((∃e4) rbetween(e1, e4, e2, t) ∧ (e4 ≠ e2)) ∨ ((∃s) surface(s) ∧ part(e1, s) ∧ part(e2, s)))   (C.43)

(∀p, e1, e2, e3, t) strong_peak(p) ∧ part(p, e1) ∧ part(p, e2) ∧ part(p, e3) ∧ edge(e1) ∧ edge(e2) ∧ edge(e3) ∧ solid(t) ∧ part(e1, t) ∧ part(e2, t) ∧ part(e3, t) ⊃ (rbetween(e1, e2, e3, t) ∨ rbetween(e2, e3, e1, t) ∨ rbetween(e3, e1, e2, t))   (C.44)

C.8 Axioms for the linear ordering of concurrent

edges.

(∀e1, e2, e3, t) ebetween(e1, e2, e3, t) ⊃ solid(t) ∧ edge(e1) ∧ edge(e2) ∧ edge(e3)

∧part(e1, t) ∧ part(e2, t) ∧ part(e3, t) (C.45)

(∀e1, e2, e3, t) ebetween(e1, e2, e3, t) ⊃ ebetween(e3, e2, e1, t) (C.46)

(∀e1, e2, t) ¬ebetween(e1, e2, e2, t) (C.47)

(∀e1, e2, e3, e4, t) ebetween(e1, e2, e3, t) ∧ ebetween(e2, e3, e4, t) ⊃ ebetween(e1, e2, e4, t)

(C.48)


(∀e1, e2, e3, t) ebetween(e1, e2, e3, t) ⊃ (((∃e4) ebetween(e1, e4, e2, t) ∧ (e4 ≠ e2)) ∨ ((∃s) surface(s) ∧ part(e1, s) ∧ part(e2, s)))   (C.49)

(∀p, e1, e2, e3, t) peak(p) ∧ ¬strong_peak(p) ∧ part(p, e1) ∧ part(p, e2) ∧ part(p, e3) ∧ edge(e1) ∧ edge(e2) ∧ edge(e3) ∧ solid(t) ∧ part(e1, t) ∧ part(e2, t) ∧ part(e3, t) ⊃ (ebetween(e1, e2, e3, t) ∨ ebetween(e2, e3, e1, t) ∨ ebetween(e3, e1, e2, t))   (C.50)

C.9 Axioms for connectedness of surfaces.

(∀s) ¬connected(s, s) (C.51)

(∀s1, s2, s3) connected(s1, s2) ∧ connected(s2, s3) ⊃ connected(s1, s3) (C.52)

(∀s1, s2) connected(s1, s2) ⊃ (C.53)

(((∃e) edge(e) ∧ part(e, s1) ∧ part(e, s2)) ∨ ((∃s3)connected(s1, s3) ∧ connected(s3, s2)))

(∀t) solid(t) ⊃ (∃s1) surface(s1) ∧ part(s1, t) ∧ ((∀s2) surface(s2) ∧ (s1 ≠ s2) ⊃ connected(s1, s2))   (C.54)

(∀s1, s2) connected(s1, s2) ⊃ surface(s1) ∧ surface(s2) (C.55)


C.10 Axiom for Tconn−smo.

(∀e1, e2, e3, s) edge(e1) ∧ edge(e2) ∧ edge(e3) ∧ surface(s) (C.56)

∧part(e1, s) ∧ part(e2, s) ∧ part(e3, s)

⊃ (between(e1, e2, e3) ∨ between(e2, e3, e1) ∨ between(e3, e1, e2))

C.11 Axiom for Tclosed−smo.

(∀t) solid(t) ⊃ closed(t) (C.57)

C.12 Definitions for above axioms.

(∀e1, e2, v) meet(e1, e2, v) ≡ edge(e1) ∧ edge(e2) ∧ (e1 ≠ e2) ∧ point(v) ∧ part(v, e1) ∧ part(v, e2)   (C.58)

A vertex is a point that is part of multiple edges.

(∀v) vertex(v) ≡ (∃e1, e2)meet(e1, e2, v) (C.59)

A ridge is an edge that is part of two surfaces.

(∀e) ridge(e) ≡ edge(e) ∧ (∃s1, s2) surface(s1) ∧ surface(s2) ∧ (s1 ≠ s2) ∧ part(e, s1) ∧ part(e, s2)   (C.60)

A border is an edge that is part of a unique surface.

(∀e) border(e) ≡ edge(e) ∧ ¬ridge(e) (C.61)

A closed solid is a solid in which all edges are ridges.

(∀x) closed(x) ≡ solid(x) ∧ ((∀e) edge(e) ∧ part(e, x) ⊃ ridge(e)) (C.62)


A peak is a point that is part of a ridge.

(∀p) peak(p) ≡ (∃e1, e2, e3) meet(e1, e2, p) ∧ meet(e1, e3, p) ∧ (e1 ≠ e2) ∧ (e1 ≠ e3) ∧ (e2 ≠ e3)   (C.63)

A corner is a point that is part of two edges but not part of a ridge.

(∀p) corner(p) ≡ (∃e1, e2) border(e1) ∧ border(e2) ∧ meet(e1, e2, p)   (C.64)

Appendix D

MACLEOD Use Cases

D.1 Use Case Outline

1. Create ontology development project

2. View project status (axioms, modules, requirements, subsets, lemmas, tests, test results)

Design

3. Add axiom(s)

4. Remove axiom

5. Modify axiom

6. Create module

7. Generate model

Requirements

8. Add semantic requirement (partial characterization)

9. Remove semantic requirement (partial characterization)

10. Modify semantic requirement (partial characterization)

11. Add semantic requirement (complete characterization)

12. Modify translation axioms

13. Modify well-understood theory


14. Remove semantic requirement (complete characterization)

Tuning

15. Create subset

16. Modify subset

17. Remove subset

18. Create lemma

19. Modify lemma

20. Remove lemma

Verification

21. Test requirement

22. Categorize test result

D.2 Use Cases

Use Case 1: Create ontology development project
Goal in Context: User wishes to begin the development of a new ontology
Scope & Level:
Preconditions: User has basic information regarding the project
Success End: A new project is created in the ODS
Failure End: No new project is created in the ODS
Primary & Secondary Actors: Primary: User; Secondary:
Trigger:
Description:
  1. User provides project information (name, comments, associated information)
  2. User creates new project on the ODS
Extensions:
Variations:
Related Information:
Priority: High
Performance: <= 5 minutes for user to fill out fields for project creation; instant project creation
Frequency: Low
Open Issues: Should the ODS automatically generate a default module when a new project is created?

Use Case 2: View project status summary
Goal in Context: User wishes to view project status
Scope & Level:
Preconditions: A project exists in the ODS
Success End: Requested project status information is displayed to user
Failure End: Requested project status information is not displayed to user
Primary & Secondary Actors: Primary: User; Secondary:
Trigger:
Description:
  1. User selects information of interest
  2. User reviews summary
Extensions:
  1a. User wishes to view axiom status summary: 1a1. User selects axiom status
  1b. User wishes to view test status summary: 1b1. User selects test status
  1c. User wishes to view semantic requirement (PC) status summary: 1c1. User selects semantic requirement (PC) status
  1d. User wishes to view semantic requirement (CC) status summary: 1d1. User selects semantic requirement (CC) status
  1e. User wishes to view module status summary: 1e1. User selects module status
  1f. User wishes to view subset status summary: 1f1. User selects subset status
  1g. User wishes to view lemma status summary: 1g1. User selects lemma status
Variations:
Related Information:
Priority: High
Performance: <1 minute to make selections; instant display of information
Frequency: High
Open Issues:


Use Case 3: Add axiom
Goal in Context: User wishes to add an axiom to the ontology
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; User is able to specify the axiom in common logic
Success End: A new axiom is added to the ontology
Failure End: No new axiom is added to the ontology
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: A new axiom is identified that should be added to the ontology
Description:
  1. User provides the axiom written in common logic (an illustrative example follows this use case)
  2. User specifies what module the axiom is to be located in
  3. User specifies additional annotations
  4. User specifies if the addition results in a new version
  5. User adds the axiom to the ontology
Extensions:
  3a. If the axiom is added to address a specific requirement, User specifies which requirement(s)
  3b. If the axiom is added to address (include or exclude) a model that has been generated, specify which model(s)
  3c. If the axiom is added to address a test result, specify which result
Variations:
  2a. The User does not intend to add the axiom to an existing module
      2a1. User creates a new module for the axiom to be located in
      2a2. User selects the new module for the axiom's location
  5a. The axiom is not written in correct common logic syntax
      5a1. User is notified of error
      5a2. User corrects the error
      5a3. New axiom is added to the ontology in the specified module
  5b. The addition results in a new version
      5b1. A new version of the module containing the axiom is created, annotated with a reference to the new axiom
      5b2. A new version of the ontology is created, annotated with a reference to the new axiom
Related Information:
  Priority: High
  Performance: <= 2 minutes for user to fill out fields for axiom creation; instant axiom creation
  Frequency: High
  Open Issues: Should the ODS allow for the creation of multiple axioms "simultaneously" (i.e., reused axioms)? Should the User be given an option regarding versioning (because the initial build of axioms would typically be considered one version)?
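
The kind of input assumed in Step 1 can be illustrated with a small sketch. The sentence below is written in CLIF (the Common Logic Interchange Format) and is purely hypothetical; the binary relation precedes is not drawn from any ontology discussed in this thesis, and the axiom simply asserts that the relation is asymmetric:

    (forall (x y)
        (if (precedes x y)
            (not (precedes y x))))

In Step 2 the user would assign this sentence to a module (say, a hypothetical ordering module), and in Step 3 annotate it with the requirement, generated model, or test result that motivated its addition.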

Use Case 4: Remove axiom
Goal in Context: User wishes to remove an axiom from the ontology
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one axiom exists in the ontology
Success End: The target axiom is removed from the ontology
Failure End: The target axiom is not removed from the ontology
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: Unnecessary or incorrect axiom is identified
Description:
  1. User selects the axiom to be removed
  2. A new version of the module that had contained the target axiom is created, with the target axiom removed
  3. A new version of the ontology is created, with the new module replacing the module that contains the target axiom
Extensions:
  1a. If the axiom is removed to address a specific requirement, User specifies which requirement(s)
  1b. If the axiom is removed to address (include or exclude) a model that has been generated, specify which model(s)
  1c. If the axiom is removed to address a test result, specify which result
  3a. If the axiom was contained in any subsets, a new version of each subset is created with the axiom removed
Variations:
  1a. The axiom selected is not in the current version of the ontology
      1a1. User receives error message
      1a2. Axiom is not removed
  2a. The module that had contained the axiom is empty following its removal
      2a1. No new version of the module is created
  3a. The module that had contained the axiom is empty following its removal
      3a1. The new version of the ontology does not contain the module that had contained the axiom
Related Information:
  Priority: High
  Performance: <= 1 minute for user to select the axiom for removal; instant axiom removal
  Frequency: High
  Open Issues: Should the ODS allow for the removal of multiple axioms "simultaneously"?

Use Case 5: Modify axiom
Goal in Context: User wishes to modify an axiom in the ontology
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one axiom exists in the ontology
Success End: The target axiom is modified in the ontology
Failure End: The target axiom is not modified in the ontology
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: An error in an axiom is identified
Description:
  1. User selects the axiom to be modified
  2. User inputs the modified version of the axiom
  3. A new version of the axiom is created
  4. A new version of the module that had contained the target axiom is created, with the target axiom replaced by the modified axiom
  5. A new version of the ontology is created, with the new version of the module replacing the module that contained the previous version of the axiom
Extensions:
  1a. If the axiom is modified to address a specific requirement, User specifies which requirement(s)
  1b. If the axiom is modified to address (include or exclude) a model that has been generated, specify which model(s)
  1c. If the axiom is modified to address a test result, specify which result
  5a. If the axiom was contained in any subsets, a new version of each subset is created with the modified axiom
Variations:
  1a. The axiom selected is not in the current version of the ontology
      1a1. User receives error message
      1a2. Axiom is not modified
  2a. The modified axiom provided is faulty (incorrect common logic syntax, empty)
      2a1. User is notified of error
      2a2. User corrects the error
      2a3. New axiom is added to the ontology in the specified module
Related Information:
  Priority: High
  Performance: <= 1 minute for user to select the axiom for modification; <= 2 minutes to input modified axiom; instant axiom modification
  Frequency: High
  Open Issues: Any restrictions or mechanisms to enforce the difference between a new axiom and an axiom modification? (i.e., do we allow a modification that is written with a completely different set of relations?)

Use Case 6: Move axiom
Goal in Context: User wishes to move an axiom to a different module in the ontology
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one axiom exists in the ontology
Success End: The target axiom is moved to the target module in the ontology
Failure End: The target axiom is not moved to the target module in the ontology
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: Incorrect module placement of an axiom is identified
Description:
  1. User selects the axiom to be moved
  2. User selects the target module the axiom is to be moved to
  3. A new version of the module that had contained the target axiom is created, with the target axiom removed
  4. A new version of the target module is created, with the target axiom added
  5. A new version of the ontology is created, with the two new modules replacing their previous versions
Extensions:
  1a. If the axiom is moved to address a specific requirement, User specifies which requirement(s)
  1b. If the axiom is moved to address (include or exclude) a model that has been generated, specify which model(s)
  1c. If the axiom is moved to address a test result, specify which result
Variations:
  1a. The axiom selected is not in the current version of the ontology
      1a1. User receives error message
      1a2. Axiom is not moved
  2a. The User does not intend to add the axiom to an existing module
      2a1. User creates a new module for the axiom to be located in
      2a2. User selects the new module for the axiom's location
  3a. The module that had contained the axiom is empty following its removal
      3a1. No new version of the module is created
  4a. The User created a new module to relocate the axiom to
      4a1. No new version of the module is created; this initial version is the "new" version
  5a. The module that had contained the axiom is empty following its removal
      5a1. The new version of the ontology does not contain the module that had contained the axiom
Related Information:
  Priority: High
  Performance: <= 1 minute for user to select the axiom for relocation; instant axiom relocation
  Frequency: High
  Open Issues: Should the ODS allow for the relocation of multiple axioms "simultaneously"?

Use Case 7: Create new module
Goal in Context: User wishes to create a new module in the ontology
Scope & Level: (not specified)
Preconditions: User has basic information regarding the new module
Success End: A new module is created in the ontology
Failure End: No new module is created in the ontology
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: A new module is recognized in the ontology
Description:
  1. User provides module information (name, comments, associated information)
  2. User creates a new module
  3. A new version of the ontology, with the new module added, is created
Extensions: (none)
Variations:
  1a. The module name is not unique (in the context of the entire project)
      1a1. User receives an error message
      1a2. The new module is not created
Related Information:
  Priority: High
  Performance: <= 5 minutes for user to provide module information; instant module creation
  Frequency: Low
  Open Issues: Should the ODS allow for the creation of multiple modules "simultaneously"?

Use Case 8: Add semantic requirement (partial characterization)
Goal in Context: User wishes to add a semantic requirement to the project
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; User is able to specify the requirement as an entailment problem, in common logic (an illustrative example follows this use case)
Success End: A new semantic requirement is added to the project
Failure End: No new semantic requirement is added to the project
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: A new semantic requirement (partial characterization) is identified
Description:
  1. User provides the requirement written in common logic
  2. User specifies additional annotations
  3. User adds the semantic requirement to the project
  4. New version of semantic requirements created with the new semantic requirement added
Extensions:
  2a. If the semantic requirement is added to address a test result, specify which test result
  2b. If the semantic requirement is added to address (include or exclude) a model that has been generated, specify which model(s)
Variations:
  3a. The semantic requirement is not written in correct common logic syntax
      3a1. User is notified of error
      3a2. User corrects the error
      3a3. New semantic requirement is added to the project
Related Information:
  Priority: High
  Performance: <= 2 minutes for user to fill out fields for requirement creation; instant requirement creation
  Frequency: Medium
  Open Issues: Should semantic requirements' versioning be segregated, similar to axioms' module versioning? Track versioning of individual semantic requirements?
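
To illustrate the precondition that a partial characterization be expressible as an entailment problem in common logic, consider a hypothetical requirement over the illustrative precedes relation introduced after Use Case 3: the ontology should entail that nothing precedes itself. The requirement is recorded as the sentence that the ontology must entail:

    (forall (x)
        (not (precedes x x)))

Testing this requirement (Use Case 21) then amounts to attempting to prove this sentence from a selected subset of the ontology's axioms; in this toy example the asymmetry axiom alone already suffices.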

Use Case 9: Remove semantic requirement (partial characterization)
Goal in Context: User wishes to remove a semantic requirement from the project
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one semantic requirement exists in the ontology
Success End: The target semantic requirement is removed from the ontology
Failure End: The target semantic requirement is not removed from the ontology
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: An incorrect or unnecessary semantic requirement (partial characterization) is identified
Description:
  1. User selects the semantic requirement to be removed
  2. A new version of the semantic requirements is created, with the target semantic requirement removed
Extensions:
  1a. If the semantic requirement is removed to address a specific test result, User specifies which test result(s)
  1b. If the semantic requirement is removed to address (include or exclude) a model that has been generated, specify which model(s)
Variations:
  1a. The semantic requirement selected is not in the current version of the semantic requirements
      1a1. User receives error message
      1a2. Semantic requirement is not removed
Related Information:
  Priority: High
  Performance: <= 1 minute for user to select the semantic requirement for removal; instant semantic requirement removal
  Frequency: Low
  Open Issues: Should the ODS allow for the removal of multiple semantic requirements "simultaneously"?

Use Case 10: Modify semantic requirement (partial characterization)
Goal in Context: User wishes to modify a semantic requirement in the ontology
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one semantic requirement exists in the ontology
Success End: The target semantic requirement is modified in the ontology
Failure End: The target semantic requirement is not modified in the ontology
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: An error in a semantic requirement (partial characterization) is identified
Description:
  1. User selects the semantic requirement to be modified
  2. User inputs the modified version of the semantic requirement
  3. A new version of the semantic requirements is created, with the target semantic requirement replaced by the modified semantic requirement
Extensions:
  1a. If the semantic requirement is modified to address a test result, User specifies which result
  1b. If the semantic requirement is modified to address (include or exclude) a model that has been generated, specify which model(s)
Variations:
  1a. The semantic requirement selected is not in the current version of the semantic requirements
      1a1. User receives error message
      1a2. Semantic requirement is not modified
  2a. The modified semantic requirement provided is faulty (incorrect common logic syntax, empty)
      2a1. User is notified of error
      2a2. User corrects the error
      2a3. New semantic requirement is added to the project
Related Information:
  Priority: High
  Performance: <= 1 minute for user to select the semantic requirement for modification; <= 2 minutes to input the modified semantic requirement; instant semantic requirement modification
  Frequency: Low
  Open Issues: Any restrictions or mechanisms to enforce the difference between a new semantic requirement and a semantic requirement modification? (i.e., do we allow a modification that is written with a completely different set of relations?)

Use Case 11: Add semantic requirement (complete characterization)
Goal in Context: User wishes to add a complete characterization of the ontology's semantic requirements
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; A version of the ontology exists in the ODS; User has the axioms of the "well-understood" theory in common logic; User has both sets of translation axioms in common logic (an illustrative sketch follows this use case)
Success End: A complete characterization of semantic requirements is added to the project
Failure End: No complete characterization of semantic requirements is added to the project
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: A new semantic requirement (complete characterization) is identified
Description:
  1. User provides characterization information (name, comments, associated information)
  2. User provides the axiomatization of the well-understood theory
  3. User selects the module(s) that the well-understood theory characterizes
  4. User provides the complete set of translation definitions from the ontology to the well-understood theory
  5. User provides the complete set of translation definitions from the well-understood theory to the ontology
  6. The complete characterization requirements (the resulting relative interpretation entailment problems) are generated and added to the project
Extensions: (none)
Variations:
  1a. The characterization name is not unique (in the context of the entire project)
      1a1. User receives an error message
      1a2. User corrects the error
  2a. The well-understood theory is not written in correct common logic syntax
      2a1. User is notified of error
      2a2. User corrects the error
      2a3. Well-understood theory is added to the complete characterization
  4a. The translation definitions are not written in correct common logic syntax
      4a1. User is notified of error
      4a2. User corrects the error
      4a3. Translation definitions are added to the complete characterization
  5a. The translation definitions are not written in correct common logic syntax
      5a1. User is notified of error
      5a2. User corrects the error
      5a3. Translation definitions are added to the complete characterization
Related Information:
  Priority: High
  Performance: <= 5 minutes to add all required files (translation definitions, etc.); instant addition of requirements to project
  Frequency: Low
  Open Issues: (none)
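
A minimal sketch of the inputs assumed in Steps 2, 4, and 5: suppose, purely for illustration, that the well-understood theory is a theory of orderings axiomatized with a relation leq, and that the ontology uses the hypothetical relation precedes from Use Case 3. One of the translation definitions from the well-understood theory into the ontology might then be written in common logic as:

    (forall (x y)
        (iff (leq x y)
             (or (precedes x y) (= x y))))

The complete characterization requirements generated in Step 6 would then be the entailment problems corresponding to relative interpretation in both directions, built from the well-understood theory and the two sets of translation definitions supplied here.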

Use Case 12: Modify translation axioms
Goal in Context: User wishes to modify the translation axioms for a complete characterization
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one complete characterization exists for the ontology
Success End: The target translation axioms are modified
Failure End: The target translation axioms are not modified
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: An error is identified in the translation axioms for a complete characterization
Description:
  1. User selects the translation axioms to be modified
  2. User provides annotation for modification
  3. User inputs the modified version of the translation axioms
  4. A new version of the translation axioms is created
  5. A new version of the complete characterization requirements is generated
Extensions:
  2a. If the translation axioms are modified to address a revision to the ontology, User specifies which revision
Variations:
  3a. The modified axioms are faulty (incorrect common logic syntax, empty)
      3a1. User is notified of error
      3a2. Translation axioms are not modified
Related Information:
  Priority: High
  Performance: < 1 minute to input correction
  Frequency: Low
  Open Issues: Minimal functionality here because changes shouldn't really be required (except for "typos")

Use Case 13: Modify well-understood theory
Goal in Context: User wishes to modify the axioms of a well-understood theory for a complete characterization
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one complete characterization exists for the ontology
Success End: The target well-understood theory axioms are modified
Failure End: The target well-understood theory axioms are not modified
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: An error is identified in a well-understood theory
Description:
  1. User selects the well-understood theory to be modified
  2. User provides annotation for modification
  3. User inputs the modified version of the well-understood theory axioms
  4. A new version of the well-understood theory axioms is created
  5. A new version of the complete characterization requirements is generated using the modified axioms
Extensions: (none)
Variations:
  3a. The modified axioms are faulty (incorrect common logic syntax, empty)
      3a1. User is notified of error
      3a2. Axioms are not modified
Related Information:
  Priority: High
  Performance: < 1 minute to input correction
  Frequency: Low
  Open Issues: Minimal functionality here because changes shouldn't really be required (except for "typos")

Use Case 14: Remove a complete characterization
Goal in Context: User wishes to remove a complete characterization from the project
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one complete characterization exists for the ontology
Success End: The target complete characterization requirements are removed
Failure End: The target complete characterization requirements are not removed
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: Incorrect or unnecessary complete characterization is identified
Description:
  1. User selects the complete characterization to be removed
  2. User provides annotation for the well-understood theory and translation axioms regarding the removal
  3. The target complete characterization requirements are removed from the project
Extensions: (none)
Variations: (none)
Related Information:
  Priority: High
  Performance: < 1 minute to select requirement for removal
  Frequency: Low
  Open Issues: Completely remove everything (as opposed to storing the well-understood theory and/or translation axioms)? The removal of a complete characterization should be very rare; in practice, documentation of this may not be a priority.

Use Case 15: Create a subset
Goal in Context: User wishes to create a new subset out of the axioms of the ontology
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one axiom exists in the ontology
Success End: A new subset is created
Failure End: No new subset is created
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: (not specified)
Description:
  1. User selects desired axioms to include in subset
  2. User provides annotation for subset
  3. A new subset is created
Extensions:
  1a. Select entire module(s) to include in the subset
Variations:
  1a. A subset with the same set of axioms already exists
      1a1. User is notified of error
      1a2. Subset is not created
Related Information:
  Priority: High
  Performance: < 5 minutes to select subset contents
  Frequency: Medium
  Open Issues: (none)

Use Case 16: Modify subset
Goal in Context: User wishes to modify an existing subset
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one subset exists in the ODS
Success End: The subset is modified
Failure End: The subset is not modified
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: Mistake identified in a subset
Description:
  1. User selects the subset to be modified
  2. User selects axioms to be added
  3. User selects modules to be added
  4. User selects axioms to be removed
  5. User selects modules to be removed
Extensions: (none)
Variations:
  1a. If the subset is modified to address a specific requirement, User specifies which requirement(s)
  1b. If the subset is modified to address a test result, specify which result
  5a. The resulting subset modification already exists in the ODS
      5a1. User is notified of error
      5a2. Subset is not modified
Related Information:
  Priority: High
  Performance: < 5 minutes to select/deselect subset contents
  Frequency: Medium
  Open Issues: When is it no longer a modification (should any enforcement be implemented)?

Use Case 17: Remove subset
Goal in Context: User wishes to remove a subset from the project
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one subset exists in the ODS
Success End: The target subset is removed from the ontology
Failure End: The target subset is not removed from the ontology
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: (not specified)
Description:
  1. User selects the subset to be removed
  2. User provides rationale for removal
Extensions: (none)
Variations: (none)
Related Information:
  Priority: Low
  Performance: <= 1 minute for user to select the subset for removal; instant subset removal
  Frequency: Low
  Open Issues: (none)


Use Case 18: Create a lemma
Goal in Context: User wishes to create a new lemma for the project
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one axiom exists in the ontology; User has axiomatization of the lemma in common logic (an illustrative example follows this use case)
Success End: A new lemma is added to the project
Failure End: No new lemma is added to the project
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: (not specified)
Description:
  1. User provides the lemma written in common logic
  2. User provides annotation for the lemma
  3. User specifies what axioms the lemma was derived from
  4. A new lemma is created
Extensions:
  2a. User includes a proof file
Variations:
  1a. The lemma already exists in the project, and is "active"
      1a1. User is notified of error
      1a2. Lemma is not created
Related Information:
  Priority: High
  Performance: <= 1 minute to input lemma and associated proof
  Frequency: Medium
  Open Issues: Should inclusion of a proof file for the lemma be required for its addition?
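
As an illustration of Step 1, suppose (hypothetically) that the ontology contains the asymmetry axiom for precedes from Use Case 3 together with a transitivity axiom for precedes. The following sentence is then derivable and could be recorded as a lemma, with those two axioms listed in Step 3 as the axioms it was derived from, and the prover's proof attached under Extension 2a:

    (forall (x y z)
        (if (and (precedes x y) (precedes y z))
            (not (precedes z x))))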

Use Case 19: Modify lemma
Goal in Context: User wishes to modify an existing lemma
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one lemma exists in the ODS
Success End: The lemma is modified
Failure End: The lemma is not modified
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: (not specified)
Description:
  1. User selects the lemma to be modified
  2. User provides modified version of the lemma
  3. User specifies what axioms the revised lemma was derived from
  4. A new version of the lemma is created
Extensions:
  2a. User includes a proof file
Variations:
  1a. If the lemma is modified to address a specific requirement, User specifies which requirement(s)
  1b. If the lemma is modified to address a test result, specify which result
Related Information:
  Priority: High
  Performance: <= 1 minute to input modified lemma and associated proof
  Frequency: Medium
  Open Issues: Should inclusion of a proof file for the lemma be required for its modification?

Use Case 20: Remove lemma
Goal in Context: User wishes to remove a lemma from the project
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one lemma exists in the ODS
Success End: The target lemma is removed from the ontology
Failure End: The target lemma is not removed from the ontology
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: (not specified)
Description:
  1. User selects the lemma to be removed
  2. User provides rationale for removal
Extensions: (none)
Variations: (none)
Related Information:
  Priority: Low
  Performance: <= 1 minute for user to select the lemma for removal; instant lemma removal
  Frequency: Low
  Open Issues: (none)


Use Case 21: Test a requirement
Goal in Context: User wishes to verify a requirement
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one requirement exists for the ontology; User has a first-order ATP available; User has a translation between common logic and the ATP language
Success End: The requirement is tested
Failure End: The requirement is not tested
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: The Prototype Design is finished; the ontology is ready for testing
Description:
  1. User selects the requirement to be tested (from only current versions of requirements)
  2. User selects a subset of the ontology for the input file
  3. An input file is generated from the entailment problem
  4. User selects desired ATP-dependent options (timeout, etc.)
  5. Requirement is tested; test set-up and results are saved
Extensions: (none)
Variations:
  5a. If proof is found:
      5a1. Review output file
      5a2. Mark entailment problem as "verified" or "indicates error"
Related Information:
  Priority: High
  Performance: <= 1 minute to select test input to perform; test duration is user-dependent
  Frequency: High
  Open Issues: Restrict tests to current versions? Save results in the form of proof output file (because of prover-variable statistics)?


Use Case 22: Categorize a test result
Goal in Context: User wishes to categorize the test outcome for a requirement
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one test has been performed
Success End: The test result is categorized
Failure End: The test result is not categorized
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: A requirement has been tested
Description:
  1. User selects a test that has been performed
  2. User reviews the test output
  3. User identifies the test result as: unintended proof, no proof, or requirement verified
Extensions: (none)
Variations: (none)
Related Information:
  Priority: High
  Performance: < 1 minute
  Frequency: High
  Open Issues: (none)

Use Case 23: Generate a model
Goal in Context: User wishes to generate a model
Scope & Level: (not specified)
Preconditions: A project exists in the ODS; At least one requirement exists for the ontology; User has a first-order model finder available; User has a translation between common logic and the model generator language
Success End: A model is generated
Failure End: A model is not generated
Primary & Secondary Actors: Primary: User; Secondary: (none)
Trigger: (not specified)
Description:
  1. User selects a subset of the ontology for the input file
  2. User specifies additional requirements for the desired model (an illustrative example follows this use case)
  3. User selects desired model finder-dependent options (timeout, etc.)
  4. Model generator is run; set-up and results are saved
  5. User annotates test
Extensions: (none)
Variations: (none)
Related Information:
  Priority: High
  Performance: <= 1 minute to select input; duration is user-dependent
  Frequency: High
  Open Issues: Restrict tests to current versions? Save results in the form of the model output file (because of model finder-variable statistics)?
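
Step 2 lets the user constrain the search beyond the axioms of the selected subset. Continuing the hypothetical precedes example from the earlier use cases, the additional requirement below would force the model finder to return a model in which the precedes relation is non-empty, rather than the trivial model in which it never holds:

    (exists (x y) (precedes x y))

The model finder is then run in Step 4 on the selected subset together with this sentence, and the resulting model and run set-up are saved.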
