The Knowledgeable Environment: A Knowledge-level Approach for Human-machine Co-existing Space

Hideaki Takeda, Nobuhide Kobayashi, Yoshiyuki Matsubara, and Toyoaki Nishida
Graduate School of Information Science, Nara Institute of Science and Technology

8916-5, Takayama, Ikoma, Nara 630-01, [email protected]

http://ai-www.aist-nara.ac.jp/

Abstract

In this paper, we propose the knowledgeable environment as a framework for integrated systems for human-machine co-existing space. In the knowledgeable environment, integration is achieved through knowledge-level communication and cooperation. We abstract all machines as information agents whose roles and communication are understandable to people. Ontology is used as an explicit representation of the abstraction of the environment, which includes agents, objects, and the activities of agents and people. Cooperation is also achieved by using ontology. We re-define the concept of human-machine interaction in the light of knowledge-level interaction, i.e., interaction with various logical and spatial relations among participants.

Introduction

In recent years, various types of computers and computer-controlled machines have been introduced into our daily life, and we expect so-called robots to be introduced there soon as well. Such machines improve the quality of our life because they provide capabilities that we lack or wish to enhance. But the introduction of these machines can also be annoying, because each of them has its own behavior and user interface and requires us to understand them. We need a framework for integrated systems for human-machine co-existing space. In this paper, we propose the knowledgeable environment, in which integration is achieved through knowledge-level communication and cooperation.

Requirements for systems for human-machine co-existing space

Dealing with space for human activity is a new field for robotics, artificial intelligence, and human interface research. One reason is the dynamics of physical space: distributed, and therefore cooperative, systems are needed to capture spatially distributed human activities. The other reason is that human activities cannot be intrinsically modeled in computers, which implies that human-machine interaction is an important issue that can bridge human and computer activities. We can summarize these problems as the following three research issues:

1. Modeling of environments which include machines and people
This requirement is two-fold. One aspect is to model not only machines but also people; as mentioned above, we cannot have perfect models of human activities, but we can model human activities to some extent. The other aspect is to make models of environments which are understandable not only to machines but also to humans. This is natural, because people are also participants in the environments for which the models are provided.

2. Extension of the basis of human-machine interaction
Various distributed sensors to detect human activities, and various methods of presentation to people, are needed to realize natural human-machine interaction in a human-machine co-existing environment. One approach is to extend the variety of physical instruments (Sato et al. 1994). The other approach is to extend the concepts of sensing and presenting; for example, we can regard tracking of people's movement (Torrance 1995) as a sensor. Our three-way distinction of human-machine interaction (described in the section on human-machine interaction below) is a proposal along this line.

3. Cooperative architecture for real-time distributed problems
People and machines are spatially distributed and synchronized. This means that two types of spatial distribution and synchronization exist, i.e., those of the environment (machines are distributed and synchronized) and those of the problems (human activities are distributed and synchronized). We need cooperative systems to integrate machines and people in such situations.

A knowledge-level approach for human-machine co-existing space

Our approach, called the knowledgeable environment, aims to build a framework for cooperative systems for human-machine co-existing space. Figure 1 shows an image of the space we want to realize.

Figure 1: An image for the knowledgeable environment (a person asks a robot "Bring the book" and the robot answers "Here it is")

In this space, people and machines are mutually cooperative, i.e., people can ask machines to perform tasks and vice versa. It may seem strange that machines can ask something of people. Since machines in our daily living space cannot be almighty, unlike those in factories, some tasks cannot be achieved by machines alone but only by a combination of machines and people. In such cases, machines can ask people for help.

In the knowledgeable environment, the above three problems are solved by knowledge-level modeling and interaction. We abstract all machines as information agents whose roles and communication are understandable to people. Ontology is used as an explicit representation of this abstraction. Cooperation is also achieved by using ontology. We re-define the concept of human-machine interaction in the light of knowledge-level interaction, i.e., interaction with various logical and spatial relations among participants.

In the following sections, we show the current progress of our research (see (Takeda et al. 1996) and (Takeda et al. 1997)).

Multi-agent architecture for real-world agents

The basic idea of our approach to modeling machines is to model them as software agents which can communicate with each other in an abstracted language. The merits of this approach are as follows:

• An abstracted definition of agents is applicable.

• Techniques developed for software agents, such as cooperation, are available.

• Cooperation between software agents and machines is handled within the same architecture.

We call agents which have facilities to obtain information from the physical environment, or to act on the environment, real-world agents. In contrast, we call agents concerned only with information in computers information agents.

Figure 2: Two mobile robots

All robots and machines are agentified as KQML agents (Finin et al. 1994). KQML (Knowledge Query and Manipulation Language) is a protocol for exchanging information and knowledge among agents, designed mainly for knowledge sharing through agent communication. A KQML message consists of a message type called a performative, such as ask, tell, and subscribe, and a number of parameters such as sender, receiver, content, and language. For example, the message content is written as the value of the parameter content. We mostly use KIF (Knowledge Interchange Format) (Genesereth & Fikes 1992) as the language to describe message contents. KIF is a language for the interchange of knowledge and is based on first-order predicate logic.
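
To make the message structure concrete, the following minimal Python sketch renders a KQML-style message with KIF content as an s-expression string. The helper function and this particular query are our illustrative assumptions, not the authors' implementation.

# Sketch: composing a KQML-style message whose content is a KIF expression.
# The helper and this particular query are illustrative assumptions.

def kqml_message(performative: str, **params: str) -> str:
    """Render a KQML message as an s-expression string."""
    fields = " ".join(f":{key} {value}" for key, value in params.items())
    return f"({performative} {fields})"

msg = kqml_message(
    "ask-one",
    sender="mediator1",
    receiver="kappa1a",
    language="KIF",
    content='"(position kappa1a ?where)"',  # a KIF query: where is kappa1a?
)
print(msg)
# -> (ask-one :sender mediator1 :receiver kappa1a :language KIF :content "(position kappa1a ?where)")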

A real-world agent can consist of sub-agents, each of which performs specific information processing within the agent. By separating facilities within an agent, we can construct agents without depending on the physical performance of each robot or machine. A typical real-world agent consists of three sub-agents: a KQML handling sub-agent which parses and generates KQML messages, a database sub-agent which holds the status of the agent itself and of the environment, and a hardware controlling sub-agent which sends commands to actuators and obtains sensor values.
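
A rough sketch of this three-sub-agent decomposition might look as follows; all class names and the stubbed behaviors are assumptions for illustration only.

# Sketch: a real-world agent split into the three sub-agents described
# above. Class and method names are illustrative assumptions.

class KQMLHandler:
    def parse(self, raw: str) -> dict:
        # A real parser would handle the full s-expression syntax;
        # here we only extract the performative as a placeholder.
        return {"performative": raw.strip("()").split()[0], "raw": raw}

class StatusDatabase:
    def __init__(self):
        self.facts = {}          # e.g. {"position": "in-front-of(rack-agent)"}
    def update(self, key, value):
        self.facts[key] = value

class HardwareController:
    def send_command(self, command: str):
        print(f"[actuator] {command}")   # stand-in for a real device driver
    def read_sensor(self, name: str) -> float:
        return 0.0                       # stand-in sensor value

class RealWorldAgent:
    def __init__(self, name: str):
        self.name = name
        self.kqml = KQMLHandler()
        self.db = StatusDatabase()
        self.hw = HardwareController()

    def handle(self, raw_message: str):
        msg = self.kqml.parse(raw_message)
        if msg["performative"] == "ask-one":
            self.hw.send_command("report-status")
        self.db.update("last-message", msg["raw"])

agent = RealWorldAgent("kappa1a")
agent.handle("(ask-one :sender mediator1 :receiver kappa1a)")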

We have currently agentified a mobile robot with two manipulators, called Kappa1a, and a mobile robot without manipulators, called Kappa1b (see Figure 2). Each manipulator has six degrees of freedom and a gripper. We also have a computer-controlled rack and door as real-world agents (see Figure 3).

Knowledge for cooperation of agents

Our aim is to establish an information infrastructure for cooperation among heterogeneous real-world agents at the knowledge level, i.e., to clarify what knowledge those agents need for cooperation.


Figure 3: Rack and door agents

Need for sharing concepts

The simplest way to accomplish a task with multiple agents is to break the task down and design subtasks, each of which is executable by a single agent. But this approach is not applicable where tasks are defined dynamically, as in environments where humans and agents co-exist.

To cooperate more intelligently, agents should understand what their partners are doing or requesting. In other words, agents should have common communication abilities to tell and to understand intention. This means that they should share not only the protocols and languages used to communicate but also the concepts used in their communication. The latter is called ontology, a system of concepts shared by agents in order to communicate with each other (Gruber 1993).

Ontology has been defined and used mostly for information agents (for example, see (Takeda, Iino, & Nishida 1995) and (Cutkosky et al. 1993)). The primary concern in such studies was to model the objects which agents handle. But modeling objects is not sufficient to realize communication among real-world agents. Modeling space is also important, because agents share space when they cooperate with each other. Modeling action is a further important concern, because agents should understand what other agents do or request [1]. Therefore there are three ontologies, namely ontologies for object, space, and action (see Figure 4).

Concept for object

The environment is usually filled with various objects, and tasks are usually related to some of these objects. Agents should share concepts for objects; otherwise they cannot tell other agents what they recognize or handle. The difficulty is that what agents can understand differs, because the ways in which they perceive objects differ, depending on their abilities for sensing, acting, and information processing.

Our policy for making shared concepts is to use abstraction levels to represent objects. We build a taxonomy of objects as a hierarchy of is-a relations. This does not mean that every agent can understand every object in the taxonomy. Most agents can understand only a subset of these objects, because their recognition abilities are limited. For example, some agent can recognize a box but cannot tell a trash box from a parts case, because it can only detect whether something is a box or not. For this agent it is sufficient to understand the concept box and its higher concepts.

We provide current position, default position, color, and weight as attributes common to all objects. Descriptions of attributes also have levels of abstraction. For example, at the most abstract level there are only three values of weight, i.e., light, middle, and heavy. At less abstract levels, weight can be represented by more symbols, or numerically. The level of abstraction is determined by the agent's abilities of sensing and pattern recognition.

[1] Another important concept is that of time. In this paper, time is not described explicitly but is embedded in shared actions.
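
As an illustration, the following sketch encodes a fragment of such a taxonomy and attribute abstraction. Apart from the box example, the concrete concepts and the numeric weight thresholds are invented for the example, not taken from our ontology.

# Sketch: an is-a taxonomy plus attribute abstraction, as described above.
# Concepts beyond the box example and the thresholds are illustrative.

IS_A = {                      # child -> parent (is-a hierarchy)
    "trash-box": "box",
    "parts-case": "box",
    "box": "object",
    "book": "object",
}

def ancestors(concept: str):
    """All higher concepts of a concept, following is-a links."""
    while concept in IS_A:
        concept = IS_A[concept]
        yield concept

def understandable(concept: str, known: set[str]) -> str | None:
    """Most specific concept an agent can report, given what it knows."""
    if concept in known:
        return concept
    return next((c for c in ancestors(concept) if c in known), None)

def abstract_weight(kg: float) -> str:
    """Most abstract description of weight: light / middle / heavy."""
    return "light" if kg < 1 else "middle" if kg < 10 else "heavy"

# An agent that only detects box-ness reports a parts-case as a box:
print(understandable("parts-case", known={"box", "object"}))  # box
print(abstract_weight(0.4))                                   # light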

Concept for space

The next important concept for cooperation is the concept of space. Since agents work in the physical world, they should understand space, that is, where they are, where they are moving to, where the target object is, and so on. This is especially important when agents work together. Depending on their sensing and processing abilities, agents may represent space in different ways. For example, agents which move along programmed paths would represent space by paths and stopping points, and some agents can have only relative positions, not absolute ones.

We provide the following two types of representation as the shared space ontology.

Representation with preposition: A relative position is described as the combination of a preposition and an object represented in the object ontology (Herkovits 1986). We provide seven prepositions, i.e., at, on, in, in-front-of, behind, to-the-right-of, and to-the-left-of. For example, a position in front of the rack agent is represented as in-front-of(rack-agent). The actual position is determined by the agent which interprets the representation.

Representation with action: A relative position can also be represented by an action. For example, for agents that want to carry out an action, it is useful to describe a place as "where you can look at the rack" or "where you can meet Agent X." The actual position may differ according to which agent takes the action, because the actions agents can perform differ. Even if the actual positions differ, it is sufficient to understand such positions as places where the action can be done.


Figure 4: Three types of concepts: (a) knowledge on object, (b) knowledge on space, (c) knowledge on action

We describe such a position as the combination of an action-related term and one or more objects. For example, viewpoint(rack) means a position where the agent can look at the rack, and meetingpoint(agent1, agent2) means a position where agent1 and agent2 can meet.
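
Both kinds of position terms can be illustrated with a small sketch. The constructors below are hypothetical; only the vocabulary (the seven prepositions, viewpoint, meetingpoint) comes from the text, and interpretation of a term into coordinates is left to each agent, as described above.

# Sketch: symbolic position terms for the shared space ontology.
# Only the vocabulary is from the text; the encoding is assumed.

PREPOSITIONS = {"at", "on", "in", "in-front-of", "behind",
                "to-the-right-of", "to-the-left-of"}

def prep_position(preposition: str, obj: str) -> str:
    """Relative position as preposition(object)."""
    if preposition not in PREPOSITIONS:
        raise ValueError(f"unknown preposition: {preposition}")
    return f"{preposition}({obj})"

def viewpoint(obj: str) -> str:
    """A position from which the agent can look at obj."""
    return f"viewpoint({obj})"

def meetingpoint(agent1: str, agent2: str) -> str:
    """A position where the two agents can meet."""
    return f"meetingpoint({agent1},{agent2})"

print(prep_position("in-front-of", "rack-agent"))  # in-front-of(rack-agent)
print(meetingpoint("agent1", "agent2"))            # meetingpoint(agent1,agent2)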

Concept for action

The last category of shared concepts is the concept of action. Agents should understand what other agents are doing in order to cooperate with them. As in the other categories, the concepts an agent can understand differ according to the abilities of the agent itself, in particular for concepts directly associated with its physical actions. But more abstract concepts can be shared among agents, so concepts associated with agents' physical actions should be related to the more abstract shared concepts in order for agents to understand each other.

The definition of a concept for action consists of a name, attributes like subject and starting-point, and constraints among the attributes. Constraints are represented as the sharing of attribute values. The relation among concepts is a decomposition relation, i.e., an action can have a sequence of actions which achieves the original action. Relations among the decomposed actions are represented as constraints among their attributes.
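
As a sketch, an action concept and its decomposition could be encoded as follows. The concrete bring decomposition and the shared-variable notation are illustrative assumptions, not our actual ontology definitions.

# Sketch: an action concept with attributes, a decomposition into
# sub-actions, and constraints expressed as shared attribute values.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    attributes: dict = field(default_factory=dict)
    # Sub-actions that achieve this action, in order:
    decomposition: list = field(default_factory=list)

# bring(subject ?s, object ?o, destination ?d) decomposes into
# move + grasp + move + release; the shared variables ?s, ?o, ?d
# express the constraints among attribute values.
bring = Action(
    "bring",
    {"subject": "?s", "object": "?o", "destination": "?d"},
    [
        Action("move",    {"subject": "?s", "destination": "position-of(?o)"}),
        Action("grasp",   {"subject": "?s", "object": "?o"}),
        Action("move",    {"subject": "?s", "destination": "?d"}),
        Action("release", {"subject": "?s", "object": "?o"}),
    ],
)

for step in bring.decomposition:
    print(step.name, step.attributes)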

Mediation for Task Execution

In this section, we discuss how to realize communication among agents with different ontologies. We introduce mediators, which break down and translate tasks so that agents can understand them (Takeda, Iino, & Nishida 1995).

The function of mediators is to bridge the gap between tasks specified by humans and actions which can be performed by agents. Since in most cases tasks must be performed by multiple agents, tasks are decomposed into subtasks distributed among the agents. Ontologies play two roles in this process. First, they are used to understand the given tasks. Since given tasks are what humans want agents to do, they are insufficient and incomplete as specifications of a sequence of actions; ontologies supply information on environments and agents to complete the task descriptions. Second, they are used to distribute tasks to agents. As mentioned in the previous section, each agent has its own ontology, which depends on its physical and informational abilities. But the shared ontologies integrate these agent ontologies through abstraction. Using the multiple abstraction levels in the ontologies, a task can be translated into a set of local tasks, each of which is understandable by some agent.

We provide four processes for handling a given task in mediation (see Figure 5); a minimal sketch of the resulting pipeline follows the list below. A task is described as an incomplete action which has some properties like subject and object. Incompleteness means that not all properties need to be specified. Unspecified properties are filled in by mediators using the current state of the environment, such as what each agent is doing and where objects are.

Supplement of object attributes: If necessary attributes of objects are missing from a task description, the mediator adds them using default values from the object ontology.

Assignment of agents: The mediator assigns agents to perform the actions that realize the task. This is done by consulting knowledge of agent abilities, which is represented in the object, space, and action ontologies.

Action decomposition: The mediator decomposes the given action into actions each of which is executable by some agent. Decomposition is done by consulting the action ontology. Action decomposition and agent assignment are performed simultaneously, because action decomposition restricts agent assignment and vice versa.

Translation into local ontology: All information up to this point is represented in the shared ontologies. Before sending the decomposed messages out to the agents, the mediator translates each message into the local ontology of its receiver agent.
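
Here is the minimal sketch of this four-step pipeline promised above. The default values, ability table, translation table, and the fixed decomposition are invented for illustration, not our actual knowledge base.

# Sketch: the four mediation steps as one pipeline over a task dict.
# All tables below are illustrative assumptions.

OBJECT_DEFAULTS = {"manual-for-C++": {"position": "on(rack)"}}
ABILITIES = {"kappa1b": {"move", "grasp", "release"}, "rack": {"handover"}}
LOCAL_TERMS = {"kappa1b": {"move": "drive-to"}}   # shared -> local vocabulary

def supplement(task: dict) -> dict:
    # 1. Fill missing object attributes from ontology defaults.
    task.setdefault("object-attrs", OBJECT_DEFAULTS.get(task["object"], {}))
    return task

def decompose_and_assign(task: dict) -> list[dict]:
    # 2.+3. Decompose into executable actions and assign capable agents
    # simultaneously (here: a fixed decomposition for a "bring" task).
    steps = ["move", "handover", "move", "release"]
    plan = []
    for act in steps:
        agent = next(a for a, caps in ABILITIES.items() if act in caps)
        plan.append({"action": act, "subject": agent})
    return plan

def translate(step: dict) -> dict:
    # 4. Rewrite each message into the receiver's local ontology.
    local = LOCAL_TERMS.get(step["subject"], {})
    step["action"] = local.get(step["action"], step["action"])
    return step

task = {"action": "bring", "object": "manual-for-C++", "destination": "human1"}
plan = [translate(s) for s in decompose_and_assign(supplement(task))]
print(plan)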

The above process describes how a single task is handled. In the human-machine co-existing environment there are multiple asynchronous tasks; in our approach these are processed by cooperation among multiple mediators. The basic idea is that every newly emerged task invokes a mediator, which then tries to gather and control the necessary agents independently. Each mediator processes its task using state information about the environment, communicating with other mediators when necessary.


Figure 5: Mediation flow. The four processes (supplement of object attributes, assignment of agents, action decomposition, translation into local ontology) draw on the object DB, knowledge on agent abilities, action decomposition knowledge, and the object, space, and action ontologies.

Human-machine interaction

We need natural ways for people to communicate and cooperate with machines or robots, just as they do with other people, i.e., people should be able to interact with robots anywhere and at any time. In this section, we mainly focus on interaction between people and mobile robots.

The primitive way of human-robot interaction is interaction through special instruments. People can communicate with robots by using instruments like computers. Recent technologies for multimodal communication can provide various communication channels like voice and gestures (e.g., (Darrell & Pentland 1993)), and interface agents (e.g., (Maes & Kozierok 1993)) can be used for this communication. But people cannot communicate with robots directly this way, and they are bound to computer terminals.

The other way is direct interaction between people and robots. In addition to multimodal communication via computers, robots can use their bodies when they communicate with people. Although this is more restricted than virtual interface agents because of their mechanical structures, physical motion is more natural and acceptable to people. We call such physical, direct interaction between robots and people intimate interaction.

Intimate interaction enables multimodal direct interaction, but another problem arises: people and robots must be close to each other to establish such interaction. This is an obstacle to ubiquitous interaction among people and robots. We also need loose interaction, such as interaction among people and robots who are apart from each other, or interaction between people and anonymous robots which are ready to respond.

Although loose interaction absorbs the distance problem between people and robots, interaction is still closed within its participants. We sometimes need more robots (or even people) involved to accomplish an interaction. For example, suppose a robot is asked by a person to bring a book, but it has no capacity to carry books. It should then ask another robot which can carry books, and as a result the person interacts with that other robot. We call this type of interaction cooperative interaction. Cooperative interaction makes interaction extensive, i.e., an interaction can be extended by introducing as many robots and people as it needs. This overcomes the limited functions of each robot, so that interaction is not bound to the functions of the particular robots with which people happen to be interacting.

Intimate human-robot interaction

The first type of interaction we investigate is intimate interaction, i.e., direct one-to-one interaction between people and robots. We provide two communication channels, gesture and voice. People can convey their intentions by gestures, and the real-world agent can convey its intentions by its gestures and voice.

Gesture recognition is implemented in a relatively simple way and can only extract hand gestures. First, the agent identifies the motion areas of the hands by searching for a black part in the scene, which it assumes to be the person's head. Second, it defines the rectangular areas adjacent to both sides of the black part as the motion areas of the hands. Third, it detects hand motion by optical flow. The result is a sequence of primitive hand motions, each specified by hand and direction. Gestures are then identified by comparing the detected motion sequences with knowledge on gestures. We provide gestures such as "shake", "wave", and "move both hands".

Another step is needed to determine the meaning of detected gestures, because the meaning of a gesture depends on the situation of the interaction. In our system, the real-world agent reacts to gestures according to a predefined state transition network. Each state has actions that the real-world agent should take and links to other states. Each link has conditions described in terms of the person's gestures and the agent's sensor modes. If a condition on a link from the current state is satisfied, the current state shifts to the state pointed to by that link. Since a single gesture can be included in the conditions of multiple links, multiple interpretations of a gesture are possible. Figure 6 shows an example of intimate interaction.

Figure 6: An example of intimate interaction ("throw it out")
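
The following sketch shows such a state transition network; the concrete states, gestures, and attached actions are invented for illustration and are not our actual network.

# Sketch: gesture interpretation via a state transition network.
# Each state maps (gesture, sensor_mode) conditions to a next state
# and an action. The concrete network below is an assumed example.

NETWORK = {
    "idle": {
        ("wave", "camera"): ("approaching", "move-toward-person"),
        ("shake", "camera"): ("idle", "nod-no"),
    },
    "approaching": {
        ("move-both-hands", "camera"): ("carrying", "grasp-box"),
        ("wave", "camera"): ("idle", "wave-bye-bye"),
    },
    "carrying": {
        ("wave", "camera"): ("idle", "release-box"),
    },
}

def react(state: str, gesture: str, sensor_mode: str) -> tuple[str, str]:
    """Follow the first satisfied link; stay put if none matches."""
    next_state, action = NETWORK.get(state, {}).get(
        (gesture, sensor_mode), (state, "no-op"))
    return next_state, action

state = "idle"
for gesture in ["wave", "move-both-hands", "wave"]:
    state, action = react(state, gesture, "camera")
    print(gesture, "->", action, f"(now in state {state!r})")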

The real-world agent has a variety of actions. One kind is informative actions, i.e., gestures which cause no physical change in the environment, such as "Yes", "No", and "Ununderstanding" using head motion, and "bye-bye" and "raise both hands" using hand motion. Voice generation is also among the real-world agent's possible informative actions. The other kind is effective actions, which cause physical changes in the environment, such as "grasp something" and "release something" using hand motion, and "move to somewhere" using the driving units.

We currently provide several interaction modes, such as "take a box", "janken" [2], and "bye-bye". Some interactions are closed between the real-world agent and the person, but others are not. In the latter case, the real-world agent asks tasks of a mediator in order to involve other real-world agents. We discuss this process below as cooperative interaction.

Loose human-robot interaction

Loose interaction is interaction between people and robots who are separated. Since the robot may not be able to see the person, the method used for intimate interaction is not applicable. We introduce an agent called "watcher" which watches a room to find out what is happening in it. It uses a camera that looks over the room (see Figure 7) and communication with other agents.

If the watcher notices a request from someone to others, it composes a task description and passes it to a mediator. Notification of requests comes either through recognition of camera scenes or through communication from other agents. The watcher currently observes two areas, i.e., around a door and around a desk (the two boxes in Figure 7). An example of knowledge on task composition is shown in Figure 8. This definition says: "if the camera finds that someone is waving, compose a task that Kappa1a should go to her/his position".

[2] Janken is a children's game in which two or more persons show one of three forms of the hand to each other. The agent uses hand motions instead of forming hands.

Figure 7: The scene viewed by the watcher's camera

(define Come_on
  (content
    ((behavior wave)
     (source camera)
     (client ?human)))
  (task
    ((subject camera)
     (come (subject kappa1a) (destination ?human)))))

Figure 8: Knowledge on task composition

As a result, a person who waves can convey her/his intention to the real-world agent even when the agent is not nearby (see Figure 9). It is important that the watcher does not give direct orders to real-world agents but composes tasks which can be scheduled by a mediator. If the appointed agents are busy processing other tasks, the composed task may be postponed until the current task is finished, or be processed by other agents.
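
As a sketch, the watcher's task composition corresponding to Figure 8 could be rendered as follows; the rule encoding and the dispatch function are assumptions, not the actual implementation.

# Sketch: the watcher matches an observed event against task-composition
# knowledge (cf. Figure 8) and hands the composed task to a mediator.
# The rule encoding and dispatch are illustrative assumptions.

RULES = [{
    "name": "Come_on",
    "content": {"behavior": "wave", "source": "camera"},
    "task": lambda human: {"action": "come",
                           "subject": "kappa1a",
                           "destination": human},
}]

def compose_task(event: dict):
    """Return a task for the first rule whose content matches the event."""
    for rule in RULES:
        if all(event.get(k) == v for k, v in rule["content"].items()):
            return rule["task"](event["client"])
    return None

event = {"behavior": "wave", "source": "camera", "client": "human1"}
task = compose_task(event)
print(task)   # handed to a mediator for scheduling, not executed directly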

Cooperative human-robot interaction

Figure 9: An example of loose interaction (a camera behind the robot detected the human's request and told the robot to go)

Some interactions should be extended to include the agents needed to accomplish their purpose, i.e., the interaction should be performed cooperatively by more than two agents. Suppose a person is facing a robot which cannot take and carry objects, and asks it to bring her/him an object. The robot may try to do it by itself and finally find that it cannot, or it may simply refuse the request because it knows that the task is impossible for it. A better solution is for the robot to ask another robot which can take and carry objects to perform the recognized request. In this case three agents, i.e., a person and two robots, are the members necessary to accomplish the interaction.

Cooperative human-robot interaction is realized here by mediators. If requests are detected by cameras, this is done by the watcher. Otherwise the requesting agents themselves compose tasks and send them to the watcher. The watcher then invokes a mediator and delegates the received task to it.

Figure 10 shows how the example of cooperative interaction mentioned above is solved in our system. The following numbers correspond to those in Figure 10.

1. The person, using gestures, asks the mobile agent (Kappa1a) in front of her/him to bring a manual for C++.

2. The real-world agent understands the request but finds that it cannot bring the manual itself, because the manual is on the rack and the agent cannot take objects from the rack. It therefore composes a task for the request and sends it to the watcher.

3. The watcher invokes a new mediator for the task and delegates the task to it.

4. The mediator completes the task and decomposes it into a sequence of actions, each of which is executable by some agent. The sequence means that the other mobile agent (Kappa1b) will find the position of the book, receive it from the rack agent by working together with it, and bring it to the person. The mediator then executes this sequence step by step.

5. During this process, the mediator finds that Kappa1a would be an obstacle for Kappa1b in approaching the person. It composes a new task that Kappa1a should move out of the way, and sends it to the watcher.

6. Receiving the task, the watcher invokes another mediator and delegates the task to it.

7. The second mediator likewise completes and decomposes the delegated task into a sequence of actions, and then executes them one by one.

Related work

The most relevant work is the Robotic Room (Sato et al. 1994) and the Intelligent Room (Torrance 1995)(Coen 1997). Although they have similar goals, their methods differ according to their application fields.

The Robotic Room aims at intelligent environments for health care and hospitals. Its key technique is to provide various sensing devices and to detect human behaviors with them. Its treatment of people differs from our approach: people are something for the system to observe, which is analogous to patients in hospitals.

The Intelligent Room project investigates various computational techniques to support people in meetings or discussions, for example tracking a person's movement, and augmented reality which imposes computer-generated images on real images. Here people are active and the system tries to help their activities; the system is analogous to tools in offices.

In our system, on the other hand, the system and people are mutually understandable and cooperative. Not only can people ask the system to help them, but the system may also request people to help it when it is unable to perform a task asked by people. The system is analogous to partners or secretaries in an office.

This is interdisciplinary work, so there is much related work in artificial intelligence, robotics, and human interfaces. In particular there are interesting studies on human-robot interaction and cooperation of agents; see Takeda (Takeda et al. 1997) for details.

Summary

We proposed here the knowledgeable environment, in which all machines and the interaction between machines and people are modeled as knowledge-level communication. We provide an ontology as the basis of communication, and mediation of tasks based on that ontology. Human-machine interaction is realized in three different ways which differ in the physical and logical relations between people and machines.

One of the key issues in our approach is how to provide a good ontology for human-machine co-existing space. The current ontology is naive and poor at describing various machines and human activities, and we should investigate these more precisely; for example, human actions and their use of objects in ordinary office work need to be analyzed. Cooperation of agents is also still insufficient; in particular, we should consider tighter cooperation in order to be applicable to situations with more limited resources.


Figure 10: An example of cooperative interaction. The room contains the rack, door, coffeemaker, workstation, camera, Kappa1a, Kappa1b, and a person (Human1). (1) A request, "Take a manual for C++", is accepted; (2) Task A (take a manual) is composed; (3) Task A is delegated to a mediator, which (4) completes and decomposes it into move, find, move, handover (subject Rack), move, and tell actions for Kappa1b and the rack; (5) Task B (move out Kappa1a) is composed; (6) Task B is delegated to a second mediator, which (7) completes and decomposes it into tell and move actions for Kappa1a.

References

Coen, M. H. 1997. Building brains for rooms: Designing distributed software agents. In Proceedings of IAAI-97, 971-977.

Cutkosky, M. R.; Engelmore, R. S.; Fikes, R. E.; Genesereth, M. R.; Gruber, T. R.; Mark, W. S.; Tenenbaum, J. M.; and Weber, J. C. 1993. PACT: An experiment in integrating concurrent engineering systems. IEEE Computer, January 1993, 28-38.

Darrell, T., and Pentland, A. 1993. Space-time gestures. In Proceedings of the IEEE 1993 Computer Society Conference on Computer Vision and Pattern Recognition, 335-340.

Finin, T.; McKay, D.; Fritzson, R.; and McEntire, R. 1994. KQML: An information and knowledge exchange protocol. In Fuchi, K., and Yokoi, T., eds., Knowledge Building and Knowledge Sharing. Ohmsha and IOS Press.

Genesereth, M., and Fikes, R. E. 1992. Knowledge Interchange Format, version 3.0 reference manual. Technical Report Logic-92-1, Computer Science Department, Stanford University.

Gruber, T. R. 1993. Toward principles for the design of ontologies used for knowledge sharing. Technical Report KSL 93-04, Knowledge Systems Laboratory, Stanford University.

Herkovits, A. 1986. Language and Spatial Cognition. Cambridge University Press.

Maes, P., and Kozierok, R. 1993. Learning interface agents. In Proceedings of AAAI-93, 459-465.

Newell, A. 1982. The knowledge level. Artificial Intelligence 18:87-127.

Sato, T.; Nishida, Y.; Ichikawa, J.; Hatamura, Y.; and Mizoguchi, H. 1994. Active understanding of human intention by a robot through monitoring of human behavior. In Proceedings of the 1994 IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 1, 405-414.

Takeda, H.; Iwata, K.; Takaai, M.; Sawada, A.; and Nishida, T. 1996. An ontology-based cooperative environment for real-world agents. In Proceedings of the Second International Conference on Multiagent Systems, 353-360.

Takeda, H.; Kobayashi, N.; Matsubara, Y.; and Nishida, T. 1997. Towards ubiquitous human-robot interaction. In Working Notes of the IJCAI-97 Workshop on Intelligent Multimodal Systems, 1-8.

Takeda, H.; Iino, K.; and Nishida, T. 1995. Agent organization and communication with multiple ontologies. International Journal of Cooperative Information Systems 4(4):321-337.

Torrance, M. C. 1995. Advances in human-computer interaction: The intelligent room. In Working Notes of the CHI 95 Research Symposium.