
A Model-Based Approach for Real-Time Embedded Multimodal Systems in Military Aircrafts

Rémi Bastide, David Navarre, Philippe Palanque, Amélie Schyn & Pierre Dragicevic

LIIHS-IRIT, Université Toulouse 3

118, route de Narbonne 31062 Toulouse Cedex

+33 561 55 6965

{bastide, navarre, palanque, schyn, dragice}@irit.fr

ABSTRACT
This paper presents the use of a model-based approach for the formal description of real-time embedded multimodal systems. This modeling technique has been applied in the field of military fighter aircraft. The paper presents the formal description technique, its application to the case study of a multimodal command and control interface for the Rafale aircraft, and its relationship with architectural models for interactive systems.

Categories and Subject Descriptors
H.5.2 User Interfaces – Prototyping. D.2.2 Design Tools and Techniques – User interfaces, Petri nets. D.2.4 Software/Program Verification – Formal methods.

General Terms
Languages, Theory, Verification.

Keywords
Model-based approaches, formal description techniques, embedded systems.

1. INTRODUCTION
The academic world has been providing prototypes, toolkits and toy systems offering multimodal interaction techniques since the early work of Bolt in the early 80's [5]. Some "real" systems have also been presented, but the practical engineering of multimodal interactive systems remains a cumbersome task, usually carried out through an ad hoc process.

The engineering of interactive systems featuring multimodal interaction adds a new level of complexity to the design, specification, validation and implementation of interactive systems, which are already difficult tasks rarely addressed by current software engineering practices. The Unified Software Development Process [13], for example, devotes only a short paragraph to the design of the user interface.

However, in the same way as model-based approaches can bring many benefits to the non-interactive parts of a software system, we believe that the use of an adequate modelling technique can support a more systematic development of multimodal interactive systems. Among modelling techniques, formal description techniques enable describing systems in a complete and unambiguous way, thus allowing for an easier shared understanding of problems between the various actors taking part in the development process. Besides, formal description techniques allow designers to reason about the models by using analysis techniques. Classical results are the detection of deadlocks or the presence or absence of terminating states. A set of properties for multimodal systems (known as the CARE properties) has been identified [9], but their verification over an existing multimodal system is usually impossible to achieve. For instance, it is impossible to guarantee that two modalities remain redundant whatever the state of the system.

Verifying such interaction properties, as well as safety and liveness properties (which are more familiar in the field of software engineering), becomes a critical issue when dealing with safety-critical interactive software. Indeed, the development process of such systems features an additional phase called the "certification phase", during which certifiers, external to the development process, evaluate both the process and the product prior to granting an authorisation for commercial use of the system.

While designing user interfaces for the command and control of such systems, designers face a dilemma: either stay with poor interaction techniques (in which case the system will be easier to validate and certify) or explore new interaction techniques in order to increase the bandwidth between user and system (and thus make the design, specification, validation and certification process more complex and costly).

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

ICMI'04, October 13–15, 2004, State College, Pennsylvania, USA.
Copyright 2004 ACM 1-58113-890-3/04/0010...$5.00.


Previous studies on multimodal interaction techniques [18], [7] have shown that, in the field of safety-critical systems, multimodal interaction presents several advantages:

Multimodality increases the reliability of the interaction: it drastically decreases critical errors (by 35% to 50%) during interaction. This advantage alone can justify the use of multimodality when interacting with safety-critical systems.

Multimodality increases the efficiency of the interaction, especially when issuing spatial commands (multimodal interaction is 20% faster than classical interaction when specifying geometric and localisation information).

Users predominantly prefer interacting in a multimodal way, probably because it allows more flexibility in interaction, thus accommodating users' variability.

Multimodality increases the naturalness and flexibility of the interaction, so that the learning period is shortened.

For all these reasons, multimodality has gained interest in the field of command and control interactive systems. This paper aims at bridging the gap between multimodal interaction research and safety-critical systems engineering by introducing a formal description technique able to describe entire multimodal interactive systems while providing, at the same time, formal analysis tools for supporting the analysis and verification phases.

The paper is structured as follows. The next section is dedicated to related work on the specification of multimodal interactive systems. Section 3 gives an informal presentation of the ICO (Interactive Cooperative Objects) formalism. Section 4 presents a case study demonstrating the modelling of a multimodal dialogue on board a military plane. Section 5 shows how easily several alternative behaviours of the dialogue controller can be explored, and Section 6 concludes with the advantages and limitations of the approach as well as ongoing and future work.

2. RELATED WORK
Based on our experience with formal notations for interactive systems ([3], [4], [17]), we have seen that multimodal interactive systems feature intrinsic characteristics that make conventional formal description techniques inappropriate. First, like every interactive system, a multimodal system is a reactive system, i.e. a system able to emit and to receive events, whose internal state evolves according to received events, and whose emitted events are a function of changes in the internal state. That is why we believe that a formal notation dedicated to modeling multimodal interfaces has to provide a sound basis for dealing with the notions of states and events. Second, temporal constraints are at the core of these systems, which are often real-time and highly concurrent. Indeed, users' actions may occur simultaneously on several input devices, and the fusion mechanism must process these inputs in real-time. In a similar way, system outputs occur on several devices at the same time, and the fission mechanism must be able to produce these outputs at the same time and to guarantee their synchronization. Third, the use of temporal windows in fusion mechanisms requires, from a formal description technique, the possibility to represent time in a quantitative way, expressing for instance that an event must be received within 100 milliseconds. In a similar way, it is essential to be able to specify that an output (e.g. a video or a tactile feedback) must have a duration of 1 second, for instance.

Besides the benefits in precision and completeness brought by formal description techniques, an additional benefit is the possibility to verify properties of the model. There are two kinds of properties: safety and liveness properties. Liveness properties assert that the system will be able to perform required actions (informally, these properties state that something good will eventually happen), while safety properties check that undesirable behaviors will be avoided (informally, they state that nothing bad will ever happen).

While a detailed comparison of available notations and approaches is beyond the scope of this paper, we have tried to provide a short synthesis of their main features.

Table 1 presents a summary of the related work on the formal specification of multimodal systems with relation to the characteristics presented above. Table 2 gives information about tool support for the notations.

Table 1. Relative position of related work on post-WIMP user interfaces modeling

Interaction description | System description
ATN [1], [6], [15]      | [12]
CSP [20], [21]          | LOTOS [8]
Flownet [23]            | Z [10], [16]
Hynet [22]              | Current Paper

Table 2. Related tools

Table 1 is organised as follows. Works located in the left column correspond to formalisms dedicated to the description of the interaction, whereas works located in the right column concern formalisms dedicated to the description of the system. Our formalism sits between the two columns because it allows the description of both the interaction and the system. The six rows correspond to the criteria selected to compare the formalisms: quantitative time representation, qualitative time representation, event representation, state-based notation, properties checking, and rendering modelling. These criteria correspond to the intrinsic characteristics of multimodal systems identified above.

As it is very difficult to specify a whole interactive application without tool support, we also consider the tools associated with the different notations. Table 2 specifies the functionalities of the related tools, where they exist (as far as we know, there is no tool for the Hynet notation). We have selected five functionalities that we consider essential: editing, automatic syntactic/semantic checking of the specification, automatic checking of properties, performance evaluation, and executability and prototyping facilities.

3. INFORMAL DESCRIPTION OF ICO
In the following, we use the Interactive Cooperative Objects (ICO) formalism, a formal description technique dedicated to the specification, modeling and implementation of interactive systems [4]. It uses concepts borrowed from the object-oriented approach (dynamic instantiation, classification, encapsulation, inheritance, client/server relationship) to describe the structural or static aspects of systems, and uses high-level Petri nets [11] to describe their dynamic or behavioral aspects.

An ICO specification is constituted of a set of cooperating classes. A class is an entity featuring the following components (a code sketch follows the list):

A behavioral part (cooperative object): it models the behavior of the class by means of a high-level Petri net called the Object Control Structure (ObCS). A cooperative object is fully described by its offered services and the availability of these services as a function of its internal state. A cooperative object offers two kinds of services to its environment. The first kind concerns the services offered to other objects of the system (called system services); the second kind (called event services) concerns the services offered to a user (producing events) or to other components in the system, but only through event-based communication. The availability of all the services of a cooperative object (which depends on the internal state of the object) is fully stated by the high-level Petri net.

A presentation part: it specifies the external perceptible appearance of the ICO. It is a set of widgets in a set of windows, plus a set of graphical functions that can be called by the cooperative object. Each widget can be used by the user for interacting with the ICO and/or by the system as a way to display information about the internal state of the object.

The activation function links users' actions on the presentation part (for instance a click on a button using a mouse) to event services.

The rendering function maintains the consistency between the internal state of the system and its external appearance by reflecting system state changes through function calls.
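To make these components concrete, here is a minimal, self-contained Python sketch of the ObCS idea: a high-level Petri net whose synchronized transitions fire only when they are both enabled and signalled by an external event. This is an illustration under our own simplifying assumptions, not the authors' PetShop implementation; all names (PetriNet, fire_on_event, ...) are hypothetical.

```python
# Hypothetical sketch of an ObCS-like high-level Petri net; tokens are
# plain Python values, and "synchronized" transitions fire on events.

class PetriNet:
    def __init__(self):
        self.marking = {}      # place name -> list of token values
        self.transitions = {}  # name -> (input places, output places)

    def add_place(self, name, tokens=()):
        self.marking[name] = list(tokens)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] for p in inputs)

    def fire_on_event(self, name, value=None):
        """Synchronized transition: fires only if enabled AND signalled."""
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:                  # consume one token per input place
            self.marking[p].pop(0)
        for p in outputs:                 # deposit the event value as a token
            self.marking[p].append(value)
        return True

# Two-state skeleton of the dialogue of Section 4: NoMark -> MarkSet.
net = PetriNet()
net.add_place("NoMark", tokens=[None])
net.add_place("MarkSet")
net.add_transition("Marking_SetMark", ["NoMark"], ["MarkSet"])
net.fire_on_event("Marking_SetMark", value=("coord", "ger"))
print(net.marking)  # {'NoMark': [], 'MarkSet': [('coord', 'ger')]}
```

The enabled() check is what makes service availability depend on the current marking, as described above for cooperative objects.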

An ICO model is fully executable, which gives the possibility to prototype and test an application before it is fully implemented [3]. The models can also be validated using analysis and proof tools developed within the Petri nets community and extended in order to take into account the specificities of the Petri net dialect used in the ICO formal description technique.

However, the ICO formalism lacked several features needed for modelling multimodal interaction techniques. We had to define several extensions to the notation, and to provide a formal definition for each extension in order to keep the verification tools available for Petri net based notations applicable. The first extension, the addition of time in the ObCS, is presented in [14]. The second one, presented in this paper, is an event-based communication mechanism.

Moreover, the modelling of large-scale multimodal systems led us to revise the modelling process itself, beyond the changes in the formal notation per se. This paper introduces:

A structuring mechanism based on transducers [17], [19], which allows us to respect the ARCH framework [2] by transforming low-level, device-related events (such as pressing a button on a joystick) into higher-level ones (such as issuing a command to the functional core); a sketch of this mechanism follows the list.

A more general rendering mechanism, so that we can model output media behaviours in terms of the internal state of the different ICOs of the specification.
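As an illustration of the transducer idea, the sketch below (hypothetical names, same simplified Python style as above) turns low-level joystick events into dialogue-level commands:

```python
# A transducer maps low-level, device-related events to higher-level
# dialogue events; unknown device events are simply filtered out.

class Transducer:
    def __init__(self, mapping, emit):
        self.mapping = mapping  # low-level event name -> high-level name
        self.emit = emit        # callback towards the next ARCH level

    def on_device_event(self, name, payload=None):
        high_level = self.mapping.get(name)
        if high_level is not None:
            self.emit(high_level, payload)

# A raw button event becomes a command for the dialogue controller:
joystick = Transducer({"buttonUp": "Mark", "buttonDown": "Clear"},
                      emit=lambda ev, p: print("dialogue <-", ev, p))
joystick.on_device_event("buttonUp")  # prints: dialogue <- Mark None
```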

4. CASE STUDY
The case study presented here concerns one of the many tasks of a military pilot engaged in a combat mission. It was developed on a full-scale simulator platform of the Dassault Aviation Rafale plane (Figure 1). The simulator is driven by commercial flight simulator software.

In the course of a mission, the pilot can locate a target on the ground and choose to put a virtual mark on it. The mark will, from then on, be displayed on various devices, and its location will be correctly updated as the plane moves.

Figure 1: The fighter simulator used in the case study


The study presents several advanced interaction techniques:

The target on ground is located visually by the pilot. The direction of the pilot’s gaze is acquired with a Detector of Position (DoP) integrated in his helmet. This information, combined with the position and attitude of the plane, allows computing the geographic position of the target on the ground.

The "Marking" command itself can be issued through multimodal interaction: the pilot can either utter a predefined vocal command ("Mark!") or push up one of the numerous buttons of the joystick (in the prototype, the joystick used is a Thrustmaster Hotas Cougar™). Conversely, the mark may be cleared by issuing a vocal "Clear!" command or by pushing down the same joystick button.

The mark on the target can also be rendered multimodally, either on the Head-Up Display integrated in the pilot’s helmet, on a 2D geographic map displayed on a dashboard screen, or with 3D sound played in the pilot’s earphones.

Figure 2 illustrates the architecture of the simulator and the flow of information between its various components. At the bottom of the picture we can see the various input devices used in the system. The information produced by the input devices is then processed by the dialogue controller. The dialogue controller interacts with the functional core for triggering semantic functionalities such as expected behavior of the aircraft according to the aircraft model embedded in the simulator.

Figure 3: The case study according to ARCH

According to the ARCH user interface architecture paradigm [2], our formal notation deals mainly with the right side of the arch, from the concrete input and output devices up to the dialogue controller. In our study, we have reused a functional core developed at THALES Avionics that performs the required geometric and geographic computations.

Figure 3 describes the case study according to ARCH. We first describe the dialogue model, and then go down towards the fusion mechanism and the input devices.

4.1 The Dialogue Controller
The high-level dialogue is described formally using the ICO formalism [4] as well. In this case study, we deal only with a single task of the pilot, so the specification remains fairly simple. Yet, the ICO model provides a very fine-grained description of the dialogue, which is absolutely necessary in this kind of safety-critical system, where no part of the application can remain under-specified.

Figure 2: The simulator's architecture. The figure shows the flow of information between components: the Flight Simulator provides the plane position and attitude; the DoP provides the head position and attitude; head and plane parameters are combined into a geographic position; the HOTAS and the vocal command recognizer provide "Mark" or "Clear" commands; the multimodal input component feeds the dialogue controller, which interacts with the functional core; and the multimodal output component sends rendering commands to the HUD and 3D sound to the earphones.


The model in Figure 4 specifies the structure of the dialogue between the pilot and the rest of the system. It is modeled, according to the ICO formalism, as a high-level Petri net. This net receives events from the multimodal input devices, changes state accordingly, and notifies the output devices of its new state. The output devices react by rendering the new state to the user, using multimodal sound and graphic output.

Three different input events are handled by this net (a code sketch of the resulting behavior follows the list):

Marking: this event is triggered when the pilot enters the "Marking" command, either vocally or using the joystick. This event is received in the Petri net by the set of synchronized transitions Marking_SetMark and Marking_ChangeMark. Conventional transitions in Petri nets fire as soon as they are enabled, while synchronized transitions fire (if they are enabled) when a signal (in our case an input event) is received from an external source. When a Marking event is received, the Petri net determines whether the mark is correct or not (an incorrect mark would occur, for instance, if the pilot triggered the Marking command while looking up at the sky), by firing the transitions ValidMark1 or ValidMark2 if it is correct, or InvalidMark1 or InvalidMark2 if not. If the mark is correct, a Petri net token holding the correct mark is deposited in the place MarkSet, and the net notifies the output devices that a correct mark has been set. If an incorrect mark is detected, an appropriate action is executed: if no mark was previously set, the net returns to a state where no mark is set (i.e. the place NoMark holds a token), while if a correct mark was previously set, the place MarkSet receives the previous correct mark.

ClearMark: this event is triggered the same way as the Marking event. When it is received, the net returns to a state where no mark is set, by firing one of the three synchronized transitions ClearMark_0, ClearMark_1 or ClearMark_2, depending on the current state of the Petri net.

Update: when a mark is set, the rendering of the mark (either graphic or 3D sound) must be updated regularly as the plane moves. The synchronized transition Update fires only when the place MarkSet contains a token (corresponding to a correct mark), refreshing the rendering of that mark.
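The following sketch paraphrases this behavior in plain Python, as a state machine rather than a Petri net, to make the token flow explicit. is_valid() stands for the ValidMark/InvalidMark test; the reaction to an invalid mark is deliberately factored out, since it is the prototyping point of Section 5. All names are illustrative, not taken from PetShop.

```python
# Dialogue controller logic of Figure 4, paraphrased; self.mark is None
# exactly when the place NoMark holds the token.

class DialogueController:
    def __init__(self, is_valid, notify):
        self.is_valid = is_valid  # e.g. rejects a mark made looking skyward
        self.notify = notify      # rendering notification (see Table 3)
        self.mark = None          # None <=> token in place NoMark

    def on_marking(self, candidate):       # Marking_SetMark / _ChangeMark
        if self.is_valid(candidate):       # ValidMark1 / ValidMark2
            self.mark = candidate          # token deposited in MarkSet
            self.notify("MarkSet", candidate)
        else:                              # InvalidMark1 / InvalidMark2
            self.on_invalid_mark()         # behavior prototyped in Section 5

    def on_invalid_mark(self):
        pass                               # see Section 5 for two variants

    def on_clear_mark(self):               # ClearMark_0 / _1 / _2
        self.mark = None
        self.notify("NoMark", None)

    def on_update(self, coords):           # Update transition
        if self.mark is not None:          # only when MarkSet holds a token
            self.mark = coords
            self.notify("MarkSet", coords)

dc = DialogueController(is_valid=lambda m: m is not None, notify=print)
dc.on_marking(("coord", "ger"))            # prints: MarkSet ('coord', 'ger')
```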

The mathematical nature of Petri nets makes it possible to perform formal correctness verification of the models. The PetShop tool, used to design and execute the models in this case study, also performs several mathematical analyses of the models, namely the computation of place and transition invariants.

An interesting place invariant is (NoMark + Result + Result_1 + MarkSet). This invariant states that the number of tokens in this set of places is constant (in our case, equal to 1). This shows that the dialogue will always be in exactly one of these states, Result and Result_1 being states where a Marking event has been received from the input devices and is being evaluated for correctness. This is an example of a safety property.

An interesting transition invariant is (Marking_SetMark + ValidMark_1 + ClearMark_0), showing that this sequence of transitions does not change the state of the net, i.e. it returns the system to a state where no mark is set. This is an example of a liveness property.

All these invariants are calculated automatically by our tool, and can be used by the designer (at design time) to assert the correctness of the models.
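To give the flavour of such a computation, the fragment below derives both kinds of invariants from the incidence matrix of a three-place slice of the net, using sympy. PetShop performs this automatically; the matrix values here are ours, for illustration, and cover only a fragment of the real net.

```python
# Place invariants are integer vectors x with x^T.C = 0; transition
# invariants are vectors y with C.y = 0, where C is the incidence
# matrix (rows = places, columns = transitions).
from sympy import Matrix

# Rows: NoMark, Result, MarkSet.
# Columns: Marking_SetMark, ValidMark_1, ClearMark_0.
C = Matrix([[-1,  0,  1],
            [ 1, -1,  0],
            [ 0,  1, -1]])

print(C.T.nullspace())  # [Matrix([[1], [1], [1]])]:
                        #   NoMark + Result + MarkSet is constant
print(C.nullspace())    # [Matrix([[1], [1], [1]])]: firing the three
                        #   transitions once restores the initial marking
```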

Figure 4: A possible behavior (for the Dialogue Controller) described using ICO

Table 3 describes the rendering associated with the dialogue model of Figure 4 and shows that when a token (i.e. a mark) arrives in the place MarkSet, two methods are called: one to show the mark on the HUD and one to provide sound feedback in the headphones.

Table 3. Rendering function of the Dialogue Controller

Place   | Event                    | Rendering methods
MarkSet | Token_entry <coord, ger> | showMark(coord, ger); soundNotification(addMark.wav, coord)
NoMark  | Token_entry <coord, ger> | hideMark(); soundNotification(clearMark.wav, coord)
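A table-driven reading of Table 3 in the same illustrative Python style: showMark, hideMark and soundNotification come from the table, but their bodies here are placeholder stubs of our own.

```python
# The rendering function: (place, token event) -> rendering methods.

def show_mark(coord, ger):  print("HUD: mark shown at", coord, ger)
def hide_mark(coord, ger):  print("HUD: mark hidden")
def sound(wav, coord):      print("earphones:", wav, "at", coord)

RENDERING = {
    ("MarkSet", "Token_entry"): [
        lambda c, g: show_mark(c, g),
        lambda c, g: sound("addMark.wav", c)],
    ("NoMark", "Token_entry"): [
        lambda c, g: hide_mark(c, g),
        lambda c, g: sound("clearMark.wav", c)],
}

def on_token_entry(place, coord, ger):
    for method in RENDERING[(place, "Token_entry")]:
        method(coord, ger)

on_token_entry("MarkSet", (43.6, 1.44), "ger")  # HUD + sound notification
```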

4.2 Class VocalCommand
Instances of the class VocalCommand aim at interpreting raw speech input, turning it into the Marking and ClearMark events handled by the Dialogue Controller.

Table 4, Table 5 and Figure 5 show the formal description of this class. More precisely, Figure 5 shows the behavior of this voice interpreter: the synchronized transition Speech receives speech events that hold the words pronounced by the pilot, and puts each word as a token in the place word_uttered. Then one of the three transitions Mark, Clear or Other fires, producing an event depending on the word uttered.

Table 4. Class VocalCommand activation function

Source     | Interaction object | Event          | Service | Rendering methods
Microphone | None               | Utterance of w | Speech  | None

Figure 5 - Behavior of the class VocalCommand

Table 5. Class VocalCommand event production function

Transition | Event
Mark       | Mark()
Clear      | Clear()

4.3 Class HOTAS
Instances of the class HOTAS, shown in Table 6, Table 7 and Figure 6, handle inputs (pushDown or pushUp) from the HOTAS device and convert them into the Mark and ClearMark events required by the Dialogue Controller.

Figure 6 - Behavior of the class HOTAS

Table 6. Class HOTAS activation function

Source | Interaction object | Event      | Service  | Rendering methods
HOTAS  | None               | pushDown() | pushDown | None
HOTAS  | None               | pushUp()   | pushUp   | None

Table 7. Class HOTAS event production function

Transition | Event
pushUp     | Mark()
pushDown   | Clear()
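Both classes are instances of the transducer pattern sketched in Section 3. A compact paraphrase, with the dispatch taken from Tables 5 and 7; emit() stands for the hypothetical event link towards MultimodalInput.

```python
# VocalCommand: Speech(word) -> Mark() / Clear() / nothing (Other).
def make_vocal_command(emit):
    def on_speech(word):          # synchronized transition Speech
        if word == "Mark":        # transition Mark
            emit("Mark")
        elif word == "Clear":     # transition Clear
            emit("Clear")
        # any other word: transition Other, no event produced
    return on_speech

# HOTAS: pushUp -> Mark(), pushDown -> Clear() (Table 7).
def make_hotas(emit):
    return {"pushUp": lambda: emit("Mark"),
            "pushDown": lambda: emit("Clear")}

speech = make_vocal_command(lambda ev: print("fusion <-", ev))
speech("Mark")                    # fusion <- Mark
hotas = make_hotas(lambda ev: print("fusion <-", ev))
hotas["pushDown"]()               # fusion <- Clear
```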

4.4 Class DoP
Instances of the class DoP, shown in Table 8 and Figure 7, discretise the flow of data provided by the Detector of Position in order to produce events representing the position designated by the pilot's head. These events are then used by the MultimodalInput class described hereafter. To perform this discretisation, the behavior of the class DoP uses a timed transition (called DoP) that periodically gets the current position designated by the pilot's head from the DoP device.

Figure 7 - Behavior of the class DoP

Table 8. Class DoP event production function

Transition | Event
DoP        | DoP(p)
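A plain polling loop gives the flavour of this timed transition; read_position() and the 100 ms period are assumptions made for illustration, since the paper does not give the actual sampling rate.

```python
# Discretisation: sample the continuous head-position flow periodically
# and emit each sample as a discrete DoP(p) event.
import time

def dop_loop(read_position, emit, period_s=0.1, ticks=3):
    for _ in range(ticks):       # a real device loop would run continuously
        p = read_position()      # current position from the DoP device
        emit("DoP", p)           # event consumed by MultimodalInput
        time.sleep(period_s)

dop_loop(read_position=lambda: (0.12, -0.40, 0.91),
         emit=lambda name, p: print(name, p))
```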

4.5 Multimodal Input
Instances of the class MultimodalInput (shown in Table 9 and Figure 8) produce events resulting from the fusion of Mark events (provided either by voice recognition, through instances of the class VocalCommand, or by the HOTAS device) and position events from the DoP device. The resulting events are higher-level events that hold both the Mark command and the position to be marked. Indeed, when a Mark event occurs, the synchronized transition Mark reads the current designated position from the token held by the place pos, and then produces the corresponding higher-level event.

Table 9. Multimodal Input activation function

Source             | Interaction object | Event    | Service | Rendering methods
Class VocalCommand | None               | Mark()   | Mark    | None
Class VocalCommand | None               | Clear()  | Clear   | None
Class HOTAS        | None               | Mark()   | Mark    | None
Class HOTAS        | None               | Clear()  | Clear   | None
Class DoP          | None               | DoP(pos) | DoP     | None

Figure 8 - Behavior of the multimodal fusion of events
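The fusion behaviour of Figure 8, paraphrased in the same illustrative style: the place pos always holds the latest DoP position, and a Mark event is fused with that position into a single higher-level event. Names are ours, not the actual model's.

```python
# Fusion: command events (Mark/Clear) + position events (DoP) ->
# higher-level dialogue events carrying both command and position.

class MultimodalInput:
    def __init__(self, emit):
        self.emit = emit   # towards the dialogue controller
        self.pos = None    # token held by the place pos

    def on_dop(self, p):               # DoP(p): replace the position token
        self.pos = p

    def on_mark(self):                 # from VocalCommand or HOTAS
        if self.pos is not None:       # transition Mark reads the pos token
            self.emit("Marking", self.pos)

    def on_clear(self):
        self.emit("ClearMark", None)

fusion = MultimodalInput(emit=lambda ev, p: print("dialogue <-", ev, p))
fusion.on_dop((44.83, -0.57))          # latest designated position
fusion.on_mark()                       # dialogue <- Marking (44.83, -0.57)
```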

5. PROTOTYPING BEHAVIORS
One of the advantages of describing behaviors using high-level models is the possibility to rapidly prototype such behaviors and to change them according to users' feedback.

This kind of issue was raised in the project presented here when users noticed that it was possible to produce invalid marks (for instance when the pilot was pointing outside the aircraft but not looking at the earth). In the dialogue model of Figure 4, the previous mark in the system is then destroyed and the system returns to a state where there is no mark.

A simple modification of the model (shown in Figure 9) changes this behavior in order to keep the last valid mark when a new invalid mark is created.
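In the state-machine paraphrase of Section 4.1, this prototyping step amounts to swapping one handler; both variants are sketched below with hypothetical names, as before.

```python
# Handlers for the DialogueController sketch of Section 4.1.

# Variant of Figure 4 (as described in this section): an invalid mark
# destroys the previous mark and the net returns to NoMark.
def on_invalid_mark_discard(self):
    self.mark = None
    self.notify("NoMark", None)

# Variant of Figure 9: the last valid mark is kept; the invalid
# candidate is simply ignored and the current mark is re-rendered.
def on_invalid_mark_keep(self):
    if self.mark is not None:
        self.notify("MarkSet", self.mark)

# Swapping behaviors at prototyping time (Python monkey-patching):
# DialogueController.on_invalid_mark = on_invalid_mark_keep
```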

Such prototyping capabilities are very valuable when dealing with interactive systems. We have already shown this ability for non-multimodal interaction in [3], and thanks to the extensions made to the formalism it is now available for prototyping fusion engines and dialogue controllers.

6. CONCLUSION
The continuously increasing complexity of the information manipulated by interactive systems calls for new interaction techniques able to increase the bandwidth between the user and the system. Multimodal interaction techniques are today considered a promising way of tackling this problem. However, the lack of engineering techniques and processes for such systems makes them hard to design and build, thus jeopardizing their actual exploitation in the area of safety-critical applications.

This paper has presented a formal description technique that can be used for the modelling and the analysis of multimodal interactive systems. This work is part of a project on the evaluation and use of multimodal interaction techniques in the field of command and control real-time military systems. We have shown how it is possible to address both usability and reliability issues by providing formal analysis techniques applicable to the models built.

Figure 9: A second possible behavior (for the Dialogue Controller) described using ICO


There is currently no mechanism embedded in the notation for dealing with multimodal output in a way symmetric to multimodal input. Considering interactive systems more generally, input is event-based while output is state-based. As Petri nets represent states explicitly (as the distribution of tokens in the places), we have the basic components available for managing multimodal outputs. However, issues such as perceivability and, more generally, the usability of the systems have to be considered carefully.

7. ACKNOWLEDGMENTS
The work presented in this paper is partly funded by the French DGA under contract #00.70.624.00.470.75.96. Special thanks to the THALES group that developed the RAFALE simulator in Bordeaux (Thierry Ganille, Christian Nouvelle and Denis Philippon).

8. REFERENCES
[1] Bares, M. and Pastor, D. Principes d'un moteur d'interaction multimodale pour systèmes embarqués. L'Interface des Mondes Réels et Virtuels, Cinquièmes Journées Internationales INFORMATIQUE: 471-483, 1996.

[2] Bass, L., Pellegrino, R., Reed, S., Seacord, R., Sheppard, R., and Szezur, M. R. The Arch Model: Seeheim Revisited. In Proceedings of the User Interface Developers' Workshop, version 1.0, 1991.

[3] Bastide, R., Navarre, D., and Palanque, P. A Model-Based Tool for Interactive Prototyping of Highly Interactive Applications. Proceedings of ACM SIGCHI 2002 (Extended Abstracts): 516-517, 2002.

[4] Bastide, R., Palanque, P., Le Duc, H., and Muñoz, J. Integrating Rendering Specifications into a Formalism for the Design of Interactive Systems. Proceedings of the 5th Eurographics Workshop on Design, Specification and Verification of Interactive Systems (DSV-IS'98), Springer Verlag, 1998.

[5] Bolt, R. "Put-That-There": Voice and Gesture at the Graphics Interface. SIGGRAPH'80 Proceedings, Vol. 14, No. 3, ACM Press: 262-270, 1980.

[6] Bourguet, M. L. Outil de Prototypage pour la Conception et l'Evaluation d'Interfaces Utilisateur Multimodales. 14ème Conférence sur l'Interaction Homme-Machine, ACM Press, 2002.

[7] Cohen, P. R., Johnston, M., McGee, D., Oviatt, S., Pittman, J., Smith, I., Chen, L., and Clow, J. QuickSet: Multimodal Interaction for Distributed Applications. Proceedings of the Fifth ACM International Conference on Multimedia, ACM Press: 31-40, 1997.

[8] Coutaz, J., Paterno, F., Faconti, G., and Nigay, L. A Comparison of Approaches for Specifying MultiModal Interactive Systems. Proceedings of the ERCIM Workshop on Multimodal Human-Computer Interaction: 165-174, 1993.

[9] Coutaz, J., Nigay, L., Salber, D., Blandford, A., May, J., and Young, R. Four Easy Pieces for Assessing the Usability of Multimodal Interaction: the CARE Properties. Human-Computer Interaction, Interact'95, Nordby, K., Helmersen, P., Gilmore, D., and Arnesen, S. (Eds.), Chapman & Hall (IFIP): 115-120, 1995.

[10] Duke, D. and Harrison, M. D. MATIS: A Case Study in Formal Specification. Technical Report SM/WP17, ESPRIT BRA 7040 Amodeus-2, University of York, 1994.

[11] Genrich, H. J. Predicate/Transition Nets. In High-Level Petri Nets: Theory and Application, K. Jensen and G. Rozenberg (Eds.), Springer Verlag: 3-43, 1997.

[12] Hinckley, K., Czerwinski, M., and Sinclair, M. Interaction and Modeling Techniques for Desktop Two-Handed Input. http://research.microsoft.com/users/kenh/papers/two-hand.pdf, 1998.

[13] Jacobson, I., Booch, G., and Rumbaugh, J. The Unified Software Development Process. Addison-Wesley, 1999.

[14] Lacaze, X., Palanque, P., Navarre, D., and Bastide, R. Performance Evaluation as a Tool for Quantitative Assessment of Complexity of Interactive Systems. DSV-IS'02, 9th Workshop on Design, Specification and Verification of Interactive Systems, 2002.

[15] Latoschik, M. E. Designing Transition Networks for Multimodal VR-Interactions Using a Markup Language. Proceedings of the IEEE Fourth International Conference on Multimodal Interfaces (ICMI 2002), ACM Press: 411-416, 2002.

[16] MacColl, I. and Carrington, D. Testing MATIS: A Case Study on Specification-Based Testing of Interactive Systems. FAHCI'98: 57-69, ISBN 0-86339-794-8, 1998.

[17] Navarre, D., Palanque, P., Bastide, R., and Sy, O. Structuring Interactive Systems Specifications for Executability and Prototypability. 7th Eurographics Workshop on Design, Specification and Verification of Interactive Systems (DSV-IS'2000), Limerick, Ireland, Lecture Notes in Computer Science 1946, 2000.

[18] Oviatt, S. Ten Myths of Multimodal Interaction. Communications of the ACM, 42(11): 74-81, 1999.

[19] Palanque, P. and Schyn, A. A Model-Based Approach for Engineering Multimodal Interactive Systems. Proceedings of the Ninth IFIP TC13 International Conference on Human-Computer Interaction (INTERACT'2003), IFIP, 2003.

[20] Smith, S. and Duke, D. Using CSP to Specify Interaction in Virtual Environments. University of York, 1999.

[21] Van Schooten, B., Donk, O., and Zwiers, J. Modelling Interaction in Virtual Environments Using Process Algebra. Proceedings of TWLT 15: Interactions in Virtual Worlds, 1999.


[22] Wieting, R. Hybrid High-Level Nets. Proceedings of the 1996 Winter Simulation Conference, J. M. Charnes, D. T. Morrice and D. T. Brunner (Eds.), ACM Press: 848-855, 1996.

[23] Willans, J. S. and Harrison, M. D. Prototyping Pre-Implementation Designs of Virtual Environment Behaviour. 8th IFIP Working Conference on Engineering for Human-Computer Interaction (EHCI'01), Lecture Notes in Computer Science, 2001.