
Protocol Testing: Review of Methods and Relevance for Software Testing

Gregor v. Bochmann and Alexandre Petrenko

Département d'informatique et de recherche opérationnelle, Université de Montréal, C.P. 6128, Succ. Centre-Ville, Montréal, Canada H3C 3J7

Abstract. Communication protocols are the rules that govern the communication between the different components within a distributed computer system. Since protocols are implemented in software and/or hardware, the question arises whether the existing hardware and software testing methods would be adequate for the testing of communication protocols. The purpose of this paper is to explain in which way the problem of testing protocol implementations is different from the usual problem of software testing. We review the major results in the area of protocol testing and discuss in which way these methods may also be relevant in the more general context of software testing.

1. Introduction

During the last ten years, many research results have been obtained in the area of protocol conformance testing in view of obtaining better and more systematic methods for the testing of protocol implementations against their specifications. At the same time, the methods for testing computer hardware and software have also evolved. Since protocols are implemented in software and/or hardware, the question may arise whether the existing hardware and software testing methods would be adequate for the testing of communication protocols.

The purpose of this paper is to explain in which way the problem of testing protocol implementations is different from the usual problem of software testing. We will review the major results in the area of protocol testing and discuss in which way these methods may also be relevant in the more general context of software testing.

Protocol testing is mainly based on the black-box approach. Therefore the nature of the protocol specification has a strong influence on protocol testing. In fact, the methods for the development of test cases are largely dependent on the specification formalism.


Most communication protocols have a reactive nature; therefore specification languages for reactive systems are favored which precisely define the temporal ordering of interactions. It is therefore understandable that the finite state machine (FSM) model is often used for defining protocol specifications. For this reason, most work on protocol testing has been based on FSM models.

In Section 1, we discuss the main characteristics of communication protocols and different formal description techniques which have been used in the protocol engineering cycle.

In Section 2, we review the main results in the area of test suite development based on specifications given in the form of deterministic or nondeterministic FSM models. Emphasis is put on methods that are related to a particular conformance relation which the implementation under test should satisfy in respect to the specification. Also, the test suite should be complete in respect to a given fault model, which is defined in terms of the set of faulty implementations that the test suite should be able to detect.

We then give in Section 3 an overview of some testing issues related to test suite development for specifications written in LOTOS or other specification formalisms based on rendezvous communication.

In Section 4, we discuss the main approaches to testing in respect to specifications written in an extended FSM formalism, such as the languages SDL and Estelle. In this context, there are many relations with software testing methods, because the extensions of the FSM model are usually defined with programming language concepts.

In Section 5, we discuss the impact of the test architecture which, in the case of protocol testing, usually involves several access points, partial observation and synchronization problems between the different parts of the test system. These issues have a strong impact on testability. Finally, we review the practice of protocol conformance testing and state our conclusions.

1.1. The nature of protocol testing

Communication protocols are the rules that govern the communication between the different components within a distributed computer system. In order to organize the complexity of these rules, they are usually partitioned into a hierarchical structure of protocol layers, as exemplified by the seven layers of the standardized OSI Reference Model [IS7498].

Figure 1(a) shows a communication system from the point of view of two users. The users interact with the communication service through interactions, called service primitives, exchanged at


so-called service access points (SAP). The definition of the behavior of the box which extends between the two users is called the service specification. It defines the local rules of interactions at a given service access point, as well as the so-called end-to-end properties which relate the interactions at the different access points and represent the communication properties, such as end-to-end connection establishment or reliable data transfer.

Figure 1: The architecture of a communication system.

Figure 1(b) shows a more detailed view of the service box, showing two so-called protocol entities (PE) which communicate through an underlying, simpler communication service. The definition of the required behavior for a protocol entity is called the protocol specification, and involves the interactions at the upper and lower service access points. In addition, the protocol specification usually identifies different types of so-called protocol data units (PDU, or messages) which are coded and exchanged between the protocol entities through the underlying medium.

The discipline of protocol engineering [Boch93], [Boch90] deals with the methods for developing communication protocol specifications and implementations, including the activities of validation. Protocol implementations are tested against the protocol specification in order to assure compatibility with other implementations of the same protocol. This is called protocol conformance testing. In addition, implementations are usually also tested for other properties, not specified by the protocol, which may include implementation-specific choices, robustness to user errors and performance properties. We call this type of testing implementation assessment.

For certain protocol standards, the standardization committees have also defined standardized conformance test suites, which usually consist of a large number of test cases. The successful execution of these test cases should provide a reasonable assurance that the tested implementation follows all the rules defined by the protocol.

One of the main characteristics of protocol testing is the fact that conformance testing is black-box testing. This implies that a precise reference specification must be provided, which is the basis for the derivation of the test cases and the analysis of the test results. (This specification is in fact the protocol specification, which defines the required properties of any implemented protocol entity.)

1.2. Specification languages for communication protocols

As they develop, protocols must be described for many purposes. Early descriptions provide a reference for cooperation among designers of different parts of a protocol system. The design must be checked for logical correctness. Then the protocol must be implemented, and if the protocol is in wide use, many different implementations may have to be checked for compliance with a standard. Although narrative descriptions and informal walk-throughs are invaluable elements of this process, painful experience has shown that by themselves they are inadequate.

The informal techniques traditionally used to design and implement communication protocols have been largely successful, but have also yielded a disturbing number of errors or unexpected and undesirable behavior in most protocols. The use of a specification written in natural language gives the illusion of being easily understood, but leads to lengthy and informal specifications which often contain ambiguities and are difficult to check for completeness and correctness. The arguments for the use of formal specification methods in the general context of software engineering apply also to protocols.

In this context, many different formal description techniques have been proposed for the protocol engineering cycle, including finite state machines (FSM), Petri nets, formal grammars, high-level programming languages, process algebras, abstract data types, and temporal logic. The simpler models, such as FSM, Petri nets and formal grammars, were often extended by the addition of data parameters and attributes in order to naturally deal with certain properties of the protocols, such as sequence numbering and addressing [Boch90].

With the beginning work on the standardization for Open Systems Interconnection (OSI), special working groups on "Formal Description Techniques" (FDT) were established within ISO and CCITT in the early eighties with the purpose of studying the possibility of using formal specifications for the definition of the OSI protocols and services. Their work led to the proposal of three languages, Estelle, LOTOS and SDL, which are further discussed below (for a tutorial introduction and further references, see [Budk87], [Bolo87] and [Beli89], respectively). These languages are called formal description techniques, since care has been taken to define not only a formal syntax for the


language, but also a formal semantics which defines the meaning, in a formal manner, of any valid specification. This is in contrast to most programming languages which have a formally defined syntax (for instance in BNF), but an informally defined semantics. The formal semantics is essential for the construction of tools which are helpful for the validation of specifications or the development of implementations.

While SDL had been developed within CCITT since the seventies for the description of switching systems, Estelle and LOTOS were developed within ISO for the specification of communication protocols and services. However, all these languages potentially have a much broader scope of applications. Their effective use in the OSI area, so far, has been relatively slow. This may be partly explained by the competition between these three languages, which each have certain advantages, and by the difficulty many people have in learning a new language.

In Estelle, a specification module is modeled by an extended FSM. The extensions are related to interaction parameters and additional state variables, and involve type definitions, expressions and statements of the Pascal programming language. In addition, certain "Estelle statements" cover aspects related to the creation of the overall system structure consisting in general of a hierarchy of module instances. Communication between modules takes place through the interaction points of the modules which have been interconnected by the parent module. Communication is asynchronous, that is, an output message is stored in an input queue of the receiving module before it is processed.

SDL, which has the longest history, is also based on an extended FSM model. For the data extensions, it uses the concept of abstract data types with the addition of a notation of program variables and data structures, similar to what is included in Estelle. However, the notation is not related to Pascal but to CHILL, the programming language recommended by CCITT. The communication is asynchronous, and the destination process of an output message can be identified by various means, including process identifiers or the names of channels or routes. Recently, the language has been extended to include certain features for object-oriented specifications. In contrast to the other FDTs, SDL was developed, right from the beginning, with an orientation towards a graphical representation. The language includes graphical elements for the FSM aspects of a process and the overall structure of a specification. The data aspects are only represented in the usual linear, program-like form. In addition, a completely program-like form is also defined, called SDL-PR, which is mainly used for the exchange of specifications between different SDL support systems.

LOTOS is based on an algebraic calculus for communicating systems (CCS [Miln80]) which includes the concept of finite state machines plus parallel processes which communicate through a rendezvous mechanism which allows the specification of rendezvous between two or more processes. Asynchronous communication can be modeled by introducing queues explicitly as data types. The interactions are associated with gates which can be passed as parameters to other processes participating in the interactions. These gates play a role similar to the interaction points in Estelle. The data aspects are covered by an algebraic notation for abstract data types, called ACT ONE, which is quite powerful, but would benefit from the introduction of certain abbreviated notations for the description of common data structures. A graphical representation for LOTOS has also been defined.

In addition to the formal description techniques discussed above, the OSI standardization committees also use certain semi-formal languages (which have no formally defined semantics). In particular, a language called TTCN [Sari92] is used for describing conformance test cases, and the ASN.1 notation is used for describing the data structures of the protocol data units (messages) exchanged by the OSI application layer protocols [Neuf92]. This notation is associated with a coding scheme which defines the format in which these data units are exchanged over the communication medium. All the other languages mentioned above do not address this problem.

In the context of application layer protocols, in particular for Open Distributed Processing and distributed systems management, certain forms of object-oriented specifications are being used which are based on extensions of ASN.1. Also the formal specification language Z has been proposed, which is based on set theory and predicate calculus, and was originally developed for software specifications.

2. Testing based on FSM specifications

The development of testing methods based on FSM specifications has initially been driven by problems arising from functional testing of sequential circuits. The appearance of communication protocols has given this theory a new boost. Currently, this model is also attracting much attention in relation with testing of object-oriented software systems [Hoff93], [Turn92]. Moreover, recent results in binary decision diagram (BDD) technology have increased our ability to deal with the large number of states [Brya86]. Traditional work in this field has relied on the model of completely


specified deterministic finite state machines. Recently, more complex FSM models have also been studied.

2.1. Basic definitions: FSM models and conformance relations

A nondeterministic finite state machine (NFSM) is an initialized nondeterministic Mealy machine which can be formally defined as follows [PBD93]. A nondeterministic finite state machine A is a 6-tuple (S, X, Y, h, s0, DA), where S is a set of n states with s0 as the initial state; X is a finite set of input symbols; Y is a finite set of output symbols; DA is a specification domain which is a subset of S×X; and h is a behavior function h: DA → P(S×Y), where P(S×Y) is the powerset of S×Y. The behavior function defines the possible transitions of the machine. For each present state si and input symbol x, each pair (sj, y) in the result of the function represents a possible transition leading to the next state sj and producing the output y. This is also written as a transition of the form si -x/y-> sj.

The machine A becomes deterministic (FSM) when |h(s,x)| = 1 for all (s,x) ∈ DA. In a deterministic machine, instead of the behavior function, we use two functions: the next state function δ and the output function λ.

In the general case, let α = x1 x2 ... xk ∈ X*; α is called an acceptable input sequence for state si ∈ S if there exist k states si1, si2, ..., sik ∈ S and an output sequence γ = y1 y2 ... yk ∈ Y* such that there is a sequence of transitions si -x1/y1-> si1 -x2/y2-> si2 -> ... -> sik-1 -xk/yk-> sik in the machine. We use X*i to denote the set of all the acceptable input sequences for the state si, and X*A for the state s0, i.e. for A. We extend the behavior function to the set X*A of all acceptable input words (sequences) containing the empty word ε, i.e., h: S×X*A → P(S×Y*) [PBD93].

An NFSM A is said to be completely specified if DA = S×X. If DA ⊂ S×X then A is considered partially specified. An NFSM will also be referred to as a complete or a partial machine.

The equivalence relation between two states si and sj in the NFSM A holds if X*i = X*j and ∀α ∈ X*i (h2(si,α) = h2(sj,α)); otherwise, they are nonequivalent. Here h2 is the second projection of the behavior function, which corresponds to the output function of the deterministic FSM [Star72]. The machine is said to be reduced if all its states are pairwise nonequivalent. Two NFSMs A and B are said to be equivalent if their initial states are equivalent; intuitively, this means that the observable behaviors of the two machines are identical.

Given an NFSM A = (S, X, Y, h, s0, DA), A is said to be initially connected if ∀s ∈ S ∃α ∈ X*A (s ∈ h1(s0,α)), where h1 is the first projection of the behavior function, which corresponds to the next state function of the deterministic FSM [Star72]. Every NFSM is equivalent to an initially connected one.

The complete NFSM B = (T, X, Y, H, t0) is said to be quasi-equivalent to A if for all acceptable input sequences α ∈ X*A (H2(t0,α) = h2(s0,α)). Intuitively, this means that for the input sequences for which the behavior of A is defined, the behavior of B is identical.

Given the NFSM A = (S, X, Y, h, s0, DA) and a complete NFSM B, B is said to be a reduction of A, written B ≤ A, if ∀α ∈ X*A (H2(t0,α) ⊆ h2(s0,α)). Intuitively, this means that for the input sequences for which the behavior of A is defined, the output sequences of B are included in those defined for A.

The equivalence, quasi-equivalence, and reduction relations between NFSMs are widely used as the conformance relations between implementations and their specifications in protocol test derivation based on finite state machines.
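
To make these definitions concrete, the following minimal sketch (ours, not from the paper) represents an NFSM with a dictionary-based behavior function h and checks the reduction relation restricted to a finite set of input sequences; the class layout and the assumption that outputs are strings are illustrative only.

class NFSM:
    """Nondeterministic FSM: h maps (state, input) pairs of the specification
    domain DA to a set of (next_state, output) pairs; outputs are strings."""
    def __init__(self, states, inputs, outputs, h, s0):
        self.states, self.inputs, self.outputs = states, inputs, outputs
        self.h, self.s0 = h, s0

    def out_sequences(self, state, alpha):
        """h2 extended to input sequences: all output words the machine may
        produce from 'state' on the input sequence alpha (empty set if alpha
        is not acceptable in that state)."""
        current = {("", state)}               # pairs (output word so far, state)
        for x in alpha:
            step = set()
            for word, s in current:
                for s_next, y in self.h.get((s, x), set()):
                    step.add((word + y, s_next))
            current = step
        return {word for word, _ in current}

def is_reduction_on(tests, impl, spec):
    """Reduction relation B <= A restricted to a finite test suite: every
    output sequence the implementation may produce on a test must also be
    allowed by the specification."""
    return all(impl.out_sequences(impl.s0, t) <= spec.out_sequences(spec.s0, t)
               for t in tests)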

2.2. Interfaces, fault models and complete test suites

In protocol engineering, it is customary to distinguish (at least) two levels of abstraction for the description of module interfaces: For the protocol and service specifications, the interactions at a SAP are described at an abstract level in terms of atomic events which normally correspond to a request by the user or an indication received by the user from the protocol entity. For each implementation of the protocol, an implementation of the abstract interface is provided either in software (e.g. procedure calls, operating system calls) or hardware. The nature of the interface implementation is not defined by the protocol specification. Only the abstract properties of interactions, their parameters and ordering constraints defined by the protocol specification must be reflected by the implementation.

The test cases developed for the conformance testing of a given protocol should be applicable to all implementations of that protocol, irrespective of the interface conventions used by the implementation. Therefore these test cases are based on the


abstract interface definition of the specification. In order to execute such a test case with a given implementation under test (IUT), these abstract interactions must be realized in terms of the concrete interface adopted by the protocol implementation. For instance, the upper tester (UT) accessing the upper SAP of the IUT (see Figure 2) must be adapted to the particular SAP interface adopted by the IUT.

We conclude that in case of black-box testing based on the specification of the IUT, the test suite must be developed based on the abstract interfaces defined within the specification, and it must be assumed that the (abstract) properties of these interfaces are satisfied by the concrete realization of these interfaces in the implementation under test; we call this the correct interface assumption. (How the validity of this assumption can be tested is outside the scope of this paper.)

In the context of FSM-based testing, it is assumed that the interface is event-driven, and moreover, that all events on the interface are alternately controlled by the environment and the IUT. In particular, there exist two channels, one to convey "inputs" of the environment and the second to receive "outputs" from the IUT. Each abstract input yields exactly one output symbol, and the next input symbol can be sent to the IUT only after the IUT has moved into a stable state and produced an output symbol in response to the previous input. An output reaction of the IUT to any input can be in the form of a meaningful output (as explicitly defined by the specification), an "error" or a "null" output (which means "no output", and which will be detected by waiting a specified time-out period). All these outputs can be observed and are treated as abstract output symbols.
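
The alternating input/output discipline described above can be sketched as a simple execution loop; the adapter functions send and receive and the time-out value below are hypothetical, standing in for whatever concrete interface a particular IUT offers.

def run_test(send, receive, test, timeout=2.0):
    """Execute one test case under the assumed interface discipline: submit one
    input at a time, then wait for the single output of the IUT; a time-out is
    mapped to the abstract "null" output symbol."""
    outputs = []
    for x in test:
        send(x)                      # submit the abstract input
        y = receive(timeout)         # returns an output symbol or None on time-out
        outputs.append(y if y is not None else "null")
    return tuple(outputs)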

Based on such an interface, the reference behavior is assumed to be specified as an NFSM. The correct interface assumption implies that any IUT can also be modeled as an NFSM. In particular, no IUT can "refuse" to execute any input in any of its states or produce an output without any input. Any IUT is thus a completely specified NFSM whose input alphabet at least includes the one of the specification NFSM. Moreover, the implementations are complete machines even if the specification is a partial machine, and they include the input alphabet of the reference NFSM.

In order to introduce the notion of test coverage, it is necessary to determine which faults of an implementation should be covered. One way to introduce a fault model is by defining a set of implementations with erroneous behavior that should be detected by the test suite. If this set is finite, a finite test suite exists that covers all faults; if it is not finite, there may be no finite test suite that covers all

faults. In order to define a fault model, we take the mutation approach, where a mutant of a specification A is any implementation that satisfies the correct interface assumption. In the case of FSM-based conformance testing, the mutants of A are the class of all completely specified NFSMs with the same input alphabet. A fault model ℑ is a subset of this class.

The concept of conformance relation is needed to distinguish mutants which could exhibit an erroneous behavior [PBD93]. It is usually introduced when both the IUT and its corresponding specification are represented with the same formalism. In the case of the FSM model, the equivalence, quasi-equivalence, and reduction relations are natural candidates for conformance relations. The given specification NFSM A and a conformance relation conf partition the set of mutants ℑ into the subsets of conforming implementations C(A) = {I | I ∈ ℑ & I conf A} and non-conforming implementations N(A) for the given specification A, i.e., N(A) = {I | I ∈ ℑ & I notconf A}. Based on this partition, the notion of a complete test suite for a given NFSM A with respect to a given conformance relation and fault model can be formally introduced as follows.

A test suite E ⊆ X*A is said to be complete for the specification NFSM A and the fault model ℑ w.r.t. the reduction relation if for any NFSM B = (T, X, Y, H, t0) in ℑ which is not a reduction of A, there exist an α ∈ E and a γ ∈ H2(t0,α) such that γ ∉ h2(s0,α). Similarly, complete test suites are defined for the equivalence and quasi-equivalence relations. A complete test suite guarantees detection of all mutants in the fault model that do not conform to the specification.
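
Building on the NFSM sketch above, this definition can be checked by brute force when the fault model is a small, explicitly enumerated set of mutants; the function names below are illustrative, not taken from the paper.

def detects(tests, mutant, spec):
    """A test suite kills a mutant (w.r.t. the reduction relation) if some test
    can produce an output sequence that the specification does not allow."""
    return any(not mutant.out_sequences(mutant.s0, t)
                   <= spec.out_sequences(spec.s0, t)
               for t in tests)

def is_complete(tests, spec, non_conforming_mutants):
    """Completeness over an explicitly given finite fault model: every mutant
    already known to be non-conforming must be detected by the test suite."""
    return all(detects(tests, m, spec) for m in non_conforming_mutants)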

There are different ways to characterize a mutant, or the set of mutants within a given fault model. The traditional mutation approach defines a set of mutation types, each representing a specific type of structural fault, that is, a modification which may be introduced into a specification in order to obtain a different behavior. In the so-called single fault model, the mutants considered are those obtained from the specification by applying a single mutation of one of the specified types. In the case of the so-called multiple fault model, the mutants are obtained by one or several such mutations. In the case of FSM specifications, the mutation types considered are usually output faults (the output of a transition is wrong) or transfer faults (the next state of a transition is wrong). In the area of software testing, various types of mutations may be considered, such as sequencing faults (e.g. a missing


alternative path, improper nesting of loops, WHILE instead of REPEAT, wrong logic expression to control a conditional or looping statement); arithmetic and manipulative errors, such as ignoring overflow, wrong operator (e.g. GT instead of GE or + instead of *); calling of the wrong function; wrong specification of data type and/or wrong format of data representation; wrong initial value, or wrong number of data elements; or a reference to an undefined variable or the wrong variable.

This "fault-based" identification of mutants seems most natural in the context of diagnostic testing, where one wants not only to detect erroneous behavior, but also to determine how the implementation could be changed in order to make it conform to the specification (e.g. identify a faulty component to be replaced, or identify the part of the program or the part of the FSM state table that should be corrected). It seems that it works well in the context of deterministic and completely defined FSM specifications; however, in the context of nondeterministic systems, there are many examples where a few mutations lead to a mutant that conforms to the specification; the correspondence between mutations and erroneous behavior is therefore not straightforward. Therefore one often uses a "wholistic" identification of mutants, simply characterizing the erroneous behavior of the mutant by an appropriate model without comparing its structure with the structure of the (correct) specification.

2.3. Test derivation from completely specified deterministic FSMs

The simplest fault models are defined for specifications which can be represented by completely specified deterministic FSMs. In this class of machines, the equivalence relation serves as conformance relation, and the mutants are completely specified deterministic machines.

When applied to such a machine, the mutation technique yields a certain number of mutant FSMs. A particular mutant FSM can be generated from the specification FSM by changing the next state and/or the output of a number of selected transitions between states. This mutant machine represents a certain combination of output and transfer faults. The number of all possible mutants grows exponentially when the number of states and inputs increase; therefore, some heuristics, such as information about the implementation, severity of faults, and test hypotheses, should guide this process. Based on the set of mutants considered, it is possible to find in one way or another a set of tests which distinguish (kill) all these mutants from the specification FSM.

There are different methods for defining the set of mutants in the fault model, such as the following. In each case, corresponding methods of deriving a complete test suite have been developed.

(1) Limiting the number of states: The fault model consists of the universe ℑm of all complete deterministic FSMs with a number of states less or equal to m, and which have the input alphabet of the specification FSM A, where m ≥ n, the number of states in A. Guessing the bound m is an intuitive process based on the knowledge of a specification, the class of implementations and their interior structure which have to be tested for conformance. The value of m can also be limited due to economic considerations. The larger the bound is chosen, the larger the complete test suite is, as is illustrated by the existing upper bounds for the complexity of tests [PBD93].

The methods for deriving complete test suites from complete deterministic machines are all based on the transition checking approach [Henn64]. Each method, however, requires a distinct type of a characterization set [Vasi73] (a W-set) used for state identification. The UIOv-method [Vuon89], for instance, constructs the W-set only from so-called unique input/output sequences. The methods [Yevt90a], [Petr91], [Petr92] are based on "harmonized" state identifiers, eliminating the verification phase that is necessary for the W- and Wp-methods [Chow78], [Fuji91]. The methods in [Vasi73], [Chow78], [Fuji91], [Yevt90a] allow the IUTs to have more states than in the specification (i.e., m ≥ n); and the others [Vuon89], [Petr91], [Petr92] do not (i.e., m = n). Which of these methods can produce a shorter complete test suite for a given FSM and m = n is still an open question. All the mentioned methods assume that a reliable reset is available in the IUTs; note that there is another group of methods which guarantee full fault coverage without this assumption (see, e.g., [Henn64], and the UIOG-method in [Yao93]). They can be applied for reduced, deterministic, strongly connected FSMs. However, these methods usually yield much longer complete test suites than the former. Earlier reviews of the test derivation methods, especially of those that do not guarantee full fault coverage, can be found elsewhere (see, e.g., [Ural91], [Sidh89]).
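
The transition checking idea shared by this family of methods can be sketched as follows; the sketch assumes a deterministic, completely specified, minimal specification with a state cover and a characterization set W computed beforehand, and m = n. The names are illustrative, and the refinements that distinguish the individual methods are omitted.

def transition_checking_tests(inputs, state_cover, W):
    """state_cover: dict mapping each state to an input sequence (tuple) that
    transfers the machine from the initial state to that state.
    W: characterization set, a list of input sequences distinguishing states.
    Each test reaches a state, fires one transition, then identifies the state
    reached; this is the common skeleton of the W-method family."""
    tests = []
    for state, prefix in state_cover.items():
        for x in inputs:              # exercise every transition from 'state'
            for w in W:               # identify the next state with each w in W
                tests.append(prefix + (x,) + w)
    return tests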

The complexity of a corresponding complete test suite seems to be the price for assuming such a broad fault model. A more restricted fault model requires, however, a justification for each particular application.

(2) Output faults only: These faults are relatively easy to catch. Any test suite based on a transition tour of A (i.e. traversing each transition of A at least once) is already complete for this fault model [Nait81]. This corresponds to the "all branches" criterion used in software testing.
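
As a minimal illustration of this fault model, the following sketch builds a transition cover (each specified transition traversed at least once, with a reliable reset before each test) for a deterministic FSM given as a transition dictionary; it is our illustrative approximation of a transition tour, not an algorithm from the paper.

from collections import deque

def transition_cover(delta, s0):
    """delta: {(state, input): (next_state, output)} for a deterministic FSM.
    Returns one test per specified transition: a shortest transfer sequence to
    the source state, followed by the input of the transition."""
    transfer = {s0: ()}                       # shortest input sequence to each state
    queue = deque([s0])
    while queue:
        s = queue.popleft()
        for (src, x), (dst, _) in delta.items():
            if src == s and dst not in transfer:
                transfer[dst] = transfer[s] + (x,)
                queue.append(dst)
    return [transfer[s] + (x,) for (s, x) in delta if s in transfer]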

(3) Fault functions: grouped faults which can be represented by a so-called fault function [Petr92]. Let A = (S, X, Y, δ, λ, s0) be a reduced, initially connected FSM. It is assumed that all possible mutants are deterministic reductions of an appropriate NFSM F(A) = (S', X, Y, H, s0) constructed from the specification FSM A in such a way that A is a deterministic submachine of F(A). The behavior function H describes all output and transfer faults for each transition and is called a fault function for A. If F(A) is weakly initialized, i.e. it has more than one initial state, then initialization faults are also included in this fault model. A fault function is a general and flexible formalism to model different faults. In fact, if H(s,x) = S'×Y for all (s,x) ∈ S'×X, and |S'| = m, then all machines of the set ℑm are deterministic submachines of F(A). Output faults are easily expressed by a specific form of the fault function as well: H(s,x) = { (δ(s,x), y) | y ∈ Y } for all (s,x) ∈ S×X, with S' = S. A natural question arises on how a fault function can be chosen for the given FSM. We believe this function is, first of all, a tool to formalize the hypotheses of a test engineer about the defects in a given IUT, but their formulation is a process which is rather intuitive and largely based on experience. The test engineer may have certain ideas on which part of the protocol (e.g., which transitions) is more likely to be incorrectly implemented, which errors are most critical or frequently made by implementors of a particular class of protocols, which parts require testing, and so on. Particular forms of the fault function are constructed in an obvious way in cases when, e.g., the reference FSM describes the behavior of a composite system and some component machines are assumed to be error-free. Such cases are typical for grey-box testing and are encountered when a protocol is represented by a composition of several FSMs or by an EFSM. A so-called FF-method for generating a test suite which is complete for all faults defined by any given fault function is proposed in [Petr92]. It tunes a test suite to user-defined classes of faults. Similar to the methods considered above, it is also based on transition identification. However, next states of transitions are identified in certain subsets of states determined with the help of the given fault function.
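
For very small machines, the fault model induced by a fault function H can be made explicit by enumerating the deterministic submachines of F(A); the following sketch (illustrative only, and clearly infeasible for machines of realistic size) does exactly that.

from itertools import product

def deterministic_submachines(H):
    """H: fault function given as {(state, input): set of (next_state, output)}.
    Yields every deterministic submachine of F(A) as a dictionary mapping each
    (state, input) pair to a single chosen (next_state, output) pair."""
    keys = sorted(H)
    for choice in product(*(sorted(H[k]) for k in keys)):
        yield dict(zip(keys, choice))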

Concluding this section, we note that, to our knowledge, there have been no systematic studies to determine whether the assumptions of a particular fault model are realistic. The assumption of no additional states seems to be justified in the case that a direct implementation of a pure FSM specification is realized by systematic translation: expected faults could be wrong outputs or transfers for specific transitions, but the number of states of the implementation would normally be the same as for the specification. However, if additional parameters are present (extended FSM model), the implementation of these parameters may possibly lead to more states in the implementation.

2.4. Partially specified FSMs

Most of the existing protocols are not completely specified, in the sense that in real communication systems, not all the sequences of interactions are feasible. The model of a partial FSM adequately expresses this feature of protocols. The semantics of partial machines needs, however, further clarification. A specific feature of a partial FSM is that it has "undefined" transitions, also called "non-core" or "don't care" transitions. There are different conventions that may be used to give a formal meaning to "undefined" (we note that in the context of labeled transition systems, see Section 3, there is a different notation, since "undefined" means "blocking", no transition possible):

(1) "Implicitly defined" transitions. With this convention, a partial FSM represents a completely specified FSM by adopting some "completeness assumption". Such an assumption is based on the fact that all implementations can be represented by complete machines which never refuse any input. It states that all "don't care" transitions in the specification are substituted by looping transitions with null outputs (Convention 1a), or by transitions to an error (terminal) state with an "error" output (Convention 1b); see the sketch following this list. In SDL specifications, for instance, unexpected or inopportune events are ignored, so Convention 1a is applied. The convention used during the process of implementation is assumed to be known for testing purposes. The given partial FSM is substituted by a quasi-equivalent completely specified FSM, and the problem of test derivation from a partial machine is thus avoided.

(2) "Undefined by default" transitions. This convention means that a partial FSM is interpreted as a set of complete FSMs, taking into account that the result of a "don't care" transition might be any state and any output. This interpretation is suitable for conformance testing of a given implementation when information about the convention used by an implementor is absent. In contrast to the first convention, all possible options are left up to the implementor of the partial machine. A conforming implementation is, in fact, a complete FSM that implements any of these options [Petr91]. The quasi-equivalence conformance


relation exactly captures this interpretation. It is also called weak conformance [Sidh89].

(3) "Forbidden" transitions. In some cases, the behavior of a system is not completely specified because its environment will never execute certain input sequences. Invalid entries in state tables used in the OSI protocol standards exemplify this situation. The set of acceptable input sequences X*A of an FSM A represents all "admissible" sequences; the remaining ones are "forbidden", and they define "forbidden" transitions in the given partial FSM. This convention is similar to the one of "undefined by default" transitions; the difference is that in this case, a tester cannot submit certain inputs to certain states of the given partial FSM. An example is a lower service provider that cannot deliver any messages to certain states of a tested protocol entity, and an upper protocol entity that cannot deliver an unexpected service primitive to any state of the IUT. In all these situations we have to treat "don't care" transitions as "forbidden" and to avoid them in test derivation. In contrast, transitions that are undefined by default can be tested to determine the reaction of the implementation to these "unexpected" inputs; this is called "robustness testing" or testing for strong conformance [Sidh89].
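
As an illustration of Conventions 1a and 1b above, the following sketch completes a partial deterministic FSM given as a transition dictionary; the state and output names "error" and "null" are placeholders of our choosing.

def complete_fsm(delta, states, inputs, convention="1a"):
    """Apply the completeness assumption to a partial deterministic FSM.
    Convention 1a: undefined transitions loop with a null output.
    Convention 1b: undefined transitions go to an error state with an error output."""
    full = dict(delta)
    for s in states:
        for x in inputs:
            if (s, x) not in full:
                full[(s, x)] = (s, "null") if convention == "1a" else ("error", "error")
    if convention == "1b":
        for x in inputs:                      # the error state loops on every input
            full[("error", x)] = ("error", "error")
    return full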

The conclusion is that to derive an abstract test suite from even a completed specification of an embedded IUT we might have to come back to a partially specified machine. Therefore, even though some specification languages based on the FSM model impose complete specifications by a built-in mechanism of implicitly defined transitions, there is a need for deriving conformance tests directly from partial machines. Note that grey-box testing, in general, requires the model of a partial FSM [Petr94].

Constructing a complete test suite for a machine from this class is complicated by the fact that minimality of a given FSM cannot any more be taken for granted, unlike in the case of complete machines. If the machine is not reduced, then some of its states can be distinguished neither on the specification level nor on the implementation level. The traditional transition identification approach is no longer applicable to these machines. A more general approach, based on the idea of counting distinguishable states traversed by a test sequence, gives methods [Yevt90b], [Petr91], [Yevt91], [Luo94] for test derivation which can treat partial as well as complete deterministic machines.

2.5. Test derivation from nondeterministic FSM specifications

All the three major specification languages for protocols, LOTOS, Estelle, and SDL, support the description of nondeterminism. So a machine abstracted from the formal description of a protocol might also be nondeterministic. Generally speaking, a nondeterministic FSM model may be used to represent different situations, such as the following:
- a protocol entity with inherent nondeterminism;
- a set of deterministic protocol entities considered as options of a given protocol;
- a deterministic IUT embedded in a given system in such a way that a tester cannot directly observe it (see [Petr94] for more detail);
- nondeterminism due to concurrency.

For nondeterministic protocol implementations, a given test may give rise to different observations of the output reaction. It is therefore not enough to execute a test once; the same test should be repeatedly executed until all possible observations are obtained. If the nondeterminism within the IUT cannot be influenced from the testing environment, it may be difficult, in general, to determine after how many executions of the given test all possible observations have been obtained. In order to determine the verdict for a given test, it is therefore necessary to assume the so-called complete testing assumption [Luo94], which supposes that all possible observations have been obtained (see also the "all weather conditions" assumption for LTSs in [Miln80]). Note that so-called intermittent faults can also be modeled as an NFSM; however, it is not clear how the complete testing assumption can be satisfied. Their detection is not guaranteed.
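
In practice the complete testing assumption is approximated by repeating each test a fixed number of times; the sketch below uses an arbitrary repetition bound and a test execution function such as the run_test sketch above, both of which are our assumptions, and derives a verdict for the reduction relation.

def observe_all(run_one, test, repetitions=50):
    """Execute the same test repeatedly against a nondeterministic IUT and
    collect the distinct output sequences observed; 'repetitions' is an
    arbitrary bound standing in for the complete testing assumption."""
    return {run_one(test) for _ in range(repetitions)}

def verdict(observed, allowed):
    """Pass w.r.t. the reduction relation if every observed output sequence
    is among those allowed by the specification for this test."""
    return "pass" if observed <= allowed else "fail"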

The equivalence, quasi-equivalence, and reduction relations defined for NFSMs seem suitable to serve as conformance relations in protocol testing. We note once more that for complete NFSMs, the quasi-equivalence relation coincides with the equivalence relation, and the choice has to be made between the two relations. If, however, an implementation submitted for conformance testing is known to exhibit a deterministic behavior only, then the reduction relation has to be taken as the conformance relation between a nondeterministic specification and its deterministic implementation.

Test derivation from such machines is currently an active research area. Several methods have already been reported; some of them are guided by certain heuristics, while others, like [Yevt91], [Petr93], [Petr91], [Luo94], are based on fault models and provide complete test suites w.r.t. the chosen conformance relation. Similar to deterministic partial machines, these methods exploit the idea of counting distinguishable (in the sense of the chosen relation) states traversed by a test sequence, to ensure the required relation between the specification and implementation. Different conformance relations usually yield different complete test suites


for the same specification machine, since states which are not (quasi-) equivalent may satisfy the reduction relation.

2.6. Length of complete test suites

The expected length of a complete test suite for an arbitrary given (partially specified and nondeterministic) FSM w.r.t. the chosen conformance relation is usually well within the upper bound of the corresponding test derivation method. In the worst case, it does not exceed the value |X|^(nm), where X is the input alphabet, n is the number of states in the given machine, and m is the upper bound on the number of states in possible implementations [Petr93]. Complete deterministic FSMs have better upper bounds; see for instance [Vasi73], [Yann91], [PBD93].

However, the actual complexity of the test suite for FSMs modeling protocols seems much less than these upper bounds. The complexity depends on properties of the characterization set on which the test derivation method is based. Even though their construction is shown to be PSPACE-complete [Yann91], several empirical studies show that protocol machines tend to have rather short state identification sequences, which result in test suites of reasonable size. We give some examples in Section 6.

2.7. Fault coverage analysis

As with other software testing, the evaluation of fault coverage for a given test suite is an important issue in testing based on the FSM model and has been studied in the context of traditional hardware testing and, in recent years, in the context of protocols. Methods that have been proposed are essentially variations of the mutation analysis technique. For instance, instead of using the exhaustive mutation analysis approach, some researchers have introduced a number of classes of mutant machines [Dahb88], [Dubu91], [Sidh89] and [Mott93]. The mutant machines in a class will contain a certain number of faults resulting from the changes of next states and/or outputs of some transitions. For each class, a limited number of mutant machines are randomly generated, which are then executed against the given test suite. Apparently, the fault coverage evaluated in such a way is an estimation of the real fault coverage of the test suite, and its accuracy relies on the total number of mutant machines randomly generated and executed.
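
A random mutation analysis of this kind can be sketched as below, reusing the detects() function defined earlier; the mutate() generator and the sample size are assumptions, and conforming (equivalent) mutants would have to be excluded from the denominator in a careful measure.

def estimate_fault_coverage(spec, tests, mutate, n_mutants=1000):
    """Randomly generate mutant machines and report the fraction killed by the
    test suite; mutate(spec) is assumed to return one random mutant NFSM."""
    killed = sum(detects(tests, mutate(spec), spec) for _ in range(n_mutants))
    return killed / n_mutants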

Recently, a different procedure has been developed which, without the need of explicitly generating and then executing a (usually large) number of mutant machines, can decide if the given test suite provides complete fault coverage [YPB94]. When the test suite does not provide full fault coverage, the proposed approach can derive from the test suite, by analyzing it against the specification machine, an incorrect implementation machine which can pass the test suite, and therefore allows an additional test case to be generated to distinguish this particular implementation machine from the specification. As such, full fault coverage can be achieved by repeatedly applying this procedure. However, this approach does not provide a numeric measure for a test suite which does not have full fault coverage. Consequently, it is impossible to use this approach to compare the fault coverage of two test suites if none of them provides full fault coverage. A metric-based approach [Yao94] to the evaluation of fault coverage of a test suite in respect to a given specification machine can be used for this purpose. This approach avoids the necessity of explicit generation and execution of mutant machines representing possible implementations of the given specification machine. It provides a numeric measure for a test suite no matter whether the test suite has complete fault coverage (i.e., 100%) or not. The experiments show [Yao94] that this method gives a good approximation of the real fault coverage of the given test suite.

3. Testing based on specifications in the form of labeled transition systems

Labeled transition systems (LTSs) are in some sense a more general specification model than FSMs, since interactions of a specified subsystem with its environment are usually considered as rendezvous interactions, making no distinction between input and output (see for instance the specification formalism CCS [Miln80] or CSP [Hoar85]). In the case of the LOTOS language, more than two processes may participate in a given rendezvous interaction. For an interaction to occur, it is necessary that all participating processes have a specified transition that leads from their present state to a next state involving the interaction in question. If such a transition is not defined for the present state, we say that the interaction is blocked for the process.

An LTS may be considered as a partially specified FSM without outputs where the interactions of the LTS correspond to the inputs of the FSM, and the meaning of an "undefined" transition is the blocking behavior, as explained above. In addition, LTSs usually allow for internal events that are not observable from the environment, denoted τ (or i, in the case of LOTOS). Nondeterminism may be introduced by internal events or by several transitions, for the same interaction, leading to different states.


Different kinds of conformance relations may be considered for LTSs. If only the traces of observed interactions are of interest (like the traces of inputs and outputs as considered for FSMs), one may consider the trace equivalence and trace inclusion relations, which correspond to the equivalence and reduction relations defined for FSMs.

In the case of nondeterministic systems, the blocking behavior cannot be deduced from the possible traces. Therefore it becomes important to consider explicitly the blocking behavior. This is usually done by considering, for each possible trace, a set of associated refusal sets; a refusal set for trace t is a set of interactions that may all be refused (i.e. blocked) by the system after executing the trace t. The refusal behavior of an implementation may be tested by a testing environment that executes, in rendezvous with the IUT, the trace t and then offers all the interactions in the refusal set. If an execution of this test leads to deadlock after the trace t, it is shown that the IUT includes the refusal set in question. Note that in general, because of the nondeterministic nature of the specification (and possibly the implementation), each test must be executed several times, as in the case of testing nondeterministic FSMs, since after a given trace t, the IUT may be in different states, each corresponding to a different refusal set. Based on this testing method, also called failure semantics, the conformance relations test equivalence, reduction, extension and "conformance" may be defined [Tret91].
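
The following sketch shows how refusal sets can be computed on the specification side for an LTS given as a transition relation; the dictionary representation and the use of "i" for the internal action are our illustrative assumptions.

def tau_closure(trans, states):
    """All states reachable from 'states' by internal ("i") moves only."""
    closure, stack = set(states), list(states)
    while stack:
        s = stack.pop()
        for s2 in trans.get((s, "i"), set()):
            if s2 not in closure:
                closure.add(s2)
                stack.append(s2)
    return closure

def states_after(trans, s0, trace):
    """All states the LTS may be in after executing the visible trace.
    trans: {(state, action): set of next states}."""
    current = tau_closure(trans, {s0})
    for a in trace:
        step = set()
        for s in current:
            step |= trans.get((s, a), set())
        current = tau_closure(trans, step)
    return current

def may_refuse(trans, s0, trace, offered):
    """True if, after 'trace', the LTS can reach a stable state (no internal
    moves) in which every offered interaction is blocked, i.e. 'offered' is a
    refusal set after that trace."""
    return any(not trans.get((s, "i")) and
               all(not trans.get((s, a)) for a in offered)
               for s in states_after(trans, s0, trace))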

Many other conformance relations may be considered (see for instance [Miln80], [Miln81], [Glab90]). Depending on the properties of a tester and the interface between the tester and the IUT, different kinds of experiments on processes can be defined, giving rise to an abundance of semantics for the relations between processes. In the failure trace semantics, for instance, a tester can not only observe deadlocks but also continue testing afterwards by offering another set of interactions, which is not allowed in the failure semantics. In several simulation-related semantics, the tester is, at any time during a run of the IUT, capable of making arbitrarily many copies of the IUT in its current state and observing them independently. This assumption is, in fact, much stronger than the reliable reset assumption in the realm of FSMs. It seems to us that test hypotheses deeply influence the choice of a conformance relation among this hierarchy of preorder relations and the associated equivalence relations. We refer to the work of R.J. van Glabbeek [Glab93], [Glab90], where 155 notions of observability have been reviewed. The relations have been mainly used for verification purposes in process algebra and concurrent systems.

To our knowledge, current research in testing implementations against a given reference LTS has been concentrated on relations based on trace and failure trace semantics. In recent years, much work on the derivation of conformance tests from a given LOTOS, or corresponding LTS specification, has been done in the protocol engineering community [Alde90], [Brin87], [Drir93], [FuBo91], [Lang89], [Ledu91], [Pitt90], [Weze90], [Brin89], [Tret91]. Most of this work has addressed the difficult problem of dealing with nondeterministic specifications.

It seems that the theories of test derivation from FSM and LTS specifications have been developed almost independently. Recently, it has been shown [PBD93] that it is possible to transfer the problem of deriving a conformance test suite for an LTS to the realm of the input/output FSM model, where the test derivation theory has been elaborated for several decades and a number of useful results have already been obtained. The idea is to define, for a given LTS specification and given conformance relation, a corresponding FSM specification such that the application of the FSM test suite (based on trace conformance) is equivalent to the verification of the given conformance relation in respect to the LTS specification. Unfortunately, the number of transitions in the corresponding FSM may be much larger than in the original LTS. Further work is required to determine whether this approach can lead to practical results.

4. Testing of extended FSM models

The FSM model is often too restrictive for defining all aspects of a protocol or any other kind of specification. Therefore an extended FSM model is often used. With this model, the specification of a system module, represented as an extended FSM, includes additional state variables (in addition to the FSM state variable) and the input and output interactions include parameters. A transition is in general characterized, in addition to the FSM characterization of present and next state and input and output interactions, by a so-called enabling predicate and a transition action. The enabling predicate must be satisfied for the transition to be possible. It depends on the present module state (FSM and additional state variables) and the effective input parameters. The action defines the output produced, including the effective output parameters, and possibly some updates of the additional state variables.
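
As an illustration of this model (a sketch with invented names, not a notation used in the paper or in any standard), a transition of an extended FSM could be represented roughly as follows:

    from dataclasses import dataclass
    from typing import Any, Callable, Dict

    @dataclass
    class Transition:
        src: str        # present FSM state
        dst: str        # next FSM state
        input: str      # input interaction name
        # (state variables, input parameters) -> transition enabled?
        predicate: Callable[[Dict[str, Any], Dict[str, Any]], bool]
        # (state variables, input parameters) -> output parameters; may update variables
        action: Callable[[Dict[str, Any], Dict[str, Any]], Dict[str, Any]]

    # Invented example: accept a 'data' input only when its sequence number is the
    # expected one; the action emits an acknowledgement and updates the variable.
    t = Transition(
        src="connected", dst="connected", input="data",
        predicate=lambda v, p: p["seq"] == v["expected"],
        action=lambda v, p: (v.update(expected=v["expected"] + 1) or {"ack": p["seq"]}),
    )

    variables = {"expected": 0}
    params = {"seq": 0}
    if t.predicate(variables, params):
        print(t.action(variables, params), variables)   # {'ack': 0} {'expected': 1}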

The notation used for the description of the transition predicates and actions is usually borrowed from some high-level programming language. Therefore, the issues of the systematic testing of an implementation based on a given extended FSM specification are closely related to the issues of software testing. If certain limiting assumptions


are made about the form of the predicates and actions, the analysis of the behavior of the specification and systematic test selection remains decidable [Higa94]. In general, however, in particular when the actions may include loops, the question of deciding which input parameters should be used in order to force the selection of a particular transition becomes undecidable, like the question of deciding the executability of a given branch of a program in software testing.

Several researchers have proposed to use dataflow analysis for the systematic selection of test cases for extended FSM specifications. The dataflow analysis involves the input and output parameters and the additional state variables. By selecting appropriate test cases involving appropriate transitions, it is possible to satisfy the testing criteria that have been developed for software testing, such as "all definition-use pairs", etc. [Frank93]. A number of methods are based on the construction of control and data flow graphs [Sari87], [Ural87], [Ural91], [Mill92], [Ural93]. By analyzing the graphs, some guidelines are provided for choosing test sequences based on traditional software testing criteria. In order to clearly separate the control and data flow aspects of a specification, the transformation into a normal form specification has been proposed [Sari87]. In such a specification, all control flow aspects are represented by the underlying FSM model, whereas the actions can be represented by straight-line code without branching statements or loops.
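
The following sketch shows what an "all definition-use pairs" analysis amounts to at the level of a single transition path; the transition names and the variable annotations are invented, and a real tool would derive the def/use sets from the predicates and actions themselves:

    def du_pairs(path):
        """path: list of (name, defs, uses) per transition.  Returns the def-use
        pairs (def_transition, use_transition, variable) such that no intermediate
        transition redefines the variable."""
        pairs = []
        for i, (d_name, d_defs, _) in enumerate(path):
            for var in d_defs:
                for j in range(i + 1, len(path)):
                    u_name, u_defs, u_uses = path[j]
                    if var in u_uses:
                        pairs.append((d_name, u_name, var))
                    if var in u_defs:        # variable redefined: the def at i is killed
                        break
        return pairs

    # Example path through three transitions of a hypothetical extended FSM.
    path = [("t1", {"counter"}, set()),
            ("t2", set(), {"counter"}),
            ("t3", {"counter"}, {"counter"})]
    print(du_pairs(path))
    # [('t1', 't2', 'counter'), ('t1', 't3', 'counter')]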

Clearly, the selection of the transition to be executed cannot be done in an arbitrary manner, since the FSM part of the specification prescribes certain rules about the possible sequencing of the state transitions. At the same time, it is desirable to cover the faults of the FSM part of the specification, using one of the test selection methods developed for pure FSM specifications discussed in Section 2. This interplay between the control part of the specification (represented by the pure FSM model) and the data part (represented by the transition predicates and actions) makes these test selection methods interesting.

The test selection methods for extended FSM models usually ignore the problem of selecting appropriate input parameter values in order to ensure that the desired transition predicates become enabled. The selection of appropriate input parameter values is also important for another reason: a dataflow fault may not be revealed for a given combination of input parameter values if the resulting output parameter values are (by chance) not affected by the dataflow fault. Therefore the method of repeating tests with different input parameter values, selecting for each parameter the extreme and some intermediate values, well known in software testing, has also been proposed for protocol testing [IS9646].
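
A minimal sketch of this parameter-variation heuristic, assuming integer parameter ranges chosen here purely for illustration, could look as follows; the same test sequence would then be repeated once for each generated combination:

    from itertools import product

    def candidate_values(low, high, intermediates=1):
        """Extreme values plus a few evenly spaced intermediate values of a range."""
        step = max(1, (high - low) // (intermediates + 1))
        return sorted({low, high, *range(low + step, high, step)})

    def parameter_combinations(param_ranges):
        names = list(param_ranges)
        value_lists = [candidate_values(*param_ranges[n]) for n in names]
        return [dict(zip(names, combo)) for combo in product(*value_lists)]

    # Example: a hypothetical 'data' interaction with a sequence number and a length.
    for combo in parameter_combinations({"seq": (0, 7), "length": (1, 128)}):
        print(combo)    # the same test sequence would be executed once per combination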

An EFSM can be viewed as a compressed notation for an FSM. It is possible to unfold it into a pure FSM by expanding the values of the parameters [Favr86], [Cast86], [Vuon89], [Faro90], [Chun90]. The parameter domains have to be reduced in order to avoid the state and input explosion problem. In this case, the FSM-based methods cover not only the control part of a specification but also its data part.
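
The sketch below illustrates the unfolding idea on an invented one-bit alternating-acknowledgement EFSM: the expanded FSM's states are pairs of a control state and a variable valuation, and its inputs are interactions with concrete parameter values drawn from a deliberately reduced domain.

    from itertools import product

    SEQ_DOMAIN = [0, 1]               # reduced parameter/variable domain

    def efsm_step(control, variables, interaction):
        """One transition of the hypothetical EFSM; returns (next control state,
        next variables, output) or None when no transition is enabled."""
        name, seq = interaction
        if control == "wait" and name == "data" and seq == variables["expected"]:
            return "wait", {"expected": 1 - variables["expected"]}, ("ack", seq)
        return None

    def unfold():
        fsm = {}                       # (expanded state, concrete input) -> (next state, output)
        for control, expected, seq in product(["wait"], SEQ_DOMAIN, SEQ_DOMAIN):
            state = (control, expected)
            result = efsm_step(control, {"expected": expected}, ("data", seq))
            if result is not None:
                nxt_control, nxt_vars, output = result
                fsm[(state, ("data", seq))] = ((nxt_control, nxt_vars["expected"]), output)
        return fsm

    for key, val in unfold().items():
        print(key, "->", val)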

The mutant-killing technique has also been applied to derive test sequences from an EFSM specification of a protocol. A limited number of mutant EFSMs are first constructed based on the given EFSM, and then, by comparing the behavior of the two machines, a sequence which can distinguish them is found, see for instance [Guo91], [Wang93]. The BDD technology [Brya86] and implicit state enumeration methods can be used to save space when constructing distinguishing sequences, see, e.g., [Cho91].
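
The following sketch (an invented Mealy machine, not an example from the cited studies) shows the core of the mutant-killing approach: mutate one transition of a deterministic FSM and search breadth-first over the product of the two machines for a shortest input sequence on which their outputs differ.

    from collections import deque

    INPUTS = ["a", "b"]

    spec = {("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 0),
            ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0)}

    # Output-fault mutant: the output of transition ("s1", "a") is changed from 1 to 0.
    mutant = dict(spec)
    mutant[("s1", "a")] = ("s0", 0)

    def distinguishing_sequence(m1, m2, start="s0"):
        seen = {(start, start)}
        queue = deque([((start, start), ())])
        while queue:
            (p, q), seq = queue.popleft()
            for x in INPUTS:
                (p2, o1), (q2, o2) = m1[(p, x)], m2[(q, x)]
                if o1 != o2:
                    return seq + (x,)              # outputs differ: mutant killed
                if (p2, q2) not in seen:
                    seen.add((p2, q2))
                    queue.append(((p2, q2), seq + (x,)))
        return None                                # mutant is equivalent to the spec

    print(distinguishing_sequence(spec, mutant))   # ('a', 'a')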

A similar problem is encountered in the area of hardware design verification. To verify a design, one has to compare two EFSMs and try to distinguish them: one represents the intended design (the specification), the other the actual design.

The weak point of this technique (and possibly of all the EFSM-based methods) is the exhaustive search for suitable paths. Fortunately, most protocols do not contain complicated computations and reveal the contents of their variables within a few transitions; moreover, they tend to have simple predicates.

5. Test architectures and testability

A typical protocol entity has two service access points (SAPs) to communicate with the adjacent protocol layers, as shown in Figure 1(b). Their existence is a distinctive feature of communication software, and it gives rise to a variety of architectures for protocol testing. Several system architectures for conformance testing have been identified in the context of OSI standardization [Rayn87]. These architectures can also be used for implementation assessment. The international standard [IS9646] distinguishes four basic types of test architecture: local, distributed, coordinated and remote. The local test architecture corresponds to traditional software testing; here both SAPs of the implementation under test (IUT) are accessed by a single tester. In this architecture, the SAPs can be viewed as parts of an interface between the tester and the protocol implementation, since the IUT and the tester reside within the same computer system. The other (external) architectures assume that the lower SAP is accessed


via an underlying communication service by the so-called "lower tester", which is implemented in a separate test system computer. The so-called distributed and coordinated test methods require, along with the lower tester, an "upper tester" which has to be interfaced with the IUT at its upper SAP. Figure 2 shows the distributed architecture, where the IUT and a test user, called the "upper tester", reside in a computer that is connected through a network to a remote test system computer.

The difference between the two methods is that the coordinated test architecture relies on a special test coordination protocol which supports coordination or synchronization of the actions of both testers. A separate communication channel is usually introduced, sometimes in the form of a terminal connection to the remote test system, which allows the operator at the system under test to coordinate the activities of the test system with the operation of the system under test. Various test coordination protocols, sometimes using a separate channel, have also been developed for automatically coordinating the actions of the upper and lower testers.

Figure 2: Distributed test method.

The remote test architecture excludes the upper SAP from the testing process, since in a number of practical situations the IUT is embedded into a complex system in such a way that an upper tester simply cannot be included in the system. It is important to note that complete testing of a protocol implementation implies the observation of the interactions at the upper and lower interfaces. Nevertheless, in many cases the remote test architecture is used, which does not have a special upper tester and therefore does not check the communication service provided to the user. The remote and distributed test architectures have the advantage that the test system resides in a separate computer and can be accessed over distance from a variety of systems under test. In the context of OSI, certain test centers provide public conformance testing services which are accessed over public data networks.

The distributed test architecture presents some difficulty for test execution concerning the synchronization between the upper and lower testers, since they reside in different computers and communicate only indirectly through the IUT. Since synchronization becomes an important issue in the cases when the upper and lower testers do not reside in the same computer, several FSM-based methods [Sari84], [Boyd91], [Luo93] attempt to produce so-called synchronizable test suites which do not need external coordination of the two testers, at the penalty of an increased length and worse fault coverage. Some protocols are intrinsically nonsynchronizable; hence the distributed test architecture without coordination between testers cannot guarantee complete fault coverage with any given test suite.
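
A simple check of a synchronizability condition in the spirit of [Sari84] is sketched below: a tester may supply the next input only if it took part in the previous test step, i.e. it sent that step's input or received its output. The representation of a test step by a tuple of (input tester, input, output tester, output) is our own simplified notation, invented for the example.

    def is_synchronizable(sequence):
        for prev, curr in zip(sequence, sequence[1:]):
            prev_in_tester, _, prev_out_tester, _ = prev
            curr_in_tester = curr[0]
            if curr_in_tester not in (prev_in_tester, prev_out_tester):
                return False        # the next tester cannot know it is its turn
        return True

    # 'U' = upper tester, 'L' = lower tester (hypothetical test steps).
    ok  = [("L", "connect_req", "U", "connect_ind"),
           ("U", "connect_resp", "L", "connect_conf")]
    bad = [("L", "data_req", "L", "ack"),
           ("U", "data_req", "L", "ack")]    # U never observed the previous step
    print(is_synchronizable(ok), is_synchronizable(bad))   # True False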

Each of the external test methods comes in three variant forms: single-layer, multi-layer or embedded. Single-layer methods are designed for testing a single-layer protocol implementation without reference to the layers above it, but assuming that the protocol below has already been tested. Multi-layer methods are designed for testing a component that implements several protocol layers as a whole. Embedded test methods tune tests to a single protocol, assuming that the protocols above and below this layer have already been tested. The variety of test architectures is suitable for different testing situations. It is obvious that these architectures have different impacts on testability, i.e. the controllability and observability of the IUT.

As may be expected, the testability of a protocol implementation usually deteriorates when it is being tested through an environment, e.g. other protocol layers. The use of additional points of control and observation (PCOs) could compensate for such a deterioration; moreover, these PCOs could significantly decrease the length of the test suite required for an isolated protocol implementation. PCOs for testability are one of the techniques considered in the design for testability of communication protocols, which is an active research area [Dsso90], [Dsso91], [Vuon93]. However, third-party (conformance) testing usually does not rely on such special features, since most existing protocols have been specified and implemented without conformance testing in mind.

External and embedded test architectures entail serious testing problems which are best formalized in the context of grey-box testing. As opposed to the black-box approaches, grey-box testing attempts to profit from internal knowledge of the system accessed by the testers. The major issue is that a model, e.g. an LTS or an FSM, used to represent the behavior of a given protocol entity alone, does not characterize its behavior in such a test architecture. Some actions are no longer directly controlled or observed, and certain faults are not easy to activate by and/or to propagate to a tester, i.e. they may even go undetected. At the same time, testability in this case has its limits in the sense that there are implementations that would not conform to the


isolated protocol specification, but, when considered in the given architecture, become conforming. This problem has been formally treated in terms of the FSM model in [Petr93], [Petr94] and the LTS model in [Drir93]. As shown in [Petr94], the behavior of a component under test, even if it is deterministic and completely specified, as far as it can be externally controlled and observed, may be adequately represented by a partially specified and nondeterministic FSM within a particular context. Tests derived from such a testable representation ensure that all faults detectable within the given test architecture are revealed. Similar considerations are reported in the area of hardware, see for instance [Wata93].

6. The practice of protocol conformance testing

Testing of distributed systems is required at several levels, such as in-house validation during the design and implementation process, protocol conformance and implementation assessment testing, inter-operability testing between different implementations of the same protocol, and quality of service testing concerning specific performance questions. For conformance testing of standardized protocols, standardized conformance test suites have been developed which are intended to be applied in a standardized test architecture and environment. Most of these test suites have been developed using ad hoc techniques inspired by ideas of the existing formal methods.

Based on standardized test suites, conformance testing services are offered by a number of protocol test centers throughout the world. According to data provided by the Open Systems Testing Consortium (OSTC), the test campaign for a protocol implementation may last from a few days to several weeks, depending on the complexity of the protocol. For example, 7 days are required to perform conformance testing of an OSI Session protocol implementation, according to the OSTC. Unfortunately, not much data is currently available on the statistics of real faults in tested protocol implementations. One of the possible reasons for this is that protocol implementors are usually reluctant to disclose such information.

Protocol test derivation and execution requires specific support tools. Due to space limitations, we cannot discuss these issues here; the interested reader is referred to a recent review [Chan93]. According to some data, the cost of test suite production can be reduced by at least 40% by using such tools. Note that tools for automated test derivation quite often constitute a part of a protocol development environment based on one of the FDTs [Prob89], [Chan93].

Software tools based on formal methods for test derivation have been used in a number of protocol case studies; we mention only a few. The Wp-method [Fuji91], for example, has been applied in [Liu93] to derive test sequences for a control subset of the XTP protocol. This subset is modeled by an FSM with 10 states. Altogether, 458 test cases were generated, each of length shorter than 10 inputs. Another case study [Shev91] dealt with the OSI Session protocol, which can be represented by a state table with 29 control states, 28 context variables (9 of them integers and the others Boolean), and 71 predicates. Based on this state table, a number of FSMs for different functional blocks of the protocol were constructed by unfolding and decomposing the table. The domains of the input parameters and integers were first reduced in such a way that all the predicates can be executed at least once. Finally, the method of [Petr91] was applied to each of these FSMs, and a test suite of approximately 11700 test cases was generated. The total number of input test events is about 67000. The test suite provides complete fault coverage in terms of the constructed FSM models. The ISDN protocol LAPD was used in a case study at AT&T [Sher90]. An FSM model with 9 states was constructed for the protocol, and a test suite of 1500 input test events was formally generated by applying the UIO-method [Aho88]. Later on, a document containing the standardized test suite for ISDN LAPD became available, which consists of more than 1200 pages containing tables of test cases specified in TTCN. These applications and others [Sidh89] have demonstrated that formal methods for conformance test generation are quite effective for rather complex communication protocols. However, the data portion of protocols remains the most difficult problem for the existing formal methods.

7. Conclusions

A protocol entity is a reactive system; therefore the temporal ordering of the interactions, with the user and with the remote peer entity, is an important aspect of each protocol specification. For this reason, FSM models are often used for protocol specifications. However, in order to capture the real requirements of a protocol specification, the model of a partially specified, partly nondeterministic FSM must be used. In addition to this aspect, there are usually other aspects which are more easily specified in terms of data parameters and operations; in the specification of protocol standards, these aspects are usually described in English; however, they may be formalized using an extended FSM formalism. Several so-called formal description techniques (FDTs) have been


developed for the specification of communication protocols and services (SDL, LOTOS, and Estelle); they include facilities for the control and data aspects of system specifications. These specification requirements are similar to those of other kinds of reactive systems.

Another specification aspect specific to communication protocols is the coding of the protocol data units which are exchanged between the peer entities. Usually, ad hoc specification methods are used for this purpose. However, in the case of most application layer protocols, generic rules for coding data structures are used; some rules of this type are defined by the ASN.1 notation used for OSI application layer protocols.

Protocol conformance testing uses a black-box approach based on the protocol specification. Most methods for the development of a test suite from a given specification are based on a fault model of possible erroneous implementations that should be detected by the test suite. Guarantees for the coverage of the test suite are therefore dependent on the assumptions of the fault model. These fault models depend on the nature of the specification. Corresponding to the different aspects of a protocol specification, different fault models relate to the control part (FSM), the data part (software-related fault models) and the coding (specific fault models depending on the coding scheme; not much work has been done in this area). The authors do not know of any systematic study showing how realistic these fault models are.

The methods for test suite development are not only specific to the fault model and specification paradigm used for the protocol specification, but depend also on the conformance relation that should hold between the implementation under test and the specification. In the case of partial and nondeterministic specifications, different conformance relations may be considered. The test methods based on deterministic, completely specified FSM models are the best investigated; however, most protocol specifications do not satisfy the underlying assumptions. Methods for partial and nondeterministic systems have been developed recently, including work on testing based on LTS specifications (e.g. written in LOTOS), which are based on rendezvous interactions.

For testing based on extended FSM models, two approaches can be adopted: (1) unfolding the specification into the form of a pure FSM model (with the danger of state explosion), or (2) separating the control and data aspects of the specification, and using FSM and software testing methods for these two aspects, respectively.


The complexity of real protocol specifications is such that the systematic test development methods described in Sections 2 through 4 could provide good benefit. Containing typically between 10 and 20 control states and a few data variables, specifications of this complexity can be handled by the automated tools, but are usually too big to be processed "by hand". The lengths of the obtained test suites are usually much shorter than those predicted by the "upper bound" formulas, which make worst-case assumptions on the specification.

It seems that the methods developed for protocol testing have a much larger applicability. In the area of software testing, application domains such as reactive systems and control systems, possibly designed in an object-oriented framework, could make use of these methods. They can also be applied to the functional testing of sequential circuits.

Areas for future research include design for testability and test architectures; systematic test suite development for nondeterministic and partial specifications; the integration of the FSM fault model for control flow and the software fault models suitable for the extensions of the FSM model within the context of extended FSM models, including test derivation based on specifications written in languages such as SDL, Estelle or LOTOS; test development in the case of object-oriented specialization hierarchies; and testing of real-time aspects.

It is difficult to describe in a short paper all directions of research related to protocol testing. Further details can be found in the proceedings of IFIP-sponsored conference series, such as the International Symposia on Protocol Specification, Testing and Verification (PSTV), the International Workshops on Protocol Test Systems (IWPTS), and the International Conferences on Formal Description Techniques for Distributed Systems and Communication Protocols (FORTE).

Acknowledgments: This research was supported by the IDACOM-NSERC-CWARC Industrial Research Chair on Communication Protocols at the University of Montreal. The authors wish to


thank Tom Ostrand for a list of questions which stimulated the preparation of this article.

References

[Aho88] A. V. Aho et al., "An Optimization Technique for Protocol Conformance Test Generation based on UIO Sequences and Rural Chinese Postman Tours", PSTV'88.
[Alde90] R. Alderden, "COOPER, the Compositional Construction of a Canonical Tester", FORTE'90.
[Beli89] F. Belina and D. Hogrefe, "The CCITT Specification and Description Language SDL", Computer Networks and ISDN Systems, Vol. 16, 1989.
[Boch90] G. v. Bochmann, "Protocol Specification for OSI", Computer Networks and ISDN Systems, Vol. 18, April 1990.
[Boch91] G. v. Bochmann et al., "Fault Models in Testing", IWPTS'91.
[Boch93] G. v. Bochmann, "Protocol Engineering", contribution to Concise Encyclopedia of Software Engineering, Derrick Morris and Boris Tamm (eds.), Pergamon Press, 1993.
[Bolo87] T. Bolognesi and E. Brinksma, "Introduction to the ISO Specification Language LOTOS", Computer Networks and ISDN Systems, Vol. 14, No. 1, 1987.
[Boyd91] S. Boyd and H. Ural, "The Synchronization Problem in Protocol Testing and its Complexity", Information Processing Letters, Vol. 40, No. 8, 1991.
[Brin87] E. Brinksma, "On the Existence of Canonical Testers", Memorandum INF-87-5, University of Twente, 1987.
[Brin89] E. Brinksma et al., "A Formal Approach to Conformance Testing", IWPTS'89.
[Brin91] E. Brinksma et al., "A Framework for Test Selection", IWPTS'91.
[Brya86] R. E. Bryant, "Graph-based Algorithms for Boolean Function Manipulation", IEEE Trans., Vol. C-35, No. 12, 1986.
[Dsso90] R. Dssouli, R. Fournier and G. v. Bochmann, "Conformance Testing and FIFO Queues", FORTE'90.
[Dsso91] R. Dssouli and R. Fournier, "Communication Software Testability", IWPTS'91.
[Budk87] S. Budkowski and P. Dembinski, "An Introduction to Estelle: a Specification Language for Distributed Systems", Computer Networks and ISDN Systems, Vol. 14, No. 1, 1987.
[Cast86] R. Castanet et al., "Methods and Semi-automatic Tools for Preparing Distributed Testing", PSTV'86.
[Chan93] S. T. Chanson, A. A. F. Loureiro and S. T. Vuong, "On Tools Supporting the Use of Formal Description Techniques in Protocol Development", Computer Networks and ISDN Systems, Vol. 25, 1993.
[Chen93] K.-T. Cheng and A. S. Krishnakumar, "Automatic Functional Test Generation using the Extended Finite State Machine Model", Proc. of DAC'93.
[Cho91] H. Cho, G. Hachtel and F. Somenzi, "Fast Sequential ATPG Based on Implicit State Enumeration", Intl. Test Conference, 1991.
[Chow78] T. S. Chow, "Test Design Modeled by Finite-State Machines", IEEE Trans., SE-4, No. 3, 1978.
[Chun90] W. Chun and P. Amer, "Test Case Generation for Protocols Specified in ESTELLE", FORTE'90.
[Dahb88] A. Dahbura and K. Sabnani, "Experience in Estimating Fault Coverage of a Protocol Test", INFOCOM'88.
[Drir93] K. Drira et al., "Testability of a Communicating System through an Environment", LNCS 668, 1993.
[Dubu91] M. Dubuc, R. Dssouli and G. v. Bochmann, "TESTL: A Tool for Incremental Test Suite Design Based on Finite State Model", IWPTS'91.
[Faro90] A. Faro and A. Petrenko, "Sequence Generation from EFSMs for Protocol Testing", COMNET'90.
[Favr86] J.-P. Favreau and R. J. Linn, "Automatic Generation of Test Scenario Skeletons from Protocol Specifications Written in ESTELLE", PSTV'86.
[Frank93] P. G. Frankl and S. N. Weiss, "An Experimental Comparison of the Effectiveness of Branch Testing and Data Flow Testing", IEEE Trans., Vol. SE-19, No. 8, 1993.
[FuBo91] S. Fujiwara and G. v. Bochmann, "Testing Non-deterministic State Machines with Fault Coverage", IWPTS'91.
[Glab90] R. J. v. Glabbeek, "The Linear Time-Branching Time Spectrum", LNCS 458, 1990.
[Guo91] F. Guo and R. Probert, "E-MPT Protocol Testing: Preliminary Experimental Results", PSTV'91.
[Fuji91] S. Fujiwara, G. v. Bochmann, F. Khendek, M. Amalou and A. Ghedamsi, "Test Selection Based on Finite State Models", IEEE Trans., Vol. SE-17, No. 6, 1991.
[Glab93] R. J. v. Glabbeek, "The Linear Time-Branching Time Spectrum II", LNCS 715, 1993.
[Henn64] F. C. Hennie, "Fault Detecting Experiments for Sequential Circuits", IEEE 5th Ann. Symp. on Switching Circuits Theory and Logical Design, 1964.
[Higa94] T. Higashino and G. v. Bochmann, "Automatic Analysis and Test Derivation for a Restricted Class of LOTOS Expressions with Data Parameters", IEEE Trans., Vol. SE-20, No. 1, 1994.
[Hoar85] C. A. R. Hoare, Communicating Sequential Processes, Prentice Hall, 1985.
[Hoff93] D. Hoffman and P. Strooper, "A Case Study in Class Testing", CASCON'93.
[IS9646] OSI Conformance Testing Methodology and Framework.
[IS7498] Reference Model for OSI.
[Lang89] R. Langerak, "A Testing Theory for LOTOS Using Deadlock Detection", PSTV'89.
[Ledu91] G. Leduc, "Conformance Relation, Associated Equivalence, and New Canonical Tester in LOTOS", PSTV'91.
[Liu93] F. Liu, "Test Generation Based on the FSM Model with Timers and Counters", M.Sc. Thesis, Université de Montréal, DIRO, 1993.
[Luo93] G. Luo, R. Dssouli, G. v. Bochmann, P. Venkataram and A. Ghedamsi, "Generating Synchronizable Test Sequences based on Finite State Machines with Distributed Ports", IWPTS'93.
[Luo94] G. Luo, A. Petrenko and G. v. Bochmann, "Test Selection based on Communicating Nondeterministic Finite State Machines using a Generalized Wp-Method", IEEE Trans., Vol. SE-20, No. 2, 1994.
[Miln80] R. Milner, A Calculus of Communicating Systems, LNCS 92, 1980.
[Mill92] R. E. Miller and S. Paul, "Generating Conformance Test Sequences for Combined Control and Data Flow of Communication Protocols", PSTV'92.
[Moor56] E. F. Moore, "Gedanken-Experiments on Sequential Machines", Automata Studies, Princeton University Press, Princeton, New Jersey, 1956.
[Mott93] H. Motteler, A. Chung and D. Sidhu, "Fault Coverage of UIO-based Methods for Protocol Testing", IWPTS'93.
[Neuf92] G. Neufeld and S. Vuong, "An Overview of ASN.1", Computer Networks and ISDN Systems, Vol. 23, 1992.
[Petr91] A. Petrenko, "Checking Experiments with Protocol Machines", IWPTS'91.
[Petr92] A. Petrenko and N. Yevtushenko, "Test Suite Generation for a FSM with a Given Type of Implementation Errors", PSTV'92.
[PBD93] A. Petrenko, G. v. Bochmann and R. Dssouli, "Conformance Relations and Test Derivation", IWPTS'93.
[Petr93] A. Petrenko, N. Yevtushenko, A. Lebedev and A. Das, "Nondeterministic State Machines in Protocol Conformance Testing", IWPTS'93.
[Petr94] A. Petrenko, N. Yevtushenko and R. Dssouli, "Grey-Box FSM-based Testing Strategies", Department Publication 911, Université de Montréal, 1994, 22 p.
[Prob89] R. L. Probert, H. Ural and M. W. A. Hornbeek, "A Comprehensive Software Environment for Developing Standardized Conformance Test Suites", Computer Networks and ISDN Systems, Vol. 18, 1989/1990.
[Rayn87] D. Rayner, "OSI Conformance Testing", Computer Networks and ISDN Systems, Vol. 14, 1987.
[Sari84] B. Sarikaya and G. v. Bochmann, "Synchronization and Specification Issues in Protocol Testing", IEEE Trans., Vol. COM-32, No. 4, 1984.
[Sari87] B. Sarikaya, G. v. Bochmann and E. Cerny, "A Test Design Methodology for Protocol Testing", IEEE Trans., Vol. SE-13, No. 5, 1987.
[Sari92] B. Sarikaya and A. Wiles, "Standard Conformance Test Specification Language TTCN", Computer Standards & Interfaces, Vol. 14, No. 2, 1992.
[Sher90] M. H. Sherif and M. U. Uyar, "Protocol Modeling for Conformance Testing: Case Study for the ISDN LAPD Protocol", AT&T Technical Journal, January 1990.
[Shev91] V. Shevelkov, "Development of Methods and Tools for Session Interconnection Provision in Open Systems Networks", Ph.D. Thesis, Riga, 1991.
[Sidh89] D. P. Sidhu and T. K. Leung, "Formal Methods for Protocol Testing: A Detailed Study", IEEE Trans., SE-15, No. 4, 1989.
[Star72] P. H. Starke, Abstract Automata, North-Holland/American Elsevier, 1972, 419 p.
[Tret91] J. Tretmans, P. Kars and E. Brinksma, "Protocol Conformance Testing: A Formal Perspective on ISO IS-9646", IWPTS'91.
[Turn92] C. D. Turner and D. J. Robson, "The Testing of Object-oriented Programs", TR-13/92, Univ. of Durham, 1992.
[Ural87] H. Ural, "Test Sequence Selection based on Static Data Flow Analysis", Computer Communications, Vol. 10, No. 5, 1987.
[Ural91] H. Ural, "Formal Methods for Test Sequence Generation", Computer Communications, Vol. 15, No. 5, 1992.
[Ural93] H. Ural and A. Williams, "Test Generation by Exposing Control and Data Dependencies within System Specifications in SDL", FORTE'93.
[Vasi73] M. P. Vasilevskii, "Failure Diagnosis of Automata", Cybernetics, Plenum Publ. Corporation, N.Y., No. 4, 1973.
[Vuon89] S. T. Vuong, W. L. Chan and M. R. Ito, "The UIOv-method for Protocol Test Sequence Generation", IWPTS'89.
[Vuon93] S. Vuong, A. A. F. Loureiro and S. T. Chanson, "A Framework for the Design for Testability of Communication Protocols", IWPTS'93.
[Wang93] C.-J. Wang and M. T. Liu, "Generating Test Cases for EFSM with Given Fault Model", INFOCOM'93.
[Wata93] Y. Watanabe and R. Brayton, "The Maximum Set of Permissible Behavior for FSM Networks", ICCAD'93.
[Weze90] C. D. Wezeman, "The CO-OP Method for Compositional Derivation of Conformance Testers", PSTV'90.
[Yann91] M. Yannakakis, "Testing Finite State Machines", Proc. of the 23rd Annual ACM Symposium on Theory of Computing, 1991.
[Yao94] M. Yao, A. Petrenko and G. v. Bochmann, "A Structural Analysis Approach to Evaluating Fault Coverage of Software Testing in Respect to the FSM Model", Dep. Publ. #920, Université de Montréal, 1994.
[Yevt90a] N. Yevtushenko and A. Petrenko, "Synthesis of Test Experiments in Some Classes of Automata", Automatic Control and Computer Sciences, Allerton Press, Inc., N.Y., Vol. 24, No. 4, 1990.
[Yevt90b] N. Yevtushenko and A. Petrenko, "Method of Constructing a Test Experiment for an Arbitrary Deterministic Automaton", Automatic Control and Computer Sciences, Allerton Press, Inc., N.Y., Vol. 24, No. 5, 1990.
[Yevt91] N. Yevtushenko, A. Lebedev and A. Petrenko, "On the Checking Experiments with Nondeterministic Automata", Automatic Control and Computer Sciences, Allerton Press, Inc., N.Y., Vol. 25, No. 6, 1991.
[YPB93] M. Yao, A. Petrenko and G. v. Bochmann, "Conformance Testing of Protocol Machines without Reset", PSTV'93.
[YPB94] M. Yao, A. Petrenko and G. v. Bochmann, "Fault Coverage Analysis in Respect to an FSM Specification", INFOCOM'94.
