FORMAL ANALYSIS OF HETEROGENEOUS EMBEDDED
SYSTEMS USING TAGGED SIGNAL MODELS
Soumyajit Dey
FORMAL ANALYSIS OF HETEROGENEOUS EMBEDDED
SYSTEMS USING TAGGED SIGNAL MODELS
Thesis submitted to the
Indian Institute of Technology, Kharagpur
for award of the degree
of
Doctor of Philosophy
by
Soumyajit Dey
Under the guidance of
Prof. Anupam Basu and
Prof. Dipankar Sarkar
Department of Computer Science and Engineering
INDIAN INSTITUTE OF TECHNOLOGY KHARAGPUR
August 2011
© 2011 Soumyajit Dey. All Rights Reserved.
APPROVAL OF THE VIVA-VOCE BOARD
Certified that the thesis entitled "Formal Analysis of Heterogeneous Embedded Systems using Tagged Signal Model" submitted by Soumyajit Dey to the Indian Institute of Technology, Kharagpur, for the award of the degree of Doctor of Philosophy has been accepted by the external examiners and that the student has successfully defended the thesis in the viva-voce examination held today.
(Member of the DSC) (Member of the DSC)
(Member of the DSC) (Member of the DSC)
(Supervisor) (Supervisor)
(External Examiner) (Chairman)
Date:
CERTIFICATE
This is to certify that the thesis entitled "Formal Analysis of Heterogeneous Embedded Systems using Tagged Signal Model", submitted by Soumyajit Dey to the Indian Institute of Technology, Kharagpur, in partial fulfillment of the requirements for the award of the degree of Doctor of Philosophy, is a record of bona fide research work carried out by him under our supervision and guidance.

The thesis, in our opinion, is worthy of consideration for the award of the degree of Doctor of Philosophy in accordance with the regulations of the Institute. To the best of our knowledge, the results embodied in this thesis have not been submitted to any other University or Institute for the award of any other Degree or Diploma.
Anupam Basu
Professor
CSE, IIT Kharagpur

Dipankar Sarkar
Professor
CSE, IIT Kharagpur
Date:
DECLARATION
I certify that
(a) The work contained in the thesis is original and has been done by myself under the general supervision of my supervisors.
(b) The work has not been submitted to any other Institute for any degree or diploma.
(c) I have followed the guidelines provided by the Institute in writing the thesis.
(d) I have conformed to the norms and guidelines given in the Ethical Code of Conduct of the Institute.
(e) Whenever I have used materials (data, theoretical analysis, and text) from other sources, I have given due credit to them by citing them in the text of the thesis and giving their details in the references.
(f) Whenever I have quoted written materials from other sources, I have put them under quotation marks and given due credit to the sources by citing them and giving the required details in the references.
Soumyajit Dey
CURRICULUM VITAE
Name : Soumyajit Dey
Education :
• Indian Institute of Technology, Kharagpur, India (July 2007 - Present)
  Degree: PhD student in Computer Science and Engineering.
  At: Department of Computer Science and Engineering

• Indian Institute of Technology, Kharagpur, India (August 2004 - November 2006)
  Degree: M.S. in Computer Science and Engineering.
  At: Department of Computer Science and Engineering
  CGPA: 9.79/10

• Jadavpur University, Kolkata, India (August 2000 - May 2004)
  Degree: B.E. in Electronics and Telecommunications Engineering
  Percentage: 85.3
Disseminations out of this work :
1. Soumyajit Dey, Dipankar Sarkar, Anupam Basu; "A Tag Machine based Performance Evaluation Method for Job-Shop Schedules"; IEEE Transactions on CAD, vol 29(7), pp 1028-1041, 2010.
2. Soumyajit Dey, Dipankar Sarkar, Anupam Basu; "A Kleene Algebra of Tagged System Actors"; IEEE Embedded Systems Letters, vol 3(1), pp 28-31, 2011.
3. Soumyajit Dey, Dipankar Sarkar, Anupam Basu; "A Kleene Algebra of Tagged System Actors for Reasoning about Heterogeneous Embedded Systems"; IEEE Transactions on Computers (under major revision).
ACKNOWLEDGMENTS
A PhD thesis is not necessarily a document written with great enthusiasm and enjoyment throughout. Once the supervisors ask you to start writing the thesis, a feeling of the type "yes, I have done something" comes to mind, but gradually that diminishes with the never-ending number of corrective iterations. However, the rigor involved in the process helps one come out as a more confident individual, one who will probably be able to think through problems with reasoning and prudence. Such qualities are likely to be imbibed under the influence of the goodness emanating from interactions with the people one meets in this long journey.
Enumerating such people is a tough job; first and foremost I must mention my parents, without whose constant support and good faith I would never have been able to pursue a PhD. I have been very fortunate to have Prof. Anupam Basu and Prof. Dipankar Sarkar as my PhD advisors. Their constant guidance, technical acumen and encouragement have virtually carried me through all the tough times of problem identification, problem solving and thesis preparation. However, the hallmark of my IIT life as a research scholar has been the association with some great minds and nice human beings as peers. It is impossible to exhaustively list such people in this limited space, but I must specially mention personalities like Tirthankar, Renis, Plaban, Sumit, Soumyadip, Sandipan, Prosenjit and Chandan. These people (and also my mess-mates, both old and new) have regularly broken the monotony of intensive work hours with warmth, humour, co-operation, partying and sometimes inspiring levels of stupidity simply for the sake of it!
Soumyajit Dey
Contents
List of Figures xix
List of Symbols xxi
List of Tables xxiii
Abstract xxv
1 Introduction 1
  1.1 Motivation 2
  1.2 Objectives 6
  1.3 Contributions 12
    1.3.1 Performance Evaluation of Tagged Systems 12
    1.3.2 Translation from Other MoCs to Tag Machines 13
    1.3.3 A Kleene Algebraic Formulation of TSM Actors 14
    1.3.4 Algebraic Verification of Actor Networks 14
  1.4 Conclusion 15
  1.5 Organization of the Thesis 15

2 Tagged Signal Model - Operational Semantics 19
  2.1 Introduction 19
  2.2 An Intuitive Insight 20
  2.3 Entities of a Tagged Signal Model 22
  2.4 Conclusion 34

3 Asymptotic Performance Analysis of Embedded Systems : Literature Survey 35
  3.1 Introduction 35
  3.2 The Problem of Job-shop Scheduling 36
  3.3 Petri Net based Performance Evaluation of Job-Shops 39
  3.4 Heap based Method for Performance Evaluation 43
  3.5 Performance Evaluation of SDFGs 44
  3.6 Conclusion 47

4 Tag Machine based Performance Analysis of Job-shop Schedules 49
  4.1 Introduction 49
  4.2 Modeling Job-shop Schedules using Tag Machines 51
    4.2.1 Composition of Tag Machines 53
    4.2.2 Job-Shop Schedules as Tag Vectors 55
  4.3 Inadequacy of known tag structures 58
    4.3.1 Inadequacy of the Algebraic Tag Structure Tdep 59
  4.4 Proposing New Tag Structures 62
  4.5 Performance Evaluation of Job-shops using Tag Machines 68
    4.5.1 Tag Machine based Performance Evaluation of Job-shops 68
    4.5.2 A Relative Comparison of the Approaches 73
  4.6 Performance Evaluation of Synchronous Dataflow Graphs (SDFG) 75
  4.7 Modeling Heterogeneous Composition 78
  4.8 Conclusion 81

5 Translation of Timed Automata to Tag Machines 83
  5.1 Introduction 83
  5.2 Theory of Timed Automata 84
  5.3 Zone Automata Construction 87
  5.4 Characterizing Zone Automaton as Tag Machine 89
    5.4.1 Characterizing Locations as Variables 89
    5.4.2 Characterizing Transitions as Tag Pieces 91
    5.4.3 Characterizing the Transition Relation 97
  5.5 Correctness of the Translation 98
  5.6 Conclusion 99

6 Reasoning about Embedded Systems : Literature Survey 101
  6.1 Introduction 101
  6.2 Model Checking based Methods 103
  6.3 Equivalence Checking for Behavioural Verification 104
  6.4 Deductive Verification Techniques 107
  6.5 Conclusion 109

7 TSM Actors and their Kleene Algebra 111
  7.1 Introduction 111
  7.2 Some Important Definitions 112
  7.3 The Order Structure of Tagged Signals 113
    7.3.1 Denotational and Operational Semantics of TSM 115
  7.4 Actors in Tagged Signal Models 116
    7.4.1 Sequential Composition of Actors 120
    7.4.2 Concurrent Composition of Actors 124
    7.4.3 The Star Operation on Actors 126
  7.5 A Kleene Semantics for TSM actors 129
    7.5.1 Kleene Algebra : some derived rules 134
  7.6 Conclusion 135

8 Equivalence Checking of Actor Networks 137
  8.1 Introduction 137
  8.2 Kleene Algebra with Test 138
  8.3 Equivalence of Actor Networks 140
  8.4 Equivalence of Synchronous Reactive Systems 141
    8.4.1 The Reflex Game 143
    8.4.2 TSM Actors for the Reflex Game 144
  8.5 Conclusion 149

9 Property Verification of Actor Networks 151
  9.1 Introduction 151
  9.2 Discrete and Continuous TSM actors 151
    9.2.1 Discrete TSM actors 152
    9.2.2 Continuous TSM actors 153
  9.3 The European Train Control System 155
    9.3.1 Overview of ETCS protocol 155
    9.3.2 A Formal Model for ETCS 157
  9.4 A TSM Actor Model for ETCS 159
    9.4.1 The Notion of Controllability 163
    9.4.2 KAT based Encoding of Safety properties 164
    9.4.3 A Refined TSM Model for ETCS 177
  9.5 Formal Property Verification using Kleene Algebra 179
  9.6 Conclusion 183

10 Conclusion 185
  10.1 Summarizing the contributions 186
  10.2 Future scope 187

Bibliography 191
List of Figures
1.1 An overview of the formal analysis framework. 12

2.1 A typical concurrent behaviour. 21
2.2 The corresponding tagged system. 21
2.3 Pictorial representation of tag piece µ5. 22
2.4 A case based analysis of tag vector computation. 28
2.5 A Tag Machine. 33

3.1 The example job-shop problem shown as a Petri net. 40
3.2 The event graph for an infinite schedule (v′)ω with periodic occurrences of v′. 40
3.3 A sample SDFG G; the numbers at the ports indicate the production (consumption) rate. 45
3.4 The self-timed execution of G (with tail σ, say). 46

4.1 Tag machines for J1 and J2. 52
4.2 Unification of event tag pieces. 54
4.3 Composite tag machine for the tag machines in Fig. 4.1. 55
4.4 Heaps of tag pieces for the schedules v and v′ for the job-shop J in Eq. 4.2. 59
4.5 Set of direct dependencies for schedules in [ν] and [ν′]. The dependency pairs are connected by the directed edges. 62
4.6 A sample behaviour modeled using Timm. 63
4.7 Incoming and outgoing dependency. 66
4.8 A sample SDFG G. 75
4.9 The self-timed execution (σ, say) of G. 75
4.10 Tag pieces for actor firings (a↑, c↓) followed by (a↓, b↓). 77
4.11 The heap of tag pieces for the periodic execution. 77
4.12 A heterogeneous scenario: Data Flow within Discrete Event MoC. 79
4.13 Composite tag pieces for a single execution cycle. 80

5.1 An example timed automaton (Alur and Dill, 1994). 85
5.2 The region automaton for the timed automaton shown in Fig. 5.1. 86
5.3 The zone automaton (obtained from the region automaton in Fig. 5.2) of the timed automaton in Fig. 5.1. 87
5.4 Labeled event tag pieces. 94
5.5 Labeled delay tag piece with the valuation function ς. 96
5.6 The tag machine for the automaton A. 97

7.1 The order structure of D⊥ = {d1, d2, · · · , dk, · · · } ∪ {⊥}. 113
7.2 Demonstration of the '⋄' operation for actors f1 and f2 with external inputs I = {i1, i2, i3} and outputs O = {o1, o2, o3, o4}. 121
7.3 Concurrent composition of actors modeling interleaved execution. 124
7.4 Actor with feedback. 127
7.5 Violation of distributivity. 132

8.1 Equivalent actor networks. 140
8.2 A typical SR actor. 142
8.3 The SR actor's second firing instance. 142
8.4 The reflex game board. 144
8.5 Reflex game : Implementation 1. 148
8.6 Reflex game : Implementation 2. 148

9.1 ETCS train co-operation protocol overview. 156
9.2 Braking distance SB for ensuring safety. 157
9.3 A trajectory with speed well under m.v for ensuring safety. 158
9.4 TSM model for a) the ETCS system and b) the component Train. 159
9.5 Controllable region in m. 165
9.6 Assignment of new MA (m′ and m′′). 170
9.7 The rbc assigns a new MA at tag τ′. 172
9.8 Drive controllability : worst case estimate. 173
9.9 The drive actor extends the tagged signal for tr up till τ′1. 175
List of Symbols
T  The set of tags  5
R  The set of Reals  5
τ  Tag vector  20
µ  Tag piece  21
τ0  Initial tag vector  21
τ1 ≤ τ2  Tag τ1 is related with tag τ2 by the timestamp order ≤  22
τ1 ⊑ τ2  Tag τ1 is related with tag τ2 by the unification (causal) order ⊑  22
τ1 ⊲⊳ τ2  Tag τ1 is unifiable with tag τ2  22
τ1 ⊔ τ2  Unification map (least upper bound) of tags τ1 and τ2  23
N  The set of natural numbers  23
Tdep  A tag structure modeling dependency relations  23
T  An algebraic tag structure  26
+τ  Component-wise addition of tags  26
τ = (τ, κ)  Labeled tag vector  31
µ = (µ, ς)  Labeled tag piece  31
Πkj  Projection of the j-th entry from a tuple of size k  32
Vµ  Support of tag piece µ in V  32
ζ  A function assigning tasks to machines  36
d  A function assigning execution times to tasks  36
≺J  The precedence relation among jobs in a job-shop J  37
λJi  Asymptotic throughput of job Ji  37
vω  Infinite repetition of finite schedule v  38
Rmax  A (max,+) semiring over Rmax  38
a ⊕ b  Maximum of a, b ∈ Rmax  38
ρ(A)  Eigenvalue of matrix A  39
M(t)  Initial matrix for task t in heap based performance evaluation  43
(γ, ϑ)  State of an SDFG  45
a↑ (a↓)  Beginning (end) of firing of actor a in an SDFG  46
Vsa  Collection of simultaneously activable variables from V  53
Vnsa  Collection of non-simultaneously activable variables from V  53
t1It2  Independence of tasks t1 and t2  58
v1 ∼ v2  Equivalence of job-shop schedules v1 and v2  58
⇀  A direct dependency  61
#  A transitive dependency  61
[α]  The equivalence class of job-shop schedule α  61
⇀[ν]  Set of direct dependencies for [ν]  62
#[ν]  Set of transitive dependencies for [ν]  62
Timm  A tag structure modeling immediate dependency  62
≤imm  Timestamp order over Timm  62
⊑imm  Unification (causal) order over Timm  62
TOimm  A tag structure modeling immediate outgoing dependency  65
TIimm  A tag structure modeling immediate incoming dependency  65
Tperf  A tag structure for performance evaluation of tagged systems  67
Φ(X)  A set of clock constraints over the set X of clocks  84
R+  The set of positive real numbers  84
ResetY  A function which resets the clocks in Y  85
V  The set of clock valuations  85
z⇑  Elapsing of time starting with clock valuations ∈ z  88
+1  Unit increment function for clock valuations  96
∨  Least upper bound  112
[P m→ Q]  Set of all monotone functions from set P to set Q  112
x ≍ y  There exists no ordering among the elements x and y of a poset  113
graph(s)  Graph of tagged signal s  113
dom(s)  Domain of tagged signal s  114
Sv  Set of all possible tagged signals on variable v  114
s1 ⊑ s2  Signal s1 is a prefix of signal s2  114
SI (SO)  Set of all signals on the input (output) variables  116
σ(I)  An input situation, i.e., collection of signals on variables ∈ I  116
σ(O)  An output situation, i.e., collection of signals on variables ∈ O  116
Σa(O)  A set of possible output situations due to an actor a  117
P(S)  The powerset of set S  117
⊑p  A pre-order defined on P(Sv)  117
≡  An equivalence relation defined on P(Sv)  118
A  Set of weakly Scott-continuous TSM actor functions  119
1A  Identity actor  124
⊥A  Empty actor  125
⊑A  Pointwise ordering on A  120
≡A  An equivalence relation defined on A  120
f1 ⋄ f2  Sequential composition of actors f1 and f2  121
f1 + f2  Concurrent composition of actors f1 and f2  124
f∗  Finite iteration of actor f  126
KA  Kleene algebra of TSM actors  130
test(A)  Sub-identity actors in A  138
στ(v)  A signal on variable v defined up to tag τ  118
ld(σ, τ, v, τ1)  The signal σ(v) is undefined for all tags > τ1 up to τ  160
τ|R  The real part of tag τ ∈ R × N  161
τ|N  The natural part of tag τ ∈ R × N  161
List of Tables
4.1 Asymptotic throughput in the examples discussed. . . . . . . . . . 81
7.1 Note that s2 ⊑ s1 and s3 is an empty signal. . . . . . . . . . . . . 114
ABSTRACT
Modern day embedded systems are heterogeneous. The heterogeneity arises due to the fact that such systems typically comprise multiple sub-systems which have widely varying functionalities. For example, there may be sensors activated by environmental variations in pressure or temperature; there may be mechanical moving parts like gears which are actuated by electronic signals generated by control units; the signal generation by control units may also depend on GPS data gathered by satellite links, etc. Formal techniques for system synthesis and validation require appropriate Models of Computation (MoCs) for specifying such sub-systems. Consequently, the overall system specification becomes a mix of specifications involving multiple such MoCs. Methods for formal analysis of such heterogeneous specifications are still in their infancy, although some simulation frameworks have been proposed till date.
The present work tries to provide some insight in this regard by advocating a generalized meta-model of computation as an all-encompassing specification method which can also be used in formal analysis, as demonstrated here. One such well known meta-model which can capture multiple MoCs and their interactions compositionally is the tagged signal model (TSM). The model has been used for providing sound execution semantics to heterogeneous system simulation tools like Ptolemy (Eker et al., 2003). However, the scope of formal analysis techniques like performance evaluation and correctness verification of TSM based system specifications has not been explored till date.
Using TSM as an underlying MoC, we propose a methodology for performance evaluation of schedules for job-shops modeled using tag machines. Comparison of the method with existing ones reveals that the proposed method has no dependence on schedule length in terms of modeling efficiency, and it shares the same order of complexity with existing approaches. The proposed method, moreover, shows promise of applicability to performance evaluation of systems specified using other MoCs, as well as of heterogeneous system models having multiple such constituent models.
For correctness verification of TSM based system models, we provide a representation of tagged systems using the semantics of Kleene algebra. We further illustrate mechanisms for both behavioural verification through equivalence checking and property verification of heterogeneous embedded systems based on this algebraic representation.
Keywords: Heterogeneous Embedded Systems, Tagged Signal Model, Asymptotic Performance Evaluation, Job-shop Scheduling, Petri Nets, Heaps of Pieces, Formal Verification, Actor Theory, Kleene Algebra.
Chapter 1
Introduction
Modern day embedded systems are interconnected, software enabled, concurrent, reactive and cyber-physical in nature (Lee and Seshia, 2011). Such a complex characterization of embedded systems is largely because they include sub-systems with grossly differing characteristics such as mechanical, optical, hydraulic, analog and digital hardware, as well as software specializing in dataflow management, control logic realization, task scheduling, etc., with an overall requirement of acceptable real-time performance. To ensure reliable operation of such systems in the face of growing design complexity, designers have to make use of formal methods for analysis and verification of such systems. Many such methods for both performance evaluation and formal verification of embedded systems have been proposed till date. A wide variety of models are used for this purpose depending upon their suitability for the application domain. However, one important requirement from embedded system design methodologies is the ability to handle complex specifications which may contain multiple system description formalisms. One possible way to accomplish this goal is to use a meta-model of computation for formal analysis of embedded systems which is powerful enough to capture and compose the description formalisms commonly used for embedded system design.
The framework of tagged signals provides a formal foundation for describing both physical and computational systems along with their composition. Further, it provides a meta-model to compare and relate the various models of computation that are developed to study such systems (Liu, 2005; Lee and Sangiovanni-Vincentelli, 1998). The Ptolemy modeling and simulation software (Eker et al., 2003; Lee and Xiong, 2000), developed for the design of concurrent, real-time embedded systems, is based on this model. This dissertation aims at providing frameworks for logical reasoning and performance evaluation of systems described using the models of tagged signals.
1.1 Motivation
A fundamental step in embedded system design is the specification process. The requirement for a specification process is that it should be formal, i.e., it should have a syntax and a set of semantic rules for simulation of the behaviour described in the syntax (Lavagno et al., 1999). Typical models of computation (MoCs) used for embedded system specification include finite state machines (FSMs), Petri nets (PNs), Kahn process networks (KPNs), synchronous dataflow graphs (SDFGs), etc. For system level design pertaining to a specific application domain, the specification formalism is required to be expressive enough. At the same time, however, the specification formalism should be devoid of unnecessary constructs which are not relevant for that domain of application. Another key requirement is that a specification formalism should have appropriate synthesis and validation algorithms at the back-end.
In general, different MoCs have been found to be suitable for specifying systems belonging to different application domains, as exemplified in the literature (Eker et al., 2003). Continuous time (CT) models, such as ordinary differential equations (ODEs), are generally used for modeling physical systems like mechanical dynamics, analog circuits, chemical plants, etc. In these domains, the popular tools are Spice (Nagel, 1975), Simulink (Bishop, 1996), etc. Discrete-event (DE) models are suitable for digital circuit design, network traffic and queuing system design. Some popular languages and tools based on the DE model are VHDL (Navabi, 1997), Verilog (Thomas and Moorby, 1991), NS (Breslau et al., 2000), etc. DE models are also used for discrete control system design and cycle-accurate simulations. In certain design methodologies, time is abstracted out as synchronous 'ticks'. The underlying MoC in this case is the synchronous reactive (SR) model (Benveniste and Berry, 2002). The system reacts to the tick events and the reaction is assumed to take zero time; this is the synchrony hypothesis. Languages like Esterel (Berry and Gonthier, 1992), Signal (Guernic et al., 1985), Lustre (Halbwachs et al., 1991), Argos (Maraninchi, 1991), etc. are based on this model. The design of software systems where causal ordering relations exist only among a subset of events is sometimes done using synchronous message passing models like Hoare's communicating sequential processes (CSP) (Hoare, 1978) and Milner's calculus of communicating systems (CCS) (Milner, 1982). MoCs like KPNs (Kahn, 1974b) and dataflow models (Lee and Parks, 2002) are also used to model systems based on asynchronous message-passing with FIFO queues. In such MoCs, the events within a communication channel remain totally ordered while the ordering among events in different channels is unspecified. For specific applications, the dataflow model may be specialized as synchronous dataflow (SDF) (Lee and Messerschmitt, 1987), boolean dataflow (BDF) (Buck and Lee, 1993), cyclo-static dataflow (CSDF) (Lauwereins et al., 1995), etc. The usage of dataflow models can be found in design tools like GRAPE-II (Lauwereins et al., 1995), SPW, COSSAP (Kunkel, 1991; Allsop, 1994), etc.
Specifying a complete system using different MoCs leads to heterogeneous
specification. For example, consider the specification of an automotive system
where the power-train may be modeled using a continuous time block, the gear-
shift mechanism may be modeled as an FSM block while the audio-visual signal
processing components may be dataflow blocks (Balluchi et al., 2001). Thus,
heterogeneity is manifested naturally at the component level descriptions of em-
bedded systems where different models of computations (MoCs) may be used to
represent the operations of different components.
Various frameworks for modeling heterogeneous specifications have been re-
ported in the literature. Examples of such frameworks include CHARON (Alur
et al., 2003), Metropolis (Balarin et al., 2003; Burch, 2001), ForSyDe (Sander
and Jantsch, 2004), SML-Sys (D. Mathaikutty and Shukla, 2004; Mathaikutty
et al., 2008), etc. The Metropolis project (Balarin et al., 2003) advocates a plat-
form based system design methodology which supports hierarchical heterogene-
ity by abstraction and refinement (Burch, 2001). ForSyDe is a Haskell based
specification language for system level modeling and refinement of SoCs with
heterogeneous components (Sander and Jantsch, 2004). For both Metropolis
and ForSyDe, the development process starts with an initial specification model
and culminates into a final implementation model via semantic preserving re-
finements. ForSyde advocates a single unified model and Metropolis provides a
meta-model of computation for capturing and simulating a system description.
Both allow for heterogeneity at the component level, for example a common
platform for describing an application (the software) and an architecture (the
hardware). None of these support the mixing of MoCs in a system level specifi-
cation. This feature is available in SML-Sys (D. Mathaikutty and Shukla, 2004;
Mathaikutty et al., 2008) which provides a functional programming language for
simulation of heterogeneous specifications.
A common problem with heterogeneous specification frameworks is that the
interactions of heterogeneous components are, in general, not well defined.
Brute-force composition of heterogeneous models may create emergent
behaviours (Eker et al., 2003), i.e., behaviours which arise from the interaction
of the different formal models but are not intended to exist at all. As
demonstrated in Eker et al. (2003), an example of such emergent behaviours is
the case of priority inversion among threads
in a real-time operating system. Thus, an important requirement from a for-
mal model which captures heterogeneous specifications is that the model should
support compositionality so that it is possible to determine the overall system
properties from known component properties. Also, in course of the design pro-
cess, it is essential to assess whether the lower abstraction levels conform to
certain properties honoured by the higher abstraction levels. Overall, the key
requirement for addressing the case of heterogeneous specifications is a mathe-
matical framework which may uniformly capture the different MoCs and their
composition in a typical embedded system design scenario.
The denotational framework of tagged signal models (TSMs) has long been
advocated as such a unified modeling framework (Lee and Sangiovanni-Vincentelli,
1998) which meets the above requirements. A TSM precisely defines events, be-
haviours and signals; it provides a framework for capturing the essential prop-
erties of MoCs like discrete-event systems, dataflow systems, rendezvous-based
systems and process networks. The semantics of tagged signals is based on the
notion of using a partially ordered set (poset) of tags as a representation of time-
liness of a system. The tagged signal model emphasizes the fact that the key
difference among different MoCs is based on the representation of time. The
model incorporates the notion of time using a poset T whose elements are called
‘tags’. For capturing a specific MoC, the tags and their ordering relation are
interpreted suitably.
For a timed MoC, the set T of tags is totally ordered. For continuous time
MoCs, T = R; for MoCs based on super-dense time such as hybrid systems,
T = R × N. For untimed MoCs where the ordering of events is based on the
causality relationships only, T is simply assumed to be a poset with no further
restrictions. A tag structure represents an ordering relationship among events,
sequences of which describe the system behaviours. Depending on the choice
of tag structures, the model of tag systems is able to capture a wide range
of concurrency models like asynchronous, synchronous-reactive, time triggered
architectures (TTA) (Kopetz et al., 2003) and causality relationships (Benveniste
et al., 2005). It has been claimed in Benveniste et al. (2005) that a tag structure
based on dependencies between events is the most general case for modeling
using tag machines. Various MoCs, as stated above, are arrived at under certain
restrictions on the general dependency based modeling.
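As an illustration (our own sketch, not part of the cited work), the way a choice of tag set fixes an ordering can be made concrete in a few lines of Python: super-dense tags in R × N compare lexicographically and are totally ordered, while untimed tags compare only through a causal-dependency relation and may be incomparable.

```python
# Illustrative sketch: tags for two MoC families.
# A super-dense tag is a pair (t, n) in R x N, ordered lexicographically;
# untimed tags are only partially ordered, so a comparison may return
# None, meaning "incomparable".

def cmp_superdense(a, b):
    """Lexicographic (total) order on R x N super-dense tags."""
    if a == b:
        return 0
    return -1 if a < b else 1  # Python tuple comparison is lexicographic

def cmp_untimed(deps, a, b):
    """Partial order induced by a causal-dependency relation `deps`
    (a set of (earlier, later) pairs, assumed transitively closed).
    Returns -1/1 if ordered, 0 if equal, None if incomparable."""
    if a == b:
        return 0
    if (a, b) in deps:
        return -1
    if (b, a) in deps:
        return 1
    return None  # ordering among these events is unspecified

# Super-dense time: two events at the same real time, ordered by index.
assert cmp_superdense((1.0, 0), (1.0, 1)) == -1
# Untimed: events on different channels may be incomparable.
deps = {("e1", "e2")}
assert cmp_untimed(deps, "e1", "e2") == -1
assert cmp_untimed(deps, "e1", "e3") is None
```

The event names and helper functions here are hypothetical; the point is only that the same comparison interface realizes a total order for timed MoCs and a partial order for untimed ones.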
The denotational semantics of tagged systems deals with behaviours cap-
tured by “traces” which may, in general, be infinite and accordingly, a tagged
signal model does not represent a finite model of the underlying system. Based
on the concept of tags, the theory of tag machines has recently been proposed
(Benveniste et al., 2005) as a finitary representation of TSMs.
Formal techniques for composition of heterogeneous models interpreted as
tagged systems have been reported in Benveniste et al. (2008). The Ptolemy
modeling and simulation framework (Eker et al., 2003; Lee and Xiong, 2000) uses
the ordering of tagged events for heterogeneous system simulation. An example
of complex heterogeneous specifications using TSMs can be found in Balluchi
et al. (2000, 2001) where an engine power-train and its interaction with the
controller, sensors and actuators are modeled using TSMs. The model consists
of sub-systems conforming to different MoCs which are modeled by appropriate
choice of tag sets and tag structures.
From the literature, it becomes clear that multiple MoCs may be effectively
modeled using TSMs and composed by taking causality into account using the
technique of tag unification (Lee and Sangiovanni-Vincentelli, 1998; Benveniste
et al., 2008) and tag-conversion functions (Caspi et al., 2009). However, any mod-
eling paradigm calls for an associated methodology for analyzing the performance
and proving the correctness of the models. So far, no verification framework
based on TSMs has been reported in the literature to the best of our knowledge.
This requirement motivated us to develop formal analysis techniques for TSM
based specifications. More specifically, we focused on performance evaluation
and correctness verification in order to explore the scope of such techniques in
the context of heterogeneous embedded systems.
1.2 Objectives
The present work addresses two different aspects of formal analysis of embedded
system specifications - performance evaluation and correctness verification.
Different MoCs have corresponding methodologies for evaluating the asymp-
totic performance of infinitely repeating and periodic behaviours. For example,
MoCs like KPNs, SDFGs, etc., employ performance measures like “actor through-
put” or “overall throughput” (Kahn, 1974a; Ghamarian et al., 2006) which are
very similar to the computation of asymptotic throughput of jobs in a given job-
shop modeled using Petri-nets (Hillion and Proth, 1989). However, due to the
difference in the underlying MoCs, there exist different MoC-specific methodolo-
gies for computing such measures. The performance evaluation of a heteroge-
neous system specification cannot be performed by simply applying individual
performance evaluation mechanisms over the respective component MoCs be-
cause, due to the absence of compositionality, the interaction semantics among
the different sub-systems may not be captured. In the general case, the perfor-
mance evaluation of a heterogeneous system may involve an infinitely repeating
periodic schedule which is basically a concatenation of sub-schedules pertaining
to different sub-systems interleaved in an arbitrary manner. A sub-schedule may
not necessarily involve a complete execution cycle of a sub-system. Hence, it will
not always be the case that we may separately compute the asymptotic through-
puts of the sub-systems, abstract away the execution details in a hierarchical
fashion and then compute the overall throughput. This clearly suggests that a
performance evaluation methodology based on TSM has the potential of being a
generic approach which remains uniformly applicable across multiple MoCs and
their compositions. The specific objectives of the present work with regard to
performance evaluation methodologies of embedded systems may be summarized
as follows.
• Formulation of a methodology for performance evaluation of tagged sys-
tems for computing the asymptotic throughput of any given periodic exe-
cution scenario of the system.
• Comparing the newly developed performance evaluation method with the
existing approaches like event graph based (Hillion and Proth, 1989) and
heap based methods (Gaubert and Mairesse, 1999).
• The above two objectives naturally necessitate exploring the possibilities of
translating specifications given using different MoCs into the corresponding
TSM representations so that the ordering of events over the input MoCs
is preserved for every run of the system. In other words, starting with a
heterogeneous specification, we need to have correct-by-construction trans-
lation mechanisms which will translate the specification of subsystems given
using different MoCs to tagged representations with appropriate choice of
tag structures. In this regard, there are two distinct possibilities:
– The specification is given in the form of an execution scenario of some
MoC which needs to be translated to the execution scenario of a tagged
system corresponding to that MoC.
– The specification is given in the form of a finitary model which needs
to be translated to a finitary tagged representation, i.e., a tag machine.
The tagged representations (which may have different underlying tag struc-
tures) are then composed using product tag structures or tag conversion
functions for succinctly representing the overall system.
As discussed previously, the tagged signal model offers two choices of semantics,
denotational and operational. In the denotational semantics, behaviours are
conceived as partial maps from tags to values (belonging to a certain domain).
In the operational semantics, behaviours are composed of partially ordered events
(where events over any particular variable are totally ordered) and there exist
finitary structures called tag machines which may generate such behaviours.
It may be noted that tag machines do not provide a natural initial specifi-
cation framework because they necessitate explicit encoding of the dependence
relation in all their transitions. Other finitary representations, in contrast, can
leave the dependence information implicit. One of the most widely used finitary
models for formal analysis in the domain of embedded real-time systems is the
model of timed automata. SDFGs are also popular in the domain of signal pro-
cessing applications where specifications are given as possible execution runs of
SDFGs. Furthermore, a common case of heterogeneous system modeling is spec-
ifications given as execution runs of dataflow modules combined with discrete
event modules. Hence, in order to facilitate the application of formal analy-
sis methods developed for TSMs in a heterogeneous specification, it is desirable
that MoCs which serve as natural representations in certain application domains,
may be translated using correct-by-construction translation methods to tagged
system representations.
It is possible to capture specifications given directly as execution scenarios
using the denotational semantics of tagged systems. The finitary representation,
however, is required when the original specification is also a finitary representa-
tion, given using MoCs like timed automata or finite automata. Hence, we choose
to base our performance evaluation methodologies on the operational semantics
of TSMs, i.e., tag machines, such that the set of all possible behaviours forms
the language of the tag machine.
From the viewpoint of performance evaluation, an embedded system speci-
fication, when simulated for a specific schedule, can be observed as a collection
of tasks with inter-dependencies. The inter-dependencies may be thought of as
an ordering relation such that the set of tasks may be partitioned into a set of
chains in each of which the tasks are totally ordered with respect to the depen-
dence ordering. In effect, the problem resembles a job-shop scheduling problem
when there is no ordering among tasks belonging to different chains, and a
machine scheduling problem otherwise. The problem of job-shop scheduling is
the most well-studied subclass of the more generic machine scheduling problem
(Abdeddaïm et al., 2006). Hence a methodology for evaluating the asymptotic
performance of job-shop schedules using TSMs may prove to be easily adaptable
for performance evaluation of systems modeled using other MoCs. The theo-
retical possibility for being able to do so is apparent since the model of tagged
systems is flexible enough to capture different MoCs (using different or same tag
structures) (Benveniste et al., 2005) and their composition (Benveniste et al.,
2008).
In classical scheduling theory, methodologies for performance evaluation of
job-shop problems exist for Petri net models (Baccelli et al., 1992). One such
approach consists in computing an event graph from the Petri net model for the
given schedule (Hillion and Proth, 1989). The problem with such an approach is
that a new timed event graph has to be constructed from the original Petri net
model of the job-shop for each schedule specification. Moreover, the approach
also suffers from the problem of growth of the state-space size with the schedule
length; longer schedules generate larger event graphs. An alternative approach,
based on heaps of pieces, was proposed in Gaubert and Mairesse (1999) which is
capable of computing the performance of infinite schedules with no dependency
on the given schedule and its length. Such an approach avoids computing the
timed event graph for every specified schedule, although the order of computa-
tional complexity for both approaches is similar. The present work intends to
propose an algorithm for performance evaluation of job-shops using TSMs with
the same (max, +) structures as reported in Hillion and Proth (1989) and
Gaubert and Mairesse (1999), and to compare the performances of the different
algorithms.
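Both cited approaches rest on the same (max, +) semiring, in which "addition" is max and "multiplication" is ordinary addition. As a hedged illustration (our own toy example, not the thesis's algorithm): if x(k) holds the event completion times of the k-th iteration of a periodic schedule and x(k) = A ⊗ x(k−1), then the asymptotic cycle time is the maximum cycle mean of A and the throughput is its reciprocal.

```python
def maxplus_matvec(A, x):
    """y_i = max_j (A[i][j] + x[j]): matrix-vector product in (max, +)."""
    return [max(a_ij + x_j for a_ij, x_j in zip(row, x)) for row in A]

def asymptotic_cycle_time(A, iterations=200):
    """Estimate lim_k max_i x(k)_i / k by power iteration.
    (Exact methods, e.g. Karp's algorithm, compute the maximum
    cycle mean of A directly.)"""
    x = [0.0] * len(A)
    for _ in range(iterations):
        x = maxplus_matvec(A, x)
    return max(x) / iterations

# Hypothetical two-task system: task durations 3 and 2 with mutual
# dependency; the longest cycle mean is 3, so one iteration of the
# schedule completes every 3 time units.
A = [[3.0, 2.0],
     [3.0, 2.0]]
print(asymptotic_cycle_time(A))  # 3.0
```

The matrix A and task durations are made up for illustration; the point is that the cycle time falls out of the (max, +) recurrence without unrolling the schedule.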
Apart from performance evaluation, another important requirement in any
design methodology of embedded systems is the availability of formal analysis
methods for correctness verification. There are two primary aspects of formal
correctness verification of embedded systems - equivalence checking and property
verification.
In general, modern day embedded systems have complex specifications which
undergo step-by-step refinement procedures in typical high level synthesis flows
to arrive at the final implementation. In fact, high level synthesis flows (for
generating hardware and software components) are only a part of the overall
embedded system design methodology which involves many other steps like de-
sign space exploration involving hardware-software partitioning, multi-core task
mapping, transducer and device driver implementation, etc. It is necessary to
validate the behavioural equivalence among the inputs and the outputs of all
these steps. Another important aspect of embedded system design is the require-
ment for property verification. Such methodologies are required to ensure that
the final implementation satisfies certain desirable system properties in terms of
safety, liveness, etc.
There is a paucity of modeling frameworks which may capture heterogeneous
specifications as well as support reasoning over such specifications for formal cor-
rectness verification of embedded systems. The specific objectives of the present
work with regard to formal correctness verification of embedded systems may be
listed as follows.
• Providing a reasoning framework for tagged systems for correctness verifi-
cation of heterogeneous embedded system specifications.
• Applying the reasoning framework towards behavioural verification via
equivalence checking of phase-wise transformations in an embedded sys-
tem design flow.
• Applying the reasoning framework towards property verification of embed-
ded systems.
For deriving methodologies of behavioural and property verification of tagged
systems, we have chosen the denotational semantics of tagged signals where
computing units or actors are conceived as continuous maps from input tagged
signals to output tagged signals. An actor framework based on TSMs is
envisaged as a good choice for modeling heterogeneous embedded systems since
actor based implementations combine the efficiency of object-style encapsulation
with concurrency (Agha and Hewitt, 1987). The actor theory of Hewitt and
Agha (Agha and Hewitt, 1987) has been extended for the model of tagged signals
by Liu and Lee (2008) where they propose a compositional framework of TSM
actors. An actor is functional if a corresponding actor function exists, receptive
if the corresponding actor function is total and causal if it computes from the
past to the future.
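These notions admit a concrete, if simplified, reading; the sketch below is our own illustration, not Liu and Lee's formalism. A signal is modeled as a tag-sorted list of (tag, value) events, and a delay actor is functional (an actor function exists), receptive (the function is total, i.e., defined for every input signal), and causal (an output event at tag t + d depends only on the input event at the strictly earlier tag t).

```python
def delay_actor(signal, d, init=0):
    """Shift every input event d > 0 time units later and emit an
    initial value at tag 0. Defined for every input signal (receptive),
    and the output up to any tag depends only on strictly earlier
    input events (causal)."""
    assert d > 0
    out = [(0, init)] + [(t + d, v) for (t, v) in signal]
    return sorted(out)

s = [(1, 10), (2, 20)]       # a tagged signal: events at tags 1 and 2
print(delay_actor(s, 1))     # [(0, 0), (2, 10), (3, 20)]
```

The representation of signals as event lists is an assumption made for this sketch; the thesis works with the underlying partial maps directly.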
One important requirement for an embedded system design flow is the ability
to capture system designs at different levels of abstraction. In such cases, certain
desirable properties (liveness, safety, etc.) can be verified even from a ‘less ac-
curate’ specification at a higher abstraction level which is nondeterministic due
to ‘under-specification’. An example of such an under-specification can be found
in Platzer and Quesel (2009) which discusses the verification of “European train
control system” protocol. Thus, although embedded systems are conceived to be
deterministic, a natural requirement that emerges is a formulation of actor
functions (primarily targeted towards heterogeneous embedded system modeling)
which can capture nondeterminism in order to accommodate under-specified
system representations.
The denotational semantics of tagged signals lends itself to order theoretic
analysis (Liu and Lee, 2008). Our interest in the denotational semantics of tagged
signals is based on the intuition that an order theoretic analysis of the same may
reveal the underlying algebraic structure of the semantics which should be useful
in the context of formal verification efforts in this regard.
In the context of behavioural verification, our objective is to obtain the alge-
braic representations of two behaviours, one of which is obtained from the other
through compiler or manual transformation, and show their equivalence using the
axioms of the algebra. This scenario is typical in many system synthesis flows
where an original algorithmic behavioural specification is subjected to several
phases of optimizing transformations before it is mapped to an architecture.
In the context of property verification, our objective is to encode certain high
level properties of a given actor network using the algebra and prove them using
the axioms and certain known (low level) properties of the system (which are
also encoded using the same algebra).
Based on these objectives, we have the overall framework shown in Fig.
1.1; the boxes in bold highlight the key areas we intend to address. The present
work envisages that the proposed frameworks for reasoning and performance
evaluation may provide valuable insights in the context of next generation
software tools which will support the hierarchical design of heterogeneous
embedded systems (Lavagno et al., 1999; Jantsch and Sander, 2005).

Figure 1.1: An overview of the formal analysis framework.
1.3 Contributions
We highlight the contributions arrived at in the course of the work towards
the overall objective.
1.3.1 Performance Evaluation of Tagged Systems
We address the problem of evaluating the asymptotic performance of job-shop
schedules using the formalism of tag machines. From an algebraic point of view,
the asymptotic performance evaluation of a schedule in any MoC consists in
extraction of the dependence relations among events in a given schedule along
with the timing information to form a set of mutually independent recurrence
relations. We demonstrate how that can be done in the case of tagged sys-
tems using a job-shop scheduling problem as a running example. We establish
that the tag structure Tdep, which is known to be the most general one for de-
pendency based modeling (Benveniste et al., 2005), is inadequate for computing
the asymptotic performance of job-shop schedules. Accordingly, we construct a
new tag structure for addressing the aforementioned problem. We next propose
an algorithm for performance evaluation of job-shop schedules modeled using
tag machines and perform complexity analysis to show that our algorithm for
performance evaluation does not suffer from any extra computational overhead
compared to the most efficient method known till date (Gaubert and Mairesse,
1999) for the same. More precisely, we show that our approach is also free from
any dependency on the schedule length, similar to the approach using heaps of
pieces (Gaubert and Mairesse, 1999), and does not incur the overhead of computing
a new event graph for every schedule specification as in Hillion and Proth (1989).
1.3.2 Translation from Other MoCs to Tag Machines
We suggest how the proposed method of performance evaluation can be used
for modeling and throughput computation of the self-timed execution in SDFGs.
We model the periodic execution of an SDFG using tagged events.
We also take the example of a heterogeneous system comprising a discrete
event component and a data-flow component and show how the corresponding
tagged system can be derived using product of tag structures. This justifies the
fact that our performance evaluation mechanism may be applied even to such
complex modeling scenarios.
As an exercise to demonstrate that it is possible to provide correct-by-construction
methodologies for translating an MoC with a finitary representation to a tag ma-
chine, we present a methodology for structural translation from a timed au-
tomaton model to a tag machine. Subsequently, we prove the correctness of the
translation.
1.3.3 A Kleene Algebraic Formulation of TSM Actors
For the purpose of formal correctness verification of tagged systems, we model
such systems as networks of TSM actor functions which are partial maps from
one set of tagged signals (sequences of events) to another. Our treatment of
TSM actor functions differs from the one proposed in Liu and Lee (2008) in the
sense that our actors capture nondeterminism in a functional form along the lines
of Clinger’s denotational semantics of actors (Clinger, 1981) based on Plotkin’s
power domains (Plotkin, 1976).
We introduce the notion of weakly Scott-continuous nondeterministic actors.
In course of order theoretic analysis of such actors, the present work reveals that
the set of TSM actor functions, equipped with certain operations like sequential
and concurrent composition and finite iteration, forms a Kleene Algebra.
Since TSM actors can capture heterogeneous systems (involving different
models of computations for different subsystems), we propose Kleene Algebra
and its extensions as a basis for reasoning over such systems modeled as TSM
actor networks. Exploring the scope of such applications of the algebra leads to
the following contribution.
1.3.4 Algebraic Verification of Actor Networks
The Kleene algebraic formulation of TSM actors (described in functional forms)
reveals that any such actor network can be represented by Kleene expressions
defined over the component actors.
We apply the axioms of Kleene Algebra and its extensions for behavioural
verification through equivalence checking and property verification of actor net-
works. We prove the equivalence of two different specifications of a Reflex game
by constructing the corresponding actor networks and proving their equivalence
using the axioms of Kleene Algebra and its extensions. We also model the Eu-
ropean Train Control Protocol (ETCS) as an actor network implementation and
prove a safety property of the protocol using the same algebra.
1.4 Conclusion
In the present work, we propose a methodology for performance evaluation using
tag machines with complexity bounds identical to the two well known perfor-
mance evaluation algorithms for job shop schedules. For doing so, we propose a
new tag structure for dependency based modeling of systems. Further, we show
the applicability of our approach for different MoCs and their composition that
can be captured by different kinds of tag structures and their products. At this
stage, however, it is worth underlining that although it is felt that the more gen-
eral problem of performance evaluation lends itself to this method, a full-fledged
substantiation of the claim necessitates a more exhaustive treatment. This will
be the natural direction of our future research efforts.
For performing reasoning over TSM actors, we propose a Kleene algebra of
such actors using which networks of TSM actors can be represented as KAT
expressions. It has been illustrated how algebraic reasoning over such networks
may lead to behavioural verification through equivalence checking and property
verification of heterogeneous embedded systems. Starting with the theoretical
foundations laid in the present work coupled with the recent success of KAT
based methods in program verification (Kozen, 2008), a possible future direction
is to create a software tool-suite which may be used for actor based modeling
of heterogeneous embedded systems. Such models may further be analyzed by
theorem provers which are enabled with specific proof tactics developed around
KAT to be used for equivalence checking and property verification of such models.
Another important direction of work for behavioural and property verification of
tagged systems is exploring the scope of model-checking methods in the
context of tag machines, which are the finitary representations of tagged systems.
1.5 Organization of the Thesis
The thesis is organized into ten chapters whose contents are summarized as
follows.
• In chapter 1, we provide a basic introduction to the requirement of formal
modeling and analysis tools for embedded system design followed by the
motivation, objectives and contributions of the present work.
• In chapter 2, we discuss the operational semantics of tagged signal models.
We begin with a general discussion regarding concurrent behaviours and
their inter-dependencies which motivates the different entities of TSMs and
finally provide the formal definitions.
• In chapter 3, we discuss the commonly used techniques for asymptotic
performance evaluation of embedded systems. More specifically, we elabo-
rate on well known methods for evaluating the asymptotic performance of
job-shop schedules and asymptotic throughput of actors in an SDFG.
• In chapter 4, we discuss our methodology of tag machine based modeling
and performance evaluation of job-shop schedules followed by case studies
performed over execution scenarios in SDFGs and heterogeneous models.
We model job-shop specifications as tag machines and compute tag vectors
for a given job-shop schedule. We propose novel tag structures which enable
capturing of all the necessary and sufficient dependency information in a
tag vector so that recurrence relations over successive iterations of tasks can
be computed from such tag vectors. Finally, we provide an algorithm for
performance evaluation of job-shops modeled as tag machines. Next we
show how execution scenarios specified using other MoCs can be modeled
using tag vectors so that our method for performance evaluation remains
applicable across MoCs and their compositions.
• In chapter 5, we provide a correct-by-construction translation mechanism
from timed automata to tag machines in order to demonstrate the ap-
plicability of our approach in case of finitary MoCs. For a given timed
automaton, we construct a tag machine such that for any run of the timed
automaton leading to a configuration, there exists a corresponding run of
the tag machine leading to an equivalent configuration.
• Chapter 6 provides a short survey of commonly used reasoning mechanisms
in the domain of embedded systems. We briefly discuss the well-known
techniques for model-checking, equivalence checking and deductive
verification. We conclude our survey by motivating the case for reasoning
1.5 Organization of the Thesis 17
mechanisms over TSMs as a possible technique for formal verification of
heterogeneous system specifications.
• In chapter 7, we discuss the underlying algebraic structure of tagged sig-
nals and functional maps over the set of such signals. We explore the order
structure of tagged signals and define the notion of weakly Scott-continuous
TSM actor functions and their equivalence. We define sequential and con-
current composition and finite iteration operations over such functions and
prove that the set of such functions is closed under these operations. Fi-
nally, we prove the axioms of Kleene algebra in the context of TSM actor
functions.
• In chapter 8, we demonstrate the application of Kleene algebraic axioms
for equivalence checking of actor networks. An actor network is represented
as a Kleene algebraic expression and the equivalence of two such networks
is deduced by applying the laws of Kleene algebra and its extensions.
• In chapter 9, we demonstrate the application of Kleene algebraic axioms for
property verification of actor networks. We define discrete and continuous
TSM actors and model the ETCS protocol using such component actors.
Subsequently, we verify a safety property of the protocol by equational
reasoning in Kleene algebra.
• Chapter 10 concludes with a discussion of the work carried out, along
with some insights into directions of research which the author feels are
worthwhile natural extensions of the present work.
Chapter 2
Tagged Signal Model -
Operational Semantics
2.1 Introduction
The tagged signal model was originally proposed by (Lee and Sangiovanni-
Vincentelli, 1998) and further extended by (Benveniste et al., 2005). The central
role of a tagged system is to capture the ordering among events, occurrences
of which cause variables to assume values from their respective domains. Tags
can be used to model time, precedence relationships, synchronization points and
other key properties of the events in an MoC, both untimed and timed. The
model of tagged systems has both an operational and a denotational semantics.
In the present chapter, we provide a formal introduction to the operational se-
mantics of tagged systems which will be used in the subsequent chapter as the
underlying model for system level performance evaluation. We begin by provid-
ing a motivational example for the different formal entities in a TSM in section
2.2 and then provide their formal definitions in section 2.3. We conclude the
discussions in section 2.4.
2.2 An Intuitive Insight
An intuitive insight about various entities needed for a tagged signal model can be
developed as follows. A behaviour is a sequence of concurrent events; each event
is synonymous with a (tag, value)-pair, where the tag captures the positional
and the causal orders among the events in the sequence. Since, in general, the
events on a variable v, say, may causally depend on events on other variables, a
tag associated with (an event on) v may have as many components as there are
variables.
An important observation in this regard is that the dependency relations are
naturally transitive1. For example, consider a behaviour in which the j-th event
on a variable v2 causally depends on the i-th event on a variable v1, the k-th
event on a variable v3 causally depends on the j-th event on the variable v2.
The k-th event on v3 introduces a causal dependency from both the i-th event
on v1 and the j-th event on v2 to itself and all the subsequent events on v3
till new dependencies of an event on v3 evolve. Thus, each event (or multiple
concurrent events) along a behaviour introduces new dependencies among the
events on different variables. The evolution of such transitive dependencies with
each event along a behaviour can be captured by an ordered tuple of tags, called
a tag vector τ, having as many components as |V|, the cardinality of the set V of
the underlying variables. Thus, τ = (τ_v)_{v∈V}, where τ_v is the component in the
tag vector τ corresponding to the variable v. Let τ^i represent the tag vector arrived
at after the i-th member of the sequence (behaviour); τ^i_v(v) = n ≤ i denotes
that so far n events have occurred on v; τ^i_v(u) = k, for u ≠ v, denotes that some j-th
event, j ≤ n (= τ^i_v(v)), on v has a causal dependency on the k-th event on u, and
that the (j + 1)-th to the n-th events on v do not have any dependence on any l-th
event on u, where l > k, because the question of dependency of these events on v
(subsequent to the j-th one) on some events on u earlier than the k-th one does
not arise. Also, τ_v(u) = −∞ (denoted as ǫ), for u ≠ v, indicates that so far no event
on v has any dependency on any event on u. The positional order among tags is
captured by the partial order ≤ and the causal dependency by the partial order
⊑. Obviously, ⊑ ⊆ ≤.
1Although, we will be discussing other variants of dependency relations later on.
Each concurrent event in the sequence representing a behaviour can also be
captured by a tag vector. However, in order to capture the evolution of tag
vectors along a sequence algebraically, a different representation is adopted for
the members of the sequence. More specifically, this representation is intended
to capture the evolution as a transformation of the tag vectors and hence each
member of a sequence of concurrent events is represented by a square matrix
µ of order V × V of tags, called a tag piece. A binary operation ‘·’ is introduced
over tags so that the evolution of transitive dependencies along a sequence
〈µ^1, ···, µ^i, ···〉 can be captured by the equation τ^i = τ^{i−1} · µ^i, i ≥ 1, where
τ^0 is the initial tag vector with τ^0_v(v) = 0 (no event on v to start with) and
τ^0_v(u) = ǫ, ∀u, v ∈ V, v ≠ u (no dependency). We refrain from interpreting the
entries of the tag pieces at this stage any further; they are described shortly.
Let us consider, for example, a typical behaviour as depicted in Fig 2.1 with
the variable values at different time steps and the causal dependencies among
variables marked by arrows. Fig 2.2 depicts the corresponding tagged system
Figure 2.1: A typical concurrent behaviour.
for the behaviour in Fig 2.1 with each event associated with a (tag, value) pair.

Figure 2.2: The corresponding tagged system.

The blank entries in Fig. 2.2 are the same as the previous non-blank entries in
the row. Specifically, τ^0 = [[0, ǫ, ǫ]; [ǫ, 0, ǫ]; [ǫ, ǫ, 0]], τ^1 = [[1, ǫ, ǫ]; [ǫ, 1, ǫ]; [ǫ, ǫ, 0]],
τ^2 = [[2, ǫ, ǫ]; [ǫ, 1, ǫ]; [ǫ, 1, 1]] and so on².
²τ = [τ_x; τ_y; τ_z]
22 Chapter 2 Tagged Signal Model - Operational Semantics
τ^1 = τ^0 · µ^1; τ^2 = τ^1 · µ^2, etc. The pictorial representation of the tag piece µ^5 of
the behaviour in Fig 2.2 is given in Fig. 2.3.

Figure 2.3: Pictorial representation of tag piece µ^5.

For understanding the dependencies as captured by a tag vector, let us consider
τ^4 = [[4, 1, 1]; [ǫ, 2, 1]; [ǫ, 1, 1]]. τ^4_x(x) = 4 ⇒ four events have occurred so far
on x. τ^4_x(z) = 1 ⇒ some event (the third one) on x depends on the first event
on z. It is clear from the figure that the tag vectors along the sequence (behaviour)
also depict the transitive dependencies. For example, τ^3_x(y) = 1, although there
is no event on x which depends directly on any event on y; however, the third
event on x depends on the first event on z which, in turn, depends on the first
event on y, thereby introducing a transitive dependency of the (third) event on x
on the first event on y.
Given the above insight, the different entities of a tagged signal model are for-
mally defined next. The following definitions are taken from (Lee and Sangiovanni-
Vincentelli, 1998; Benveniste et al., 2004, 2005).
2.3 Entities of a Tagged Signal Model
Definition 1. A tag structure is a triple (T ,≤,⊑), where T is the set of tags,
and ≤ and ⊑ are two partial orders on T such that ⊑⊆≤, where ≤ is the time
stamp order and ⊑ is the unification order.
For example, consider the tags τ^4_y = (ǫ, 2, 1) and τ^2_z = (ǫ, 1, 1) in Fig. 2.2,
which are ordered as (ǫ, 1, 1) ⊑ (ǫ, 2, 1) by the unification order, since the second
event on variable y depends on the first event on variable z as shown in Fig. 2.2.
It is also the case that (ǫ, 1, 1) ≤ (ǫ, 2, 1), since the second event on variable y
occurs at a later time stamp w.r.t. the first event on variable z.
A derived relation ‘⋈’ over tags is defined as follows. For any two tags τ1 and
τ2, τ1 ⋈ τ2 (τ1 is unifiable with τ2) iff τ1 and τ2 have a common upper bound in the
unification order ⊑. The least upper bound of two unifiable tags τ1 and τ2, called
the unification map of τ1 and τ2, is denoted as τ1 ⊔ τ2. From ⊑ ⊆ ≤, we have
τ1 ⋈ τ2 ⇒ τ_i ≤ (τ1 ⊔ τ2) for i ∈ {1, 2}, and τ1 ≤ τ1′ ∧ τ2 ≤ τ2′ ∧ τ1 ⋈ τ2 ∧ τ1′ ⋈ τ2′
⇒ (τ1 ⊔ τ2) ≤ (τ1′ ⊔ τ2′). The latter condition may be proved as follows. Note that
τ1 ≤ τ1′ ≤ τ1′ ⊔ τ2′ and τ2 ≤ τ2′ ≤ τ1′ ⊔ τ2′. Hence, τ1′ ⊔ τ2′ is an upper bound of τ1
and τ2. Since τ1 ⊔ τ2 is the least upper bound of τ1 and τ2, τ1 ⊔ τ2 ≤ τ1′ ⊔ τ2′. This
condition is the key to ensuring that parallel composition of tagged behaviours
preserves the agreed order of tagged events in component behaviours (Benveniste
et al., 2003).
The tag structure is decided depending on the kind of analysis to be performed
for the target system. The tag structure (Tdep, ≤, ⊑) is defined as follows. Any
τ ∈ Tdep is a mapping τ : V → N0, where V is the underlying set of variables
and N0 =def N ∪ {ǫ}, with ǫ = −∞. The set Tdep of tags is thus the set of all
mappings from V to N0, denoted [V → N0], such that τ ≤ τ′ iff ∀v, τ(v) ≤_N τ′(v),
where ≤_N is the relation ≤ on the set N of non-negative integers. It may be noted that
the behaviour shown in Fig 2.2 uses Tdep.
From Fig 2.2, note that the tags associated with consecutive events on the
same variable are related by ≤. For example, consider the second and the third
events on x with tags (2, ǫ, ǫ) and (3, 1, 1). However, the tags associated with
two events on two different variables may not be related by ≤ even if there is
a time sequence between them. For example, consider the second event on x
with tag (2, ǫ, ǫ) and the first event on y with tag (ǫ, 1, ǫ) which are not related
by ≤. This happens due to absence of any causal dependency between them.
Had there been such a causal dependency from the first event on y to the second
event on x, the tag of the second event on x would have been (2, 1, ǫ) such that
(ǫ, 1, ǫ) ≤ (2, 1, ǫ). The fact that unification ordering is contained in the time
stamp ordering, i.e., ⊑⊆≤, is general enough (and accordingly has been put
in the definition). However, in case of Tdep, it also holds that ≤⊆⊑ which is
demonstrated as follows.
Let V be the set of underlying variables, and let τ^i_u be the tag of some i-th event
on u ∈ V and τ^j_v the tag of some j-th event on v ∈ V such that τ^i_u ≤ τ^j_v.
Consider first the case u = v, so that we have τ^i_u ≤ τ^j_u ⇒ τ^i_u(u) = i ≤ τ^j_u(u) = j,
since ≤ is defined component-wise. Hence, the i-th event on u occurs earlier
than the j-th event on u. Since events on the same variable are always causally
dependent on the earlier events, we have τ^i_u ⊑ τ^j_u. In general, for u ≠ v, it must
be the case that [τ^i_u(u) = i ≤ τ^j_v(u)] ∧ [τ^i_u(v) ≤ τ^j_v(v) = j]. Consider the first
conjunct τ^i_u(u) = i ≤ τ^j_v(u). Let τ^j_v(u) = i + k, k ≥ 0. Hence, some event on v
depends on the (i+k)-th event on u; let this event on v be the (j−l)-th event on
v, l ≥ 0 (since τ^j_v(v) = j, the j-th event is the last event on v). Now, the (i+k)-th
event on u depends on the i-th event on u and the j-th event on v depends on
the (j−l)-th event on v, giving us the dependency chain τ^i_u ⊑ τ^{i+k}_u ⊑ τ^{j−l}_v ⊑ τ^j_v,
so that τ^i_u ⊑ τ^j_v. In general, ∀τ1, τ2 ∈ Tdep, τ1 ≤ τ2 ⇒ τ1 ⊑ τ2, i.e., ≤ ⊆ ⊑. This
can also be observed from Fig 2.2. For example, consider the first event
on z with tag (ǫ, 1, 1) and the third event on x with tag (3, 1, 1). We have
(ǫ, 1, 1) ≤ (3, 1, 1) and it is also the case that (ǫ, 1, 1) ⊑ (3, 1, 1), as is evident
from the dependency arrows shown in Fig. 2.2. Since in general for any tag
structure ⊑ ⊆ ≤, and in case of Tdep it also holds that ≤ ⊆ ⊑, we have ≤ = ⊑ for
Tdep. Also, in case of Tdep, we have for any two tags τ and τ′, τ ⊔ τ′ = max(τ, τ′),
wherever it is defined (i.e., whenever τ ⋈ τ′). By the definition of ≤ for Tdep, the
max operation is taken component-wise and thus it is defined for all possible tag
pairs of Tdep.
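The order-theoretic facts above are easy to machine-check. The sketch below is a toy encoding of ours (not from the thesis): a Tdep tag over V = {x, y, z} is a 3-tuple, with ǫ modelled as floating-point −∞, and ⊔ is just the component-wise max.

```python
EPS = float("-inf")  # the tag component ǫ = −∞

def leq(t1, t2):
    """Component-wise time-stamp order ≤ on Tdep tags."""
    return all(a <= b for a, b in zip(t1, t2))

def join(t1, t2):
    """Unification map τ ⊔ τ′ = max(τ, τ′), taken component-wise."""
    return tuple(max(a, b) for a, b in zip(t1, t2))

# Tags of the first event on z and the third event on x (Fig 2.2):
tz, tx = (EPS, 1, 1), (3, 1, 1)
assert leq(tz, tx)                    # (ǫ,1,1) ≤ (3,1,1); for Tdep this coincides with ⊑
assert join(tz, tx) == (3, 1, 1)      # the least upper bound is the larger tag
assert join(tz, tx) == join(tx, tz)   # ⊔ is defined for every pair of Tdep tags
```

Since max is total on tuples over N ∪ {−∞}, this encoding reflects the observation that ⊔ is defined for all tag pairs of Tdep.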
Definition 2. Let V be the underlying set of variables which assume values from
domain D. A V-behaviour is a mapping:

σ : V → N → (T × D)    (2.1)

The map σ(v) ∈ N → (T × D) is called a signal. For a given behaviour σ,
an event of σ is a tuple (v, n, τ, x) ∈ V × N × T × D such that σ(v)(n) = (τ, x),
which implies that the n-th event on the variable v in the behaviour σ has the
tag τ ∈ T and the value x ∈ D.
The relation ⋈ and the unification map ⊔ extend point-wise to events as
well as behaviours. In the unification scenario described in definition 1, let
e_i = (v_i, n_i, τ_i, x_i) for i = 1, 2. Then, e1 ⋈ e2 iff v1 = v2, n1 = n2, τ1 ⋈ τ2 and
x1 = x2. Also, e1 ⋈ e2 ⇒ e1 ⊔ e2 = (v1, n1, τ1 ⊔ τ2, x1). Similarly, we may define
σ1 ⋈ σ2 (Benveniste et al., 2003).
Definition 3. A tagged system is a triple P = (V, T, Σ), where V is a finite set
of variables, (T, ≤, ⊑) is a tag structure and Σ is a set of V-behaviours.
It may be noted that Fig. 2.2 depicts a behaviour as a sequence of tag
vectors obtained by applying a sequence of concurrent events captured by tag
pieces (µ’s), on the initial tag vector τ 0. More precisely, a tag piece µ comprises
concurrent events on different variables and also their dependence on some earlier
events on the variables. Let 〈1v〉 denote a |V |-tuple with 1 in the v-th component
and 0 everywhere else. Let 〈0〉 (〈ǫ〉) denote a tuple of size |V | with all entries
0 (ǫ).
Definition 4. A tag piece µ is a matrix µ : V × V → T with the entries being
interpreted as follows.

µ(v, v) = 〈1v〉 : if there is an event on v in the tag piece µ
µ(v, v) = 〈0〉 : if there is no event on v in the tag piece µ
µ(u, v) = 〈0〉 : if there is a dependency of the event on v in the tag piece µ
on the last event on u, u ≠ v
µ(u, v) = 〈ǫ〉 : if there is no dependency of the event on v in the tag piece µ
on the last event on u, u ≠ v

Also, if µ(v, v) = 〈0〉, then µ(u, v) = 〈ǫ〉, for all u ≠ v.
Note that when we speak of the last event on a variable, we are excluding the
event, if any, on the variable in the tag piece under consideration. For example,
let us consider the different tag pieces in the tagged behaviour of Fig 2.2.
µ1 = [[〈1x〉, 〈ǫ〉, 〈ǫ〉]; [〈ǫ〉, 〈1y〉, 〈ǫ〉]; [〈ǫ〉, 〈ǫ〉, 〈0〉]]
µ2 = [[〈1x〉, 〈ǫ〉, 〈ǫ〉]; [〈ǫ〉, 〈0〉, 〈0〉]; [〈ǫ〉, 〈ǫ〉, 〈1z〉]]
µ3 = [[〈1x〉, 〈ǫ〉, 〈ǫ〉]; [〈ǫ〉, 〈0〉, 〈ǫ〉]; [〈0〉, 〈ǫ〉, 〈0〉]]
µ4 = [[〈1x〉, 〈ǫ〉, 〈ǫ〉]; [〈ǫ〉, 〈1y〉, 〈ǫ〉]; [〈ǫ〉, 〈0〉, 〈0〉]]
µ5 = [[〈1x〉, 〈ǫ〉, 〈ǫ〉]; [〈ǫ〉, 〈0〉, 〈0〉]; [〈ǫ〉, 〈ǫ〉, 〈1z〉]]
µ6 = [[〈1x〉, 〈ǫ〉, 〈ǫ〉]; [〈ǫ〉, 〈0〉, 〈ǫ〉]; [〈ǫ〉, 〈ǫ〉, 〈0〉]]
Specifically, considering the concurrent snapshot of the second time step in the
behaviour of Fig 2.1, we have two concurrent events on the variables x and z and
there is no event on variable y. Hence in the matrix representation of tag piece
µ2, we have µ2(x, x) = 〈1x〉, µ2(y, y) = 〈0〉 and µ2(z, z) = 〈1z〉. Also, the event
on z has a dependency on the last event on y and hence µ2(y, z) = 〈0〉.
Similarly, considering the concurrent snapshot of the fourth timestep in the
behaviour of Fig 2.1, we have two concurrent events on the variables x and y and
there is no event on variable z. Hence, in the matrix representation of tag piece
µ4, we have µ4(x, x) = 〈1x〉, µ4(y, y) = 〈1y〉 and µ4(z, z) = 〈0〉. Also observe
that the concurrent events have no dependence on any other events. Hence all
other non-diagonal elements in µ4 are 〈ǫ〉.
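Definition 4 can be turned into a small constructor. The sketch below is an illustrative encoding of ours (not the thesis's): it builds the matrix of a tag piece from the set of variables carrying an event and the set of direct dependency pairs (u, v), and reproduces µ2 of the running example.

```python
EPS = float("-inf")      # ǫ
VARS = ("x", "y", "z")   # V, in a fixed order

def tag_piece(events, deps):
    """Build µ : V × V -> tag per Definition 4, as a dict (u, v) -> tuple.
    events: variables with an event in µ; deps: direct dependency pairs (u, v)."""
    idx = {v: i for i, v in enumerate(VARS)}
    def one(v):  # 〈1v〉: 1 in the v-th component, 0 everywhere else
        return tuple(1 if i == idx[v] else 0 for i in range(len(VARS)))
    zero, eps = (0,) * len(VARS), (EPS,) * len(VARS)
    mu = {}
    for u in VARS:
        for v in VARS:
            if u == v:
                mu[(u, v)] = one(v) if v in events else zero
            else:
                # a dependency entry is only meaningful if v has an event in µ
                mu[(u, v)] = zero if ((u, v) in deps and v in events) else eps
    return mu

# µ2 of Fig 2.2: events on x and z; the event on z depends on the last event on y.
mu2 = tag_piece(events={"x", "z"}, deps={("y", "z")})
assert mu2[("x", "x")] == (1, 0, 0)   # 〈1x〉
assert mu2[("y", "y")] == (0, 0, 0)   # 〈0〉: no event on y
assert mu2[("y", "z")] == (0, 0, 0)   # 〈0〉: recorded dependency
assert mu2[("x", "y")] == (EPS,) * 3  # 〈ǫ〉: no dependency
```

The guard `v in events` enforces the clause of Definition 4 that µ(u, v) = 〈ǫ〉 for all u ≠ v whenever µ(v, v) = 〈0〉.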
As discussed previously, a binary operation over tags is required to be defined
in order to capture the evolution of concurrent events and their dependencies in
the form of a tag vector which can be computed from a behaviour represented
by a sequence of tag pieces. A tag structure enabled with such an operation is
called an algebraic tag structure which is defined formally as follows.
Definition 5. A tag structure (T ,≤,⊑) is called algebraic if T is equipped with
a binary operation “∗”, called concatenation, with the properties:
1. (T , ∗) is a monoid with unit 1 and
2. the operation “∗” is monotonic with respect to ≤ and ⊑ such that,

τ1 ≤ τ1′ ∧ τ2 ≤ τ2′ ⇒ (τ1 ∗ τ2) ≤ (τ1′ ∗ τ2′)
τ1 ⊑ τ1′ ∧ τ2 ⊑ τ2′ ⇒ (τ1 ∗ τ2) ⊑ (τ1′ ∗ τ2′)
Such an algebraic tag structure is denoted as T = (T, ≤, ⊑, ∗). The algebraic
tag structure Tdep uses the operation ‘+_τ’ for ‘∗’, which means component-wise
addition of the tags for each variable v ∈ V. More precisely, ∀τ1, τ2 ∈ Tdep and
∀v ∈ V,

(τ1 +_τ τ2)(v) = τ1(v) + τ2(v), if τ1(v), τ2(v) ≠ ǫ;
(τ1 +_τ τ2)(v) = ǫ, if τ1(v) = ǫ or τ2(v) = ǫ.

With abuse of notation, we use ‘+’ for ‘+_τ’. The operation ‘+’ is defined on N0
as usual addition in N when neither operand is ǫ; otherwise, ‘+’ returns ǫ. Thus
the tag 〈0〉 is the identity of the monoid (Tdep, +).
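Since ǫ = −∞ absorbs under addition, the component-wise ‘+’ can be sketched directly; Python's float('-inf') already behaves this way. This is a toy encoding of ours, not from the thesis.

```python
EPS = float("-inf")  # ǫ; in Python, -inf + n == -inf, mimicking the absorbing ǫ

def tag_concat(t1, t2):
    """Component-wise concatenation '+' of two Tdep tags (tuples over N ∪ {ǫ})."""
    return tuple(a + b for a, b in zip(t1, t2))

unit = (0, 0, 0)                     # the tag 〈0〉
t = (4, 1, EPS)
assert tag_concat(t, unit) == t      # 〈0〉 is the identity of (Tdep, +)
assert tag_concat((2, EPS, 1), (1, 3, EPS)) == (3, EPS, EPS)  # ǫ absorbs
```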
Next we define the computation rule for tag vectors through the concatenation
of tag pieces.

Definition 6. Tag pieces operate on vectors of tags of dimensionality |V|. Given
τ = (τ_v)_{v∈V} as a vector of tags, τ · µ is a new vector τ^1 of tags defined as

(τ^1)_v = (τ · µ)_v = ⊔_{w∈V} (τ_w · µ(w, v))    (2.2)

For Tdep, the generic tag vector computation rule specializes as:

(τ · µ)_v = ⊔_{w∈V} (τ_w + µ(w, v))    (2.3)
The working of the above computation rule can be demonstrated as follows. Let
us consider a tag vector τ to be concatenated with a tag piece µ so that the
resultant tag vector τ′ is given by τ′ = τ · µ. Consider V = {v1, v2, v3, ···}. We
analyse the computation of τ′_{v1} in τ′ = τ · µ to show why the concatenation rule
is able to capture the event count and the last occurring dependencies for each
variable. We consider a case analysis based on the presence and absence of events
on v1 in µ and their dependencies, as shown in Fig 2.4, which depicts the desired
value of τ′ in each case. For the component τ′_{v1}(v1), the computation rule becomes

τ′_{v1}(v1) = max{ τ_{v1}(v1) + µ(v1, v1)(v1), τ_{v2}(v1) + µ(v2, v1)(v1), τ_{v3}(v1) + µ(v3, v1)(v1), ··· }    (2.4)

For all other dependency components (v_i, say),

τ′_{v1}(v_i) = max{ τ_{v1}(v_i) + µ(v1, v1)(v_i), τ_{v2}(v_i) + µ(v2, v1)(v_i), τ_{v3}(v_i) + µ(v3, v1)(v_i), ··· }    (2.5)

Note that τ_{vi}(v1), i ≠ 1, gives the event on v1 on which some event on the i-th
variable in τ depends, and τ_{v1}(v1) gives the number of events on v1 in τ. Thus,
τ_{v1}(v1) ≥ τ_{vi}(v1), ∀i ≠ 1. Hence, in the computation of τ′_{v1}(v1), row 1 of
the (max, +) computation always has the highest value.

Case 1: (i) If the tag piece µ has no event on v1, then µ(v1, v1)(v1) = 0. Thus,
τ′_{v1}(v1) = τ_{v1}(v1) + µ(v1, v1)(v1) = τ_{v1}(v1).
(ii) Since there is no event on v1 in µ, µ(v_l, v1)(v_k) = ǫ, ∀k, ∀l ≠ 1, and
µ(v1, v1)(v_i) = 0. Thus, τ′_{v1}(v_i) = τ_{v1}(v_i) + µ(v1, v1)(v_i) = τ_{v1}(v_i).

Case 2: (i) If the tag piece µ has an event on v1, then µ(v1, v1)(v1) = 1. Thus,
τ′_{v1}(v1) = τ_{v1}(v1) + µ(v1, v1)(v1) = τ_{v1}(v1) + 1.

Case 2.1: The event on v1 in µ has no (direct) dependency on any event
on other variables, i.e., µ(v_i, v1) = 〈ǫ〉, i ≠ 1. Hence, τ′_{v1}(v_i) = τ_{v1}(v_i), since
µ(v1, v1)(v_i) = 0. This is the case where the older dependency value is preserved.
Figure 2.4: A case based analysis of tag vector computation.
Case 2.2: (i) The event on v1 in µ has a dependency on the last event on
v_i, i ≠ 1, i.e., µ(v_i, v1) = 〈0〉. In that case, τ′_{v1}(v_i) = τ_{vi}(v_i), since
µ(v_i, v1)(v_i) = 0, µ(v1, v1)(v_i) = 0 and τ_{vi}(v_i) ≥ τ_{v1}(v_i), giving us
τ_{vi}(v_i) + µ(v_i, v1)(v_i) ≥ τ_{v1}(v_i) + µ(v1, v1)(v_i). This is the case where
the addition of a new dependency gets directly reflected in the tag vector.
(ii) Next we consider the case of indirect dependency, that is, the event on v1 in µ
has a (direct) dependency on the last event on v_i, i ≠ 1, such that µ(v_i, v1) = 〈0〉,
and no direct dependency on the last event on some variable v_j, j ≠ i, in µ, while
some event on v_i has a dependency on an event on v_j in τ, i.e., τ_{vi}(v_j) ≠ ǫ.³
Recall that the dependency of (some) event on v1 on (some) event on v_j in τ
is given by τ_{v1}(v_j). In the present situation we have two different possibilities,
τ_{v1}(v_j) ≤ τ_{vi}(v_j) and τ_{v1}(v_j) > τ_{vi}(v_j).

Case 2.2.1: τ_{v1}(v_j) ≤ τ_{vi}(v_j). Here,
τ′_{v1}(v_j) = max(τ_{v1}(v_j) + µ(v1, v1)(v_j), τ_{vi}(v_j) + µ(v_i, v1)(v_j)) = max(τ_{v1}(v_j) + 0, τ_{vi}(v_j) + 0) = τ_{vi}(v_j).
If multiple such dependencies get introduced transitively via µ, then the one with
the maximum value prevails. For example, consider another variable v_l with
which (the event on) v1 has a similar dependency scenario as with v_j. In that case,
τ′_{v1}(v_j) = max(τ_{vi}(v_j), τ_{vl}(v_j)) by Eq. 2.5. The dependency scenarios discussed
in the present case are known as indirect or transitive dependencies. The dependency
of some event on v_i on the k-th event on v_j is transitively reflected in the
tag vector τ′ as a dependency of some event on v1 on the k-th event on v_j, due
to the introduction of a dependency of the event on v1 in µ on the last event on
v_i. We will discuss such dependencies further in a later chapter.

Case 2.2.2: τ_{v1}(v_j) > τ_{vi}(v_j). Here,
τ′_{v1}(v_j) = max(τ_{v1}(v_j) + µ(v1, v1)(v_j), τ_{vi}(v_j) + µ(v_i, v1)(v_j)) = max(τ_{v1}(v_j) + 0, τ_{vi}(v_j) + 0) = τ_{v1}(v_j).
Thus, the current transitive dependency does not affect the existing dependency
(of the event on v1 on that on v_j) because, of the transitive dependency relation
from v_j to v1 via v_i and the direct dependency relation from v_j to v1, the latter
is more recent than the former.
The computation of the tag vector τ for the behaviour shown in Fig 2.2 is
given by τ = (··· ((τ^0 · µ^1) · µ^2) ···). The “·” operation is left associative. For
example, τ^5 = τ^4 · µ^5 = [[4, 1, 1]; [ǫ, 2, 1]; [ǫ, 1, 1]] · µ^5 = [[5, 1, 1]; [ǫ, 2, 1]; [ǫ, 2, 2]] =
(τ^3 · µ^4) · µ^5, and ≠ τ^3 · (µ^4 · µ^5). We exemplify our computation rule as follows.
Note that,

τ^5 = τ^4 · µ^5 = [[4, 1, 1]; [ǫ, 2, 1]; [ǫ, 1, 1]] · [[〈1x〉, 〈ǫ〉, 〈ǫ〉]; [〈ǫ〉, 〈0〉, 〈0〉]; [〈ǫ〉, 〈ǫ〉, 〈1z〉]] = [[5, 1, 1]; [ǫ, 2, 1]; [ǫ, 2, 2]]

Considering τ^5_z, let us first compute τ^5_z(x). We find that for the event on z
in µ^5, 1) there exists no dependency on the last event on x, and 2) there exists
a dependency on the last event on y such that τ^4_z(x) = ǫ ≤ τ^4_y(x) = ǫ. The
situation matches case 2.2.1 and hence τ^5_z(x) = τ^4_y(x) = ǫ.
For τ^5_z(y), we have the scenario that in µ^5 there exists a dependency from the last
event on y to the event on z. The situation matches case 2.2 (i) and we have
τ^5_z(y) = τ^4_y(y) = 2.
For τ^5_z(z), we have τ^5_z(z) = τ^4_z(z) + µ^5(z, z)(z) = 1 + 1 = 2 by case 2 of our
analysis.

³The absence of a (direct) dependency of the event on v1 in µ on the last event on v_j is
important in this case analysis. Had there been a direct dependency of the event on v1 in µ
on the last event on v_j, j ≠ 1, the case would have been identical to case 2.2 (i).
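The (max, +) rule of Eq. 2.3 and the worked computation τ^5 = τ^4 · µ^5 can be reproduced mechanically. Below is a minimal sketch in our own encoding (tag vectors as dicts from variables to tuples, tag pieces as dicts from variable pairs to tuples, ǫ as −∞); it is illustrative only, not the thesis's implementation.

```python
EPS = float("-inf")      # ǫ
VARS = ["x", "y", "z"]   # V, in a fixed order

def concat(tau, mu):
    """(τ · µ)_v, component u: max over w of ( τ_w(u) + µ(w, v)(u) )  -- Eq. 2.3."""
    return {
        v: tuple(
            max(tau[w][i] + mu[(w, v)][i] for w in VARS)
            for i in range(len(VARS))
        )
        for v in VARS
    }

# τ^4 and µ^5 from the running example (Fig 2.2):
tau4 = {"x": (4, 1, 1), "y": (EPS, 2, 1), "z": (EPS, 1, 1)}
one = {"x": (1, 0, 0), "y": (0, 1, 0), "z": (0, 0, 1)}   # 〈1_v〉 per variable
zero, eps = (0, 0, 0), (EPS, EPS, EPS)                   # 〈0〉 and 〈ǫ〉
mu5 = {("x", "x"): one["x"], ("x", "y"): eps, ("x", "z"): eps,
       ("y", "x"): eps, ("y", "y"): zero, ("y", "z"): zero,
       ("z", "x"): eps, ("z", "y"): eps, ("z", "z"): one["z"]}

tau5 = concat(tau4, mu5)
assert tau5 == {"x": (5, 1, 1), "y": (EPS, 2, 1), "z": (EPS, 2, 2)}
```

The final assertion recovers exactly τ^5 = [[5, 1, 1]; [ǫ, 2, 1]; [ǫ, 2, 2]] from the worked example; since −∞ + n = −∞ in floating point, the absorbing ǫ needs no special-casing.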
The tag structure Tdep captures the most general dependency scenario; the
underlying tag structures of certain MoCs are special cases of Tdep.
For example, consider the (algebraic) tag structure (Tsynch,≤,⊑,+), used for
modeling synchrony with tags being the reaction indices (Benveniste et al., 2005,
2008) which increase uniformly even in case there is no event on the variable. In
the tag structure Tsynch = (N,≤,⊑), ≤ is a total order (on N) and ⊑ is a flat order
such that τ1 ⊑ τ2 iff τ1 = τ2. More specifically, in a synchronous system, all the
variables have some event at each time step with the empty event (⊥) modeling
the absence of any event. This implies that at any point of computation, the
components of the tag vector are all equal to some natural number. Hence, a
tag piece µ_synch is of the form: µ_synch(v, v) = 〈1v〉 and µ_synch(v, w) = 〈ǫ〉 for
v ≠ w, ∀v, w ∈ V (so that at each computation step, the tag values for all variables
are incremented by 1). In such systems, events on different variables do not have
any causal ordering, which makes ⊑ a flat order.
Similarly, there are other tag structures for modeling various MoCs like asyn-
chronous systems (Ttriv), time-triggered architectures (Ttta), etc. (Benveniste
et al., 2005). The heterogeneous composition of such different MoCs is modeled
using the product tag structures; for example, the tag structure Tsynch×Ttta may
be used for modeling a synchronous system whose input and output environments
are time-triggered (Benveniste et al., 2008).
A tag piece as discussed previously captures a concurrent snapshot of events
on variables and their dependencies. The mechanism of tag vector computation
captures the causality relations among such events. For various aspects of
behavioural analysis of systems which evolve through computation, however, the
effects of such events, in terms of variables assuming values from a domain (D),
are required to be captured in a general functional form. With this motivation,
we associate certain labeling functions with tag vectors and tag pieces, as
formalized in the following definitions.
Definition 7. A labeled tag vector is a pair τ = (τ, κ), where τ is a tag vector
and κ is a partial map given by κ : V → D such that κ(v) indicates the value
assumed by the last event on v (if there is any). Hence, τ_v = (τ_v, κ(v)) depicts
the (tag, value) pair for the last event on v.
For example, consider the labeled tag vectors τ^1 and τ^4 in Fig. 2.2, where the
variables assume values from the domain D = {T, F} ∪ N. We have τ^1 = (τ^1, κ1)
such that κ1(x) = T, κ1(y) = 1 and κ1(z) is not defined, since there exists no
event on z in τ^1. Similarly, we have τ^4 = (τ^4, κ4) such that κ4(x) = F, κ4(y) = 1
and κ4(z) = 2.
For V = {v1, ···, vn}, where v = 〈v1, ···, v_{|V|}〉, we denote the collection (an
ordered tuple) 〈κ(v1), κ(v2), ···, κ(vn)〉 ∈ D^{|V|} as κ(v).
Definition 8. A labeled tag piece is a pair µ = (µ, ς), where µ is a tag piece and
ς is a partial map given by ς : V × V → (D^{|V|} → D) such that ∀u, v ∈ V, ς(u, v)
is defined iff u = v.

Previously we have discussed the operation of tag pieces on vectors of tags.
For defining the operation of labeled tag pieces, consider a labeled tag vector
τ = (τ, κ) and a labeled tag piece µ = (µ, ς) such that⁴

(τ · µ)_v =def ((τ · µ)_v, ς(v, v)(κ(v)))    (2.6)

⁴Note that ‘·’ has been used to define the operation in T as well as in τ · µ; strictly speaking,
these operations are different. We permit this abuse because it will be obvious from the context
which operation is being referred to.
where κ(v) = 〈κ(v1), κ(v2), ···, κ(vn)〉 ∈ D^{|V|} denotes the collection (an
ordered tuple) of values acquired through the respective last events on all
variables in τ. Hence, τ′ = (τ · µ) = (τ′, κ′) such that τ′_v = (τ · µ)_v and κ′(v) =
ς(v, v)(κ(v1), κ(v2), ···, κ(vn)). Observe that ς(v, v) is a function of the type
ς(v, v) : D^{|V|} → D; it takes as argument the collection 〈κ(v1), κ(v2), ···, κ(vn)〉 of
values assumed by the respective last events on all the variables in V of τ = (τ, κ).
Further, ς(v, v) outputs a domain value which is assumed by the new event on v
in µ (if there is any, i.e., if µ(v, v) = 〈1v〉), which becomes the last event on v in τ′.

The function ς(v, v) can be an identity map, a constant function or a general
function. Specifically, for µ(v, v) = 〈0〉 in µ = (µ, ς), we have ς(v, v) = 1 ∘ Π^{|V|}_v,
which indicates taking the v-th projection (using the projection function Π^{|V|}_v :
D^{|V|} → D) from the input argument in D^{|V|} and then applying the identity
map 1 : D → D. The situation µ(v, v) = 〈0〉 indicates the absence of any
event on v in µ, so that, with d = κ(v), κ′(v) = ς(v, v)(d) = 1 ∘ Π^{|V|}_v(d) = 1(κ(v)) = κ(v).
Thus, defining ς(v, v) as an identity map in case of absence of an event on v in
µ provides for preservation of the value of v acquired through the last event
on v prior to the occurrence of µ. For example, consider τ^2 = τ^1 · µ^2 in Fig.
2.2, where τ^1 = (τ^1, κ1) and µ^2 = (µ2, ς2). Since µ2(y, y) = 〈0〉, we have
κ2(y) = ς2(y, y)(κ1(x), κ1(y), κ1(z)) = 1 ∘ Π^{|V|}_y(κ1(x), κ1(y), κ1(z)) = 1(κ1(y)) =
κ1(y) = 1.

In the simplest case, ς(v, v) is a constant function which maps the event
on variable v in µ to some value from the domain D irrespective of the values
assumed by the last events on variables in V of τ. Otherwise, ς(v, v) is a
general function. For example, in the computation τ^4 = τ^3 · µ^4 of Fig. 2.2,
where τ^3 = (τ^3, κ3) and µ^4 = (µ4, ς4), let ς4(y, y)(d1, d2, d3) = d3 − d2. Hence,
κ4(y) = ς4(y, y)(κ3(x), κ3(y), κ3(z)) = ς4(y, y)(T, 1, 2) = 2 − 1 = 1.
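The value-evolution rule κ′(v) = ς(v, v)(κ(v1), ···, κ(vn)) can be sketched as follows. The encoding (dicts of per-variable value maps over V = {x, y, z}) is ours, and the general function ς4(y, y) = d3 − d2 is the assumed example from the paragraph above.

```python
def step_values(kappa, sigma):
    """Apply the value maps ς(v, v) of a labeled tag piece to the value part κ.
    kappa: dict v -> current value; sigma: dict v -> function over all values."""
    args = (kappa.get("x"), kappa.get("y"), kappa.get("z"))
    return {v: f(*args) for v, f in sigma.items()}

# κ3 after τ^3 in Fig 2.2: last values x = T (True), y = 1, z = 2.
kappa3 = {"x": True, "y": 1, "z": 2}
sigma4 = {
    "x": lambda dx, dy, dz: False,    # new event on x carries the value F
    "y": lambda dx, dy, dz: dz - dy,  # general function ς4(y, y) = d3 - d2
    "z": lambda dx, dy, dz: dz,       # no event on z: identity 1 ∘ Π_z
}
kappa4 = step_values(kappa3, sigma4)
assert kappa4 == {"x": False, "y": 1, "z": 2}   # κ4(y) = 2 - 1 = 1
```

Note how the projection-plus-identity map of the text shows up as the trivial `lambda dx, dy, dz: dz` for the event-free variable z.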
Given a tag piece µ, the set V_µ, also called the support of µ, is defined as
V_µ = {v | µ(v, v) = 〈1v〉} ⊆ V. In other words, V_µ comprises all those and only
those variables for each of which µ has an event. For example, in the tagged
behaviour of Fig. 2.2, V_{µ2} = V_{µ5} = {x, z}. The set of all labeled tag pieces
defined over variables in V using tag structure T is denoted by M(V, T).
The formalism of tag machines was proposed in (Benveniste et al.,
2005) as a step towards an operational theory of heterogeneous systems. Tag
machines are finite automaton models which act as finitary generators of tagged
behaviours. In a tag machine, tag pieces representing concurrent event scenar-
ios lead to state transitions. Any run of the machine corresponds to a tagged
behaviour represented by the sequence of tag pieces.
Definition 9. A tag machine is a tuple A = (S, S0, V, T, M, ∆) where,

• S is a finite set of states and S0 ⊆ S is the set of initial states,
• V is a finite set of variables with a finite domain D,
• T is an algebraic tag structure (T, ≤, ⊑, ∗),
• M is a finite set of tag pieces,
• ∆ ⊆ S × M × S is a transition relation.
Figure 2.5: A Tag Machine.
Observe that a tag machine can be conveniently represented as a digraph of
the form (S,∆). As an example, consider the tag machine shown in Fig 2.5.
Observe that the behaviour depicted earlier in Fig 2.2 is generated by the trace
µ^1; µ^2; µ^3; µ^4; µ^2; µ^6 of the tag machine, since µ^2 = µ^5 in the behaviour of
Fig 2.2.
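A tag machine's transition digraph and the notion of a trace can be sketched as follows. This is a toy encoding of ours: the edge set below is an assumption chosen only so that the stated trace is a run (the exact edges of Fig 2.5 are not reproduced here), and a deterministic dict stands in for the more general relation ∆.

```python
# Assumed transition digraph (state, tag-piece name) -> next state.
DELTA = {("s0", "mu1"): "s1", ("s1", "mu2"): "s2", ("s2", "mu3"): "s3",
         ("s3", "mu4"): "s1", ("s2", "mu6"): "s4"}

def accepts(trace, start="s0"):
    """Check that the sequence of tag-piece names labels a run of the machine."""
    state = start
    for name in trace:
        if (state, name) not in DELTA:
            return False
        state = DELTA[(state, name)]
    return True

# The trace generating the behaviour of Fig 2.2 (with µ5 = µ2):
assert accepts(["mu1", "mu2", "mu3", "mu4", "mu2", "mu6"])
```

Folding the tag pieces of such a run over the initial tag vector with the concatenation rule of Eq. 2.3 then yields the tagged behaviour generated by the machine.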
2.4 Conclusion
The present chapter has provided an introduction to the operational semantics
of tagged systems by defining the concepts of tag structures, tag pieces, tag vec-
tors and finally the tag machines. Next we proceed to present a review of the
works reported in the literature on asymptotic performance evaluation of em-
bedded systems with the objective of formulating methodologies for asymptotic
performance evaluation of TSM based heterogeneous system specifications.
Chapter 3
Asymptotic Performance
Analysis of Embedded Systems :
Literature Survey
3.1 Introduction
From the viewpoint of performance evaluation, an embedded system specifica-
tion, when simulated for a specific schedule, can be observed as a collection of
tasks with inter-dependencies. The inter-dependencies may be thought of as an
ordering relation such that the set of tasks may be partitioned into a set of chains
in each of which the tasks are totally ordered w.r.t. the dependence ordering.
In effect, the problem resembles a job-shop scheduling problem in case there is
no ordering among tasks belonging to different chains and a machine scheduling
problem otherwise.
The asymptotic performance evaluation of job-shops is a well studied problem
in discrete event systems theory. A job-shop specification is given as a set of jobs
where each job is given as a sequence of tasks. The asymptotic performance of
any job in a given sequence of tasks is conceived as the ratio of the number of
times the job gets completed over a finite number (say n) of repeated executions
of the sequence to the total time required for these n repeated executions of the
sequence, in the limit of n tending to infinity. This index serves as a measure of how
well the task sequence (i.e., a schedule) performs a specific job. In many
cases, embedded systems are designed to perform certain jobs infinitely while
executing certain task sequences. For all practical purposes, such sequences are
infinite repetitions of finite sequences of tasks. The availability of multiple processing
units allows tasks to execute concurrently whenever possible. Due to the
concurrent and periodic nature of execution, the asymptotic performance of such
infinitely repeating schedules cannot simply be computed as the execution time
of a single round of such schedules.
We provide a brief discussion of the commonly used techniques for asymptotic
performance evaluation of job-shop schedules. Formal approaches for the
more general problem of asymptotic performance evaluation of machine schedul-
ing are not available in the literature to the best of our knowledge (probably
because, in general, job-shops are the most frequent cases). In section 3.2, we
discuss in brief the problem of job-shop scheduling and asymptotic performance
of a job-shop schedule. In section 3.3, we discuss a classical Petri net based ap-
proach for the problem of performance evaluation of job-shop schedules. Next, in
section 3.4, we discuss another approach for the same based on heaps of pieces.
In section 3.5, we discuss the problem of asymptotic performance evaluation for
SDFG based models. Finally, in section 3.6, we summarize our discussions about
the related works with a view to motivating our objectives.
3.2 The Problem of Job-shop Scheduling
In job-shop scheduling the tasks are partitioned into jobs. Each job represents a
total ordering of some tasks and the tasks belonging to different jobs are not or-
dered. Thus, a job-shop specification provides a partial ordering among the tasks
to be performed. Formally, a job-shop comprises a finite set J = {J1, · · · , Jn}
of jobs to be processed on a finite set M = {m1, · · · , mk} of machines. Each
job J i is a finite sequence 〈ti1 · · · tik〉 of tasks. Let T be the set of all tasks over
the jobs. The function ζ : T → M assigns tasks to machines and the function
d : T → N, the duration function, specifies the task duration. Thus, each task
t ∈ T is of the form (ζ(t), d(t)), depicting that the task t is executed by the
machine ζ(t) for time duration d(t). For a job-shop J, there is a partial order ≺J
over T.¹ Each J^i = ⟨t^i_1, t^i_2, · · · , t^i_k⟩ is a chain of the form t^i_1 ≺J · · · ≺J t^i_k, with the
additional property that no task belonging to J^i is in relation ≺J with any task
in J^k, i ≠ k. As an example, consider a set of resources M = {m1, m2, m3, m4}
and a job-shop specification J = {J^1, J^2} given as,

J^1 = a1b1c1d1 = (m3, 3)(m1, 1)(m2, 6)(m4, 5)
J^2 = a2b2c2d2 = (m3, 5)(m2, 4)(m1, 5)(m4, 4)    (3.1)
Hence the set T of tasks is given by T = {a1, b1, c1, d1, a2, b2, c2, d2}. Let θ denote
a finite concatenation of tasks such that θ ∈ T*. We say that θ satisfies the
partial ordering relation ≺J (symbolically denoted as θ |= ≺J) iff
1. for any two tasks a and b in θ, if (a, b) ∈ ≺J, then the n-th occurrence of b in
θ (if there is any) is preceded by the n-th occurrence of a in θ, n ∈ N;
2. the n-th occurrence of any task of a job in θ precedes the k-th occurrence of all
other tasks of that job in θ for k > n.
For any u ∈ T ∗, u is said to be a complete schedule if whenever a job is started
in u, it is always completed in u.
Definition 10. Given a set of tasks T and a job-shop J , the set of all possible
schedules for J is given by SJ = {u | (u ∈ T ∗)∧ (u |=≺J)∧ u is complete} ⊆ T ∗.
The definition of possible schedules extends naturally to infinite schedules
when we consider infinite concatenations of tasks, i.e., T^ω instead of T*. Given
a schedule w ∈ SJ for the job-shop J, we denote by |w|_Ji the number of times
the job J^i is completed in w. By asymptotic performance of a job in a job-shop,
we refer to the throughput of the job, i.e., the number of times the job gets
completed per unit time when all the jobs are executed following some schedule
which is infinitely long in the limiting case.
Definition 11. Given an infinite schedule w = a1a2 · · · ∈ T^ω, with a1, a2, · · · ∈
T, the asymptotic throughput of the job J^i is given by:

λ_Ji = lim inf_{n→∞} |a1a2 · · · an|_Ji / execution time of (a1a2 · · · an)

¹We may ignore the subscript J of ≺J if the meaning is clear from the context.
38Chapter 3 Asymptotic Performance Analysis of Embedded Systems :
Literature Survey
By “inf”, we refer to the infimum or the greatest lower bound. Among
the possible schedules we need to consider only periodic sequences of the form
vω = vvv · · · , where v ∈ T ∗ and v is a complete schedule. With such a restriction,
“the lim inf” becomes a “limit” as discussed in (Gaubert and Mairesse, 1999).
In the present work, we always consider the execution policy of a schedule as
“non-lazy”, that is, the sequence of tasks, as given in the schedule, executes with
maximum possible concurrency, i.e., no task waits if the required resource is free.
For asymptotic performance evaluation of a schedule v for a job-shop J ,
a set of difference equations which relate the completion time of tasks in the
(n + 1)-th iteration with the completion time of tasks in the previous iteration
is required to be constructed. The computation is performed over the semiring
⟨R ∪ {−∞}, ⊕, ·, 𝟘, 𝟙⟩, where the operation ⊕ is defined as a ⊕ b = max(a, b) and
‘·’ is the usual sum, with 𝟘 = −∞ as the identity of ⊕ and 𝟙 = 0 as the identity
of ‘·’. Such an algebraic structure is called a (max,+) semiring and denoted
by Rmax in discrete event systems theory (Baccelli et al., 1992). A detailed
account of the theory of the (max,+) semiring can be found in (Baccelli et al., 1992;
Cuninghame-Green, 1979; Gaubert and Plus, 1997).
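The two semiring operations are small enough to state directly; the following Python fragment (an illustrative sketch, not part of the thesis) encodes ⊕ and ‘·’ with −∞ as the zero element:

```python
# Max-plus (Rmax) scalar operations: 'oplus' is max, 'otimes' is ordinary
# addition; -inf plays the role of the zero element and 0 that of the unit.
NEG_INF = float("-inf")

def oplus(a, b):
    """Semiring addition: a (+) b = max(a, b)."""
    return max(a, b)

def otimes(a, b):
    """Semiring multiplication: a (.) b = a + b, with -inf absorbing."""
    return a + b

# Identities: x (+) -inf == x and x (.) 0 == x; (.) distributes over (+).
assert oplus(3, NEG_INF) == 3
assert otimes(3, 0) == 3
assert otimes(5, oplus(2, 4)) == oplus(otimes(5, 2), otimes(5, 4))
```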
Let the completion time of any n-th iteration of a task t be denoted by t(n).
This notation is known in literature as a dater function since it represents the
completion time (i.e., dates of completion) of the n-th iteration of a component
task. Now, let x(n) ∈ R_max^T denote a vector of dater functions comprising all
the tasks in the job-shop of Eq. 3.1. For asymptotic performance evaluation of
a job-shop schedule v, we require to compute the (max,+) linear representation

x(n+1) = A0x(n+1) ⊕ A1x(n)    (3.2)

where the matrices A0 and A1 have as entries the coefficients of the dater functions
of the individual tasks. It is then transformed to the canonical form

x(n+1) = A0*A1x(n)    (3.3)

where A0* = A0^0 ⊕ A0 ⊕ A0^2 ⊕ · · · ⊕ A0^i ⊕ · · ·, A0^i denoting the matrix
A0 raised to the power i, with A0^0 the identity matrix.² However, A0* =
A0^0 ⊕ A0 ⊕ A0^2 ⊕ · · · ⊕ A0^(|v|−1), since A0^i is the null matrix for i ≥ |v| (Baccelli
et al., 1992, Th. 3.17). The (max,+) eigenvalue ρ(A) of the matrix A = A0*A1
provides the cycle time of the tasks while executing v^ω. The
cycle time denotes the difference in completion times of each task's consecutive
iterations while the schedule is repeated infinitely. For any two tasks ti and tj,
ti(n+1) − ti(n) = tj(n+1) − tj(n) = ρ(A). Finally we have,

λ_Ji = |v|_Ji × ρ(A)^(−1)  ∀J^i ∈ J    (3.4)

²By unrolling Eq. 3.2 (i+1) times, we actually get
x(n+1) = A0^(i+1)x(n+1) ⊕ (A0^0 ⊕ A0 ⊕ A0^2 ⊕ · · · ⊕ A0^i)A1x(n). In the process, the first term
vanishes since A0^i is the null matrix for i ≥ |v|.
Methodologies for performance evaluation of job-shop problems largely differ in
their approaches for computing ρ(A) as will be revealed next.
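The transformation from Eq. 3.2 to Eq. 3.3 hinges on computing A0* as a finite ⊕-sum of powers of A0. The following Python sketch (an illustration under the definitions above, not the thesis's implementation) gives the (max,+) matrix operations and the truncated star:

```python
# Max-plus matrix operations and the star A0* = I (+) A0 (+) A0^2 (+) ...,
# truncated after |v| terms since A0 is nilpotent (A0^i is the all-(-inf)
# matrix for i >= |v|); -inf is the Rmax zero element.
E = float("-inf")

def mp_mul(A, B):
    """Matrix product in Rmax: (AB)[i][j] = max_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[max(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mp_add(A, B):
    """Entry-wise (+), i.e., entry-wise max."""
    n = len(A)
    return [[max(A[i][j], B[i][j]) for j in range(n)] for i in range(n)]

def mp_eye(n):
    """The Rmax identity: 0 on the diagonal, -inf elsewhere."""
    return [[0 if i == j else E for j in range(n)] for i in range(n)]

def mp_star(A0, terms):
    """A0^0 (+) A0 (+) ... (+) A0^(terms-1)."""
    acc, power = mp_eye(len(A0)), mp_eye(len(A0))
    for _ in range(terms - 1):
        power = mp_mul(power, A0)
        acc = mp_add(acc, power)
    return acc
```

With `terms` set to |v|, the canonical matrix of Eq. 3.3 is then `A = mp_mul(mp_star(A0, len_v), A1)`.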
3.3 Petri Net based Performance Evaluation of
Job-Shops
A classical approach for performance evaluation of job-shops constructs an event
graph from a Petri-net based model of the job-shop for a given schedule (Hillion
and Proth, 1989). Given the job-shop J discussed previously, a Petri net model
for the same is first constructed. A Petri net representation of the job-shop
in Eq. 3.1 is given in Fig 3.1 where every possible schedule can be thought
of as a possible firing sequence of the transitions. General discussions on Petri
net theory can be found in (Desel and Esparza, 1995; Brauer et al., 1987) and
(Murata, 1989). The representation scheme is the same as in (Gaubert and
Mairesse, 1999).
Given a schedule v = a1a2b1c1b2d1c2d2 for J , we build an event graph (EG)
representing the system. An event graph is a Petri net with only one possible
firing sequence. Given a Petri-net representation of a job-shop and a schedule
of the same, the corresponding EG may be obtained by duplicating the resource
places in the Petri net and then removing tokens appropriately from some of
them in order to impose an ordering among conflicting tasks as per the given
schedule. In effect, the EG is a Petri net that may execute only that firing
sequence which corresponds to the given schedule. For the job-shop in Eq. 3.1
and the schedule v = a1a2b1c1b2d1c2d2, the corresponding EG is shown in Fig 3.2
following (Baccelli et al., 1992, Sec 2.6).
Figure 3.1: The example job-shop problem shown as a Petri net.
Figure 3.2: The event graph for an infinite schedule (v′)ω with periodic occur-
rences of v′.
We illustrate the construction of difference equations for the job-shop given
in equation 3.1 and the schedule v captured by the event graph of Fig. 3.2. Let
us consider the task d1, for example. The task d1 consumes tokens which are
produced by c1 and d2. Observe from Eq. 3.1 and Fig. 3.2 that the following
facts hold. 1) The (n+1)-th iteration of task d1 is preceded by the (n+1)-th
iteration of task c1 since they belong to the same job J1. 2) The schedule v
specifies that the n-th iteration of task d2 precedes the (n+1)-th iteration of task
d1 and that the (n+1)-th iteration of task d1 precedes the (n+1)-th iteration of task
d2. 3) There exist no other precedence constraints for task d1 to execute since
transition d1 in Fig. 3.2 has only two input places. Hence, we may say that the
(n+1)-th iteration of task d1 may start whenever both the (n+1)-th iteration
of task c1 and the n-th iteration of task d2 are complete.³ Since execution of d1
requires 5 time units, we may write the difference equation as

d1(n+1) = max(c1(n+1), d2(n)) + 5
        = 5 · (c1(n+1) ⊕ d2(n))
        = 5 · c1(n+1) ⊕ 5 · d2(n)
where ‘⊕’ denotes ‘max’ and ‘·’ denotes addition in Rmax. In this way, the
difference equations may be constructed for all the tasks in job-shop J of Eq. 3.1
thus providing the component values of the matrices A0 and A1 in Eq. 3.2. For
example, let the rows and the columns of A0 and A1 be designated in the order
a1 → 1, · · · , d1 → 4, a2 → 5, · · · , d2 → 8. Thus the difference equation

d1(n+1) = 5 · c1(n+1) ⊕ 5 · d2(n)

contributes the following entries in A0 and A1. In A0, the 4th row (for d1(n+1))
has the value 5 in column 3 (for c1(n+1)) and 𝟘 (i.e., −∞) elsewhere. In A1, the 4th row
(for d1(n+1)) has the value 5 in column 8 (for d2(n)) and 𝟘 elsewhere. Applying
the same method for the other tasks, the (max,+) linear representation of Eq.
3.2 can thus be constructed from the event graph based representation. It is
then transformed to the canonical form of Eq. 3.3 thus providing the matrix
A = A∗0A1.
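To make the bookkeeping concrete, the following hypothetical Python fragment (using 0-based indices rather than the 1-based designation above) records the two entries contributed by the difference equation for d1:

```python
# Sketch (not the thesis's code): encode d1(n+1) = 5·c1(n+1) (+) 5·d2(n)
# as entries of the 8x8 matrices A0 and A1, rows/columns ordered
# a1, b1, c1, d1, a2, b2, c2, d2 (0-based here).
E = float("-inf")  # absent dependency: the Rmax zero element
idx = {"a1": 0, "b1": 1, "c1": 2, "d1": 3, "a2": 4, "b2": 5, "c2": 6, "d2": 7}

A0 = [[E] * 8 for _ in range(8)]  # coefficients of x(n+1) on the right-hand side
A1 = [[E] * 8 for _ in range(8)]  # coefficients of x(n)

# d1(n+1) depends on c1(n+1) with weight d(d1) = 5 ...
A0[idx["d1"]][idx["c1"]] = 5
# ... and on d2(n) with the same weight.
A1[idx["d1"]][idx["d2"]] = 5
```

Repeating this for every task's difference equation fills in the full (max,+) linear representation of Eq. 3.2.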
In (Gaubert and Mairesse, 1999), the authors identify a sub-matrix of A
which can be used for computing ρ(A) thus ignoring the other entries. They
consider the set C ⊆ T = {t1, · · · , tk} of tasks such that for any transition in the
EG corresponding to a task t ∈ C, there exists at least one token in one output
3The EG reveals that d1 is preceded by c1 and not b2 as it may seem from the linear
string representation of v. This happens because a schedule does not capture the possibility
of concurrent execution based on resource occupancy.
place. In our example EG, C = {d1, a2, b2, c2, d2}. In effect, C comprises the tasks
which are the last tasks of each job (d1, d2) and the tasks which are last scheduled
on each resource (c2 on m1, b2 on m2, a2 on m3 and d2 on m4 ). In (Gaubert
and Mairesse, 1999), it is observed that ∀tj ∉ C, ∀ti ∈ T, (A0*A1)ij = 𝟘. Thus
A = A0*A1 has only one nontrivial sub-matrix, indexed by C × C and denoted by A|C, such
that ρ(A) = ρ(A|C). The (max,+) eigenvalue ρ(A|C) (denoting the maximum
cycle mean) is then computed by Karp's algorithm (Karp, 1978) in order to provide the
asymptotic throughput as given by Eq. 3.4. The methodology discussed for EG
based performance evaluation is given in the form of an algorithm as follows:
Algorithm 3.1: Classical EG based Performance Evaluation
Input: A job-shop J defined over resources in M and a schedule v.
Output: λ_Ji = |v|_Ji × ρ(A)^(−1) ∀J^i ∈ J.
1. Build the EG representation of the system, using the method discussed
in (Baccelli et al., 1992; Hillion and Proth, 1989).
2. Construct the (max,+) linear representation:
x(n+1) = A0x(n+1) ⊕ A1x(n),
where x, A0, A1 have their usual meanings.
3. Compute the C × C sub-matrix A|C of A = A0*A1.
[Complexity : O(|v|(|J| + |M|) + (|J| + |M|)²)]
4. Compute ρ(A|C) = ρ(A), using Karp's algorithm.
[Complexity : O((|J| + |M|)³)]
[Total complexity : O(|v|(|J| + |M|) + (|J| + |M|)³)].⁴
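Step 4 relies on Karp's characterization of the maximum cycle mean. A compact sketch of Karp's algorithm (an illustrative implementation, not taken from the cited works) over a matrix with −∞ marking absent edges:

```python
# Karp's maximum cycle mean: for a strongly connected digraph, the answer is
# max over nodes v of min over 0 <= k < n of (D_n(v) - D_k(v)) / (n - k),
# where D_k(v) is the maximum weight of a k-edge walk from a fixed source to v.
def karp_mcm(A):
    """A[u][v] is the weight of edge u -> v, -inf meaning 'no edge'.
    Assumes the graph is strongly connected (so it has a cycle)."""
    n = len(A)
    NEG = float("-inf")
    # D[k][v]: maximum weight over all k-edge walks from node 0 to v.
    D = [[NEG] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for v in range(n):
            D[k][v] = max((D[k - 1][u] + A[u][v] for u in range(n)
                           if D[k - 1][u] > NEG and A[u][v] > NEG),
                          default=NEG)
    best = NEG
    for v in range(n):
        if D[n][v] == NEG:
            continue
        best = max(best, min((D[n][v] - D[k][v]) / (n - k)
                             for k in range(n) if D[k][v] > NEG))
    return best
```

For instance, a two-node graph with edges of weight 1 and 3 forming a single cycle has maximum cycle mean (1 + 3)/2 = 2.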
The problem with the above approach is that a new event graph has to be
constructed from the original Petri net model of the job-shop for each schedule
specification. The approach also suffers from the problem of growth of the state-
space size with the schedule length; longer schedules generate larger event graphs.
Next, we discuss the methodology of performance evaluation of job-shops based
on heaps of pieces.
⁴All the methodologies discussed in the present work for performance evaluation use
Karp's algorithm for computing the (max,+) eigenvalue of A, which is the maximum cycle mean.
Algorithms with lower complexity for computing the cycle mean have been proposed,
as in (Dasdan et al., 1998). However, such improvements lead to similar complexity reductions
for all the methodologies discussed here.
3.4 Heap based Method for Performance Eval-
uation
A method for performance evaluation of job-shops using heap models has been
reported in (Gaubert and Mairesse, 1999). Given a job-shop J over a set M
of machines (resources), the initial matrices M(t), ∀t ∈ T, of size (|M| + |J|) ×
(|M| + |J|) are constructed. Recall that ζ and d are the functions which assign
tasks to machines and give their execution times, respectively. For any t ∈ T
which belongs to job J^i ∈ J, the entries of M(t) can be computed as follows.
Let ζ(t) = mj ∈ M. Except for M(t)(mj, mj) and M(t)(J^i, J^i), all diagonal
entries of M(t) will be 𝟙 (= 0), and M(t)(mj, mj) = M(t)(J^i, J^i) = d(t). Except for
M(t)(mj, J^i) and M(t)(J^i, mj), all non-diagonal entries of M(t) will be 𝟘 (= −∞), and
M(t)(mj, J^i) = M(t)(J^i, mj) = d(t). Given a schedule v, the matrix M(v) is
computed by concatenation of the initial matrices in the order specified by
v, where concatenation is matrix multiplication in Rmax. It has been observed in
(Gaubert and Mairesse, 1999) that the matrix A = A0*A1 in Eq. 3.3 for a given
job-shop J with schedule v is the same as M(v). Hence, following Eq. 3.4, the
asymptotic throughput for an infinite schedule (v)^ω is given by,

λ_Ji = |v|_Ji × ρ(M(v))^(−1)  ∀J^i ∈ J    (3.5)

where ρ(M(v)) is the (max,+) eigenvalue of M(v). The basic theory of heaps of
pieces can be found in (Viennot, 1986). The computation of λ_Ji, ∀J^i ∈ J, using
the heap based approach (Gaubert and Mairesse, 1999) is done as follows:
Algorithm 3.2: Heap Automata based Performance Evaluation
Input: A job-shop J defined over resources in M and a schedule v.
Output: λ_Ji = |v|_Ji × ρ(M(v))^(−1), ∀J^i ∈ J.
1. Construct the matrices M(a), ∀a ∈ T.
2. Compute the product M(v) of the matrices M(a), a ∈ T.
[Complexity : O(|v|(|J| + |M|))]
3. Compute ρ(M(v)), using Karp's algorithm.
[Complexity : O((|J| + |M|)³)].
[Total complexity : O(|v|(|J| + |M|) + (|J| + |M|)³)].
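The construction of the matrices M(t) described above can be sketched in Python as follows; this is an illustrative fragment under the stated (|M|+|J|)-dimensional construction, with axis names chosen for the job-shop of Eq. 3.1 (not the thesis's code):

```python
# Heap matrices M(t) over Rmax: rows/columns are the resources m1..m4
# followed by the jobs J1, J2; -inf is the Rmax zero and 0 the unit.
E = float("-inf")
AXES = ["m1", "m2", "m3", "m4", "J1", "J2"]

def piece(machine, job, dur):
    """M(t) for a task t = (machine, dur) of the given job: d(t) on the
    (machine, job) support, the unit 0 on the rest of the diagonal,
    and -inf elsewhere."""
    n = len(AXES)
    M = [[E] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = 0.0
    mi, ji = AXES.index(machine), AXES.index(job)
    M[mi][mi] = M[ji][ji] = M[mi][ji] = M[ji][mi] = float(dur)
    return M

def mp_mul(A, B):
    """Matrix product in Rmax."""
    n = len(A)
    return [[max(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# M(v) for a schedule v is the Rmax product of the task matrices in order,
# here shown for the prefix a1 a2 b1 of the schedule:
tasks = {"a1": ("m3", "J1", 3), "a2": ("m3", "J2", 5), "b1": ("m1", "J1", 1)}
Mv = piece(*tasks["a1"])
for t in ["a2", "b1"]:
    Mv = mp_mul(Mv, piece(*tasks[t]))
```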
The above approach based on heaps of pieces is capable of computing the
performance of infinite schedules of the form (v)ω (where v is a finite schedule)
with no dependency on the length of v. Such an approach avoids the compu-
tational complexity of computing the EG for every specified schedule where the
size of the EG is proportional to the length of v.
Among the different MoCs typically employed for embedded system specifica-
tion, we find only the SDFG formalism to have techniques specifically developed
for direct asymptotic performance analysis. Other specification methods do not
have such analysis methods specifically developed for them and instead rely on an
appropriate representation of the specification as a job-shop (or machine scheduling)
using the task ordering information present in the model, and then apply
the methods reported for EG based or heap based performance evaluation. We
discuss the SDFG-specific techniques next.
3.5 Performance Evaluation of SDFGs
SDFGs are widely used for modeling and analysing signal processing applications,
both concurrent and sequential (Lee and Messerschmitt, 1987). An SDFG is
given by a triple (A,C,E), where A is a set of actors, C a set of channels and
E : A→ N is a mapping specifying the execution time of the actors (Ghamarian
et al., 2006). An actor a ∈ A is a triple (I(a), O(a), E(a)) where I(a) is the
set of input ports, O(a) the set of its output ports (I(a), O(a) ⊆ Ports) and
E(a) is the number of time units it needs from the start of each of its firing
instances to produce token(s) on its output ports corresponding to that firing
instance. Every input (output) port has a consumption (production) rate given
by Rate : Ports → N. If the Rate of an input port i of an actor a is n, then
on each firing of a, n tokens are consumed by a from i; likewise for the rate of
an output port. An SDFG is essentially a composition of its actors, captured by
associating channels, one each from an output port of an actor a to an input
port of an actor b (a, b not necessarily distinct). Thus, the set of channels
C ⊆ Ports². For any actor a, the set of input (output) channels is denoted by
InC(a) (OutC(a)). A channel quantity is a mapping γ : C → N which associates
with each channel the number of tokens present in the channel at any time
instant.
The state of an SDFG is given by a pair (γ, ϑ), where γ is a channel quantity
and ϑ is a map ϑ : A → N^K, called the actor status, which associates with each actor
an ordered K-tuple of numbers representing the remaining times of the different
concurrent firings of the actor. Observe that the maximum number of possible
concurrent firings of an actor is bounded by the actor’s execution time which
is always finite. Hence, K can always be chosen as the least common multiple
(l.c.m.) of the execution times of all the actors. Each state of an SDFG can
be conveniently represented as a digraph (〈A,E〉, C). Consider, for example, an
SDFG G say, with A = {a, b, c}, C = {c1, c2, c3, c4} and E = [2, 1, 1] as depicted
in Fig 3.3. The initial state s0 = (γ0, ϑ0) of G is given by γ0 = [1, 1, 1, 3] and
ϑ0 = [{}, {}, {}] (since no firing has taken place).
Figure 3.3: A sample SDFG G; the numbers at the ports indicate the production
(consumption) rates.
An actor is ready to fire when the number of tokens in the input channels
of the actor are not less than the actor’s consumption rates for the channels.
When an actor is ready to fire, it is enabled. For example, in the initial state s0
of the SDFG of Fig. 3.3, only actor c is not enabled (since channel c3 has only
one token because γ0(c3) = 1 and c consumes three tokens from c3 while firing).
An execution of an SDFG (i.e., a sequence of actor firings) is called self-timed
iff the actors fire as soon as they are enabled in the execution. Given an initial
state of an SDFG, its self-timed execution is always unique due to the ‘as soon
as possible’ semantics of execution. The self-timed execution of an SDFG has
a periodic phase (tail) in which a sequence of states may repeat infinitely. The
number of times an actor fires in the periodic phase of the self-timed execution
of an SDFG is given by a function q : A → N which is called the repetition
vector. The repetition vector q is such that for any channel (o, i) ∈ C from
some actor a (with output port o) to some actor b (with input port i), Rate(o) · q(a) = Rate(i) · q(b). Such equations are
called balance equations. In the periodic phase of the self timed execution, the
actors in the SDFG are fired precisely as often as specified by a repetition vector
so that there is no net effect on the distribution of tokens over all channels.
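Solving the balance equations amounts to propagating rate ratios along channels and scaling to the smallest integer solution. A minimal sketch on a hypothetical two-actor graph (not the SDFG G of Fig. 3.3); the function name and channel encoding are illustrative assumptions:

```python
from fractions import Fraction
from math import lcm

# Solve the balance equations Rate(o)·q(a) = Rate(i)·q(b) by fixing the first
# actor's count at 1, propagating rate ratios along channels, and scaling to
# the smallest integer vector. Assumes a consistent, connected SDFG.
def repetition_vector(actors, channels):
    """channels: list of (src_actor, out_rate, dst_actor, in_rate)."""
    q = {actors[0]: Fraction(1)}
    changed = True
    while changed:                       # propagate until all actors are fixed
        changed = False
        for src, r_out, dst, r_in in channels:
            if src in q and dst not in q:
                q[dst] = q[src] * r_out / r_in
                changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * r_in / r_out
                changed = True
    scale = lcm(*(f.denominator for f in q.values()))
    return {a: int(f * scale) for a, f in q.items()}

# Actor a produces 2 tokens per firing, b consumes 3 per firing:
print(repetition_vector(["a", "b"], [("a", 2, "b", 3)]))  # → {'a': 3, 'b': 2}
```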
A transition t in an SDFG is a tuple t ∈ A × {↑, ↓, ↕, −}, where the ↑-mark
beside an actor indicates the start of that actor's firing and ↓ indicates the end of
the actor firing. The symbol ↕ indicates the start and end of an actor firing within a
single unit of time, and the absence of any such arrow (marked by ‘−’) indicates
the continuation of an actor firing started at some previous transition. Fig 3.4
shows the self-timed execution of the SDFG G as defined previously.
Figure 3.4: The self-timed execution of G (with tail σ say)
The different states of the SDFG in the self-timed execution are as follows:
s0 = (γ0, ϑ0) = ((1,1,1,3), ({},{},{})), s1 = (γ1, ϑ1) = ((0,0,3,1), ({1},{},{})),
s2 = ((1,1,3,1), ({},{},{})), s3 = ((0,1,0,4), ({1},{},{})), s4 = ((1,1,2,2), ({},{},{})),
s5 = ((0,0,4,0), ({1},{},{0})), s6 = ((1,1,4,0), ({},{},{})), s7 = ((0,1,1,3), ({1},{},{})).
Given the tail σ of a self-timed execution, the throughput of an actor a in σ
is defined as Th(σ, a) = lim_{t→∞} |σ|^t_a / t (provided the limit exists), where |σ|^t_a denotes
the number of occurrences of the transition (a, ↓) up to the t-th transition in σ.
By Th(a) we mean the actor throughput for the self-timed execution (Sriram
and Bhattacharya, 2000; Govindarajan and Gao, 1995) (which, being unique,
allows σ to be dropped from the notation). The throughput of an SDFG X = (A,C,E) with repetition
vector q is Th(X) = min_{a∈A} Th(a)/q(a). Given a self-timed execution of an SDFG,
techniques for computing the associated throughput (for the periodic phase) have
been discussed in (Ghamarian et al., 2006). The work reported there proposes
a simulation framework which executes the periodic phase of the self-timed
execution of an SDFG in order to generate the completion times of actor firings
in the form of Eq. 3.3. The subsequent steps of the method are the same as
those in Karp's algorithm.
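As a toy instantiation of Th(X) = min over a ∈ A of Th(a)/q(a), with made-up actor throughputs and repetition vector (hypothetical numbers, not values computed from G):

```python
# Hypothetical example of the SDFG throughput formula: the graph throughput
# is limited by the actor whose per-iteration throughput is smallest.
Th = {"a": 0.9, "b": 0.5, "c": 0.3}   # assumed actor throughputs
q = {"a": 3, "b": 2, "c": 1}          # assumed repetition vector
throughput = min(Th[x] / q[x] for x in Th)  # → 0.25, limited by actor b
```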
3.6 Conclusion
As may be noted from the survey of related works pertaining to the asymptotic
performance evaluation of embedded systems, the general technique for such
an initiative largely relies on the construction of a job-shop or machine scheduling
problem corresponding to the system specification given in a high-level model.
There is an absence of methodologies for evaluating the asymptotic performance of
periodic task sequences directly from system level specifications (even for specifications
using single MoCs). For evaluating such performance measures, the corresponding
job-shop needs to be constructed from the specification model (apart from
the SDFG based models) and then the techniques discussed become applicable.
This happens because the dependency information among events is implicit
in commonly used specifications and has to be specially captured by a suitable
job-shop. This information, however, is an explicit property of tagged system
representations. Thus, from a tag machine based model, the recurrence rela-
tions among the tasks can be directly constructed for a given execution scenario.
Hence, a tag machine based representation of the overall system may be used
for the asymptotic performance evaluation of a heterogeneous embedded system
specification. The development of a methodology for performance evaluation of
schedules based on tag machines has the potential to be a uniform methodology
which may be applied across different MoCs and their composition provided the
MoCs can be translated to the corresponding tag machines. A given schedule
of execution will then represent an execution run of the tag machine. It will
later be revealed that with appropriate choice of tag structures and associated
computing methods, a run of a tag machine will depict a job-shop or machine
scheduling problem whose asymptotic throughput may then be computed. In
the next chapter, we derive a methodology for performance evaluation of
execution scenarios in a tag machine by modeling job-shop schedules using the
formalism of tag machines.
Chapter 4
Tag Machine based Performance
Analysis of Job-shop Schedules
4.1 Introduction
An embedded system specification typically comprises many submodules each
of which can be observed as a collection of tasks with inter-dependencies. The
inter-dependencies may be thought of as an ordering relation such that the set
of tasks may be partitioned into a set of chains in each of which the tasks are
totally ordered with respect to the dependence ordering. In effect, the problem
resembles a job-shop scheduling problem in case there is no ordering among tasks
belonging to different chains and a machine scheduling problem otherwise. The
problem of job-shop scheduling is the most well-studied subclass of the more
generic machine scheduling problem (Abdeddaïm et al., 2006).
A methodology for evaluating the asymptotic performance of job-shop sched-
ules using TSMs may prove to be easily adaptable for performance evaluation
of systems modeled using other MoCs since the model of tagged systems is flex-
ible enough to capture different MoCs (using different or same tag structures)
(Benveniste et al., 2005) and their composition (Benveniste et al., 2008). It
is possible to capture specifications given directly as execution scenarios using
the denotational semantics of tagged systems. The finitary representation, how-
ever, is required when the original specification is also a finitary representation,
given using MoCs like timed automata or finite automata. Hence, we choose to
base our performance evaluation methodologies on the operational semantics of
TSMs, i.e., the tag machines, where the set of all possible behaviours forms
the language of the tag machine.
In this chapter, we formulate a methodology for asymptotic performance
evaluation of job-shop schedules using tag machines. The ordering of tasks in a
job-shop is akin to the partial ordering of events in any tagged system with the
underlying tag structure Tdep. Thus, the performance of any system which can be
modeled using Tdep can be evaluated using the proposed method. We start with
simple tag machine specifications representing the job-shop problems and find
out methodologies for performance evaluation of job-shop schedules represented
by runs of tag machines. We model each job as a single tag machine such that
the composite tag machine represents the overall job-shop specification and any
trace of the machine from the start to the final state results in a valid job-shop
schedule. We propose a new tag structure using which the computation of a tag
vector corresponding to a job-shop schedule captures all the relevant dependency
information required for performance evaluation of that schedule. Embedded
system specifications are typically given using high-level MoCs. Unlike job-shops,
the ordering of tasks in such specifications and their execution scenarios may
not be explicitly captured. However, if it is possible to translate such
specifications to suitable tag machines, the tag vectors computed for different execution
scenarios will then be capable of automatically constructing the task level re-
currence relations (in the form of Eq. 3.2) required for asymptotic performance
evaluation.
In section 4.2, we discuss the modeling of job-shop specifications using tag
machines. In section 4.3, we discuss the insufficiency of the known tag structures
in capturing the information necessary for evaluating the asymptotic performance
of job-shops and in section 4.4, we propose an alternative tag structure for the
same. In section 4.5, we propose a methodology for evaluating the asymptotic
performance of a job-shop modeled using tag machines. We further show in
sections 4.6 and 4.7 how tag vectors can be computed from execution runs of
SDFGs and heterogeneous compositional scenarios involving Dataflow (DF) and
Discrete Event (DE) modules.
4.2 Modeling Job-shop Schedules using Tag Ma-
chines
A job-shop specification provides a partial ordering among the tasks to be performed.
Formally, a job-shop comprises a finite set J = {J1, J2, · · · , Jx} of jobs
to be processed on a finite set M = {m1, m2, · · · , my} of resources. Each job J^i
is a finite sequence t^i_1 · · · t^i_k of tasks. Let T be the set of all tasks over the jobs.
The function ζ : T → M assigns tasks to machines and the function d : T → N,
the duration function, specifies the task duration. Thus, each task t ∈ T is of
the form (ζ(t), d(t)), depicting that the task t is executed by the machine ζ(t)
for time duration d(t).
We construct a tag machine TMi for each job J^i. For each resource mi,
we consider two variables mi and m̄i in the tag machines, denoting occupancy
of resource mi and waiting for resource mi, respectively. Also, we have an
additional variable f, events on which denote the completion of a job. Thus,
any job J^i = t^i_1 t^i_2 · · · t^i_k = {(ζ(t^i_1), d(t^i_1)), · · · , (ζ(t^i_k), d(t^i_k))} contributes the set
Vi = ∪_{j=1..k} {ζ(t^i_j), ζ(t^i_j)¯} ∪ {f} of variables to the underlying tagged system,
the set V of variables being given by V = ∪_{i=1..|J|} Vi. The set Si of states for TMi is
such that |Si| = |Vi|. Each state s ∈ Si is labeled with a single variable v ∈ Vi
such that entry to s denotes the occurrence of an event on v.
The tag machines are equipped with two kinds of transitions: the event transitions
modeling the change in state and the delay transitions modeling the passage
of time in a single state. The tag pieces associated with the event transitions are
called event tag pieces and the tag pieces associated with the delay transitions
are called delay tag pieces. The incoming event transition to a state mi (m̄i) from
a state mj has an associated event tag piece jµi (jµ̄i), j ≠ i. Similarly, the incoming
event transition to a state mi (m̄i) from a state m̄j has an associated event tag
piece jµi (jµ̄i), j ≠ i. A delay transition for a state mi (m̄i) has an associated delay
tag piece µ′i (µ̄′i).
Depending on the allocation and release of resources, the event transitions in
a tag machine can be classified as follows.
1. Consider an event tag piece kµk for the component tag machine TMi denoting
52 Chapter 4 Tag Machine based Performance Analysis of Job-shop Schedules
an event transition from the state labeled with variable mk to the state labeled
with variable mk accompanied with an event on the variable mk. Such a tag-
piece is associated with a transition allocating the resource mk to the job Ji after
waiting for the resource to be free, if needed.
2. The tag piece associated with an event transition leaving the state mj and
going to a state mk is given by jµk. Such a tag piece denotes the waiting phe-
nomenon for resource mk after releasing resource mj.
3. The tag-piece associated with an event transition leaving the state $m_j$ and going to a state $m_k$ is given by ${}_j\mu_k$. Such a tag-piece denotes the immediate allocation of resource $m_k$ after releasing resource $m_j$.
A delay tag-piece $\mu'_k$ ($\mu'_{\overline{k}}$) indicates an event on variable $m_k$ ($\overline{m}_k$). Delay tag-pieces act as labels of self-loops indicating either utilization of a resource ($m_k$) or waiting for a resource to be free ($\overline{m}_k$).
Consider, for example, a job-shop specification given as $J_1 = (m_1, 4), (m_2, 5)$; $J_2 = (m_1, 3)$. The tag machines $TM_1$ and $TM_2$ for the jobs $J_1$ and $J_2$, with the underlying set $V$ of variables given by $V = \{m_1, \overline{m}_1, m_2, \overline{m}_2, f\}$, are shown in Fig 4.1.

[Figure 4.1: Tag machines for $J_1$ and $J_2$. $TM_1$ has the states $\overline{m}_1, m_1, \overline{m}_2, m_2, f$ and $TM_2$ has the states $\overline{m}_1, m_1, f$; event tag-pieces label the state-changing transitions and delay tag-pieces label the self-loops.]
Given the row and column designation order $\langle \overline{m}_1, m_1, \overline{m}_2, m_2, f \rangle$, the representative event tag-piece ${}_{\overline{2}}\mu_2$ and delay tag-piece $\mu'_2$ are given respectively by,

$$
{}_{\overline{2}}\mu_2 = \begin{bmatrix}
\langle 0 \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle 0 \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle 0 \rangle & \langle 0 \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle 1_{m_2} \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle 0 \rangle
\end{bmatrix}, \quad
\mu'_2 = \begin{bmatrix}
\langle 0 \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle 0 \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle 0 \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle 1_{m_2} \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle 0 \rangle
\end{bmatrix}
$$
4.2 Modeling Job-shop Schedules using Tag Machines 53
The component values for the delay tag-piece $\mu'_2$ (the second matrix) are the same as those for ${}_{\overline{2}}\mu_2$ (the first matrix), except for the $(\overline{m}_2, m_2)$-th component, which is $\langle \epsilon \rangle$ instead of $\langle 0 \rangle$ as in ${}_{\overline{2}}\mu_2$ (due to the absence of any dependency from $\overline{m}_2$ to $m_2$ in $\mu'_2$).
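As a sketch, the comparison above can be replayed with a hypothetical sparse encoding of the two matrices (the dictionary representation, the variable names with a `bar` suffix, and the string marker for $\langle 1_{m_2} \rangle$ are all illustrative assumptions, not part of the formalism; entries not stored stand for $\langle \epsilon \rangle$):

```python
# Row/column order <m1bar, m1, m2bar, m2, f>; missing keys are <epsilon> entries.
EPS = None  # stands for an <epsilon> entry

event_piece = {  # the event tag-piece allocating m2 after waiting
    ('m1bar', 'm1bar'): 0, ('m1', 'm1'): 0,
    ('m2bar', 'm2bar'): 0, ('m2bar', 'm2'): 0,
    ('m2', 'm2'): '1_m2', ('f', 'f'): 0,
}
delay_piece = {  # the delay tag-piece for state m2 (no m2bar -> m2 dependency)
    ('m1bar', 'm1bar'): 0, ('m1', 'm1'): 0,
    ('m2bar', 'm2bar'): 0,
    ('m2', 'm2'): '1_m2', ('f', 'f'): 0,
}

# The two pieces differ only in the m2bar -> m2 dependency entry.
keys = set(event_piece) | set(delay_piece)
diff = {k for k in keys if event_piece.get(k, EPS) != delay_piece.get(k, EPS)}
print(diff)  # {('m2bar', 'm2')}
```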
4.2.1 Composition of Tag Machines
Let $V$ be the set of all variables of the component tag machines modeling individual jobs. We partition $V$ as $V = V^{sa} \cup V^{nsa}$ (a disjoint union), where $V^{sa}$ denotes the set of variables on which events due to multiple tag machines are allowed simultaneously (“sa” stands for “simultaneously activable”) and the opposite holds for $V^{nsa}$. For a better understanding, consider a variable $\overline{m}_i$ which denotes waiting for the resource $m_i$ to be free. Clearly $\overline{m}_i \in V^{sa}$, since more than one tag machine can simultaneously wait for $m_i$, thereby bringing about simultaneous events on $\overline{m}_i$. Now consider the variable $m_i$ corresponding to usage of the resource $m_i$, which can serve only a single task at a time. Clearly, $m_i \in V^{nsa}$; the variable $f \in V^{sa}$, since multiple tag machines can reach the final state at the same instant. This scenario underlines the need for mutual exclusion between transitions of different tag machines trying to cause simultaneous events on the same variable $m_i \in V^{nsa}$ in the composed model. Such a requirement is satisfied by incorporating a suitable unification condition for tag pieces.
The partial orders $\leq$ and $\sqsubseteq$ and the relation $\bowtie$ defined for tags extend component-wise to tag pieces as follows. Given two tag pieces $\mu_1$ and $\mu_2$ belonging to two different tag machines corresponding to different jobs, we have $\mu_1 \bowtie \mu_2$ iff $\forall (w, v) \in V \times V,\ \mu_1(w,v) \bowtie \mu_2(w,v)$, where $\mu_i : V \times V \to T$, $i = 1, 2$. For handling mutual exclusion, we modify the unification condition as follows.

$$\mu_1 \bowtie \mu_2 \ \text{iff}\ \begin{cases} \forall (w, v) \in V \times V,\ \mu_1(w,v) \bowtie \mu_2(w,v), \text{ and} \\ (V_{\mu_1} \cap V_{\mu_2}) \cap V^{nsa} = \phi \end{cases} \tag{4.1}$$

Recall that $V_{\mu_1} \subseteq V$ comprises the variables which have events in $\mu_1$, and similarly for $V_{\mu_2}$. The first clause in condition 4.1 is the usual unification condition, while the second one ensures that the unified tag pieces do not have simultaneous events on some variable in $V^{nsa}$. For further illustration of the unification of tag pieces, consider the unification scenario shown in Fig 4.2, depicting allocation of resources $m_2$ and $m_1$ to jobs $J_1$ and $J_2$ respectively for the job-shop given in Fig 4.1. The event transitions denoting allocation of resources $m_2$ and $m_1$ (after
[Figure 4.2: Unification of event tag-pieces: ${}_{\overline{2}}\mu_2$, ${}_{\overline{1}}\mu_1$ and ${}_{\overline{2}}\mu_2 \sqcup {}_{\overline{1}}\mu_1$.]
waiting) are depicted by the tag pieces ${}_{\overline{2}}\mu_2$ and ${}_{\overline{1}}\mu_1$ respectively. Note that for the tag pieces ${}_{\overline{2}}\mu_2$ and ${}_{\overline{1}}\mu_1$, the first clause in Eq. 4.1 holds since the upper bounds exist, by taking componentwise maximums. Also, $V_{{}_{\overline{2}}\mu_2} = \{m_2\}$ and $V_{{}_{\overline{1}}\mu_1} = \{m_1\}$, so that the second clause is also true. Hence, the tag pieces are unifiable and we have the following unified tag piece:

$$
{}_{\overline{2}}\mu_2 \sqcup {}_{\overline{1}}\mu_1 = \begin{bmatrix}
\langle 0 \rangle & \langle 0 \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle 1_{m_1} \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle 0 \rangle & \langle 0 \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle 1_{m_2} \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle 0 \rangle
\end{bmatrix}
$$
In the unified tag piece ${}_{\overline{2}}\mu_2 \sqcup {}_{\overline{1}}\mu_1$, the first tag piece is contributed by $TM_1$ and the second by $TM_2$. Observe from the entries in ${}_{\overline{2}}\mu_2 \sqcup {}_{\overline{1}}\mu_1$ that

1. there exists an event on $m_1$ which is preceded by the last event on $\overline{m}_1$,
2. there exists an event on $m_2$ which is preceded by the last event on $\overline{m}_2$, and
3. there exist no other events.

Thus, the tag piece ${}_{\overline{2}}\mu_2 \sqcup {}_{\overline{1}}\mu_1$ denotes simultaneous allocation of resources $m_2$ and $m_1$ to jobs $J_1$ (modeled by $TM_1$) and $J_2$ (modeled by $TM_2$) respectively.
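The unification check of Eq. 4.1 can be sketched as follows. This is a simplified model, not the author's implementation: tag pieces are reduced to their event sets and sparse numeric entries, and the componentwise upper bound of the first clause is assumed to always exist and to be the componentwise maximum, as in the example above; the names `V_NSA`, `unifiable` and `unify` are illustrative.

```python
EPS = float('-inf')  # epsilon component: identity for max

V_NSA = {'m1', 'm2'}  # resource-usage variables, requiring mutual exclusion

def unifiable(events1, events2):
    """Second clause of Eq. 4.1: no simultaneous events on V^nsa."""
    return not ((events1 & events2) & V_NSA)

def unify(mu1, mu2):
    """First clause: componentwise upper bound, here the componentwise max."""
    return {k: max(mu1.get(k, EPS), mu2.get(k, EPS))
            for k in set(mu1) | set(mu2)}

# 2mu2 has events only on m2 and 1mu1 only on m1: unifiable.
print(unifiable({'m2'}, {'m1'}))   # True
# Two pieces that both put an event on m1 (in V^nsa) are not unifiable.
print(unifiable({'m1'}, {'m1'}))   # False
```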
The tag machine $TM$ obtained by composing $TM_1$ and $TM_2$ is shown in Fig 4.3. The composite machine is obtained by unifying the event tag-pieces and delay tag-pieces of the individual machines using the rule defined in Eq. 4.1. In Fig 4.3, the tag piece $\mu'_2 \sqcup \mu'_1$ (within the dotted box) depicts the scenario where $J_1$ occupies resource $m_2$ and $J_2$ occupies resource $m_1$ simultaneously. Similarly, ${}_2\mu_f \sqcup {}_1\mu_f$ depicts simultaneous release of resources $m_2$ and $m_1$ by jobs $J_1$ and $J_2$ respectively. Also, $\mu'_{\overline{1}} \sqcup {}_{\overline{1}}\mu_1$ denotes that job $J_1$ is waiting for resource $m_1$ while
[Figure 4.3: Composite tag machine for the tag machines in Fig 4.1. Its states are pairs of component states, from $(\overline{m}_1, \overline{m}_1)$ to $(f, f)$, and each transition is labeled with a unified tag piece such as ${}_{\overline{2}}\mu_2 \sqcup {}_{\overline{1}}\mu_1$ or $\mu'_2 \sqcup \mu'_1$.]
$J_2$ gets $m_1$ allocated.
4.2.2 Job-Shop Schedules as Tag Vectors
In the present work, we always consider non-lazy executions of job-shop schedules. In a non-lazy execution of a schedule, any task $t$ is started as soon as all the preceding tasks have finished execution and the resource $\zeta(t)$ is free. Consider the job-shop $J_1 = a_1 b_1 = (m_1, 4), (m_2, 5)$; $J_2 = a_2 = (m_1, 3)$ with a non-lazy execution of the schedule $\chi = a_1 a_2 b_1$ in Fig. 4.3. Since the execution is non-lazy,
$J_1$ will immediately occupy resource $m_1$ while $J_2$ waits for $m_1$ to be free. In effect, the transition with tag piece ${}_{\overline{1}}\mu_1 \sqcup \mu'_{\overline{1}}$, comprising an event transition with tag piece ${}_{\overline{1}}\mu_1$ in $TM_1$ and a simultaneous delay transition with tag piece $\mu'_{\overline{1}}$ in $TM_2$, will be executed in the composite tag machine. Task $a_1$ consumes 4 units of time on $m_1$, of which the first unit of time is captured by the event transition itself. Hence, in a non-lazy execution, the composite tag machine will execute the delay transition with tag piece $\mu'_1 \sqcup \mu'_{\overline{1}}$ three times in order to complete the task $a_1$ of $J_1$ while $J_2$ waits for $m_1$ to be released by $J_1$. Following the completion of task $a_1$, job $J_1$ immediately occupies resource $m_2$ and starts task $b_1$, while $J_2$ immediately occupies resource $m_1$ (freed by $J_1$) and starts task $a_2$. The situation is captured by the transition with tag piece ${}_1\mu_2 \sqcup {}_{\overline{1}}\mu_1$. Since $d(b_1) = 5$ and $d(a_2) = 3$ and the transition completes one unit of both jobs, the delay transition with tag piece $\mu'_2 \sqcup \mu'_1$ is executed twice, thus leading to the completion of $a_2$ while $b_1$ still requires 2 time units to complete. Due to the non-lazy nature of the execution, the transition with tag piece $\mu'_2 \sqcup {}_1\mu_f$ is then executed, denoting 1 time unit of execution of $b_1$ and the immediate release of $m_1$ by $J_2$, leading to the completion of $J_2$. Next, the remaining 1 time unit of execution of $b_1$ is achieved by executing the transition with tag piece $\mu'_2 \sqcup \mu'_f$, and subsequently the transition with tag piece ${}_2\mu_f \sqcup \mu'_f$ is executed to denote the completion of $J_1$.

The tag vector generated by the concatenation of tag pieces for the non-lazy execution of schedule $\chi$ is given by $\tau^\chi = \tau^0 \cdot ({}_{\overline{1}}\mu_1 \sqcup \mu'_{\overline{1}}) \cdot (\mu'_1 \sqcup \mu'_{\overline{1}})^3 \cdot ({}_1\mu_2 \sqcup {}_{\overline{1}}\mu_1) \cdot (\mu'_2 \sqcup \mu'_1)^2 \cdot (\mu'_2 \sqcup {}_1\mu_f) \cdot (\mu'_2 \sqcup \mu'_f) \cdot ({}_2\mu_f \sqcup \mu'_f)$.
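The timing of this walkthrough can be cross-checked with a small greedy simulator. This is only a sketch of the non-lazy policy stated above; the job and schedule encodings (`jobs`, `schedule`, `machine_free`, `job_ready`) are illustrative names, not part of the tag-machine formalism.

```python
# Non-lazy execution of the schedule chi = a1 a2 b1 for
# J1 = (m1, 4), (m2, 5) and J2 = (m1, 3).
jobs = {'J1': [('m1', 4), ('m2', 5)], 'J2': [('m1', 3)]}
schedule = [('J1', 0), ('J2', 0), ('J1', 1)]     # a1, a2, b1

machine_free = {}                  # time at which each resource is next free
job_ready = {j: 0 for j in jobs}   # time at which each job may start its next task
finish = {}
for job_id, k in schedule:
    m, d = jobs[job_id][k]
    start = max(job_ready[job_id], machine_free.get(m, 0))  # non-lazy: earliest legal start
    finish[(job_id, k)] = start + d
    machine_free[m] = job_ready[job_id] = start + d

print(finish)  # a1 ends at 4; a2 at 7 (1 + 3 delay steps of waiting, then 3 units); b1 at 9
```

The totals agree with the walkthrough: $a_1$ occupies $m_1$ for the first 4 time units while $J_2$ waits, $a_2$ then takes $m_1$ for 3 units, and $b_1$ completes at time 9 on $m_2$.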
An important observation regarding the use of tag machines for evaluating the asymptotic performance of job-shop schedules is that we only need to consider the variables corresponding to resource access, i.e., the set $V^{nsa}$, for our analysis. The other variables ($\in V^{sa}$) correspond either to waiting states or to final states of a tag machine and do not contribute to the dynamics of the system. This is because, in the case of a non-lazy execution of a schedule, a sequence of concurrent events on any subset of $V^{sa}$ is concurrent with events on the variables in $V^{nsa}$. In other words, the completion of a job-shop schedule can be conclusively determined by checking whether the number of events on the variables in $V^{nsa}$ is in accordance with the job-shop specification and whether the dependencies between events in $V^{nsa}$ satisfy the precedence relations given in the specification.
For computing the tag vector of a job-shop schedule while being restricted to the variables in $V^{nsa}$, the following rules are honoured.

1. Any tag piece of the form ${}_{\overline{k}}\mu_k$, representing allocation of $m_k$ to some task $t$ in job $J^i$ (after waiting for $m_k$ to be free), is replaced by the tag piece ${}_j\mu_k$ provided the task immediately preceding $t$ required the resource $m_j$. If there exists no such preceding task, then ${}_{\overline{k}}\mu_k$ is replaced by the tag piece $\mu'_k$, since $t$ is then the first task to execute and has no dependency on any preceding task.

2. Tag pieces containing events only on variables in $V^{sa}$ are deleted.
Based on these rules, the tag vector

$$\tau^\chi = \tau^0 \cdot ({}_{\overline{1}}\mu_1 \sqcup \mu'_{\overline{1}}) \cdot (\mu'_1 \sqcup \mu'_{\overline{1}})^3 \cdot ({}_1\mu_2 \sqcup {}_{\overline{1}}\mu_1) \cdot (\mu'_2 \sqcup \mu'_1)^2 \cdot (\mu'_2 \sqcup {}_1\mu_f) \cdot (\mu'_2 \sqcup \mu'_f) \cdot ({}_2\mu_f \sqcup \mu'_f)$$

becomes

$$\tau^\chi|_{V^{nsa}} = \tau^0 \cdot (\mu'_1) \cdot (\mu'_1)^3 \cdot ({}_1\mu_2 \sqcup \mu'_1) \cdot (\mu'_2 \sqcup \mu'_1)^2 \cdot (\mu'_2) \cdot (\mu'_2) = \tau^0 \cdot (\mu'_1)^4 \cdot ({}_1\mu_2 \sqcup \mu'_1) \cdot (\mu'_2 \sqcup \mu'_1)^2 \cdot (\mu'_2)^2.$$
Observe that the unified tag piece $({}_{\overline{1}}\mu_1 \sqcup \mu'_{\overline{1}})$ in the computation of $\tau^\chi$ becomes $(\mu'_1)$ in the computation of $\tau^\chi|_{V^{nsa}}$. This happens because the tag piece ${}_{\overline{1}}\mu_1$ in the unified tag piece gets changed to $\mu'_1$ by the second clause of rule 1, and the tag piece $\mu'_{\overline{1}}$ in the unified tag piece gets deleted by rule 2. The transformations of the other tag pieces are performed similarly using the above rules. In the computation of $\tau^\chi|_{V^{nsa}}$, the initial tag vector $\tau^0$ and the tag pieces have the row and column designation order $\langle m_1, m_2 \rangle$. From here onwards, we will always consider schedules as non-lazy and tag vectors as being restricted to $V^{nsa}$ unless mentioned otherwise. Let us consider the job-shop specification $J$ given by,
the job-shop specification J given by,
J1 = a1b1c1d1 = (m3, 3)(m1, 1)(m2, 6)(m4, 5)
J2 = a2b2c2d2 = (m3, 5)(m2, 4)(m1, 5)(m4, 4)(4.2)
For J , a concatenation of tasks is given by v = a2a1b1b2c1d1c2d2. Clearly v ∈ SJ
(the finite collection of order preserving sequences of tasks with completed jobs
as introduced in chapter 3, definition 10.), since the relative ordering of tasks for
each component job (a1 ≺ b1 ≺ c1 ≺ d1 and a2 ≺ b2 ≺ c2 ≺ d2) is honoured in v
and both the component jobs start and finish in v. In the given job-shop J with
two disjoint chains, each of length 4, the number of feasible schedules is given by8!
4!4!.
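The count can be checked directly, since a feasible schedule here is exactly an interleaving of the two chains; choosing the positions of the four $J_1$-tasks among the eight slots fixes the schedule (a quick sanity check, not part of the formalism):

```python
from math import comb, factorial
from itertools import combinations

# Choosing the positions of J1's four tasks among eight slots fixes a schedule.
n_schedules = sum(1 for _ in combinations(range(8), 4))
print(n_schedules, factorial(8) // (factorial(4) * factorial(4)), comb(8, 4))  # 70 70 70
```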
Executing the schedule $v$ in the composite tag machine, we have the overall tag vector given by

$$\tau^v = [\tau^v_{m_1};\ \tau^v_{m_2};\ \tau^v_{m_3};\ \tau^v_{m_4}] = \tau^0 \cdot (\mu'_3)^5 \cdot (\mu'_3 \sqcup {}_3\mu_2) \cdot (\mu'_3 \sqcup \mu'_2)^2 \cdot ({}_3\mu_1 \sqcup \mu'_2) \cdot ({}_1\mu_2 \sqcup {}_2\mu_1) \cdot (\mu'_2 \sqcup \mu'_1)^4 \cdot (\mu'_2) \cdot ({}_2\mu_4) \cdot (\mu'_4)^4 \cdot ({}_1\mu_4) \cdot (\mu'_4)^3$$

such that,

$$\tau^v = [[6, 10, 8, \epsilon];\ [1, 10, 8, \epsilon];\ [\epsilon, \epsilon, 8, \epsilon];\ [6, 10, 8, 9]] \tag{4.3}$$

where the column designation order for each component of $\tau^v$ is $\langle m_1, m_2, m_3, m_4 \rangle$. For example, $\tau^v_{m_1} = \langle 6, 10, 8, \epsilon \rangle$, where $\tau^v_{m_1}(m_1) = 6$, $\tau^v_{m_1}(m_2) = 10$, etc.
4.3 Inadequacy of known tag structures
Consider $I$ to be a relation, called the independence relation, on the set of tasks $T$, such that for two tasks $t_1$ and $t_2$, we have $t_1 I t_2$ if,

• $\zeta(t_1) \neq \zeta(t_2)$, and
• $t_1$ and $t_2$ are not components of the same job, i.e., $(t_1 \nprec t_2) \wedge (t_2 \nprec t_1)$.
Obviously, independent tasks can be executed in parallel. The relation $I$ induces an equivalence relation $\sim$ on the set of all possible schedules such that two schedules $v_1$ and $v_2$ are equivalent (denoted $v_1 \sim v_2$) if one can be derived from the other by a finite number of commutations performed on consecutive independent tasks. Consider, for example, the job-shop $J$ given by Eq. 4.2 with the schedules $v_1 = a_2 a_1 b_1 b_2 c_1 d_1 c_2 d_2$, $v_2 = a_2 a_1 b_2 b_1 c_1 c_2 d_1 d_2$ and $v_3 = a_1 a_2 b_1 b_2 c_1 d_1 c_2 d_2$. We have $v_1 \sim v_2$, as $b_1 I b_2$ and $d_1 I c_2$. Hence, $v_2$ can be derived from $v_1$ (and vice versa) by commuting the pairs $b_1 b_2$ and $d_1 c_2$ in $v_1$. However, the schedules $v_1$ and $v_3$ are not equivalent, since the order of the two non-independent tasks $a_1$ and $a_2$ has been reversed; hence, neither of them can be derived from the other by commuting independent tasks. The asymptotic performances of all schedules belonging to the same equivalence class are equal (Gaubert and Mairesse, 1999). Since a tag vector represents a computation performed by a tag machine, we expect that non-equivalent schedules must not lead to the same tag vector; this, however, is not the case, as demonstrated next.
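The equivalence check described above can be sketched as a breadth-first search over swaps of consecutive independent tasks (an illustrative encoding of $J$ from Eq. 4.2; the function and dictionary names are assumptions for the sketch, and the search is feasible here only because the equivalence classes are small):

```python
from collections import deque

machine = {'a1': 'm3', 'b1': 'm1', 'c1': 'm2', 'd1': 'm4',
           'a2': 'm3', 'b2': 'm2', 'c2': 'm1', 'd2': 'm4'}
job = {t: t[1] for t in machine}      # '1' or '2': the job a task belongs to

def independent(t1, t2):
    """t1 I t2: different machines and different jobs."""
    return machine[t1] != machine[t2] and job[t1] != job[t2]

def equivalent(s1, s2):
    """BFS over commutations of consecutive independent tasks."""
    start, goal = tuple(s1), tuple(s2)
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        if s == goal:
            return True
        for i in range(len(s) - 1):
            if independent(s[i], s[i + 1]):
                t = s[:i] + (s[i + 1], s[i]) + s[i + 2:]
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return False

v1 = 'a2 a1 b1 b2 c1 d1 c2 d2'.split()
v2 = 'a2 a1 b2 b1 c1 c2 d1 d2'.split()
v3 = 'a1 a2 b1 b2 c1 d1 c2 d2'.split()
print(equivalent(v1, v2), equivalent(v1, v3))  # True False
```

The swap of $a_1$ and $a_2$ is never generated, since the two tasks share $m_3$, which is why $v_3$ is unreachable from $v_1$.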
4.3.1 Inadequacy of the Algebraic Tag Structure $T_{dep}$

A tag vector represents a computation performed by a tag machine by capturing the events on different variables and their dependencies. Hence, it is natural for any performance evaluation methodology based on TSMs to take as input the tag vector corresponding to a schedule and to devise methods for computing the asymptotic throughput. However, we show that the computation of tag vectors using $T_{dep}$ as the underlying tag structure is not appropriate for this purpose, since non-equivalent schedules with different asymptotic performances may lead to the same tag vector under $T_{dep}$. Such a situation is demonstrated below.
The tag vector $\tau^v$ does not represent only the schedules belonging to the equivalence class of $v = a_2 a_1 b_1 b_2 c_1 d_1 c_2 d_2$. Consider another schedule, $v' = a_1 a_2 b_1 c_1 b_2 d_1 c_2 d_2$, for the job-shop given by Eq. 4.2. Pictorial representations of the schedules $v$ and $v'$ are given in Fig 4.4, where the values denote the heights in the ‘heap of tag pieces’ due to $v$ and $v'$. A heap of tag pieces is a pictorial representation which depicts the events on variables, along with their dependencies, resulting from the concatenation of a sequence of tag pieces. Using our method of tag vector computation for the schedules $v$ and $v'$, the resulting heaps of tag pieces are given in Fig. 4.4. It can be observed that the two
[Figure 4.4: Heaps of tag pieces for a) the schedule $v$ and b) the schedule $v'$ for the job-shop $J$ in Eq. 4.2, over the variables $m_1, m_2, m_3, m_4$; the numbers denote the heights of the heaps.]
schedules $v$ and $v'$ are not equivalent (since $(a_1, a_2) \notin I$, and the ordering of $a_1$ and $a_2$ in $v$ is reversed in $v'$). However, it can be checked that $\tau^{v'} = \tau^v$. Thus, it is possible to arrive at the same tag vector for two non-equivalent schedules using $T_{dep}$ as the underlying tag structure. More specifically, in the schedule $v'$, consider the dependency from the 6th event on $m_2$ to the first event on $m_4$, as revealed in Fig. 4.4. The value of the component of $\tau^{v'}$ ($= \tau^v$) corresponding to $m_4$ for variable $m_2$ is $\tau_{m_4}(m_2) = 10$, as given by Eq. 4.3. The value is the same for both the schedules $v$ and $v'$. It can be checked from Fig 4.4 that only $v$ has a causal dependency from the 10th event on $m_2$ to the first event on $m_4$. In the case of schedule $v'$, there is a dependency from the 6th event on $m_2$ to the first event on $m_4$, and hence $\tau^{v'}_{m_4}(m_2)$ should be 6. However, the computation reveals that $\tau^{v'}_{m_4}(m_2) = 10$.
The reason behind this anomaly is easily found by analyzing the computation of the dependencies. Let $u$ be the prefix of schedule $v'$ such that $u d_2 = v'$. Hence, $u = a_1 a_2 b_1 c_1 b_2 d_1 c_2$. Let $\tau^u$ be the tag vector for $u$. Therefore, $\tau^u \cdot ({}_1\mu_4) \cdot (\mu'_4)^3 = \tau^{v'}$, since $c_2$ is the task preceding $d_2$ with $\zeta(c_2) = m_1$, $\zeta(d_2) = m_4$ and $d(d_2) = 4$. The tag vector $\tau^u$ (computed by concatenation of the respective tag pieces) is given by $\tau^u = [[6, 10, 8, \epsilon];\ [1, 10, 8, \epsilon];\ [\epsilon, \epsilon, 8, \epsilon];\ [6, 6, 8, 5]]$. Also, the tag piece ${}_1\mu_4$ is given as,

$${}_1\mu_4 = \begin{bmatrix}
\langle 0 \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle 0 \rangle \\
\langle \epsilon \rangle & \langle 0 \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle 0 \rangle & \langle \epsilon \rangle \\
\langle \epsilon \rangle & \langle \epsilon \rangle & \langle \epsilon \rangle & \langle 1_{m_4} \rangle
\end{bmatrix} \tag{4.4}$$
It can be observed from $\tau^u$ above that $\tau^u_{m_4}(m_2) = 6$ and $\tau^u_{m_1}(m_2) = 10$. Let $\tau^x = \tau^u \cdot ({}_1\mu_4)$. We have,

$$\tau^x_{m_4}(m_2) = \max \begin{pmatrix}
\tau^u_{m_1}(m_2) + {}_1\mu_4(m_1, m_4)(m_2) \\
\tau^u_{m_2}(m_2) + {}_1\mu_4(m_2, m_4)(m_2) \\
\tau^u_{m_3}(m_2) + {}_1\mu_4(m_3, m_4)(m_2) \\
\tau^u_{m_4}(m_2) + {}_1\mu_4(m_4, m_4)(m_2)
\end{pmatrix} = \max \begin{pmatrix}
10 + 0 \\
10 + \epsilon \\
\epsilon + \epsilon \\
6 + 0
\end{pmatrix} = 10 \tag{4.5}$$
As we can see, the application of the max rule overwrites the dependency information. The tag piece ${}_1\mu_4$ contains an event on $m_4$ (${}_1\mu_4(m_4, m_4) = \langle 1_{m_4} \rangle$) which depends on the preceding event on $m_1$ (i.e., the 6th event, given by $\tau^u_{m_1}(m_1) = 6$), as ${}_1\mu_4(m_1, m_4) = \langle 0 \rangle$. Since $\tau^u_{m_1}(m_2) = 10$ (there already existed a dependency from the 10th event on $m_2$ to some event on $m_1$ in $\tau^u$), the first row becomes the maximum in the $(\max, +)$ computation in Eq. 4.5 and results in $\tau^x_{m_4}(m_2) = 10$. However, observe from Fig 4.4 that the events on $m_4$ have a dependency from the 6th event on $m_2$.
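The overwriting step of Eq. 4.5 can be replayed numerically. This is only a sketch: `-inf` stands for the $\epsilon$ component, and only the $m_2$-components of the relevant entries are encoded.

```python
EPS = float('-inf')   # the epsilon tag component

def plus(a, b):       # the (max,+) "multiplication": epsilon is absorbing
    return EPS if EPS in (a, b) else a + b

# m2-components of tau^u, rows m1..m4 (from the values of tau^u above).
tau_u = {'m1': 10, 'm2': 10, 'm3': EPS, 'm4': 6}
# m2-components of 1mu4(w, m4): only the rows m1 and m4 hold <0>.
mu = {'m1': 0, 'm2': EPS, 'm3': EPS, 'm4': 0}

tau_x_m4_m2 = max(plus(tau_u[w], mu[w]) for w in tau_u)
print(tau_x_m4_m2)    # 10: the direct-dependency value 6 is overwritten
```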
In order to remedy this discrepancy, we create a formal distinction between the two different kinds of dependencies encountered in the computation of a tag vector.

• A dependency from the event $e_i$ to the event $e_j$ is direct (denoted $e_i \rightharpoonup e_j$) if $e_j$ is the first event which is caused by $e_i$ and there does not exist any set of events $\{e_{k_1}, \cdots, e_{k_l}\}$, $l \geq 1$, such that $e_i$ causes $e_{k_1}$, $e_{k_1}$ causes $e_{k_2}$, $\cdots$, $e_{k_{l-1}}$ causes $e_{k_l}$, and finally, $e_{k_l}$ causes $e_j$.

• A dependency from an event $e_i$ to an event $e_j$ is transitive (denoted $e_i \rightsquigarrow e_j$) if there exists a possibly empty set of events $\{e_{k_1}, \cdots, e_{k_l}\}$, $l \geq 0$, such that $(e_i \rightharpoonup e_{k_1}) \wedge (e_{k_1} \rightharpoonup e_{k_2}) \wedge \cdots \wedge (e_{k_l} \rightharpoonup e_j)$.
Thus, transitive dependency subsumes direct dependency. The above notion of direct dependency among events applies to tasks as follows. Any job, being a sequence of tasks, naturally specifies a direct dependence between any two consecutive tasks of the sequence. For example, consider a job $J^i = \{t_{i1}, \cdots, t_{ik_i}\}$. The $n$-th execution of the task $t_{ij}$ directly depends on the $n$-th execution of the task $t_{i(j-1)}$, $2 \leq j \leq k_i$, for any $n$ and for any schedule; in terms of events, this assertion is equivalent to the statement that the $((n-1) \times d(t_{ij}) + 1)$-th event on $\zeta(t_{ij})$ directly depends on the $(n \times d(t_{i(j-1)}))$-th event on $\zeta(t_{i(j-1)})$. A schedule, in addition, specifies some more direct dependencies between tasks from two different jobs. Specifically, for two tasks $t^i_1 \in J^i$ and $t^j_2 \in J^j$ such that $\zeta(t^i_1) = \zeta(t^j_2)$, a schedule must make $t^i_1 \rightharpoonup t^j_2$ or the opposite. For any schedule $s$, the set of direct dependencies (denoted $\rightharpoonup_s$) is the same for all schedules in the equivalence class $[s]$ of $s$, i.e., $\rightharpoonup_{s_1} = \rightharpoonup_{s_2}$ for all $s_1, s_2 \in [s]$. For any two non-equivalent schedules (such as $\nu$ and $\nu'$ mentioned above), we have $\forall s \in [\nu], \forall s' \in [\nu'],\ \rightharpoonup_s \neq \rightharpoonup_{s'}$. In Fig 4.5, we show the sets of direct dependencies
for all schedules in the equivalence classes of $\nu$ and $\nu'$.¹
[Figure 4.5: Sets of direct dependencies $\rightharpoonup_{[\nu]}$ and $\rightharpoonup_{[\nu']}$ for schedules in $[\nu]$ and $[\nu']$, over the tasks $a_1, b_1, c_1, d_1$ and $a_2, b_2, c_2, d_2$. The dependency pairs are connected by directed edges.]
The matrix operation rule for tag vector computation using $T_{dep}$ includes the transitive dependencies between variables, due to the $(\max, +)$ computation rule. From the computation of $\tau^\nu$ and $\tau^{\nu'}$, it is evident that the set of transitive dependencies may be the same for two non-equivalent schedules. Hence, the tag structure $T_{dep}$ is not suitable for a unique representation of schedules. Moreover, any performance evaluation method which uses the tag vectors for computing the asymptotic throughput $\lambda_{J_i}$ will give the same performance results for some non-equivalent schedules. This scenario suggests the need for tag structures which restrict the computation rules of tag vectors to direct dependencies only. In the subsequent section, we propose such alternative structures.
4.4 Proposing New Tag Structures
Consider a tag structure $(T_{imm}, \leq_{imm}, \sqsubseteq_{imm})$, where $T_{imm}$ denotes the set of all possible direct (immediate) dependencies over the events on the set $V = \{v_1, \cdots, v_n\}$ of variables, with $\leq_{imm}$ being the time-stamp order (taken component-wise) and $\sqsubseteq_{imm}$ being the unification order over $T_{imm}$.

¹In Fig 4.5, the horizontal arrows indicate dependencies due to $\prec_J$ in the job-shop specification $J$. The vertical or slanting arrows indicate dependencies between tasks introduced by the given schedule.
The interpretation of $\tau$ under the tag structure $T_{imm}$ is different from that under $T_{dep}$, since $T_{dep}$ includes transitive dependencies while $T_{imm}$ is restricted to the set of direct dependencies. Let $s_i : \mathbb{N} \to T \times D$ denote the signal (comprising events) on $v_i \in V$, and let $s_i(m)$ denote the $m$-th event on $v_i$ in $s_i$. Any tag $\tau_{v_i} \in T_{imm}$ (for some $j$-th event $s_i(j)$ on $v_i$) is an $n$-tuple $\langle x_1, \cdots, x_n \rangle$, $n = |V|$, such that

$$\forall k,\ 1 \leq k \neq i \leq n,\quad \tau_{v_i}(v_k) = x_k \in \mathbb{N} \Rightarrow s_k(x_k) \rightharpoonup s_i(j)$$

and the $x_k$-th event is the most recent one on $v_k$. Also, $\tau_{v_i}(v_k) = \epsilon$ models the absence of any direct dependency of $s_i(j)$ on any event in $s_k$. Further, $\tau_{v_i}(v_i) = j$, since $s_i(j)$ is the $j$-th event on $v_i$.
As an example, consider the sample behaviour, modeled using the tag structure $T_{imm}$, shown in Fig 4.6. Consider the event $s_1(3)$ with tag value $(3, \epsilon, 1)$. The tag values denote that the event is the 3rd event on $v_1$, that it is immediately caused by the 1st event on $v_3$, and that there is no direct dependency from any event on $v_2$. Using $T_{dep}$, in contrast, the tag value would have been $(3, 1, 1)$, as there is a transitive dependency from the 1st event on $v_2$.
[Figure 4.6: A sample behaviour modeled using $T_{imm}$, with signals on the variables $v_1$, $v_2$ and $v_3$; the events on $v_1$ carry the tags $(1,\epsilon,\epsilon)$, $(2,\epsilon,\epsilon)$, $(3,\epsilon,1)$, $(4,\epsilon,\epsilon)$, $(5,\epsilon,\epsilon)$ and $(6,2,2)$.]
Consider an event $s_i(p)$ (on $v_i$) with tag $\tau_1$ and, similarly, an event $s_k(q)$ (on $v_k$) with tag $\tau_2$. The relation $\leq_{imm}$ is defined as follows.

For $\tau_1, \tau_2 \in T_{imm}$: $\tau_1 \leq_{imm} \tau_2 \Rightarrow \forall v_j \in V,\ \tau_1(v_j) \leq \tau_2(v_j)$.

The situation implies that if $s_i(p)$ directly depends on the $m$-th event on $v_j$ and $s_k(q)$ directly depends on the $n$-th event on $v_j$, then $m \leq n$.

The relation $\sqsubseteq_{imm}$ is defined using the following cases.

1. For $\tau_1, \tau_2 \in T_{imm}$, $i \neq k$: $\tau_1 \sqsubseteq_{imm} \tau_2 \Rightarrow s_i(p) \rightharpoonup s_k(q) \Rightarrow \tau_1(v_i) = p = \tau_2(v_i)$. The situation implies that if $s_k(q)$ has any direct dependency on some event on $v_i$, then the latter is the most recent event on $v_i$.
2. For $\tau_1, \tau_2 \in T_{imm}$, $i = k$: $\tau_1 \sqsubseteq_{imm} \tau_2 \Rightarrow \tau_1(v_i) = p \leq \tau_2(v_k) = q$. The situation implies that the tag of any event on a variable is always related by $\sqsubseteq_{imm}$ to the tags of all later events on that variable.
The corresponding derived relation $\bowtie_{imm}$ is defined as follows: $\tau_1 \bowtie_{imm} \tau_2$ ($\tau_1$ is unifiable with $\tau_2$) iff $\tau_1$ and $\tau_2$ have a common upper bound in $\sqsubseteq_{imm}$, defined as $\tau_x = \tau_1 \sqcup_{imm} \tau_2$ such that $\tau_1 \sqsubseteq_{imm} \tau_x \wedge \tau_2 \sqsubseteq_{imm} \tau_x$. Such an upper bound may not be unique, as exemplified in Fig. 4.6: the tags of the events $s_1(4)$ and $s_2(2)$ have the tags of the events $s_1(6)$ and $s_3(2)$ both as upper bounds. Also observe that the tags of the events $s_2(2)$ and $s_3(2)$ have the tag of the event $s_1(6)$ as an upper bound.²
In the context of $T_{imm}$, a tag piece $\mu : V \times V \to T_{dep}$ can be smoothly reinterpreted as $\mu : V \times V \to T_{imm}$ without any change, since tag pieces capture only direct dependencies even in $T_{dep}$. Given a tag vector $\tau \in T_{imm}$ and a tag piece $\mu$, we need to define a computation rule of the form $\tau' = \tau \cdot \mu$ which captures the evolution of tag vectors. Recall that $\mu(u, v) = \langle 0 \rangle$ ($\langle \epsilon \rangle$), $u \neq v$, if there is a dependency (no dependency) from $u$ to $v$. For the situation when a direct dependency from $u$ to $v$ is introduced by $\mu$, the new tag vector $\tau'$ should have the component $\tau'_v(u) = \tau_u(u)$, referring to the most recent (i.e., previous) event on variable $u$. Otherwise, the older value of the direct dependency from $u$ to $v$ as given by $\tau$ should be preserved, i.e., it should be the case that $\tau'_v(u) = \tau_v(u)$. The situation is captured by defining,

$$\tau'_v(u) = \max(\tau_u(u) + \mu(u, v)(u),\ \tau_v(u)) \tag{4.6}$$

In the case $u = v$, the component $\mu(v, v)$ is either $\langle 1_v \rangle$ or $\langle 0 \rangle$, depending on the presence (absence) of an event on $v$. Consequently, $\tau'_v(v)$ is either $\tau_v(v) + 1$ or $\tau_v(v)$. Otherwise, for $u \neq v$, it is easy to check that Eq. 4.6 yields the greater of $\tau_u(u)$ (the most recent event on $u$) and $\tau_v(u)$, thereby giving the desired result.
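A minimal sketch of the update rule of Eq. 4.6 follows. The encoding is an assumption made for illustration: tags are stored as dictionaries, `-inf` stands for $\epsilon$, the component $\mu(u,v)(u)$ is flattened to a scalar `mu[(u, v)]`, and the two-variable example is hypothetical.

```python
EPS = float('-inf')   # epsilon: identity for max, absorbing for +

def plus(a, b):
    return EPS if EPS in (a, b) else a + b

def concat(tau, mu, V):
    """tau'_v(u) = max(tau_u(u) + mu(u, v)(u), tau_v(u))   (Eq. 4.6)."""
    return {v: {u: max(plus(tau[u][u], mu.get((u, v), EPS)), tau[v][u])
                for u in V}
            for v in V}

V = ['v1', 'v2']
tau = {'v1': {'v1': 3, 'v2': EPS},       # three events so far on v1
       'v2': {'v1': EPS, 'v2': 1}}       # one event so far on v2
mu = {('v1', 'v1'): 0,                   # no new event on v1
      ('v2', 'v2'): 1,                   # one new event on v2 ...
      ('v1', 'v2'): 0}                   # ... directly caused by the latest event on v1

new = concat(tau, mu, V)
print(new['v2'])   # {'v1': 3, 'v2': 2}: the new event on v2 depends on the 3rd event on v1
```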
Using $T_{imm}$, the tag values for the schedules $v$ and $v'$ are found to be:

$$\tau^v = [[6, 4, 8, \epsilon];\ [1, 10, 5, \epsilon];\ [\epsilon, \epsilon, 8, \epsilon];\ [6, 10, \epsilon, 9]]$$

and

$$\tau^{v'} = [[6, 10, 3, \epsilon];\ [1, 10, 8, \epsilon];\ [\epsilon, \epsilon, 8, \epsilon];\ [6, 6, \epsilon, 9]]$$

²In the sequel, we omit the subscript ‘imm’ from the ordering relations when the implication is clear from the context.
The computation rule of tag vectors using $T_{imm}$ ensures that only the set of direct dependencies is captured. The schedules $v$ and $v'$, being non-equivalent, have different sets of direct dependencies, and hence the tag vectors $\tau^v$ and $\tau^{v'}$ are distinct.
Another relevant point for the performance evaluation of job-shop schedules is that the tag structure $T_{dep}$, which can be used for functional composition of subsystems, does not have any notion of incoming dependency. To make the point clear, let us consider the tag vector, say $\tau$, for the schedule $v'$. The components of $\tau$ are $\tau_{m_1}$, $\tau_{m_2}$, $\tau_{m_3}$ and $\tau_{m_4}$, with $\tau_{m_k}(m_i) = n$ representing that some event on variable $m_k$ depends on the $n$-th event on variable $m_i$. However, it does not provide the information as to precisely which event on $m_k$ is being talked about. This is the information which we will call an incoming dependency. Such a dependency (along with the usual one) provides information regarding which event on a variable is directly caused by which event on another variable. Observe that this information is evident from the pictorial representation of the heaps of tag pieces in Fig. 4.4 and is required for constructing the $(\max, +)$ recurrence relations for performance evaluation. We intend to compute tag vectors for schedules in such a way that the components of the vector capture all the relevant incoming and outgoing dependency information as provided by the heap of tag pieces, thus establishing a one-to-one correspondence between tag vectors and heaps of tag pieces.
By a natural extension of this nomenclature, the notion of dependency described so far by $T_{imm}$ is referred to as outgoing dependency, and the corresponding tag structure is designated $(T^O_{imm}, \leq^O, \sqsubseteq^O)$ to distinguish it from the structure modeling incoming dependency. The latter is modeled by the tag structure $(T^I_{imm}, \leq^I, \sqsubseteq^I)$. The designations “incoming” and “outgoing” refer to the “arrow” which represents the instance of a dependency in the heap of tag pieces due to a given schedule. For example, in the heap of tag pieces due to $\nu'$ in Fig 4.4, the first event on $m_1$, designating the beginning of task $b_1$, directly depends on the third event on $m_3$, designating the end of task $a_1$; this is depicted by an arrow from the third event on $m_3$ to the zero-th event (the bottom of the first event) on $m_1$. The incoming dependency (the head of the arrow) depicts that the first event on $m_1$ directly depends on (is caused by) some event on $m_3$. The outgoing dependency (the tail of the arrow) depicts that the third event on $m_3$ directly causes some event on $m_1$. Thus, while computing the tag vector, the concatenation of the
tag piece ${}_3\mu_1$ (signifying the end of task $a_1$ and the start of $b_1$) should produce the following tag vector components in order to capture both the incoming and the outgoing dependency (as depicted in the heap of tag pieces): $\tau^O_{m_1}(m_3) = 3$, $\tau^I_{m_3}(m_1) = 0$.³ For achieving this, the matrix operation rule for $T^O_{imm}$ remains the same as for $T_{imm}$ and is given by:

$$\tau'^O_v(u) = (\tau^O \cdot \mu^O)_v(u) = \max(\tau^O_u(u) + \mu^O(u, v)(u),\ \tau^O_v(u)) \tag{4.7}$$
We now formalize the tag structure for incoming dependency and its vector computation rule as follows. Consider a tag structure $(T^I_{imm}, \leq^I, \sqsubseteq^I)$ (for incoming dependency) with $\leq^I = \leq$ and $\sqsubseteq^I$ being the unification order for incoming dependencies. Both $\tau^I_{v_i}(v_i)$ and $\tau^O_{v_i}(v_i)$ denote the number of events on variable $v_i$. Any tag $\tau^I_{v_i} \in T^I_{imm}$ (for some $j$-th event on $v_i$, $s_i(j)$ say) is an $n$-tuple $\langle x_1, \cdots, x_n \rangle$ such that $\tau^I_{v_i}(v_k)|_{i \neq k} = x_k \in \mathbb{N}$, $1 \leq k \leq n$, implies that, for some $j$, $s_i(j) \rightharpoonup s_k(x_k + 1)$, i.e., there is an incoming dependency into $s_k(x_k + 1)$ from $s_i(j)$. Also, $\tau^I_{v_i}(v_k)|_{i \neq k} = \epsilon$ models the absence of any direct dependency; $\tau^I_{v_i}(v_i) = j$, since $s_i(j)$ is the $j$-th event on $v_i$. The interpretation of $\sqsubseteq^I$ remains similar to that of $\sqsubseteq^O$ as defined previously (when we designated the order as $\sqsubseteq_{imm}$). The matrix operation rule in $T^I_{imm}$ similarly specializes to:

$$\tau'^I_v(u) = (\tau^I \cdot \mu^I)_v(u) = \max(\tau^I_u(u) + \mu^I(v, u)(u),\ \tau^I_v(u)) \tag{4.8}$$
Let us consider the dependency scenario as shown in Fig 4.7. The tag vectors
[Figure 4.7: Incoming and outgoing dependency between the events on $v_i$ and $v_j$: an arrow from the $l_1$-th event on $v_i$ into the $(k_2+1)$-th event on $v_j$, and an arrow from the $l_2$-th event on $v_j$ into the $(k_1+1)$-th event on $v_i$.]
$\tau^O$ and $\tau^I$ computed for this dependency scenario will have the following components:

$$\tau^O_{v_j}(v_i) = l_1, \quad \tau^I_{v_j}(v_i) = k_1$$
$$\tau^O_{v_i}(v_j) = l_2, \quad \tau^I_{v_i}(v_j) = k_2$$

³We shortly show how such a characterization serves the purpose.
It may be noted that, considering the entries $\tau^O_v(u)$ and $\tau^I_u(v)$ (for any $v$ and $u$), we have $\tau^O_{v_j}(v_i) = l_1 \Rightarrow s_i(l_1) \rightharpoonup s_j(m)$ for some $m$, and $\tau^I_{v_i}(v_j) = k_2 \Rightarrow s_i(n) \rightharpoonup s_j(k_2 + 1)$ for some $n$. Since $\tau^O_{v_j}(v_i)$ and $\tau^I_{v_i}(v_j)$ characterize the last occurring outgoing dependency from $v_i$ to $v_j$ and the last occurring incoming dependency from $v_i$ to $v_j$ respectively, we may conclusively say that $m = k_2 + 1$ and $n = l_1$, i.e., the $(k_2 + 1)$-th event on $v_j$ is immediately caused by the $l_1$-th event on $v_i$. Hence, we claim that the components $\tau^O_{v_j}(v_i)$ and $\tau^I_{v_i}(v_j)$, for any $v_i, v_j \in V$, completely characterize the cause-effect relationship in the last occurring dependency from $v_i$ to $v_j$ for a given behaviour. Similarly, from the other two diagonally opposite entries, we determine that the $(k_1 + 1)$-th event on $v_i$ is immediately caused by the $l_2$-th event on $v_j$.
Drawing motivation from the discussion above, we consider the following tag structure for performance evaluation using tag machines:

$$(T_{perf}, \leq_{perf}, \sqsubseteq_{perf}) = (T^O_{imm} \times T^I_{imm},\ \leq^O \times \leq^I,\ \sqsubseteq^O \times \sqsubseteq^I)$$

For $\tau_1 = (\tau^O_1, \tau^I_1)$, $\tau_2 = (\tau^O_2, \tau^I_2) \in T_{perf}$, $\tau_1 \sqsubseteq_{perf} \tau_2$ iff $\tau^O_1 \sqsubseteq^O \tau^O_2$ and $\tau^I_1 \sqsubseteq^I \tau^I_2$. Given a tag vector $\tau$ and a tag piece $\mu$, the matrix operation rule for the tag structure $T_{perf}$ is,

$$(\tau \cdot \mu)_v(u) = [(\tau \cdot \mu)^O_v(u),\ (\tau \cdot \mu)^I_v(u)] \tag{4.9}$$
For the schedules $v$ and $v'$, the tag vectors computed using $T_{perf}$ (applying Eqs. 4.9, 4.8 and 4.7) are as follows,

$$\tau^v : \begin{cases}
\tau^v_{m_1} = [[6, 6]\ [10, 4]\ [8, \epsilon]\ [\epsilon, 5]] \\
\tau^v_{m_2} = [[1, 1]\ [10, 10]\ [8, \epsilon]\ [\epsilon, 0]] \\
\tau^v_{m_3} = [[\epsilon, 0]\ [\epsilon, 0]\ [8, 8]\ [\epsilon, \epsilon]] \\
\tau^v_{m_4} = [[6, \epsilon]\ [10, \epsilon]\ [\epsilon, \epsilon]\ [9, 9]]
\end{cases}$$

and

$$\tau^{v'} : \begin{cases}
\tau^{v'}_{m_1} = [[6, 6]\ [10, 0]\ [3, \epsilon]\ [\epsilon, 5]] \\
\tau^{v'}_{m_2} = [[1, 1]\ [10, 10]\ [8, \epsilon]\ [\epsilon, 0]] \\
\tau^{v'}_{m_3} = [[\epsilon, 0]\ [\epsilon, 6]\ [8, 8]\ [\epsilon, \epsilon]] \\
\tau^{v'}_{m_4} = [[6, \epsilon]\ [6, \epsilon]\ [\epsilon, \epsilon]\ [9, 9]]
\end{cases}$$
Using the tag structure $T_{perf}$, the computation of a tag vector succinctly captures the last occurring dependencies between all pairs of events on variables, which is required for creating the $(\max, +)$ linear dynamics (Gaubert and Mairesse, 1999) of a given schedule for a set of jobs. Such a methodology is discussed in the following section.
4.5 Performance Evaluation of Job-shops using Tag Machines
In the present section, we propose a methodology for evaluating the asymptotic performance of job-shop schedules using the formalism of tag machines. We introduce our approach to performance evaluation as an improvement over the classical Petri net based modeling of job-shops. We analyze the complexity of the proposed approach and compare it with the asymptotic complexities of the two well-known approaches, based on Petri nets and on heaps of pieces, discussed earlier.
4.5.1 Tag Machine based Performance Evaluation of Job-shops
For asymptotic performance evaluation of a job-shop schedule, we need to compute the (max,+) linear representation of Eq 3.2, given by

x(n+1) = A0 x(n+1) ⊕ A1 x(n)
A methodology for computing such (max, +) linear representations using tag
machines is discussed below.
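Before proceeding, the (max,+) primitives used throughout this chapter can be made concrete with a small sketch (illustrative Python, not part of the thesis toolchain; the function names are ours): "⊕" is max, the product "·" is ordinary addition, and ε = −∞ marks an absent dependency.

```python
# Max-plus semiring sketch: "⊕" is max, "·" is +, and EPS = -inf plays the
# role of epsilon (no dependency).
EPS = float("-inf")

def mp_matmul(A, B):
    """Max-plus matrix product: (A.B)[i][j] = max over k of A[i][k] + B[k][j]."""
    n, p = len(A), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(len(B))) for j in range(p)]
            for i in range(n)]

def mp_star(A):
    """Kleene star A* = I + A + A^2 + ... (entrywise max of powers), used to
    solve x = A0.x + b as x = A0*.b; assumes the precedence graph of A0 is
    acyclic, so powers beyond A^n vanish."""
    n = len(A)
    S = [[0 if i == j else EPS for j in range(n)] for i in range(n)]  # identity
    P = S
    for _ in range(n):
        P = mp_matmul(P, A)
        S = [[max(S[i][j], P[i][j]) for j in range(n)] for i in range(n)]
    return S

print(mp_star([[EPS, EPS], [3, EPS]]))  # [[0, -inf], [3, 0]]
```

For a recurrence of the form x(n+1) = A0 x(n+1) ⊕ A1 x(n), the least solution is x(n+1) = A0* A1 x(n), which is why the matrix A = A0* A1 appears later in the chapter.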
Let x(n) = 〈a1(n), a2(n), b1(n), b2(n), c1(n), c2(n), d1(n), d2(n)〉. Note that a
tag vector computed for a given schedule reveals the dependencies among events
on the variables (which are the resources). Hence, from a tag vector computed for a given schedule, we will first construct the recurrence relations over completion times of events on variables and then transform them into recurrence relations over completion times of tasks. To do so, we need to relate the completion time of the n-th (and (n+1)-th) iteration of tasks with the completion
time of events. This can be done as follows.
For task a1 observe (from the job-shop J) that, ζ(a1) = m3 and d(a1) = 3.
Also, J comprises two tasks a1 and a2 which require the resource (machine) m3
since ζ(a2) = m3. Let the completion time of the n-th event on some variable
mi be given by mi(n). In each iteration of schedule v′, a1 and a2 are executed exactly once, with a1 ≺ a2. Thus, each iteration of v′ requires m3 for d(a1) + d(a2) = 3 + 5 = 8
units of time. Hence, the completion time of a1 in the (n + 1)-th iteration of v′
is given by a1(n + 1) = m3(8n + 3) and a2(n + 1) = m3(8n + 8). Similarly, for
the other tasks we have,
b1(n+1) = m1(6n+1), c2(n+1) = m1(6n+6), c1(n+1) = m2(10n+6), b2(n+1) = m2(10n+10), d1(n+1) = m4(9n+5), d2(n+1) = m4(9n+9).
Replacing n by n−1, we get a1(n) = m3(8n−5), a2(n) = m3(8n), b1(n) = m1(6n−5), b2(n) = m2(10n), c1(n) = m2(10n−4), c2(n) = m1(6n), d1(n) = m4(9n−4), d2(n) = m4(9n).
Next we construct the recurrence relations over the completion times of events on the variables m1, m2, m3 and m4 from τ^{v′}, as follows. Observe that each iteration of the schedule v′ for J contains a total of 6 events on m1. Hence, we can denote the completion time of the first event on m1 in the (n+1)-th iteration of v′ by m1(6n+1), n ≥ 0. For simplicity, we denote the tag vectors
computed for schedules v and v′ as τ and τ′ respectively.⁴ From τ′ computed using T_perf we have (τ′^I_{m3}(m1) = 0) ⇒ the 1st event on m1 is immediately caused by some event on m3, and (τ′^I_{m2}(m1) = 1) ⇒ the 2nd event on m1 is immediately caused by some event on m2. The event on m3 which immediately causes the 1st event on m1 is given by the outgoing dependency from m3 to m1, τ′^O_{m1}(m3) = 3. Now m3 has a total of 8 events per iteration. Hence, the completion time of the 3rd event on m3 in the (n+1)-th iteration of v′ is given by m3(8n+3) in general. The
⁴Ideally, these should have been denoted τ^v and τ^{v′} respectively.
earliest starting time of the (6n+1)-th event on m1 is the maximum of m3(8n+3) and m1(6n), and each event takes one unit of time to complete. We denote max by "⊕" and addition by "·". Hence, the completion time of the 1st event on m1 is given by,

m1(6n+1) = 1 · (m1(6n) ⊕ m3(8n+3)) = 1 · m1(6n) ⊕ 1 · m3(8n+3)
The number of difference equations for each mi is the same as the number of incoming dependencies to the events on mi.⁵ For example, in v′, consider the number of incoming dependencies for m1. In τ′, we find the m1-th components of m3 and m2 to be defined (≠ ε). The dependency from m3 has been accounted for in the previous equation. For the dependency from m2, we have (τ′^I_{m2}(m1) = 1) ⇒ the 2nd event on m1 is immediately caused by some event on m2. Hence, we need another difference equation to completely characterize the events on m1, given by,

m1(6n+2) = 1 · m1(6n+1) ⊕ 1 · m2(10n+10)
Similarly, for the other variables we have the following set of equations:⁶
m2(10n+1) = 1 · m2(10n) ⊕ 1 · m1(6n+1)
m2(10n+7) = 1 · m2(10n+6) ⊕ 1 · m3(8n+8)
m3(8n+1) = 1 · m3(8n) ⊕ 1 · m4(9n−4)
m3(8n+4) = 1 · m3(8n+3) ⊕ 1 · m4(9n)
m4(9n+1) = 1 · m4(9n) ⊕ 1 · m2(10n+6)
m4(9n+6) = 1 · m4(9n+5) ⊕ 1 · m1(6n+6)
The set of difference equations formulated above denotes the completion time of

⁵Apparently this may seem not to hold for m3, since the events on m3 have no incoming dependency, as shown in Fig. 4.4. This is because the first tasks of both the jobs are executed on m3. However, the figure reveals only the first iteration of v′; the events on m3 will have incoming dependencies from the last events of tasks d1 and d2 from the next iteration of v′ onwards.
⁶For m3(8n+1), we have the following argument: in schedule v′, the completion time of the n-th iteration of the job-shop J (the same as the completion time of job J2 as per v′) is given by m4(9n), since the last task d2 in v′ executes on m4. However, the n-th iteration of J1 is completed 4 units of time earlier, with the task d1 whose completion time is m4(9n−4). Hence the (n+1)-th iteration of J1 may start immediately after that, without the n-th iteration of J2 being completed. The 2nd equation for m3, pertaining to its 4th event in each iteration, can be obtained in a similar manner.
the 1st event of each task. Recall that we need to form (max,+) recurrences which relate the (n+1)-th and the n-th iterations of tasks. Observe from the previous set of recurrences that the events on m2 have two dependencies: one on the first event, m2(10n+1), which is the first event of task c1 (with c1(n+1) = m2(10n+6)), and one on the 7th event, which is the first event of task b2 (with b2(n+1) = m2(10n+10)). Since there are no other recurrences for m2, we may simply add 5 to both sides of the recurrence for m2(10n+1) in order to generate a recurrence for c1(n+1) = m2(10n+6). Thus we have m2(10n+6) = 6 · m2(10n) ⊕ 6 · m1(6n+1). Similarly, to represent the equations in terms of tasks, we normalize them so that they denote the completion time of the last event of each task:
m1(6n+1) = 1 · m1(6n) ⊕ 1 · m3(8n+3)
m1(6n+6) = 5 · m1(6n+1) ⊕ 5 · m2(10n+10)
m2(10n+6) = 6 · m2(10n) ⊕ 6 · m1(6n+1)
m2(10n+10) = 4 · m2(10n+6) ⊕ 4 · m3(8n+8)
m3(8n+3) = 3 · m3(8n) ⊕ 3 · m4(9n−4)
m3(8n+8) = 5 · m3(8n+3) ⊕ 5 · m4(9n)
m4(9n+5) = 5 · m4(9n) ⊕ 5 · m2(10n+6)
m4(9n+9) = 4 · m4(9n+5) ⊕ 4 · m1(6n+6)
Rewriting the difference equations in terms of tasks we have:
a1(n+1) = m3(8n+3) = 3 · m3(8n) ⊕ 3 · m4(9n−4) = 3 · a2(n) ⊕ 3 · d1(n)
a2(n+1) = m3(8n+8) = 5 · m3(8n+3) ⊕ 5 · m4(9n) = 5 · a1(n+1) ⊕ 5 · d2(n)
b1(n+1) = m1(6n+1) = 1 · m1(6n) ⊕ 1 · m3(8n+3) = 1 · a1(n+1) ⊕ 1 · c2(n)
b2(n+1) = m2(10n+10) = 4 · m2(10n+6) ⊕ 4 · m3(8n+8) = 4 · c1(n+1) ⊕ 4 · a1(n+1)
c1(n+1) = m2(10n+6) = 6 · m2(10n) ⊕ 6 · m1(6n+1) = 6 · b2(n) ⊕ 6 · b1(n+1)
c2(n+1) = m1(6n+6) = 5 · m1(6n+1) ⊕ 5 · m2(10n+10) = 5 · b1(n+1) ⊕ 5 · b2(n+1)
d1(n+1) = m4(9n+5) = 5 · m4(9n) ⊕ 5 · m2(10n+6) = 5 · d2(n) ⊕ 5 · c1(n+1)
d2(n+1) = m4(9n+9) = 4 · m4(9n+5) ⊕ 4 · m1(6n+6) = 4 · d1(n+1) ⊕ 4 · c2(n+1)
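As a quick consistency check, the eight task recurrences above can be replayed numerically (an illustrative Python sketch, not from the thesis; tasks are evaluated in an order that resolves the same-iteration dependencies first). The steady-state growth per iteration recovers ρ(A) = 16, the value reported for this schedule in Table 4.1:

```python
def step(x):
    """One iteration of the task recurrences above, with the max-plus reading:
    "⊕" is max and "k ·" is "+ k".  x = (a1, a2, b1, b2, c1, c2, d1, d2)."""
    a1, a2, b1, b2, c1, c2, d1, d2 = x
    a1n = max(3 + a2, 3 + d1)        # a1(n+1) = 3·a2(n) ⊕ 3·d1(n)
    a2n = max(5 + a1n, 5 + d2)       # a2(n+1) = 5·a1(n+1) ⊕ 5·d2(n)
    b1n = max(1 + a1n, 1 + c2)       # b1(n+1) = 1·a1(n+1) ⊕ 1·c2(n)
    c1n = max(6 + b2, 6 + b1n)       # c1(n+1) = 6·b2(n) ⊕ 6·b1(n+1)
    b2n = max(4 + c1n, 4 + a1n)      # b2(n+1) = 4·c1(n+1) ⊕ 4·a1(n+1)
    c2n = max(5 + b1n, 5 + b2n)      # c2(n+1) = 5·b1(n+1) ⊕ 5·b2(n+1)
    d1n = max(5 + d2, 5 + c1n)       # d1(n+1) = 5·d2(n) ⊕ 5·c1(n+1)
    d2n = max(4 + d1n, 4 + c2n)      # d2(n+1) = 4·d1(n+1) ⊕ 4·c2(n+1)
    return (a1n, a2n, b1n, b2n, c1n, c2n, d1n, d2n)

x = (0,) * 8
for _ in range(40):
    prev, x = x, step(x)
rates = [cur - old for old, cur in zip(prev, x)]
print(max(rates))    # 16, the asymptotic cycle time per iteration
```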
Rewriting the equations in the form of Eq 3.2, we may now construct the matrices A0 and A1. Next, in our approach using tag machines, we need to identify the set C of tasks for computing A, as was done in the EG based approach discussed in chapter 3, section 3.3. Each individual job J^i specifies the ordering
of tasks belonging to itself. Given a job-shop J = {J1, J2}, with J^i = ai bi ci di, i = 1, 2, we have:

(J^i)^ω = { · · · ≺ di(n−1) ≺ ai(n) ≺ bi(n) ≺ ci(n) ≺ di(n) ≺ ai(n+1) ≺ · · · }
Now consider the set Xi of tasks which execute on machine mi, given by Xi = {t ∈ T | ζ(t) = mi}. Any schedule v ∈ S ⊆ T* creates an ordering inside
each Xi in the form of a chain (a total order), which is the same for all schedules
equivalent to v. For example, consider the ordering imposed by the schedule v′
in our example job-shop. Due to v′, we have: X1 = {b1 ≺ c2}, X2 = {c1 ≺ b2}, X3 = {a1 ≺ a2}, X4 = {d1 ≺ d2}. We denote the maximum (top) of each Xi by X_i^⊺ (the maximal element in each Xi is unique, hence it is called the maximum of Xi). Similarly, let J^{k⊺} denote the maximum for the job J^k. It can be observed that the set of transitions with at least one token in a downstream place, i.e. the set C, is given by:

C = (∪_i {X_i^⊺}) ∪ (∪_k {J^{k⊺}})   (4.10)
In the EG, tokens occur in downstream places only for those transitions which are either X_i^⊺ or J^{k⊺} (J^{k⊺} is the last task of job J^k for some k, and X_i^⊺ is the last task scheduled on mi under the given schedule, for some i). Hence, we can easily construct C by computing the Xi's for a given schedule v, without incurring the additional modeling complexity of using an EG. The cardinality of C is of the order |J| + |M|. The procedure for computing A|C from A = A0* A1 and
associated complexity values are given in (Gaubert and Mairesse, 1999). After
identifying C, the construction of A|C and subsequent evaluation of ρ(A) using
Karp’s algorithm can be done using methods similar to the previously reported
classical approaches. Next, we propose an algorithm for computing λ_{J^i} for tag machine based job-shop models and compare the complexities involved with those of the state of the art approaches.
In our approach of tag machine based performance evaluation, the tag ma-
chines for different jobs and the compositional machine are constructed initially.
For a given v, we compute the tag vector τ^v using the tag structure T_perf. In a tag
vector, only the last occurring dependencies between each variable pair are recorded.
Hence, in the computation process, whenever a new dependency is introduced
by a tag piece, the tag vector is computed and the corresponding (max,+) recur-
rence is formed. Our algorithm for asymptotic performance evaluation is given
as follows:
Algorithm 4.1: Tag machine based Performance Evaluation
Input: A job-shop J defined over resources in M and a schedule v.
Output: λ_{J^i} = |v|_{J^i} × ρ(A)^{−1}, ∀ J^i ∈ J.
1. Construct the compositional tag machine for J.
2. Compute τ^v and construct the (max,+) linear representation:
   x(n+1) = A0 x(n+1) ⊕ A1 x(n)
   where x, A0 and A1 have their usual meanings.
   [Complexity: O(|v|)]
3. Compute the C × C sub-matrix A of A0* A1.
   [Complexity: O(|v|(|J| + |M|) + (|J| + |M|)²)]
4. Compute ρ(A) using Karp's algorithm.
   [Complexity: O((|J| + |M|)³)]
[Total complexity: O(|v|(|J| + |M|) + (|J| + |M|)³)]
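Step 4's Karp algorithm can be sketched as follows (illustrative Python; the convention A[i][j] = weight of the edge from j to i matches the (max,+) matrix reading, and the demo matrix is a toy, not the job-shop's):

```python
EPS = float("-inf")

def max_cycle_mean(A):
    """Karp's algorithm for rho(A), the maximum cycle mean of the precedence
    graph of a max-plus matrix A (EPS = no edge).  Initializing D[0] to all
    zeros acts like a super-source with 0-weight edges to every vertex."""
    n = len(A)
    # D[k][v] = maximum weight of a k-edge path ending at v.
    D = [[EPS] * n for _ in range(n + 1)]
    D[0] = [0.0] * n
    for k in range(1, n + 1):
        for v in range(n):
            D[k][v] = max(A[v][u] + D[k - 1][u] for u in range(n))
    best = EPS
    for v in range(n):
        if D[n][v] == EPS:          # v ends no n-edge path, so it is on no cycle
            continue
        best = max(best, min((D[n][v] - D[k][v]) / (n - k) for k in range(n)))
    return best

# Toy graph: a self-loop of weight 1 at vertex 0 and a 2-cycle of weight 2 + 3.
print(max_cycle_mean([[1, 3], [2, EPS]]))   # 2.5
```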
4.5.2 A Relative Comparison of the Approaches
One can observe that our algorithm and both the EG and heap based approaches incur similar time complexities. However, as discussed in (Gaubert and Mairesse, 1999), the event graph has to be constructed separately for every specification of a new schedule. Such extra computational effort for building the EG with every schedule is not involved in the heap based method, where the heap realization M is constructed only once and remains valid for all schedules, both periodic and non-periodic. Another problem with the EG based method is that the size of the EG grows with the length of the schedule pattern, whereas the size of the heap automaton remains constant (Gaubert and Mairesse, 1999). Like heap based models, tag machines do not incur extra modeling complexity for different schedule specifications. The composite tag machine for a given job-shop is constructed only once; for different schedules, only the order of concatenation of the tag pieces changes, giving different tag vectors.
Analysing the algorithms for efficiency, we have the following outcomes. Step 1 is not taken into account, as it is a one-time process valid for all runs, like the construction of the Petri net in step 1 of the EG based method and the computation of the initial matrices in the heap based method. In step 2,
the computation of τ^v requires |v| matrix products, each of which can be done in a constant amount of time, as explained below. Observe that the concatenation of τ^v with the tag pieces corresponding to any task always results in the change of exactly two incoming dependencies and two outgoing dependencies. For example, consider the change in τ^v due to concatenation with the tag pieces of task c1. We have ζ(c1) = m2 and the task b1 as the immediate predecessor of c1, with ζ(b1) = m1. The concatenation of the tag pieces of c1 will lead to re-computation of the following four components in τ^v: τ^O_{m2}(m1), τ^I_{m1}(m2), τ^I_{m2}(m2), τ^O_{m2}(m2). Using the tag structure T_dep, the computation of each dependency value requires O(|M|) time (one max operation and |M| additions). Using T_perf, it requires constant time, since exactly two additions and one max operation are involved (using the immediate dependency rule). Thus, step 2 of the algorithm requires |v| × 4 × 3 elementary operations.
The corresponding step in the heap based method for computing M(v) requires O(|v|(|J| + |M|)), which is linear in |v| but also depends on J and R. Correspondingly, the EG based method requires constructing the event graph for the schedule v. As exemplified in (Gaubert and Mairesse, 1999), for a schedule v in which each job is completed n times, the number of transitions T in the EG is T = |v| and the number of places is P = |v| + n × (|M| + m), where m is the number of resources which are in conflict.⁷ In step 2 of the EG based method, one needs to construct two T × P and one 1 × P boolean matrices for modeling the incoming and the outgoing arcs as well as the tokens in the places. Thus, the modeling complexity of the EG based method is of the order O(|v|² + |v||M|)
and the number of elementary operations involved will be larger by an order of magnitude when compared with the computation of τ^v. Steps 3 and 4 of our method are the same as in the EG based method. Hence, we may safely claim that the tag machine based method is more efficient than the EG based method. However, in the heap based method, step 3 (of our method and of the EG based method) is completely absent. Hence, it is the most efficient among the three methods, in
⁷For simplicity, we assume conflict between two tasks only. In more complex cases, the number of places will be larger.
spite of the fact that all three share the same asymptotic complexity.
4.6 Performance Evaluation of Synchronous Dataflow Graphs (SDFG)
Consider the SDFG G = (〈A,E〉, C) shown in Fig. 4.8 and its self-timed exe-
cution as given in Fig. 4.9 with notation as defined in the previous chapter.
Figure 4.8: A sample SDFG G. (The figure shows actors a, b and c, with execution times 2, 1 and 1 respectively, connected by the channels c1, c2, c3 and c4.)
We construct a tagged system view of this SDFG and its self-timed execution as
Figure 4.9: The self-timed execution (σ, say) of G. (The figure shows states s0 through s7 with transitions labeled by actor firing events such as a↑, a↓, b↕ and c↕.)
shown in Fig 4.9 (using T_dep) and show that it is possible to use the performance evaluation algorithm developed in the present work for computing throughput even for SDFG models. We use the concept of labeled tag pieces, comprising tag pieces and values, and construct the labeled tag pieces (µ̄) for the different execution steps in the self-timed execution of the SDFG G. The idea is that, if a periodic execution in any MoC can be modeled as a (finite) sequence of tag pieces, then our methodology for performance evaluation can compute the resulting tag vector for a single cycle of execution (taking into account both kinds of dependencies), generate the corresponding (max,+) equations and finally the eigenvalue, which may be used for computing the throughput.
In the context of SDFG, we set V = A (the set of actors), T = T_dep and D = Z^{|C|} (C is the set of channels). The firing of an actor is considered as
an event on the corresponding variable. The domain D of values assumed by
events on variables (actor firings) is constructed as |C|-tuples of integers which
reflect the production or consumption of tokens in the channels due to the actor
firings. Recall that for an actor a, a ↑ denotes the start of firing, a ↓ denotes
the end of firing and a l denotes the start and the end of firing inside a single
execution cycle. A state transition of an SDFG can be conceived as an A × A
labeled tag piece µ = (µ, ς) where µ captures the event information along with
dependency and ς captures the token consumption and production information.
For a transition t in G, the entries of µ are defined as follows.
1. If an actor a has no firing in t then µ(a, a) = 〈0〉.
2. If an actor a (i) starts firing in t (denoted by a↑), (ii) ends in t a firing started previously (denoted by a↓), (iii) both starts and ends a firing in t (denoted by a↕), or (iv) continues in t a firing started previously, then µ(a, a) = 〈1_a〉.
3. If an actor a starts firing in t (denoted by a↑) and is preceded by the end of an actor firing b (denoted by b↓) in the previous transition, then µ(b, a) = 〈0〉, i.e., the event on a has a dependence on the event on b. If a↑ is preceded by multiple such ends of actor firings, say c↓ and d↓, then the corresponding components in µ will be µ(c, a) = 〈0〉 and µ(d, a) = 〈0〉 accordingly.
Recall that the labels in the tag pieces are given by the mapping ς : V × V → (D^{|V|} → D), with only ς(v, v), v ∈ V, being defined. While associating the channel
quantities as labels to the tag pieces, we follow the usual convention that tokens
are consumed from the input channels when an actor starts firing and it produces
tokens on the output channels at the end of firing. Since the production and consumption rates of actors do not change across firings, and since production (with ↓) and consumption (with ↑) are instantaneous, the valuation functions ς(a, a), a ∈ A, are constant functions⁸ which simply assign the production/consumption rate values to the events, independent of the values assigned by previous events. For
example, the labeled tag pieces for the transitions s6 −(a↑, c↕)→ s7 −(a↓, b↕)→ s2 in the self-timed execution of the SDFG in Fig 4.8 are shown in Fig 4.10. The execution time
of actor a is two time units. Hence, for each instance of firing, a has two events;
with the first event, tokens are consumed from the input channels and with the
second event tokens are produced in the output channels (notice the label of the
⁸A constant function always outputs the same value, independent of its argument.
ς(a, a) = 〈−1, 0, 0, 0〉    ς(c, c) = 〈0, 0, −3, +3〉
ς(a, a) = 〈+1, +1, 0, 0〉    ς(b, b) = 〈0, −1, +2, −2〉

Figure 4.10: Tag pieces for actor firings (a↑, c↕) followed by (a↓, b↕).
tag pieces in Fig 4.10). With the first event on a we have ς(a, a) = 〈−1, 0, 0, 0〉 ∈ D = Z^{|C|}, denoting a constant function which assigns the token consumption information 〈−1, 0, 0, 0〉 to the last event on a in a tag vector. When a labeled tag piece µ̄ = (µ, ς) is concatenated with a labeled tag vector τ̄ = (τ, κ) such that τ̄′ = (τ′, κ′), we have τ̄′_a = (τ′_a, κ′(a)) = ((τ · µ)_a, ς(a, a)(d)), where d denotes an |A|-tuple comprising the values assumed by the last events on the variables in A. Thus, κ′(a) = ς(a, a)(d) = 〈−1, 0, 0, 0〉(d) = 〈−1, 0, 0, 0〉. The actor c has execution time one and hence it has only one event, which accounts for both input consumption (ς(c, c)(c3) = −3) and output production (ς(c, c)(c4) = +3).
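The value-update part of this concatenation can be sketched as follows (illustrative Python; the dictionary encoding of ς and the tuple d are our simplifications, and the tag-piece product (τ · µ) itself is omitted):

```python
# Constant-function labels: sigma(a, a) ignores the previous values d and
# returns the fixed production/consumption vector over <c1, c2, c3, c4>.
def make_const(label):
    return lambda d: label

# Labels from Fig. 4.10: a starts firing (consumes one token from c1) while
# c fires completely (consumes 3 tokens from c3 and produces 3 on c4).
sigma = {
    "a": make_const((-1, 0, 0, 0)),
    "c": make_const((0, 0, -3, +3)),
}

# kappa'(v) = sigma(v, v)(d), where d holds the values of the last events.
d = ((0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 0, 0))   # hypothetical previous values
kappa_new = {v: f(d) for v, f in sigma.items()}
print(kappa_new["a"])    # (-1, 0, 0, 0)
```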
In the periodic phase of the self-timed execution of G (see Fig 4.9), the
number of firings of the actors a, b and c are 3, 3 and 2 respectively. The heap
of tag pieces generated by the periodic phase of the actor firings is shown in
Fig 4.11 for a single cycle. At this stage, it can be clearly seen that using
Figure 4.11: The heap of tag pieces for the periodic execution.
our methodology we can compute τσ for the self-timed execution σ of G by
concatenating the corresponding tag pieces with the underlying tag structure T_perf (derived from T_dep and used previously for performance evaluation) and construct the (max,+) representation t_n = A · t_{n−1}, where t_n is the vector comprising the n-th firing times of all the actors. Using the subsequent steps of our algorithm, we can compute the maximum cycle mean λ = ρ(A) (where ρ(A) is the eigenvalue of A) and obtain the throughput Th(G), since Th(G) = 1/λ (Ghamarian, 2008).
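Since t_n = A · t_{n−1} is an ordinary max-plus iteration, λ can also be estimated by simulating it and reading off the per-step growth (illustrative Python; the 2 × 2 matrix is a toy stand-in, not the matrix of G):

```python
# Simulate t_n = A (x) t_{n-1} in max-plus and measure the growth rate,
# which converges to the maximum cycle mean lambda = rho(A).
def mp_apply(A, t):
    return [max(A[i][j] + t[j] for j in range(len(t))) for i in range(len(A))]

A = [[2, 3],
     [1, 2]]          # toy firing-time matrix; its maximum cycle mean is 2
t = [0.0, 0.0]
for _ in range(50):
    prev, t = t, mp_apply(A, t)

lam = max(cur - old for old, cur in zip(prev, t))
throughput = 1 / lam            # Th(G) = 1 / lambda
print(lam, throughput)          # 2.0 0.5
```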
4.7 Modeling Heterogeneous Composition
The models of tagged systems employ different kinds of tag structures for cap-
turing the execution semantics of different MoCs. In this section, we give examples of some other tag structures apart from T_dep, analyse an example of heterogeneous composition and examine the scalability of our approach to performance evaluation in such a scenario. We use product tag structures for capturing
heterogeneous systems involving multiple MoCs.
Among the popular tools for modeling and simulation of heterogeneous systems, Ptolemy from U.C. Berkeley (Eker et al., 2003), one of the most widely used, advocates the use of various kinds of tag structures for capturing a wide range of MoCs. We discuss a case of heterogeneous modeling
(Chang et al., 1997) using Ptolemy and establish the possibility of evaluating
the performance of such mixed models using our proposed mechanism of perfor-
mance evaluation. We consider the case of embedding a dataflow MoC inside
an outer design block which is based on the discrete event MoC. A dataflow
MoC contains the partial ordering information regarding the sequence in which
some of the component modules (sub-systems) execute. A discrete event MoC
has a global notion of time such that every module performs an event in every
time-step and the event may occur with a meaningful valuation or it may be an
empty event. The situation is shown in Fig 4.12, where C is a dataflow (DF)
component embedded inside the overall discrete event (DE) component. In the
DE module, each subsystem executes with every clock cycle producing either a
meaningful or an empty value. In the dataflow module, a subsystem executes
provided input dependencies are satisfied, i.e., subsystems scheduled earlier have
completed execution in the last clock cycle. The ordered tuples at the output of
Figure 4.12: A Heterogeneous scenario: Data Flow within Discrete Event MoC. (The DF component C, comprising the actors E, F and G, is embedded inside the DE component comprising A, B and D; the output tag tuples are (1, (0, 0, 0)) for A and B, (1, (1, 0, 0)) for E, (1, (0, 1, 0)) for F, (1, (1, 1, 1)) for G, and (2, (1, 1, 1)) for D.)
each actor (subsystem) signify the tag value of the event corresponding to the
actor firing. We model the overall DE module using the tag structure Tsynch and
the inner DF module using the tag structure T_dep. The first component of
the tuple is a natural number in bold signifying a vector (comprising the vari-
ables A, B and D in the DE module excluding the DF module C) with all values
equal to the number (as should be the case with Tsynch). With the first instance
of firing of the DE module (whenever actors A and B fire and D has an empty
event), the DE-tags ∈ Tsynch assume the value 1 = 〈1, 1, 1〉. Similarly, with the
second instance of firing of the DE module (whenever actor D fires and A and
B have empty events), the DE-tags assume the value 2. The second component
of the tag values model the ordering of the firing instances of dataflow actors
(E,F,G) using Tdep. Observe that actors E and F fire in parallel with tags (1,0,0)
and (0,1,0) respectively. Firing of G is dependent on E and F and hence G fires
with tag (1,1,1). Next we show how the overall system can be modeled using the
product tag structure Tsynch × Tdep where we capture the dependencies among
events on variables in the DE module and events on variables in the DF module.
In our approach, we deviate from the usual execution semantics of Ptolemy
where a dataflow subsystem appears to an outer DE module as a zero-delay block,
i.e., the DF module produces data to the DE module with the same DE tag as was
available at its input. For our purpose of asymptotic performance evaluation,
we use an alternative semantics as given in (Chang et al., 1997) where each
complete execution cycle of a DF module advances the global time by a fixed
number of absent (⊥) events in the outer DE module called the schedule period.
In a simplistic scenario as depicted in Fig 4.12, where each of the DF actors
consumes one DE cycle, the schedule period of the DF module C will be two DE
cycles since the DF actors E and F fire in parallel followed by G.
The individual (composite) tag pieces are shown in the second column of
Fig. 4.13 marked as ‘a’, ‘b’ · · · in the order of actor firing. The first column in
the figure magnifies the DE tags. The resulting heap of tag pieces for a single
cycle of execution is shown in Fig 4.13 in the third column. The bold arrows
Figure 4.13: Composite tag pieces for a single execution cycle. (The figure shows four composite tag pieces, marked a) to d), over the DE variables A, B and D and the DF variables E, F and G; empty events are marked ⊥.)
in the composite tag pieces signify synchronous events on all the variables in
the DE MoC (as highlighted in the first column). The empty events are labeled
accordingly (with ⊥). The first tag piece as depicted in the figure and marked as
‘a’ signifies events in the outer MoC comprising meaningful events on A and B
and an empty event on D. The second tag piece signifies empty events on all of
A, B and D in the outer MoC while there exist events on E and F in the inner DF
MoC which are dependent on the last events on A, B and D. The third tag piece
signifies empty events on all of A, B and D in the outer MoC, while there exists an event on G in the inner DF MoC which is dependent on the last events on E and F. The last tag piece depicts empty events on A and B and a meaningful
event on D. Similar to the previous cases, it is evident that our algorithm can
compute the tag vector for an execution cycle by concatenation of the composite
tag pieces and output the resulting throughput.
We have implemented our approach of performance evaluation using tag vec-
tors and the throughput results for the different examples discussed throughout
the chapter are as shown in Table 4.1.

Table 4.1: Asymptotic throughput in the examples discussed.

  Problem                                 Job/Actor     ρ(A)   Throughput
  Job-shop in Eq. 4.2 with schedule v′    J1            16     1/16
  SDFG G with execution σ in Fig. 3.4     actor c       6      2/6 = 1/3
  DE and DF composition in Fig. 4.12      DF actor C    2      1/2

More specifically, we provide the asymptotic throughput of a) job J1 in the job-shop J of Eq. 4.2, b) actor c of the
SDFG G in Fig. 3.4 and c) the DF actor C in the heterogeneous composition
scenario of Fig. 4.12.
4.8 Conclusion
In the present chapter, we have modeled job-shop specifications as tag machines.
We revealed the inadequacy of known tag structures in the context of asymptotic
performance evaluation. We derived a new tag structure which can capture the
dependency information in a schedule (given as a tag vector) in order to construct
the recurrence relation among tasks. We proposed a methodology for asymptotic performance evaluation of schedules in a job-shop modeled as tag machines. We elaborated how tag vectors can be computed from
execution runs of SDFGs and heterogeneous compositional scenarios involving
Dataflow (DF) and Discrete Event (DE) modules.
In the following chapter, we show how the methods constructed can be adapted to the finitary MoC of timed automata. We depict timed automata as tag machines and model their executions by the corresponding tag traces, so that the asymptotic execution performance may be computed using the methods devised in this chapter for the evaluation of job-shop schedules using tag machines.
Chapter 5
Translation of Timed Automata to Tag Machines
5.1 Introduction
In the previous chapter we proposed a performance evaluation methodology for
job-shop schedules represented as tag machines. We envisaged that the method-
ology remains applicable uniformly across different MoCs and their compositions.
In order to substantiate that claim, we exhibited techniques for representing the
execution runs of systems modeled using different MoCs as tag vectors. However,
complex embedded system specifications are often given using finitary MoCs. For
evaluating the asymptotic performance of execution scenarios for such specifica-
tions, we need to translate such finitary MoCs to tag machines. In that case,
the dependency information for any possible execution scenario will be captured by the corresponding tag vector generated by the tag machine. With this motivation, we now present a methodology for translating timed automata to tag
machines such that every run of a timed automaton can be represented by a tag
vector computed from the corresponding run in the tag machine obtained using
our methodology.
In section 5.2, we present a brief overview of the formalism of timed automata.
In section 5.3, we provide the theory behind construction of zone automata. We
discuss the construction of tag machines from zone automata in section 5.4 and
prove the correctness of the translation mechanism in section 5.5. The chapter
ends with a concluding discussion in section 5.6.
5.2 Theory of Timed automata
Timed automata (Alur and Dill, 1994) are automata augmented with continuous
clock variables. The values of the clocks grow uniformly at every state. A subset
of clocks can be reset to zero at certain transitions and tests on their values
can be used as guard conditions for enabling transitions. For a set X of clock
variables, the set Φ(X) of clock constraints φ is defined by the grammar
φ := x ≤ c | c ≤ x | x < c | c < x | φ ∧ φ   (5.1)
where x ∈ X, c ∈ R+ ∪ {0}, the set of non-negative reals. A clock valuation is a
function v : X → R+∪{0}. For δ ∈ R+, v+ δ denotes the clock valuation which
maps every clock x to the value v(x) + δ. A clock valuation v for X satisfies a
clock constraint φ over X iff φ evaluates to true according to the values given by
v.
Formally, a timed automaton A is a tuple A = 〈L,L0,Σ, X, I, E〉 (Alur, 1999)
where
• L is a finite set of locations,
• L0 ⊆ L is a set of initial locations,
• Σ is a finite set of labels; the labels signify input events for the automaton,
• X is a finite set of real-valued clocks,
• I is the staying condition or invariant that labels every location l ∈ L,
given by I : L→ Φ(X),
• E ⊆ L × Σ × Φ(X) × 2^X × L is the set of switches. A switch e = 〈li, σ, φ, R, lj〉
represents a transition of the automaton from location li to location lj which
is taken provided an input event σ arrives at a time instant when the clock
valuations satisfy the clock constraint φ. After the switch is executed, the
set R of clocks are initialized to zero.
A configuration of the automaton is a pair (l,v) consisting of a location and
a clock valuation. Every subset Y ⊆ X of clocks induces a reset function ResetY
given as ResetY : V → V, where V is the set of all possible clock valuations.
For any clock valuation v ∈ V, we have ResetY (v) = v′ ∈ V, such that ∀x ∈ X,
v′(x) = v(x) if x ∉ Y, and v′(x) = 0 if x ∈ Y.
The transition system SA corresponding to a timed automaton A can execute
two kinds of transitions :
• Transition due to the passage of time: for a configuration (l, v) and a real-valued time increment δ ≥ 0, (l, v) −δ→ (l, v + δ) if ∀δ′, 0 ≤ δ′ ≤ δ, v + δ′ satisfies the invariant I(l).
• Transition due to change of location: for a configuration (l, v) and a switch 〈l, a, φ, λ, l′〉 such that v satisfies φ, (l, v) −a→ (l′, v′), where v′ = Reset_λ(v).
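The two transition kinds of S_A can be sketched operationally (illustrative Python; guards and invariants are encoded as plain predicates rather than the Φ(X) grammar, and the invariant is checked only at the delay's endpoint for brevity, whereas the semantics requires it for every intermediate δ′):

```python
# Configurations are (location, clock valuation) pairs.

def delay(config, d, invariant):
    """Time transition: (l, v) --d--> (l, v + d)."""
    loc, v = config
    v2 = {x: val + d for x, val in v.items()}
    assert invariant(loc, v2), "invariant violated"
    return (loc, v2)

def switch(config, edge):
    """Discrete transition for a switch (l, a, phi, R, l'): the guard must
    hold, and the clocks in R are reset to zero (the Reset_R function)."""
    l, a, guard, resets, l2 = edge
    loc, v = config
    assert loc == l and guard(v), "switch not enabled"
    return (l2, {x: (0.0 if x in resets else val) for x, val in v.items()})

# The switch (l0, a, x > 0, {y}, l1) of the example automaton in Fig. 5.1:
e = ("l0", "a", lambda v: v["x"] > 0, {"y"}, "l1")
cfg = ("l0", {"x": 0.0, "y": 0.0})
cfg = delay(cfg, 1.5, lambda loc, v: True)    # I(l0) holds trivially here
cfg = switch(cfg, e)
print(cfg)     # ('l1', {'x': 1.5, 'y': 0.0})
```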
As an example, consider the timed automaton A as shown in Fig 5.1 with
L = {l0, · · · , l3}, L0 = {l0}, X = {x, y} and Σ = {a, b, c, d}. The invariant
function I is defined for all the locations such that the clock constraints given by
I hold true in each location, e.g., I(l0) = 〈x ≥ 0, y ≥ 0〉. For other locations, we
skip the invariants for simplicity. The set E of switches of the timed automaton
A has entries like (l0, a, x > 0, {y}, l1), (l1, c, y > 0 ∧ x < 5, {}, l3) and similarly
for other transitions. The model of timed automata gives us an infinite transition
Figure 5.1: An example timed automaton (Alur and Dill, 1994).
system due to infinite clock valuations; however, the range of clock valuations can
be partitioned into a finite number of time regions (Alur and Dill, 1994) yielding
a region automaton. The partitioning is performed by defining an equivalence
relation which equates two configurations with the same location if they agree on
the integral part of all clock values and the ordering of the fractional part of all
clock values. The integral parts of clock values decide whether or not a particular
clock constraint is met and the ordering of fractional parts decides which clock will change its integral part first. In a region automaton, any location may
have multiple successors for the same input symbol. For example, consider the
region automaton in Fig. 5.2 corresponding to the timed automaton in Fig. 5.1.
Starting from the initial region with location l0, the automaton can switch to any
of the three successor regions with location l1 depending on the time of arrival of
the input event a. Another finite representation called zone automaton can be
Figure 5.2: The region automaton for the timed automaton shown in Fig. 5.1.
constructed using clock zones obtained by the convex union of the clock regions
of the multiple successors in the region automaton (Alur, 1999). For the timed
automaton A as shown in Fig 5.1, the corresponding zone automaton Az obtained
by the method described in (Alur and Dill, 1994), is given in Fig. 5.3 (Figures 5.1, 5.2 and 5.3 have been adapted from (Alur and Dill, 1994)). The members of the Az-tuple are Lz = {(l0, 〈x = y = 0〉), (l1, 〈0 = y < x〉), (l1, 〈y = 0, x > 5〉), (l2, 〈2 = y < x〉), (l3, 〈0 < y < x < 5〉), (l3, 〈0 < y, x > 5〉)}, Lz0 = {(l0, 〈x = y = 0〉)}, Σ and X are the same as in A, and Ez has entries like
〈(l0, 〈x = y = 0〉), a, (l1, 〈0 = y < x〉)〉, 〈(l1, 〈0 = y < x〉), c, (l3, 〈0 < y < x < 5〉)〉
and similarly for other transitions. In the zone automaton, every location has
Figure 5.3: The zone automaton (obtained from the region automaton in Fig. 5.2) of the timed automaton in Fig. 5.1.
a unique successor for a given input symbol. Observe that in Az, the zone
(l0, 〈x = y = 0〉) has as successor a single zone (l1, 〈0 = y < x〉) which can
be obtained by taking convex union of the three successor regions of the initial
region (l0, 〈x = y = 0〉), namely : (l1, 〈0 = y < x < 1〉), (l1, 〈y = 0, x = 1〉) and
(l1, 〈y = 0, x > 1〉).
The determinism of a zone automaton makes it a good candidate as an intermediate representation while transforming a timed automaton into a deterministic tag machine. Hence, for a given timed automaton, we may choose to construct the corresponding zone automaton, whose components then yield the set of states, the transition relation and the tag pieces of a tag machine that simulates the original timed automaton. With this motivation, we discuss the construction of a zone automaton from a given timed automaton in the next section.
5.3 Zone Automata Construction
A clock zone z is a set of clock valuations (interpretations) described by a conjunction of clock constraints. A key property of clock zones is closure under the following operations (Alur, 1999):
1. for clock zones z1 and z2, z1 ∧ z2 denotes their intersection.
2. for a clock zone z, z⇑ denotes the set of clock valuations v + δ for v ∈ z and δ ∈ R+ ∪ {0}.
3. for a set λ ⊆ X of clocks and a clock zone z, z[λ := 0] denotes the set of
clock valuations v[λ := 0] for v ∈ z, i.e., valuations obtained by resetting
the clocks in λ and leaving the remaining clocks unchanged.
A location in a zone automaton is a pair (l, z), where l is a location in the corre-
sponding timed automaton and z is a clock zone. Consider a timed automaton
A = 〈L,L0,Σ, X, I, E〉 where all the symbols carry their usual meaning. We may
construct the corresponding zone automaton Az = 〈Lz , Lz0,Σ, X,Ez〉 where:
• Lz is the set of locations in the zone automaton,
• Lz0 ⊆ Lz is the set of initial locations,
• Σ is the finite set of labels,
• X is the finite set of clocks,
• Ez ⊆ Lz × Σ × Lz is the set of switches.
Let the set L of locations of A be {l0, l1, · · · , ln}. The set Lz can be characterized as Lz = ⋃i {li}, li ∈ L, where {li} is of the form {li} = {(li, zi0), · · · , (li, zik)}, 0 ≤ i ≤ n, and each member (li, zij) of {li} is a location of the zone automaton with clock zone zij, 1 ≤ j ≤ k, k being the number of clock zones for the location li of automaton A. Also we have Lz0 = ⋃i {li}, li ∈ L0, where each {li} is a singleton set containing only one location of the zone automaton given by (li, z00), with z00 = {∀x ∈ X, x = 0}. The construction of the zone automaton (Alur, 1999) can be done as follows.
Consider a switch e ∈ E in A such that e = 〈li, a, ψ, λ, lj〉. In the corresponding zone automaton Az, the set of locations with A-location li is {li} as defined previously. If {li} has k clock zones zij, 1 ≤ j ≤ k, we will have k switches in Az of the form ej ∈ Ez such that ej = 〈(li, zij), a, (lj, zij+)〉 (since Az is deterministic, the transition relation is in fact a function of the form Ez : Lz × Σ → Lz), where
zij+ is the successor of the clock zone zij under the switch e computed by the
relation (Alur and Dill, 1994) given below.
zij+ = succ(zij, e) = (((zij ∧ I(li)) ⇑) ∧ I(li) ∧ ψ)[λ := 0]
The succ operation involves taking conjunction of the original location invariant
I(li) of the timed automaton with the clock zone zij after which time elapsing
operation is performed till both the invariant and the constraint ψ of the switch
are satisfied. The overall conjunction is further modified by resetting the set λ of
clocks to get the successor zone. Observe that clock zones are closed under the
succ operation. For every switch e = 〈li, a, ψ, λ, lj〉 in A and every clock zone zij, there will be a switch 〈(li, zij), a, (lj, succ(zij, e))〉 in Az.
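Zones and the succ operation are commonly implemented using difference-bound matrices (DBMs). The following Python sketch of the standard DBM operations is a textbook construction, not code from the thesis; it shows how ∧, ⇑ and [λ := 0] compose into succ:

```python
import math

# DBM sketch for zones over clocks x1..xn plus the reference clock x0 = 0.
# Entry M[i][j] = (c, strict) encodes xi - xj < c (strict) or <= c.
INF = (math.inf, True)
LE0 = (0.0, False)

def add(b1, b2):
    return (b1[0] + b2[0], b1[1] or b2[1])

def lt(b1, b2):  # is bound b1 strictly tighter than b2?
    return b1[0] < b2[0] or (b1[0] == b2[0] and b1[1] and not b2[1])

def close(M):
    """Canonicalize by all-pairs shortest paths (Floyd-Warshall)."""
    for k in range(len(M)):
        for i in range(len(M)):
            for j in range(len(M)):
                if lt(add(M[i][k], M[k][j]), M[i][j]):
                    M[i][j] = add(M[i][k], M[k][j])
    return M

def conj(M, i, j, bound):
    """z AND (xi - xj <= / < c)."""
    N = [row[:] for row in M]
    if lt(bound, N[i][j]):
        N[i][j] = bound
    return close(N)

def up(M):
    """z-up: let time elapse (drop the upper bounds on all clocks)."""
    N = [row[:] for row in M]
    for i in range(1, len(N)):
        N[i][0] = INF
    return N

def reset(M, r):
    """z[xr := 0]."""
    N = [row[:] for row in M]
    for j in range(len(N)):
        N[r][j], N[j][r] = N[0][j], N[j][0]
    return close(N)

def succ(M, inv, guard, resets):
    """succ(z, e) = (((z AND I(li)) up) AND I(li) AND psi)[lambda := 0];
    inv and guard are lists of (i, j, bound) difference constraints."""
    for (i, j, b) in inv:
        M = conj(M, i, j, b)
    M = up(M)
    for (i, j, b) in inv + guard:
        M = conj(M, i, j, b)
    for r in resets:
        M = reset(M, r)
    return M

# Zone x = y = 0 (indices: 1 = x, 2 = y); switch with guard x < 5, reset {y}:
Z0 = [[LE0] * 3 for _ in range(3)]
Z = succ(Z0, inv=[], guard=[(1, 0, (5.0, True))], resets=[2])
assert Z[2][0] == LE0 and Z[0][2] == LE0   # y = 0 after the reset
assert Z[1][0] == (5.0, True)              # x < 5
```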
5.4 Characterizing Zone Automaton as Tag Machine
In order to translate a given timed automaton into the corresponding tag ma-
chine, we need to define all the essential components of a tag machine from the
formal specification of a timed automaton. Given a timed automaton A whose
zone representation is Az, as defined above, we denote the tag machine for A by
TA = (SA, SA0, VA, TA,MA,△A).
5.4.1 Characterizing Locations as Variables
A timed system can be represented as a composition of multiple component au-
tomata. A single component automaton is essentially a sequential machine. Each
of the locations may be reached by an event transition from other location(s). In
the context of specifying real world systems (distributed or stand-alone) using
timed automata, different extensions of the classical model have been proposed
over the years (Bornot et al., 1998; Norstrom et al., 1999) where the component
tasks of the system are either represented by the set L of locations or the set
Σ of events. We follow the first kind of representation where each location of
the automaton is associated with the execution of some subtask(s) of the system.
The input events are used as triggers for switching to a different task (denoted by
another location of the automaton) from the currently executing task (denoted
by the currently active location). Thus, an event transition denotes a change in
the executing task due to the occurrence of some triggering event while a delay
transition denotes the continuation of the same task through passage of time.
A state in a tag machine aggregates the occurrences of events on the set VA
of all variables for the machine. A change in some variable value through an
occurrence of a new event results in a corresponding change in the state. The
variables in a tag machine can be either associated with the locations of the given
timed automaton, that is, with the tasks performed in them, or with the event
labels of the transitions of the automaton. Depending upon the object of study,
a choice is to be made from these two alternatives. For example, for language
theoretic issues like containment (in formal verification), the latter may be more
suitable. For performance evaluation of a schedule specification, however, it
would be more natural to associate each variable in VA with a location (task).
Each of the variables in VA is thus associated with some task(s) of the system. Hence, we can always say that the set L of locations of the timed automaton A has a one-to-one correspondence with the set VA of variables of the corresponding tag machine and hence VA = L. Since we are using the zone automaton for translation from timed automaton to tag machine, the set SA of states of the tag machine TA is defined as SA = Lz with SA0 = Lz0, thus establishing a one-to-one correspondence between the set of states of the tag machine and the set of locations of the zone automaton constructed for the timed automaton.
In the formalism of tag machine, the elapsing of time in a state is measured
by the number of event occurrences on the corresponding variable set. Hence, we
will naturally have discretized time with a small time quantum, δ say, which is
decided based on the smallest constant value occurring in the clock constraints. In the present work, we will always assume this normalized form of timed automaton where all the clock constraints have been normalized by proper digitization (Henzinger et al., 1992). We choose N, the set of non-negative integers, to
model time which is essentially the discrete time model. Verification of real time
systems using discrete time models has been reported in (Emerson et al., 1991;
Campos and Clarke, 1994). It has been established in (Henzinger et al., 1992)
that the integer time semantics suffices for all timed transition systems in the
sense that every qualitative (time-independent) and most common quantitative
(hard real-time) properties are digitizable.
In the context of the tag machine corresponding to the given timed automa-
ton, we may now speak about n consecutive occurrences of an event (n ∈ N) on
some variable vi (without any intervening event on any other variable) of the tag
machine to imply that in the timed automaton, a total of n · δ units of time have
elapsed at some location l = li. Next we characterize the nature of tag-pieces
due to different kinds of transitions occurring in a timed automaton.
5.4.2 Characterizing Transitions as Tag Pieces
As mentioned before, in a timed automaton we have two kinds of transitions, namely, event transitions and delay transitions. We can classify the transitions in a zone automaton Az into the following classes:
• Event transitions of the form (li, zik) −σ→ (lj, zjx), i ≠ j, for some σ ∈ Σ. We designate such transitions as event transitions of type 1.
• Event transitions of the form (li, zik) −σ→ (li, zix), k ≠ x, for some σ ∈ Σ. We designate such transitions as event transitions of type 2.
• Event transitions of the form (li, zik) −σ→ (li, zik), for some σ ∈ Σ. We designate such transitions as event transitions of type 3.
• Delay transitions.
Event transitions of type 2 and 3 are due to self loops in the original timed automaton. Consider a run r of the automaton A of Fig. 5.1 given as,
(l0, [x = 0, y = 0]⇑) −a,1→ (l1, [x = 1, y = 0]⇑) −c,3→ (l3, [x = 3, y = 2]⇑) −d,6→ (l3, [x = 6, y = 5]⇑) −d,7→ (l3, [x = 7, y = 6]⇑)
The entries in a run are represented by 2-tuples comprising locations and the
starting clock values followed by the usual time elapsing operation ⇑ performed
on the clock valuations immediately after the location is entered. The corresponding run in Az will be,
(l0, 〈x = y = 0〉) −a→ (l1, 〈0 = y < x〉) −c→ (l3, 〈0 < y < x < 5〉) −d→ (l3, 〈0 < y, x > 5〉) −d→ (l3, 〈0 < y, x > 5〉)
Thus, the event transitions (a, 1) and (c, 3) result in zone automaton (ZA) tran-
sitions of type 1. The transition (d, 6) results in a ZA transition of type 2
because the location remains the same (l3) and only the clock zone changes from
〈0 < y < x < 5〉 to 〈0 < y, x > 5〉. The transition (d, 7) is of type 3 for the
reason that both location and zone remain the same, which is a case of self loop
in the zone automaton.
Given the timed automaton A = 〈L, L0, Σ, X, I, E〉, having the zone automaton Az = 〈Lz, Lz0, Σ, X, Ez〉, we construct the corresponding tag machine TA = (SA, SA0, VA, TA, MA, △A) with the set of variables VA = L, the set of states SA = Lz and SA0 = Lz0.
As discussed in (Benveniste et al., 2005), the tag structure (Tdep,≤,⊑) is
the structure of choice for functional composition of sub-systems while taking
causality into account. Hence, we have TA = Tdep with a tag piece µA being defined as µA : VA × VA → TA. We define a labeled tag piece for A to be a pair µA = (µA, ς) such that
ς : VA × VA → (DA^|VA| → DA)
where DA = V × Σ′, V is the set of possible clock valuations; Σ′ = Σ ∪ {⊥},
where Σ is the finite set of events of the timed automaton A and ⊥ indicates
a null-event. Recall from chapter 2, definition 8 that ∀u, v ∈ VA, the function
ς(u, v) is defined iff (u = v). We define MA as a finite set of labeled tag pieces for A. The transition relation is given by ∆A ⊆ SA × MA × SA.
Consider a timed automaton A; let |L| = n. For the corresponding tag machine, every tag τ ∈ T is a mapping τ : L → N. A tag piece µ is an n × n matrix of tags, given by µ : L × L → T. In the tag piece µ, a tag µ(li, lj) is a 1 × n vector with its components denoted by µ(li, lj)(u), where u ∈ L.
Consider an event transition of type 1 given as (lm, zmk) −σ→ (ln, znx), m ≠ n. We will refer to tag pieces modeling event transitions in the timed automaton as event tag pieces. (Note that ∆A may equivalently be viewed as a function SA × MA → SA, since the relation is actually a function due to the determinism of the tag machine, guaranteed by the determinism of the zone automaton.) We denote the labeled event tag piece for the transition from location lm to ln by µmn = (µmn, ςmn). The tag piece µmn captures an event on variable
ln which has a dependency on the last event on variable lm. The component tag
values of µmn are given as follows.
µmn(li, lj) =
  〈1n〉 if i = j = n // event on variable ln
  〈0〉 if i = j ≠ n // no event on other variables
  〈0〉 if (i ≠ j) ∧ (i = m) ∧ (j = n) // event on ln depends on the last event on lm
  〈ε〉 if (i ≠ j) ∧ ¬((i = m) ∧ (j = n)) // no other dependencies
The first clause indicates an event on ln in the tag piece. The second clause
ensures the absence of events on any other variable. The third clause indicates
the dependency of the event on ln in the tag piece on the last event on lm. The
fourth clause ensures the absence of any other dependencies.
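The four clauses translate directly into a matrix construction. In this Python sketch the tags 〈1n〉, 〈0〉 and 〈ε〉 are represented by the placeholder strings '1n', '0' and 'eps'; this encoding is illustrative, not the thesis's notation:

```python
# Constructing the tag-piece matrix mu_mn for a type-1 event transition
# l_m --sigma--> l_n, following the four clauses above.

def event_tag_piece(L, m, n):
    """mu_mn over locations L = [l_0, ..., l_{k-1}], for a switch l_m -> l_n."""
    size = len(L)
    mu = [['eps'] * size for _ in range(size)]   # clause 4: no other dependencies
    for i in range(size):
        mu[i][i] = '1n' if i == n else '0'       # clauses 1 and 2
    mu[m][n] = '0'                                # clause 3: dependency on l_m
    return mu

L = ['l0', 'l1', 'l2', 'l3']
mu01 = event_tag_piece(L, 0, 1)   # the switch l0 --a--> l1 of Fig. 5.1
assert mu01[1][1] == '1n'         # event on l1
assert mu01[0][0] == '0'          # no event on l0
assert mu01[0][1] == '0'          # event on l1 depends on last event on l0
assert mu01[2][3] == 'eps'        # no other dependencies
```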
Note that in a tag machine TA corresponding to a timed automaton A, a labeled tag piece is µA = (µA, ς) with ς : VA × VA → (DA^|VA| → DA), where DA = V × Σ′ is the domain from which the events on variables assume values.
The values are given by 2-tuples in which the first component represents the
clock valuations when the event occurred and the second component represents
an input event in the original timed automaton, if there was any (∈ Σ′) when
the event occurred. For an event on any variable v ∈ VA as captured by µ, the function ς(v, v) takes as argument the valuations, d ∈ DA^|VA| say, caused by the respective last events (before µ occurred) on the variables in VA and returns the valuation of the new event on v in µ.
For the event transition of type 1 given as (lm, zmk) −σ→ (ln, znx), m ≠ n, the labeled event tag piece of type 1 given by µmn = (µmn, ςmn) is shown in Fig.
5.4(a). Let the clock valuations for the last event on variable lm be v and the set
of clocks reset during the transition be λ. Hence, the clock valuations for event
on variable ln in µmn must be v[λ := 0]. Thus the event on variable ln in µmn
occurs with the valuation 〈v[λ := 0], σ〉 ∈ DA.
We follow the convention that any term denoted by ‘〈f1 · · · fk〉’ represents a
k-tuple of function symbols which operates on a k-tuple of the form 〈d1 · · · dk〉
such that 〈f1 · · · fk〉(〈d1 · · · dk〉) = 〈f1(d1) · · · fk(dk)〉. Using this convention, we
have ςmn defined as ςmn(ln, ln) = 〈Resetλ, σ〉 ◦ Π_lm^|VA|, and undefined for all other variable pairs. The projection function Π_lm^|VA| takes as argument a |VA|-tuple comprising the values assumed by the last events on all the variables and returns the value 〈v, σ′ ∈ Σ′〉 assumed by the last event on lm. The function 〈Resetλ, σ〉 is defined as 〈Resetλ, σ〉(〈v, σ′ ∈ Σ′〉) = 〈v[λ := 0], σ〉. The first component ‘Resetλ’ of the function 〈Resetλ, σ〉 resets the clocks in the set λ to produce the clock valuation v[λ := 0]. The second component ‘σ’ of the function simply replaces the second component ‘σ′’ of 〈v, σ′ ∈ Σ′〉 (which is the last input event in A before the transition) by itself (i.e., it acts as a constant function).
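The componentwise convention and the composition 〈Resetλ, σ〉 ◦ Π can be sketched as ordinary function composition; the dictionary representation of valuations and all names below are illustrative assumptions:

```python
# The convention <f1 ... fk>(<d1 ... dk>) = <f1(d1) ... fk(dk)>, specialised
# to pairs, and the labeling function: project out the value of the last event
# on l_m, reset the clocks in lambda, stamp the new TA event sigma.

def pair_map(f1, f2):
    """<f1, f2>: apply componentwise to a 2-tuple."""
    return lambda d: (f1(d[0]), f2(d[1]))

def proj(names, target):
    """The projection picking the component of the |V_A|-tuple for l_m."""
    return lambda ds: ds[names.index(target)]

def reset_fn(lam):
    return lambda v: {x: (0.0 if x in lam else t) for x, t in v.items()}

def const(sigma):
    return lambda _old: sigma   # replaces the previous event label by sigma

# The labeling for the switch l0 --a--> l1 with reset set {y}:
names = ['l0', 'l1', 'l2', 'l3']
varsig = lambda ds: pair_map(reset_fn({'y'}), const('a'))(proj(names, 'l0')(ds))
last = [({'x': 1.0, 'y': 1.0}, None), None, None, None]  # last-event values
assert varsig(last) == ({'x': 1.0, 'y': 0.0}, 'a')
```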
A labeled tag vector τ = (τ, κ) generated by the tag machine TA has |VA| components. Any component τ_lm is of the form τ_lm = (τ_lm, κ(lm)) for any lm ∈ VA, such that κ(lm) ∈ DA provides the valuation caused by the last event on lm in τ. The initial labeled tag vector τ⁰ is given by τ⁰ = (τ⁰, κ⁰), where τ⁰ is an initial tag vector defined over VA and ∀lm ∈ VA, κ⁰(lm) = 〈v0,⊥〉, where v0 is the initial clock valuation given by ∀x ∈ X, v0(x) = 0, and ⊥ denotes the absence of any timed automaton (TA) event.
Consider the case when the labeled event tag piece µmn = (µmn, ςmn) is concatenated with a labeled tag vector τ having τ_lm = (τ_lm, κ(lm)) where κ(lm) = 〈v, σ′ ∈ Σ′〉. We have τ′_ln = (τ′_ln, κ′(ln)), where τ′_ln = (τ · µmn)_ln and
κ′(ln) = ςmn(ln, ln)(κ(l1), · · · , κ(l|VA|)) = 〈Resetλ, σ〉 ◦ Π_lm^|VA|(κ(l1), · · · , κ(l|VA|)) = 〈Resetλ, σ〉(κ(lm)) = 〈Resetλ, σ〉(〈v, σ′ ∈ Σ′〉) = 〈v[λ := 0], σ〉,
which indicates that the clock valuation after the tag piece is v[λ := 0], and σ denotes that the tag piece models an event transition with the TA event σ associated with it.
Figure 5.4: Labeled event tag pieces: (a) type 1; (b) types 2 and 3.
Observe that in case of labeled tag pieces for event transitions of type 2 and
3 there exists no dependencies from other variables since there is no change in
the executing event variable, lk say. Such tag pieces are of the form µk = (µk, ςk)
where,
µk(li, lj) =
  〈1k〉 if i = j = k // event on variable lk
  〈0〉 if i = j ≠ k // no event on other variables
  〈ε〉 if i ≠ j // no dependence of the event on other events
The first clause denotes the presence of an event on variable lk in the tag piece.
The second clause denotes the absence of events on variables other than lk in the
tag piece. The third clause denotes the absence of dependencies of the event on
variable lk in the tag piece on any preceding event on some other variable. The
function ςk remains the same as defined in case of type 1 transitions.
In a timed automaton, immediately after a new location is entered with some
clock valuation, the clocks keep on increasing uniformly indicating the passage
of time. Hence, the delay transitions are implicit in such a model. However, the
corresponding transitions in the tag machine (TM) will contribute to explicit
events on the location variables as explained below. Consider a state s in the tag
machine which corresponds to some location (l, z) in the zone automaton where
l is a location of the original timed automaton. Since the location in a timed
automaton gives rise to multiple locations in a zone automaton, we may have the
same variable corresponding to many states in the corresponding tag machine.
Thus, the elapsing of time in s is modeled by consecutive occurrences of events on
the variable l. Events on the variable corresponding to the tag machine state thus
keep on occurring unless a new event transition (of the timed automaton) takes
place causing a change in location. In order to capture this scenario using the
computation mechanism of tag vectors via concatenation of tag pieces, the delay
transitions in a timed automaton are explicitly depicted as self-loops associated
with the states of the tag machine and labeled with appropriate tag pieces. We
will refer to the tag pieces modeling the delay transitions in the timed automaton
as delay tag pieces.
Consider a delay transition of the form (lk, v) −d→ (lk, v + d) in the timed automaton. Such a delay transition of d time units in the location lk ∈ L of the timed automaton is modeled using d successive delay tag pieces. We denote by µ′k the delay tag piece which labels self-loops (in tag machines) capturing event occurrences on the variable corresponding to location lk.
The component tag values of µ′k are given as follows.
µ′k(li, lj) =
  〈1k〉 if i = j = k // event on variable lk
  〈0〉 if i = j ≠ k // no event on other variables
  〈ε〉 if i ≠ j // no dependence of the event on other events
A labeled delay tag piece µ′k = (µ′k, ς′k) is shown in Fig. 5.5.

Figure 5.5: Labeled delay tag piece with the valuation function ς.

We have ς′k defined as ς′k(lk, lk) = 〈+1,⊥〉 ◦ Π_lk^|VA| (which is a function of type V × Σ′ → V × Σ′ that takes as argument elements of the form 〈v, σ〉 ∈ V × Σ′), and undefined for all other variable pairs. The function 〈+1,⊥〉 is defined as 〈+1,⊥〉(〈v, σ ∈ Σ′〉) = 〈v + 1,⊥〉. Thus, the first component ‘+1’ of the function 〈+1,⊥〉 increments the first component v (the clock valuation) of the input argument 〈v, σ ∈ Σ′〉 by 1 for all the clock variables. The second component ‘⊥’ of 〈+1,⊥〉 is a constant function which simply replaces the second component ‘σ’ of 〈v, σ ∈ Σ′〉 (the last input event in A) by ⊥.
When the labeled delay tag piece µ′k = (µ′k, ς′k) is concatenated with a labeled tag vector τ having τ_lk = (τ_lk, κ(lk)) where κ(lk) = 〈v, σ ∈ Σ′〉, we have τ′_lk = (τ′_lk, κ′(lk)) such that τ′_lk = (τ · µ′k)_lk and
κ′(lk) = ς′k(lk, lk)(κ(l1), · · · , κ(l|VA|)) = 〈+1,⊥〉 ◦ Π_lk^|VA|(κ(l1), · · · , κ(l|VA|)) = 〈+1,⊥〉(κ(lk)) = 〈+1,⊥〉(〈v, σ ∈ Σ′〉) = 〈v + 1,⊥〉.
The first component of 〈v + 1,⊥〉 indicates that the clock valuation after the occurrence of the new tag piece is v + 1, one unit more than the valuation just before the occurrence of this tag piece (which is v), and ⊥ denotes that the new tag piece does not have any TA event associated with it.
Next we characterize the transition relation △A for the tag machine TA corresponding to the timed automaton A.
5.4.3 Characterizing the Transition Relation
The transition relation △A of the tag machine TA may be constructed as follows. For any state si = (lj, z) ∈ SA in the tag machine TA, we may include a tuple (si −µ′j→ si) in △A, where the delay tag piece µ′j can be constructed according to the method given previously. As discussed above, we make the delay transitions explicit in case of tag machines. For any event transition (li, zik) −e→ (lj, zjx), let the states in the tag machine corresponding to (li, zik) and (lj, zjx) be sm and sn respectively. Depending on the type of the event transition, we include the tuple (sm −µij→ sn) in △A, where the event tag piece µij can be constructed according to the method given previously.
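The construction above amounts to a simple traversal of the zone automaton. In this sketch the tag pieces are stood in for by symbolic names, an illustrative simplification; the actual matrices are built as in Section 5.4.2:

```python
# Building Delta_A from the zone automaton: one delay self-loop per state,
# one event tag piece per zone-automaton switch.

def build_delta(Lz, Ez):
    """Lz: zone-automaton locations (l, z); Ez: switches ((l, z), a, (l', z'))."""
    delta = set()
    for (l, z) in Lz:                           # explicit delay self-loops
        delta.add(((l, z), f"mu'_{l}", (l, z)))
    for (src, a, dst) in Ez:                    # event transitions (types 1-3)
        delta.add((src, f"mu_{src[0]}{dst[0]}", dst))
    return delta

Lz = {('l0', 'x=y=0'), ('l1', '0=y<x')}
Ez = {(('l0', 'x=y=0'), 'a', ('l1', '0=y<x'))}
d = build_delta(Lz, Ez)
assert (('l0', 'x=y=0'), "mu'_l0", ('l0', 'x=y=0')) in d
assert (('l0', 'x=y=0'), 'mu_l0l1', ('l1', '0=y<x')) in d
```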
Based on the above methodology, the tag machine for the timed automaton A is shown in Fig. 5.6. The tag machine has the same number of states as the number of locations in the zone automaton. The states may be enumerated as s0 = (l0, 〈x = y = 0〉), s1 = (l1, 〈0 = y < x〉), s2 = (l2, 〈2 = y < x〉), s5 = (l1, 〈y = 0, x > 5〉), s3 = (l3, 〈0 < y < x < 5〉), s4 = (l3, 〈0 < y, x > 5〉). For a state si = (lj, z) ∈ SA, the delay transitions are explicitly shown with the labeled delay tag pieces denoted by µ′j, and the event transitions from sm = (li, zik) to sn = (lj, zjx) are shown with the labeled event tag pieces denoted by µij.
Figure 5.6: The tag machine for the automaton A.
In Fig. 5.6, the labeled event tag pieces µ01, µ12, µ13, µ31 are of type 1, the labeled event tag piece µ33 for the transition from the state s3 to the state s4 is of type 2, and the labeled event tag piece µ33 for the self-loop of the state s4 is of type 3. Recall the run r of the automaton A given as,
(l0, [x = 0, y = 0]⇑) −a,1→ (l1, [x = 1, y = 0]⇑) −c,3→ (l3, [x = 3, y = 2]⇑) −d,6→ (l3, [x = 6, y = 5]⇑) −d,7→ (l3, [x = 7, y = 6]⇑).
The corresponding run in Az was,
(l0, 〈x = y = 0〉) −a→ (l1, 〈0 = y < x〉) −c→ (l3, 〈0 < y < x < 5〉) −d→ (l3, 〈0 < y, x > 5〉) −d→ (l3, 〈0 < y, x > 5〉).
The labeled tag trace (or, trace of labeled tag pieces) corresponding to the timed run is given by: [µ′0, µ01, (µ′1)², µ13, (µ′3)³, µ33, µ′3, µ33].
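The correspondence between a timed run and its labeled tag trace follows a simple pattern, ti − ti−1 delay pieces in the current location followed by one event piece, which can be sketched as follows (location and piece names abbreviated; an illustrative encoding):

```python
# Turning a timed run into its labeled tag trace.

def tag_trace(run):
    """run: [(l0, 0), (l1, t1), (l2, t2), ...] with arrival times t_i."""
    trace, (loc, t) = [], run[0]
    for (nxt, tn) in run[1:]:
        trace += [f"mu'_{loc}"] * (tn - t)   # delay pieces while in loc
        trace.append(f"mu_{loc}{nxt}")        # the event piece
        loc, t = nxt, tn
    return trace

# The run of Fig. 5.1: a at t=1, c at t=3, d at t=6, d at t=7
run = [('0', 0), ('1', 1), ('3', 3), ('3', 6), ('3', 7)]
assert tag_trace(run) == ["mu'_0", 'mu_01',
                          "mu'_1", "mu'_1", 'mu_13',
                          "mu'_3", "mu'_3", "mu'_3", 'mu_33',
                          "mu'_3", 'mu_33']
```

The output matches the trace [µ′0, µ01, (µ′1)², µ13, (µ′3)³, µ33, µ′3, µ33] computed above.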
5.5 Correctness of the Translation
A configuration of a tag machine is defined as a pair (τ = (τ, κ), s = (l, z)), where τ is a tag vector and s is a state such that the first component of κ(l) is the present clock valuation, which satisfies the zone z, i.e., Π²₁(κ(l)) ∈ z. Next we provide a definition of equivalence between the configuration of a timed automaton and the configuration of a tag machine.

Definition 12. A configuration (ln, vn) of a timed automaton A is equivalent to a configuration (τⁿ = (τⁿ, κⁿ), sn = (ln, znx)) of the tag machine TA corresponding to A iff Π²₁(κⁿ(ln)) = vn ∈ znx. The equivalence is denoted by (ln, vn) ≡ (τⁿ, sn).
Using this notion of equivalence, we proceed with the proof of correctness of
our translation mechanism from timed automaton to tag machine as given by
the following theorem.
Theorem 5.5.1. For any timed run of a timed automaton A reaching a configu-
ration (l, v), there exists a tag trace (sequence of transitions) of the tag machine
TA reaching a configuration which is equivalent to (l, v).
Proof. By induction on the length |π| of a timed run π, given by the number of transitions in π.

Basis (|π| = 0): Obviously, the configuration of the TA A reached by π is an initial configuration given by (l0, v0), where l0 is an initial location and v0 is the initial clock valuation given by ∀x ∈ X, v0(x) = 0. The corresponding equivalent configuration of the tag machine is (τ⁰ = (τ⁰, κ⁰), s0 = (l0, z00)), where z00 = {x = 0 | x ∈ X}, reached by a tag trace of length 0.
Induction step: We assume that for a timed run πn−1 in A of length n − 1 given by,
(l0, v0⇑) −σ1,t1→ (l1, v1⇑) −σ2,t2→ (l2, v2⇑) −σ3,t3→ · · · −σn−1,tn−1→ (ln−1, vn−1)
reaching a configuration (ln−1, vn−1), there exists a tag trace πt_{n−1} given by,
πt_{n−1} = [(µ′0)^t1, µ01, (µ′1)^(t2−t1), µ12, · · · , (µ′n−2)^(tn−1−tn−2), µ(n−2)(n−1)],
reaching a configuration (τⁿ⁻¹ = (τⁿ⁻¹, κⁿ⁻¹), sn−1 = (ln−1, z(n−1)x)) in the corresponding tag machine TA such that (ln−1, vn−1) ≡ (τⁿ⁻¹, sn−1) — the induction hypothesis.
We extend πn−1 to construct a timed run π of length n given by,
(l0, v0⇑) −σ1,t1→ (l1, v1⇑) −σ2,t2→ (l2, v2⇑) −σ3,t3→ · · · (ln−1, vn−1⇑) −σn,tn→ (ln, vn)
reaching a configuration (ln, vn). Hence, there exists a location (ln, zny) in the corresponding zone automaton Az such that vn ∈ zny.
By construction of TA,
1. there must exist a state sn = (ln, zny) such that vn ∈ zny,
2. there must exist a delay transition with tag piece µ′n−1 for state sn−1,
3. there must exist an event transition with tag piece µ(n−1)n from sn−1 to sn.
Hence, πt = [πt_{n−1}, (µ′n−1)^(tn−tn−1), µ(n−1)n] is a possible tag trace of TA reaching a configuration (τⁿ, sn), where τⁿ = (τⁿ, κⁿ) = τⁿ⁻¹ · (µ′n−1)^(tn−tn−1) · µ(n−1)n.
By the induction hypothesis, (ln−1, vn−1) ≡ (τⁿ⁻¹ = (τⁿ⁻¹, κⁿ⁻¹), sn−1). Hence, by definition 12, we have Π²₁(κⁿ⁻¹(ln−1)) = vn−1. Now consider the intermediate tag vector given by τⁿ⁻¹_i = (τⁿ⁻¹_i, κⁿ⁻¹_i) = τⁿ⁻¹ · (µ′n−1)^(tn−tn−1), for which Π²₁(κⁿ⁻¹_i(ln−1)) = vn−1 + tn − tn−1. Observe that τⁿ = (τⁿ, κⁿ) = τⁿ⁻¹_i · µ(n−1)n. It is easy to check that Π²₁(κⁿ(ln)) = (vn−1 + (tn − tn−1))[λn := 0] = vn. Thus, for any timed run π of length n in A leading to a configuration (ln, vn), there exists a tag trace πt in TA leading to a configuration (τⁿ = (τⁿ, κⁿ), sn = (ln, zny)) such that (ln, vn) ≡ (τⁿ, sn).
5.6 Conclusion
In this chapter we have presented a correct-by-construction methodology for translating a timed automaton to a tag machine. The motivation for this work has been to show that complex specification formalisms like timed automata can be translated to the formalism of tag machines using an appropriate choice of tag structures.
Apart from performance evaluation, another important requirement in any design methodology of embedded systems is the availability of formal analysis methods for correctness verification. With a view towards establishing a reasoning framework for correctness verification of TSM based heterogeneous system models, we provide a survey of popular techniques for reasoning about embedded systems in the subsequent chapter.
Chapter 6
Reasoning about Embedded Systems: Literature Survey
6.1 Introduction
We begin with a brief discussion of the commonly used techniques for reasoning about embedded systems for behavioural and property verification. Verification of embedded systems is the process of checking whether the implementation satisfies the specification. In industrial design cycles, testing is used for this purpose with the objective of achieving a high test coverage. The problem with testing is that the test space is often too large to be covered reasonably, particularly in the case of embedded systems.
In this regard, a different approach is formal verification, which tries to prove or disprove a given property of a system. A relatively recent survey of formal reasoning techniques typically employed for embedded real-time systems can be found in Wang (2004), while (Ostroff, 1992) is an earlier survey emphasizing older efforts. The two most widely used approaches in this regard are model checking and logical inferencing using theorem proving or deductive verification.
Model checking techniques work on a formal representation of the system modeled using MoCs like finite state machines, Petri nets, etc. A model consists of a domain of values and functions on the domain. Without such formal definitions, rigorous and mechanical verification of real-time systems is not possible. Intuitively, for the purpose of specification and verification, a model captures a set of acceptable behaviours as intended by a system description (or specification).
In model checking, the objective is to detect whether a given property holds
for the system model or not. Such properties are described in temporal logics
like linear temporal logic (LTL), computational tree logic (CTL), etc. Model
checking techniques systematically explore all the relevant states and transitions
of the model and infer the truth or falsity of the property. Such techniques are
frequently limited by the problem of state-space explosion in the case of reasonably
sized real-life system models. Another considerable overhead is the construction
of such checkable system models. One solution in this regard is to use program-
ming languages with formal semantics right from the start of system specifica-
tion. A prominent example is the synchronous programming language Esterel
(Berry, 2000) which is able to specify a control oriented system at a high-level
of abstraction and perform automated synthesis towards embedded hardware or
software realizations. The high level description itself serves as the model which
may be used for property verification. The correctness of the implementation
with respect to the initial high level specification may then be verified using
techniques of equivalence checking for behavioural verification. Methods for
behavioural verification establish a notion of equivalence between two behaviours and
check whether that equivalence holds between the behaviours generated by two
different specifications (implementations). Unlike model checking techniques, the
approaches for deductive verification do not suffer from the problem of state-
space explosion in case of real-life complex systems. Hence, such techniques do
not require finite-state abstractions. However, the downside of such techniques
is that the deductive approaches are mostly human-in-the-loop semi-automated
mechanisms which require formulation of good proof tactics.
The chapter is organized as follows. We discuss certain aspects of model
checking for system property verification in section 6.2. In section 6.3, we discuss
methods for behavioural verification using functional equivalence checking. In
section 6.4, we discuss some of the popular deductive verification approaches.
Finally we summarize our survey of different kinds of reasoning mechanisms in
section 6.5.
6.2 Model Checking based Methods
Model checking is a popular verification technique which has been used suc-
cessfully for verifying temporal logic properties of finite state machine based
system models (Pnueli, 1977; Emerson and Clarke, 1982; Emerson and Halpern,
1986; Alur et al., 1990; Clarke et al., 1999). The method works on finite state
abstractions of automata-based transition systems using exhaustive state-space
exploration techniques. According to the various frameworks commonly used,
a model for a real-time system can be a state set, a state sequence, an event
sequence, a state tree or an infinite domain with relations (Wang, 2004). A
computation can be viewed either as a state sequence with only one future or
as a tree with many possible futures. The former conforms to the semantics of
LTL (Pnueli, 1977) while the latter conforms to the semantics of CTL (Emer-
son and Clarke, 1982). A real-time extension of CTL (RTCTL) was proposed
by Emerson et al. (1991). The expressiveness and complexity of a CTL extension
with universally quantified clock variables and arbitrary linear clock constraints
has been discussed in Harel et al. (1990). The most commonly used branching-
time temporal logic for real-time systems with dense-time clock models is TCTL
(Alur et al., 1990). The integration of linear and branching time into a unified
specification language called CTL∗ is discussed in Emerson and Halpern (1986).
A natural extension of CTL∗ is TCTL∗. Model checking methods for a subclass
of TCTL∗ have been discussed in Moller (2002).
It is not always the case that finite-state abstractions can be derived for a
given transition system. For example, the continuous state space of hybrid
automata does not admit equivalent finite-state abstractions (Henzinger, 1996).
Hence, model checkers for hybrid automata use various approximations
(Alur et al., 1995; Henzinger et al., 1994; Mysore, 2006; Asarin et al., 2003;
Anai and Weispfenning, 2001; Chutinan and Krogh, 2003; Clarke et al., 2003;
Franzle, 1999; Tiwari, 2003). In Lafferriere et al. (1999), the authors present a
decision procedure for o-minimal hybrid automata and classes of linear dynamics
with a homogeneous eigenstructure. An o-minimal hybrid system admits
a finite bisimulation. In particular, the bisimulation algorithm terminates for
o-minimal hybrid systems (Lafferriere et al., 1998). However, the work requires
independent analysis of discrete and continuous dynamics, i.e., the outcome of
a discrete jump is completely independent of the continuous state. In Chutinan
and Krogh (2003), the authors present a polyhedral approximation of hybrid au-
tomata with polyhedral discrete dynamics, invariants and initial state sets. The
work reported in Franzle (1999) shows decidable reachability for specific classes
of robust polynomial hybrid automata. In polynomial hybrid automata, every
state and transition can be described by one polynomial predicate through the
first-order logic over real-closed fields. In Asarin et al. (2003), the authors use
a piecewise linear numerical approximation in an approximate reachability al-
gorithm for continuous systems with known Lipschitz bounds. The decidability
of bounded-time and bounded switching reachability prefixes of semi-algebraic
hybrid automata has been shown in Mysore (2006). Hybrid systems generally
have non-linear parametric constraints due to the interaction of discrete and
continuous dynamics thus rendering standard model-checking approaches (Alur
et al., 1995; Chutinan and Krogh, 2003; Henzinger, 1996; Frehse, 2008) unusable.
Overall, since hybrid systems do not admit equivalent finite-state abstractions
(Henzinger, 1996) and due to the general limits of numerical approximations
(Platzer and Clarke, 2007), hybrid model checkers are still more successful in
falsification than in verification (Platzer, 2008).
6.3 Equivalence Checking for Behavioural Verification
Due to the increasing abstraction gap between initial system level models and
final implementations, the verification of transformational design refinements be-
tween models at different abstraction levels has become a formidable task. As
advocated in Edwards et al. (1997), a system design methodology should derive
the final implementation from an initial high level model through correct-by-
construction design decisions, e.g. (Seceleanu, 2000). Synchronous languages like
Esterel (Berry, 1996) and Lustre (Halbwachs et al., 1991) have been developed
keeping in mind the requirement of formal verification. Verification of Esterel
models by theorem proving and verification of Lustre models using binary deci-
sion diagrams (BDDs) have been reported (Nadjm-Tehrani and Akerlund, 1999;
Hagen and Tinelli, 2008). Another example of a verification oriented design
language is LAVA (Singh, 2003) which is based on the functional programming
language Haskell. LAVA can model hardware as well as the requirements that
the hardware needs to satisfy. The requirements are then verified using theorem
proving techniques. The correctness verification of transformational design re-
finements in SpecC has been discussed in Abdi and Gajski (2006). However, such
an approach limits the designer to only a set of design transformations which are
known to be correct-by-construction. Another approach reported in Raudvere
et al. (2008) discusses verification methods which are applicable towards proving
the correctness of nonsemantic-preserving transformations.
The requirement of equivalence checking methods for behavioural transfor-
mation verification has great importance in the context of High Level Synthesis
(HLS) frameworks. High level synthesis is the process of generating the register-
transfer-level (RTL) design from the high level behavioural description. Different
methods for checking the equivalence of the generated RTL and the initial model
exist. In general, techniques for equivalence checking in HLS can be broadly
divided into three categories: presynthesis verification, where software verification
methods are applied; formal synthesis verification, where the synthesized
results are formally derived using some logical calculus; and postsynthesis
verification, where the synthesized results are verified against the input behavioural
description (Kumar et al., 1996). In Chapman et al. (1992), a presynthesis
verification technique for the BEDROC HLS tool is reported. However, due to the
large size of HLS verification tools, it is not possible to apply such techniques
in all phases of the HLS flow. Formal synthesis verification techniques for HLS
have been reported in Blumenroehr et al. (1996); Mendıas et al. (2002). How-
ever, for applying such techniques, the designers are required to understand the
mathematical formalisms in order to represent the underlying hardware concepts
as formal transformations. In this regard, postsynthesis verification techniques
have an advantage due to the fact that in case of postsynthesis verification, the
correspondence between outputs of various design steps in the HLS flow is al-
ready established and is independent of the synthesis procedure. Hence, among
the above three techniques, postsynthesis verification is the most widely used
(Kumar et al., 1996). Examples of such HLS verification methods can be found in Fujita
(2005); Mansouri and Vemuri (1998); Radhakrishnan et al. (2000). These
techniques, however, suffer from an inability to locate the exact scheduling errors.
Simulation based techniques proposed in Bergamaschi et al. (1995); Ernst and
Bhasker (1991), on the other hand, become impractical in the case of highly complex
digital systems and other postsynthesis methods like (Ashar et al., 1998; Borrione
et al., 2000) are only applicable for specific phases (allocation, binding) of HLS.
Techniques for HLS verification with the constraint that the control structure
does not change have been reported in Mansouri and Vemuri (1998). The work
reported in Kim et al. (2004) proposes another HLS verification method which
uses Finite State Machine with Data (FSMDs) as representations of both input
and output of the HLS scheduler with the constraint that the path structure is
not disturbed. Among other FSMD based approaches, (Jain et al., 1991; Lee
et al., 1989) are well suited only for basic block (BB) based scheduling, in which
the path structure of the input behaviour is not modified by scheduling and
operations are not moved across the BB boundaries. The works reported in
Camposano (1991); Rahmouni and Jerraya (1995) are able to handle transforma-
tions which modify control structures in the input behaviour and in Karfa et al.
(2008) the authors discuss a method which may handle such transformations as
well as code motions across BB boundaries.
Bisimulation is a notion of system equivalence that has become one of the
primary tools in the analysis of concurrent processes. When two concurrent sys-
tems are bisimilar, known properties for one system are readily transferred to the
other. For purely discrete systems, several techniques for establishing bisimula-
tion equivalence have been proposed. In Joyal et al. (1996), the authors propose
a notion of bisimulation equivalence for concurrency in an abstract categorical
setting which captures the strong bisimulation definition of Milner (1989). Fur-
ther, in Cheng and Nielsen (1998), it was shown that abstract bisimilarity also
captures testing equivalence (Hennessy, 1988), barbed bisimulation (Milner and
Sangiorgi, 1992) and probabilistic bisimulation (Larsen and Skou, 1991). Bisim-
ulation relations for Markov processes have been proposed in Desharnais et al.
(2002). Co-algebraic approaches for bisimulation have been proposed in Jacobs
and Rutten (2002); Rutten (1996).
The methodologies for behavioural verification discussed so far are more or
less restricted to the domain of digital hardware and its high-level synchronous
representations. In the domain of modern day computing devices, hybrid sys-
tems have recently emerged as a mathematical model for embedded computing
devices interacting with the continuous time environment. A major challenge in
the research area of hybrid systems is defining the notion of equivalence among
systems so that the known properties of one system can be readily transferred
to another. The works reported in Pappas (2003); Tabuada and Pappas (2004)
characterize the notion of bisimulation equivalence for dynamical and control sys-
tems in a functional setting, which is further extended to a relational setting in
Haghverdi et al. (2005), where it is shown that this equivalence relation is captured
by the abstract bisimulation relation of Joyal et al. (1996). The work reported
in Haghverdi et al. (2005) also proposes novel and natural notions of bisimulation for
hybrid systems and shows that these notions are likewise captured in the framework of
Joyal et al. (1996), thus unifying the notion of bisimulation across the discrete and
continuous domains.
6.4 Deductive Verification Techniques
Deductive approaches for verifying systems rely on formal proofs instead of state
space exploration and thus do not require finite state abstractions (Harel, 1979;
Harel et al., 1984; Chaochen et al., 1993; Hutter et al., 1996; Davoren, 1999;
Davoren and Nerode, 2000; Beckert and Platzer, 2006; Beckert et al., 2007). Du-
ration Calculus (DC) is a formal system for specification and design of real-time
safety-critical systems. It is based on a modal logic for describing and reasoning
about the real-time behaviour of dynamic systems (Chaochen et al., 1993). It
is an extension of Interval Temporal Logic (Moszkowski and Manna, 1983), but
with continuous time. The calculus also uses a multitude of rules and a non-
constructive oracle which requires external mathematical reasoning about the
notions of derivatives and continuity. In Hutter et al. (1996), the incorporation
of a deduction method into the verification support environment (VSE) for mod-
eling and verification of safety critical properties in software systems has been
discussed. In Davoren (1999), a modal µ-calculus has been proposed for formal
analysis and verification of hybrid and real-time systems. The work demonstrates
that the modal logic extensions of the propositional µ-calculus provide a richly
expressive formalism with natural representations for reasoning about proper-
ties of hybrid dynamical systems. It has further been argued in Davoren and
Nerode (2000) that deductive methods support formulas with free parameters.
The authors present a modal µ-calculus in hybrid systems and examine topolog-
ical aspects. They provide Hilbert-style calculi for proving formulas valid for all
hybrid systems simultaneously. In Chaochen et al. (1996), the authors present
a hybrid variant of CSP as a language for describing hybrid systems. They also
give a semantics using extended duration calculus (Chaochen et al., 1993) but do
not provide any verification technique. In Ronkko et al. (2003), the authors pro-
vide an extended guarded command language with differential relations and give
a weakest-precondition semantics in higher-order logic with built-in derivatives.
However, no verification means of this logic has been provided and the approach
is still limited to providing notational variants for classical mathematics.
An algebraic framework dealing with hybrid systems is the process algebra of
Bergstra and Middelburg (2005). The algebra has been obtained by extending
the process algebra with continuous relative timing from Baeten and Middelburg
(2002) and the process algebra with propositional signals from Baeten and
Bergstra (1997). However, the algebra does not contain structural transformation
rules for larger systems. Besides the theories of hybrid automata and algebra,
there are further related works. For example, in He Jifeng (1994) a variant of
timed CSP (Davies et al., 1992) is introduced that allows limited handling of
continuous behaviour. In Rounds and Song (2003), the π-calculus (Milner, 1999)
is modified such that it can deal with continuous behaviour.
Checking the invariants of hybrid automata using theorem provers like STeP
(De Alfaro et al., 1997) and PVS (Abraham-Mumm et al., 2001) has been proposed
by De Alfaro et al. (1997); Kesten et al. (2000) and Abraham-Mumm et al.
(2001) respectively. However, in such methods, the hybrid aspect and the tran-
sition structure are flattened to a quantified boolean formula and the elegance
of deductive verification using symbolic decomposition gets lost in the process.
Another approach where the axioms of Kleene algebra are used for hybrid sys-
tem verification has been discussed in Hofner and Moller (2009). In Platzer
(2008), the author proposes a dynamic logic dL (differential dynamic logic) for
reasoning about hybrid systems. Based on dL, a theorem proving environment
called KeYmaera has been developed, which has been applied to verifying flight
controllers and the European Train Control System protocol (Damm et al., 2007; Platzer
and Quesel, 2009).
6.5 Conclusion
In this chapter, we have explored some of the popular formal reasoning techniques
primarily used for embedded system verification.
The different methodologies for model checking as discussed are largely based
on finite automata and hybrid automata as the underlying formal models.
In case of equivalence checking, we find that the methodologies are applicable
to the models of finite automata and their extensions with operations over data.
For behavioural verification of both discrete and continuous systems, different
notions of bisimulation have also been established.
In case of deductive verification techniques, we find many examples of logical
inference mechanisms being proposed for a wide range of MoCs. This is,
however, natural, since deductive techniques only require a set of sound logical
inference rules to be provided for a specific MoC. Thereafter, theorem provers
enabled with the axiomatization of the MoC are employed for both behavioural
and property verification. In most cases, due to the complexity of the underlying
MoCs, the axiomatization is sound but not complete, thus providing no decision
procedure and relying on human-in-the-loop iterations in the hope that a proof
will be arrived at.
To the best of our knowledge, no formal reasoning framework applicable to
TSMs has been reported in the literature. In the subsequent chapters, we have
tried to provide an actor based reasoning framework for TSMs and demonstrated
its possible applicability. Our approach has largely been towards proposing an
algebraic axiomatization over the set of TSM actors using which deductive veri-
fication of heterogeneous systems may be carried out.
Chapter 7
TSM Actors and their Kleene
Algebra
7.1 Introduction
In this chapter we intend to provide an algebraic framework in order to evolve rea-
soning mechanisms over TSMs. For capturing computation over tagged signals,
we define the notion of weakly Scott-continuous TSM actor functions. Continu-
ity of such functions is a required property since continuity implies the causality
of physical systems modeled using such functions. However, continuity of
functional maps further requires certain order-theoretic properties to hold for both
the domain and range of such mappings. Hence, we perform an order theoretic
analysis of the set of tagged signals and actor functions defined over such signals.
We define sequential and concurrent composition and finite iteration over such
actor functions and prove their closure. We finally show that the set of TSM ac-
tor functions, ordered pointwise and equipped with the operations defined, forms
a Kleene algebra.
The chapter is organized as follows. In section 7.2, we provide some basic
order theoretic background. In section 7.3, we study the order structure of tagged
signals. In section 7.4, we define TSM actor functions and certain operations over
such functions. We further prove the closure of the set of TSM actor functions
under such operations. Next in section 7.5, we study the algebraic properties of
TSM actor functions. Finally, we conclude our discussions on algebraic properties
of TSM actors in section 7.6.
7.2 Some Important Definitions
The following definitions are taken from the literature (Davey and Priestley,
2002).
Directed Set: Let (P,≤) be a poset. A subset S of P is said to be directed if
it is non-empty and ∀x, y ∈ S, there exists z ∈ S such that x ≤ z and y ≤ z.
Equivalently, z is an upper bound of {x,y}.
DCPO: A poset (P,≤) is said to be a directed complete partial order (DCPO)
if every directed subset S of P has a least upper bound (lub), denoted as ∨S.
CPO: A DCPO P is a CPO if it has a least element, denoted by ⊥P.
Note that CPOs form a cartesian closed category due to the following important
property.
Lemma 7.2.1. If D and E are CPOs, then D × E is also a CPO under the
co-ordinatewise order.
Proof. If P ⊆ D × E is directed, then P1 = {x | (x, y) ∈ P for some y} and
P2 = {y | (x, y) ∈ P for some x} are directed. Since P1 and P2 are directed
subsets of the CPOs D and E respectively, the elements ∨P1 ∈ D and ∨P2 ∈ E
exist. Thus (∨P1, ∨P2) ∈ D × E and it can be easily checked that
∨P = (∨P1, ∨P2). The proof is borrowed from (Gunter, 1992).
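As a concrete illustration of these definitions (our own sketch, not part of the thesis), the following Python fragment checks directedness of a finite subset of the product of two flat CPOs under the co-ordinatewise order of Lemma 7.2.1; `None` plays the role of ⊥, and all function names are hypothetical.

```python
# Illustrative sketch: directedness in the product of two flat CPOs,
# with None standing for the bottom element.

def flat_leq(x, y):
    """Flat CPO order: bottom is below everything; distinct values are incomparable."""
    return x is None or x == y

def pair_leq(p, q):
    """Co-ordinatewise order on the product of two flat CPOs (Lemma 7.2.1)."""
    return flat_leq(p[0], q[0]) and flat_leq(p[1], q[1])

def is_directed(S, leq):
    """S is directed iff it is non-empty and every pair in S has an upper bound in S."""
    return len(S) > 0 and all(
        any(leq(x, z) and leq(y, z) for z in S) for x in S for y in S
    )

# {(bot,bot), (1,bot), (bot,2), (1,2)} is directed; its lub is the pair of
# co-ordinatewise lubs, here (1, 2).
S = [(None, None), (1, None), (None, 2), (1, 2)]
assert is_directed(S, pair_leq)

# Dropping (1, 2) destroys directedness: (1,bot) and (bot,2) no longer have
# an upper bound inside the set.
assert not is_directed([(1, None), (None, 2)], pair_leq)
```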
Monotone Functions: For any two posets (P, ≤) and (Q, ⪯), a function
f : (P, ≤) → (Q, ⪯) is called monotone (order-preserving) if ∀x, y ∈ P,
x ≤ y implies f(x) ⪯ f(y). Let [P →m Q] denote the set of all such monotone
functions between the two posets P and Q. Then, ∀f, g ∈ [P →m Q], f is said
to be in pointwise order with g, denoted f ≤̇ g, if ∀x ∈ P, f(x) ⪯ g(x). The
structure ([P →m Q], ≤̇) forms another poset, called the monotone function
space between P and Q. Note that if f ∈ [P →m Q] and S is a directed subset
of P, then f(S) is a directed subset of Q. In the notation, we may omit the
symbol ‘m’ for monotonicity, in general.
Continuous functions: Let P and Q be DCPOs. A function f : (P, ≤) →
(Q, ⪯) is (Scott-) continuous if it is monotone and if for each directed subset
S of P we have f(∨S) = ∨f(S). If P and Q are CPOs, then f is continuous if
it furthermore preserves the least element, i.e., f(⊥P) = ⊥Q.
For deeper discussions, one may also consider (Abramsky and Jung, 1995)
and (Aarts et al., 1995). We use the above definitions for analyzing the order
structure of tagged signals in the subsequent section.
7.3 The Order Structure of Tagged Signals
In the denotational semantics of a tagged signal model (Lee and Sangiovanni-
Vincentelli, 1998), a behaviour σ ∈ Σ is given by the partial map σ : V → T → D
where Σ denotes the set of all possible behaviours for the variables in V and the
set T of all tags is a poset (T ,≤τ ) where the interpretation of ≤τ depends on
the underlying model of computation. For a given σ ∈ Σ, σ(v) : T → D is called
a signal for the variable v ∈ V in σ, provided σ is defined for v. We consider the
above definition with the domain D being extended with the element ⊥ such that
D⊥ = D ∪ {⊥}, where D⊥ is partially ordered by ≤ such that ∀x ∈ D⊥, ⊥ ≤ x
and ∀x, y ∈ D, x ≠ y → x ≍ y, i.e., D⊥ is a flat CPO as shown in Fig. 7.1.
Figure 7.1: The order structure of D⊥ = {d1, d2, . . . , dk, . . .} ∪ {⊥}, a flat CPO
in which ⊥ lies below every di and distinct elements of D are pairwise incomparable.
With this modification we define a signal for the variable v ∈ V in σ as a
total map σ(v) : T → D⊥. Thus, in our framework, a signal may assume no
meaningful value (i.e., ⊥) at certain tags and still assume meaningful values (∈ D)
at some subsequent tags, similar to the convention followed in (Caspi et al., 2009). For
a signal s = σ(v), the graph of s is defined as, graph(s) = {(τ, d) | σ(v)(τ) =
d, τ ∈ T , d ∈ D = D⊥ \ {⊥}}. Given a signal s, we further define dom(s) =
{τ | (τ, d) ∈ graph(s)}. The set Σ of all possible behaviours contains an empty
behaviour σ⊥ such that ∀ v ∈ V , graph(σ⊥(v)) is empty. Also, σ⊥(v) is called
the empty signal for the variable v.
For any variable v ∈ V , let the set of all possible signals on v in a set Σ of
behaviours be denoted as SΣv such that SΣv = {σ(v) | σ ∈ Σ}. When Σ is clear
from the context, we shall denote this set simply as Sv. Following Caspi et al.
(2009), we now define a prefix ordering ⊑ on the set Sv of signals. For any two
signals s1, s2 ∈ Sv, s1 ⊑ s2 (read as s1 is contained in s2) iff
∀τ ∈ T [s1(τ) ≠ s2(τ) ⇒ ∀τ′ ≥ τ [s1(τ′) = ⊥]]. (It can be trivially checked
that ⊑ is indeed a partial order.)
Note that for any two signals s1, s2, s1 ⊑ s2 ⇒ graph(s1) ⊆ graph(s2). For
example, consider the signals s1, s2 and s3 defined over variable v for the tag
set T = {τ1, τ2, τ3, τ4, τ5, τ6, τ7, τ8} as shown below in Table 7.1, where each
di ∈ D = D⊥ \ {⊥}. Based on ⊑, we
tags→ τ1 τ2 τ3 τ4 τ5 τ6 τ7 τ8
s1 = σ1(v) d1 ⊥ d2 d3 d4 ⊥ ⊥ ⊥
s2 = σ2(v) d1 ⊥ d2 d3 ⊥ ⊥ ⊥ ⊥
s3 = σ⊥(v) ⊥ ⊥ ⊥ ⊥ ⊥ ⊥ ⊥ ⊥
Table 7.1: Note that s2 ⊑ s1 and s3 is an empty signal.
have the following lemmas.
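The prefix ordering above can be sketched in Python for the signals of Table 7.1. The sketch below is illustrative only (its names are ours, not the thesis's), and it makes the simplifying assumption that the tag set is finite and totally ordered, whereas in general T is only a poset; signals are dicts from tags to values, with absent entries standing for ⊥.

```python
# Sketch of the prefix order of Caspi et al. (2009) for a totally ordered,
# finite tag set; None/absent entries model the bottom value.

TAGS = ["t1", "t2", "t3", "t4", "t5", "t6", "t7", "t8"]

def prefix_leq(s1, s2, tags=TAGS):
    """s1 is a prefix of s2 iff at every tag where they disagree,
    s1 is bottom there and at all later tags."""
    for i, t in enumerate(tags):
        if s1.get(t) != s2.get(t):
            # From the first disagreement onwards, s1 must be bottom everywhere.
            return all(s1.get(u) is None for u in tags[i:])
    return True  # identical signals: the relation holds trivially

# The signals of Table 7.1 (d1..d4 are arbitrary distinct values).
s1 = {"t1": "d1", "t3": "d2", "t4": "d3", "t5": "d4"}
s2 = {"t1": "d1", "t3": "d2", "t4": "d3"}
s3 = {}  # the empty signal

assert prefix_leq(s2, s1) and not prefix_leq(s1, s2)   # s2 below s1, not conversely
assert prefix_leq(s3, s1) and prefix_leq(s3, s2)       # the empty signal is least
```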
Lemma 7.3.1. Let S be a directed subset of the poset (Sv,⊑). Then ∀si, sk ∈
S, ∀τ ∈ T [τ ∈ dom(si) ∧ τ ∈ dom(sk)⇒ si(τ) = sk(τ)].
Proof. Recall that S being a directed set requires that ∀si, sk, ∃sj such that
sj is an upper bound of {si, sk}. This implies that graph(si) ⊆ graph(sj) and
graph(sk) ⊆ graph(sj). Hence, from the first conjunct, ∀τ ∈ dom(si), si(τ) =
d⇒ (τ, d) ∈ graph(sj)⇒ sj(τ) = d = si(τ). Similarly, from the second conjunct
we have, sj(τ) = sk(τ). Thus ∀τ ∈ dom(si)∩dom(sk), si(τ) = sj(τ) = sk(τ).
Lemma 7.3.2. For all v ∈ V , the poset (Sv,⊑) is a CPO.
Proof. For any directed subset S of Sv, consider DS = ∪s∈S graph(s). We define a
signal r such that graph(r) = DS. Hence, ∀s, s′ ∈ S, ∀τ ∈ dom(s) ∩ dom(s′), by
lemma 7.3.1 s(τ) = s′(τ) and by construction, r(τ) = s(τ) = s′(τ). Otherwise,
∀τ ∈ dom(s), r(τ) = s(τ); ∀τ ∈ dom(s′), r(τ) = s′(τ) and ∀τ /∈ dom(s)∪dom(s′),
r(τ) =⊥. So, r is well defined.
It can be easily seen that r is an upper bound of S. Let s′ be any upper
bound of S. Now let us consider that it is not the case that r ⊑ s′. Hence,
from the definition of ⊑ we have ∃τ [r(τ) ≠ s′(τ) ∧ ∃τ′ ≥ τ [r(τ′) ≠ ⊥]]. Hence,
τ ′ ∈ dom(r) and by construction of r, ∃s1 ∈ S such that τ ′ ∈ dom(s1) and
r(τ ′) = s1(τ′). Since, r is an upper bound of S, s1 ⊑ r. Since τ ′ ∈ dom(s1) and
s1 ⊑ r, we have ∀τ ′′ ≤ τ ′[s1(τ′′) = r(τ ′′)]. Since τ ≤ τ ′, we have s1(τ) = r(τ).
Thus ∃τ [s1(τ) ≠ s′(τ) ∧ ∃τ′ ≥ τ [s1(τ′) ≠ ⊥]] which implies that it is not the
case that s1 ⊑ s′. However, this contradicts our initial assumption that s′ is any
upper bound of S. Hence, it must have been the case that r ⊑ s′ and thus r is the
lub of S. Thus, every directed subset of Sv has an lub. Hence, ∀v ∈ V , (Sv,⊑)
is a DCPO. It is easy to see that the signal σ⊥(v) ∈ Sv is the least element of
(Sv,⊑). Hence, ∀v ∈ V , (Sv,⊑) is a CPO.
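The lub construction in this proof, taking the union of the graphs of the signals, can be sketched as follows. This is an illustrative fragment with hypothetical names, again assuming a finite tag set with dicts (tag → value) as signals; Lemma 7.3.1 is what makes the union well defined.

```python
# Sketch of the lub construction in Lemma 7.3.2: the lub of a directed set
# of signals is the signal whose graph is the union of their graphs.

def lub(signals):
    r = {}
    for s in signals:
        for tag, d in s.items():
            # Lemma 7.3.1 guarantees that signals in a directed set agree
            # wherever their domains overlap, so r is well defined.
            assert r.get(tag, d) == d
            r[tag] = d
    return r

s1 = {"t1": "d1", "t3": "d2", "t4": "d3", "t5": "d4"}
s2 = {"t1": "d1", "t3": "d2", "t4": "d3"}

# {s2, s1} is directed (s1 is an upper bound of both); its lub is s1 itself.
assert lub([s2, s1]) == s1
```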
The model of tagged systems has both denotational (Lee and Sangiovanni-
Vincentelli, 1998) and operational semantics (Benveniste et al., 2005). It is
worthwhile to examine the inter-relation between these two representations, as
discussed next.
7.3.1 Denotational and Operational Semantics of TSM
Observe that in the operational semantics as discussed and used in chapters 2,
4 and 5, we have been using the notion of events; behaviours were composed of
events happening concurrently. However, in case of denotational semantics used
from now onwards, our approach has been more inclined towards the original Lee-
Sangiovanni-Vincentelli (LSV) model (Lee and Sangiovanni-Vincentelli, 1998)
where behaviours are basically partial maps from tags to values. However, this
apparent discrepancy is easy to resolve. In the operational semantics borrowed
from Benveniste et al. (2005), behaviours are defined as
σ : V → N → T → D⊥, while in the LSV model behaviours are defined
as σ : V → T → D⊥. Note that the only constraint on the set T of all
tags is that (T ,≤T ) is a poset. Thus, let (T ,≤T ) = (N,≤N) × (T1,≤T1), where
we place the restriction that the set T of all tags has natural numbers as the
first component in the LSV model. With this we have the first component of
the tag denoting event indices. Thus, for any variable v, the n-th event in a
behaviour σ is supposed to be σ(v)(n) : T1 → D⊥ by the original LSV model.
Since the maps are partial, we can always have σ(v)(n) defined only for some
tag τ ∈ T1 with some value d ∈ D and ⊥ for all other tags ≠ τ. The scenario
can thus be viewed as σ(v)(n) = (τ, d) which is consistent with the operational
semantics and the corresponding performance evaluation work reported in this
thesis. Using the denotational semantics of TSM, we next propose a definition of
TSM actor functions and define certain operations over the set of such functions.
7.4 Actors in Tagged Signal Models
Let V be a set of variables in a TSM. We partition the set V of variables as V =
I∪O, where I is the set of variables attached to the input ports (input variables)
of a system and O is the set of variables for the output ports (output variables).
For a variable v ∈ I and a behaviour σ ∈ Σ, the signal σ(v) is called an input
signal on v; output signals on output variables are similarly defined. The set of
all possible input signals is given by SI = {Πv∈I σ(v) | σ ∈ Σ}, where Π represents
the cartesian product. For example, let I = {i1, i2, i3} and Σ = {σ1, σ2}. In this
case, we have SI = {〈σ1(i1), σ1(i2), σ1(i3)〉, 〈σ2(i1), σ2(i2), σ2(i3)〉}. Similarly, we
may define SO and SI × SO. Note that the ordering relation ⊑ defined over Sv,
for any v ∈ V , can be easily extended co-ordinatewise to the cartesian product
sets SI , SO and SI ×SO. For an input variable v, a possible input signal σ(v) is
a member of Sv. We refer to a member σ(I) of SI as an input situation which
basically means an ordered collection of signals defined over all the input variables
for a given behaviour σ ∈ Σ. Similarly, we may define an output situation.
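The example above can be replayed concretely. The sketch below uses our own hypothetical representation, not one from the thesis: a behaviour is a dict from variables to signals (here abbreviated to opaque labels), and an input situation is the ordered tuple of signals over the input variables.

```python
# Sketch: building the set S_I of input situations, one ordered tuple of
# input signals per behaviour, mirroring S_I = {Π_{v∈I} σ(v) | σ ∈ Σ}.

def input_situations(Sigma, I):
    """One ordered tuple of input signals per behaviour in Sigma."""
    return [tuple(sigma[v] for v in I) for sigma in Sigma]

I = ["i1", "i2", "i3"]
sigma1 = {"i1": "s11", "i2": "s12", "i3": "s13"}  # signals abbreviated as labels
sigma2 = {"i1": "s21", "i2": "s22", "i3": "s23"}

S_I = input_situations([sigma1, sigma2], I)
assert S_I == [("s11", "s12", "s13"), ("s21", "s22", "s23")]
```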
An actor maps input situations to output situations; hence it is natural to
conceive of actors as functions with domain SI and codomain SO. To arrive at a
Kleene algebra of actors, however, the first thing that we need is that the actor
functions form a set closed under functional composition, where the latter is
meant to capture cascading of components corresponding to the actor functions.
If we take the domain to be SI , we confront the problem of maintaining closure
because the outputs of a component may be inputs to other components. So
the domain of an actor function is taken as SI × SO since actors may work
on external signals (SI) and (or) output signals (SO) of other actors including
itself (in case of feedback). The range of an actor function is the set of all
non-empty subsets over SO so that possible nondeterminism exhibited by actors
can be captured in a functional form. For some (σ(I), σ(O)) ∈ SI × SO, let
a(σ(I), σ(O)) = Σa(O) ∈ P(SO). Unlike σ(I)/σ(O) which denotes a single
input/output situation, Σa(O) denotes a set of possible output situations such
that ∀v ∈ O, Σa(O)|v ∈ P(Sv), where ‘Σa(O)|v’ denotes the restriction of Σa(O) to the
variable v. Note that Σa(O)|v is a set of signals on v, any of which may be
the output signal at v for the given input situation. We consider the ordering
relation ⊑p defined over P(Sv) (the set of possible outputs by a TSM actor at
port v) as follows.
Definition 13. For X, Y ∈ P(Sv), X ⊑p Y if ∀x ∈ X, ∃y ∈ Y such that x ⊑ y.
The relation ⊑p is evidently inspired by the Smyth ordering of powerdomains
(Plotkin, 1976).
Lemma 7.4.1. The ordering relation ⊑p is a pre-order on P(Sv), ∀v ∈ O and
not a partial order.
Proof. It can be trivially checked that ⊑p is reflexive and transitive on P(Sv).
However, ⊑p is not antisymmetric on P(Sv). For example, consider two elements
of P(Sv) given by A = {a1, a2} and B = {b1, b2, b3} where a1 = b1, a2 = b2 and
b3 ⊏ a2. Note that b3 ⊏ a2 means b3 ⊑ a2 ∧ b3 ≠ a2. Observe that both
A ⊑p B and B ⊑p A hold but A ≠ B. Hence, ⊑p is a pre-order and not a
partial order on P(Sv).
Note that, the least element of P(Sv) under ⊑p is {σ⊥(v)}.
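The counterexample in the proof above can be checked mechanically. In the sketch below, signals are tuples with None standing for ⊥, the signal order ⊑ is simplified to a pointwise order in which None lies below every value, and `smyth_leq` transcribes Definition 13; the whole encoding is our own illustrative device.

```python
def prefix_leq(x, y):
    """Simplified signal order ⊑: None (⊥) lies below every value."""
    return len(x) == len(y) and all(
        xi == yi or xi is None for xi, yi in zip(x, y))

def smyth_leq(X, Y):
    """X ⊑p Y: every x in X is dominated by some y in Y (Definition 13)."""
    return all(any(prefix_leq(x, y) for y in Y) for x in X)

# The counterexample from Lemma 7.4.1: a1 = b1, a2 = b2, b3 ⊏ a2.
a1 = b1 = (1, 2)
a2 = b2 = (3, 4)
b3 = (3, None)          # a strict prefix of a2
A = {a1, a2}
B = {b1, b2, b3}
# A ⊑p B and B ⊑p A hold although A != B, so ⊑p is only a pre-order.
```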
Lemma 7.4.2. For any X, Y ∈ P(Sv), ∨{X, Y} = X ∪ Y.
Proof. Consider any X, Y ∈ P(Sv). Note that X ∪ Y is an upper bound of both
X and Y under ⊑p. Now consider any other upper bound Z for X and Y . Thus,
X ⊑p Z ∧ Y ⊑p Z ⇒ X ∪ Y ⊑p Z. Hence, ∨{X, Y} = X ∪ Y.
118 Chapter 7 TSM Actors and their Kleene Algebra
By associativity of ∪, the lemma holds for any number of elements in P(Sv).
Similarly,
Corollary 7.4.3. For any chain M ⊆ P(Sv), we have ∨M = ( ∪_{M∈M} M ) ∈ P(Sv).
Since ⊑p is not a partial order, the equality cannot be achieved through
antisymmetry (of ⊑p). We, however, can still have a weaker notion of equivalence
(strictly containing equality) defined for ⊑p as follows.
Definition 14. We define a relation ≡ on (P(Sv),⊑p) given by, ∀X, Y ∈ P(Sv),
X ≡ Y if and only if X ⊑p Y and Y ⊑p X. Note that by property of pre-orders,
1. ‘≡’ is an equivalence relation.
2. The order ⊑p is well defined on the equivalence classes of P(Sv) under ≡,
i.e., (X ⊑p Y ) ∧ (X ≡ X′) ∧ (Y ≡ Y′) ⇒ X′ ⊑p Y′.
3. The equivalence classes of P(Sv) under ≡ are partially ordered w.r.t. ⊑p.
Corollary 7.4.4. For any X, Y ∈ P(Sv), X ⊑p Y ⇔ (X ∪ Y ≡ Y ).
The ordering relation ⊑p over P(Sv) can be extended to P(SO) co-ordinatewise.
Note that a pre-ordered set D is chain-complete iff the least upper bounds of all
chains in D exist in D (Knijnenburg, 1993). Hence, by lemma 7.3.2 and corollary
7.4.3, (SI × SO,⊑) and (P(SO),⊑p) are both chain-complete pre-orders. Note
that any chain is also a directed set. Thus, directed completeness always implies
chain-completeness.
An actor acts/fires simply by evaluating the function associated with itself
(actor function) for obtaining the output signals corresponding to the given input
signals. Considering causality, an actor may be looked upon as a computing
scheme where its output behaviours are constructed through rounds of execution.
For simplicity, let us consider an actor A having a single input variable i and a
single output variable o. For any variable v, let στ (v) denote the prefix of the
signal σ(v) up to tag τ. By a prefix up to tag τ of a signal, we mean the same
signal up to tag τ, with ⊥ assigned to all succeeding tags. In each round
of execution, the input στ (i) × στ (o), comprising prefixes of σ(i) and σ(o) up to
tag τ, is mapped to στ′(o) such that τ < τ′. Next we formalize a definition of
weakly Scott-continuous functions which will be essential for defining TSM actor
functions later.
Definition 15. Let (P,≺) and (Q,⊑) be chain-complete pre-orders with least
elements ⊥P and ⊥Q respectively. A function f : (P,≺) → (Q,⊑) is weakly
Scott-continuous if
1. f is monotone,
2. for any chain S of P, we have f(∨S) = ∨f(S), and
3. f preserves the least element, i.e., f(⊥P ) =⊥Q.
Note that, Scott-continuous functions have as range a CPO (or DCPO), i.e.,
a partially ordered set with directed completeness. Observe that weakly Scott-
continuous functions as defined above have as range a pre-ordered set with chain
completeness. Using this notion of weak Scott-continuity, we define a TSM actor
function as follows.
Definition 16. A TSM actor function is a weakly Scott-continuous function of
the form, f : (SI × SO,⊑) → (P(SO),⊑p). The set A of all such possible actor
functions is denoted as, A = [SI × SO → P(SO)].
In the present work, we intend to evolve methods for algebraic reasoning over
TSM actor functions. Our intuition has been that with appropriate choice of
operations and ordering relation, the set A of TSM actor functions may form
a Kleene algebra. In such a scenario, the axioms of Kleene algebra and its ex-
tensions may be used to encode such actor functions and reason over them. A
Kleene algebra (KA) is an algebraic structure (K,+, ·, ∗, 0, 1,≤) with the multi-
plicative identity 1 and the additive identity 0 (Kozen, 1997). The relation ≤ is
the natural order on K defined by
a ≤ b iff a+ b = b
The set K is closed under the operations ‘+’, ‘·’ and ‘∗’. Hence, we need to define
the corresponding operations (with closure) and ordering relation over A and
satisfy the axioms of Kleene algebra.
Since the set A of TSM actor functions is a collection of mappings between
two ordered sets, we may equip A with the ordering relation ⊑A which is the
pointwise order over A defined as,
∀a, b ∈ A, a ⊑A b iff ∀~s ∈ SI × SO, a(~s) ⊑p b(~s)
For example, consider the following signals given as,
s1 = 〈(τ1, x1), (τ2, x2)〉
s2 = 〈(τ ′1, x′1)〉
s3 = 〈(τ3, x3), (τ4, x4)〉
s4 = 〈(τ ′3, x′3)〉
s′2 = 〈(τ′1, x′1), (τ′2, x′2)〉
s′4 = 〈(τ′3, x′3), (τ′4, x′4)〉
and actor functions a, b ∈ A where a(s1) = {s2}, a(s3) = {s4} and b(s1) = {s′2},
b(s3) = {s′4}. It is easy to see that a ⊑A b. Again consider the case where
a(s1) = {s′2}, a(s3) = {s4} and b(s1) = {s2}, b(s3) = {s′4}; here we can see it is
not the case that a ⊑A b.
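The pointwise comparison in this example can be replayed directly. In the sketch below (our encoding: tags are left implicit, the values xk are rendered as integers, and the deterministic actors are finite maps from an input signal to a singleton output set), `leq_A` implements the pointwise order over the output sets.

```python
def prefix_leq(x, y):
    # Simplified signal order: None (⊥) lies below every value.
    return len(x) == len(y) and all(
        xi == yi or xi is None for xi, yi in zip(x, y))

def smyth_leq(X, Y):
    # X ⊑p Y (Definition 13).
    return all(any(prefix_leq(x, y) for y in Y) for x in X)

def leq_A(a, b):
    # a ⊑A b: pointwise comparison of output sets under ⊑p.
    return all(smyth_leq(a[s], b[s]) for s in a)

s1, s3 = (1, 2), (3, 4)          # input signals
s2, s4 = (5, None), (6, None)    # shorter outputs: last tag still ⊥
s2p, s4p = (5, 7), (6, 8)        # s2 ⊑ s2', s4 ⊑ s4'

a = {s1: {s2}, s3: {s4}}         # first case of the example: a ⊑A b
b = {s1: {s2p}, s3: {s4p}}
a2 = {s1: {s2p}, s3: {s4}}       # second case: not a2 ⊑A b2
b2 = {s1: {s2}, s3: {s4p}}
```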
Observe that ⊑A is a pre-order on A. This is obvious because the co-domain
P(SO) of the maps in A is pre-ordered w.r.t. ⊑p and the property will be
lifted into the pointwise ordered function space. Also, the equivalence relation
‘≡’ defined over P(SO) can be naturally extended for providing a definition of
equivalence of actors.
Definition 17. Actors a, b ∈ A are said to be equivalent (a ≡A b) if and only if
a ⊑A b ∧ b ⊑A a.
We will omit the subscript A of ‘≡A’ whenever the implication is clear from
the context. With the definition of TSM actor functions and their order structure
in place we may now define certain operations over A and prove their closures.
7.4.1 Sequential Composition of Actors
Observe that the domain and the range of actor functions do not have a matching
type. Hence the sequential composition of such actor functions is different from
the standard function composition operation. For actors f1, f2 ∈ A, the sequential
composition f1 ⋄ f2 is defined as follows. Given an overall input situation³
〈σ1(I), σ2(O)〉 ∈ SI × SO, we have
(f1 ⋄ f2)(σ1(I), σ2(O)) = ∪_{σ′(O)∈f1(σ1(I),σ2(O))} f2(σ1(I), σ′(O))
In other words, the result is obtained by first applying f1 on the overall input
situation 〈σ1(I), σ2(O)〉 followed by the application of f2. The output due to f1
is a set of possible output situations {σ′(O) | σ′(O) ∈ f1(σ1(I), σ2(O))}. Each of the
generated outputs is combined with the original external input signals in order
to generate a set of possible overall input situations for the actor f2. For each of
these inputs, application of f2 again yields a set of possible output situations,
i.e., a member of P(SO). These elements of P(SO) are accumulated to generate
the final output which contains all the possible output signals. A demonstration
of such a situation is given in Fig 7.2.
Figure 7.2: Demonstration of the ‘⋄’ operation for actors f1 and f2 with external
inputs I = {i1, i2, i3} and outputs O = {o1, o2, o3, o4}.
In the present work, we have used right composition ‘⋄’ instead of left composition ‘◦’.
From the definition of ⋄, we have the following lemma.
the following lemma.
Lemma 7.4.5. ∀X, Y ∈ P(SO) with X ⊑p Y, if f is a TSM actor function and
σ(I) ∈ SI then,
∪_{x∈X} f(σ(I), x) ∪ ∪_{y∈Y} f(σ(I), y) ≡ ∪_{y∈Y} f(σ(I), y).
Proof. Since X ⊑p Y, we have ∀x ∈ X, ∃y ∈ Y such that x ⊑ y. Since
f is continuous, we have f(σ(I), x) ⊑p f(σ(I), y). Thus, ∀x ∈ X, ∃y ∈ Y such that
f(σ(I), x) ⊑p f(σ(I), y) ⇒ f(σ(I), x) ∪ f(σ(I), y) ≡ f(σ(I), y) by corollary 7.4.4.
Hence, the lemma follows by associativity of ∪.
³By an overall input situation, we mean an ordered collection of signals defined over the input
and output lines combined (∈ SI × SO). By slight abuse of notation we may drop ‘overall’
where the meaning is clear from the context.
Another variant of the above lemma is,
Lemma 7.4.6. ∀X, Y ∈ P(SO) with X ⊑p Y, if f is a TSM actor function and
σ(I) ∈ SI then,
∪_{x∈X} f(σ(I), x) ⊑p ∪_{y∈Y} f(σ(I), y).
Proof. Follows by applying lemma 7.4.5 and then corollary 7.4.4.
Lemma 7.4.7. The sequential composition operation ‘⋄’ preserves the mono-
tonicity of actor functions, i.e., ∀f1, f2 ∈ A,
if there exists (σ1(I), σ2(O)), (σ3(I), σ4(O)) ∈ SI × SO
such that (σ1(I), σ2(O)) ⊑ (σ3(I), σ4(O)),
then (f1 ⋄ f2)(σ1(I), σ2(O)) ⊑p (f1 ⋄ f2)(σ3(I), σ4(O)).
Proof. By continuity of f1,
(σ1(I), σ2(O)) ⊑ (σ3(I), σ4(O)) ⇒ f1(σ1(I), σ2(O)) ⊑p f1(σ3(I), σ4(O)). Thus,
∀σ′(O) ∈ f1(σ1(I), σ2(O)), ∃σ′′(O) ∈ f1(σ3(I), σ4(O)) such that σ′(O) ⊑ σ′′(O).
Note that (σ1(I), σ2(O)) ⊑ (σ3(I), σ4(O)) ⇔ (σ1(I) ⊑ σ3(I)) ∧ (σ2(O) ⊑ σ4(O)).
Thus we have (σ1(I), σ′(O)) ⊑ (σ3(I), σ′′(O)), which further implies
f2(σ1(I), σ′(O)) ⊑p f2(σ3(I), σ′′(O)) by continuity of f2. Hence,
f2(σ1(I), σ′(O)) ∪ f2(σ3(I), σ′′(O)) ≡ f2(σ3(I), σ′′(O)) by corollary 7.4.4. Thus,
∀σ′(O) ∈ f1(σ1(I), σ2(O)), ∃σ′′(O) ∈ f1(σ3(I), σ4(O)) such that
f2(σ1(I), σ′(O)) ∪ f2(σ3(I), σ′′(O)) ≡ f2(σ3(I), σ′′(O)).
This fact along with the associativity of ∪ leads us to the following:
(f1 ⋄ f2)(σ1(I), σ2(O)) ∪ (f1 ⋄ f2)(σ3(I), σ4(O))
= ∪_{σ′(O)∈f1(σ1(I),σ2(O))} f2(σ1(I), σ′(O)) ∪ ∪_{σ′′(O)∈f1(σ3(I),σ4(O))} f2(σ3(I), σ′′(O))
≡ ∪_{σ′′(O)∈f1(σ3(I),σ4(O))} f2(σ3(I), σ′′(O))
⇒ (f1 ⋄ f2)(σ1(I), σ2(O)) ⊑p (f1 ⋄ f2)(σ3(I), σ4(O))
Lemma 7.4.8. The set A is closed under sequential composition ‘⋄’.
Proof. For an overall input situation 〈σ1(I), σ2(O)〉 ∈ SI × SO, we have
(f1 ⋄ f2)(σ1(I), σ2(O)) = ∪_{σ′(O)∈f1(σ1(I),σ2(O))} f2(σ1(I), σ′(O)) ∈ P(SO) for any
f1, f2 ∈ A. We need to check whether ⋄ preserves the continuity of actor functions.
By lemma 7.4.7, ⋄ preserves monotonicity of actor functions. Hence, we
only need to check whether it preserves suprema of chains. Let S be a chain
in SI × SO. We need to prove that ∨(f1 ⋄ f2)(S) = (f1 ⋄ f2)(∨S). Examining
further,
(f1 ⋄ f2)(∨S)
= ∪_{σ(O)∈f1(∨S)} f2(∨S|I, σ(O))
= ∪_{σ(O)∈∨_{~s∈S} f1(~s)} f2(∨S|I, σ(O)) (by continuity of f1)
= ∪_{σ(O)∈∪_{~s∈S} f1(~s)} f2(∨S|I, σ(O))
= ∪_{σ(O)∈∪_{~s∈S} f1(~s)} ∨_{~s∈S} f2(~s|I, σ(O)) (by continuity of f2)
= ∪_{σ(O)∈∪_{~s∈S} f1(~s)} ∪_{~s∈S} f2(~s|I, σ(O))
= ∪_{~s∈S} ∪_{σ(O)∈∪_{~s∈S} f1(~s)} f2(~s|I, σ(O))
= ∪_{~s∈S} ∪_{~s∈S} ∪_{σ(O)∈f1(~s)} f2(~s|I, σ(O))
= ∪_{~s∈S} ∪_{σ(O)∈f1(~s)} f2(~s|I, σ(O))
= ∪_{~s∈S} (f1 ⋄ f2)(~s)
= ∨_{~s∈S} (f1 ⋄ f2)(~s)
= ∨(f1 ⋄ f2)(S)
Also,
(f1 ⋄ f2)(σ⊥(I), σ⊥(O))
= ∪_{σ′(O)∈f1(σ⊥(I),σ⊥(O))} f2(σ⊥(I), σ′(O))
= f2(σ⊥(I), σ⊥(O))
= {σ⊥(O)}
In the above derivation, since f1 and f2 are both continuous, both of them map the
least element 〈σ⊥(I), σ⊥(O)〉 of SI × SO to the least element {σ⊥(O)} of P(SO).
Since the sequential composition ‘⋄’ of two continuous actor functions preserves
both the chain suprema and the least element, the resulting actor function is
again continuous and thus a member of A.
An identity actor or connector is an actor which produces only a single
possible output situation, namely the output part of the overall input situation
available as its argument. Thus an identity actor 1A is defined as follows. For
any input situation 〈σ(I), σ(O)〉 ∈ SI × SO,
1A(σ(I), σ(O)) = {σ(O)}
Observe that 1A is the multiplicative identity for all elements of A (under ‘⋄’).
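The definition of ‘⋄’ and the role of 1A can be phrased operationally. In the sketch below (our encoding: an overall input situation is a pair of tuples, and an actor is a Python function returning a set of output situations), `seq` routes every output of f1, together with the external inputs, into f2 and collects the union, and the identity actor is checked to be a two-sided unit on a toy example.

```python
# An executable reading of '⋄'; the encoding is ours, not the thesis'.
def seq(f1, f2):
    """Right composition f1 ⋄ f2: feed every output produced by f1,
    together with the external inputs, into f2 and take the union."""
    def composed(inp, out):
        result = set()
        for out1 in f1(inp, out):
            result |= f2(inp, out1)
        return result
    return composed

def identity_actor(inp, out):
    """1A: the single possible output is the argument's output part."""
    return {out}

def f(inp, out):
    """A toy nondeterministic actor: extend the output by one value."""
    return {out + (inp[0],), out + (0,)}

g = seq(identity_actor, f)   # 1A ⋄ f
h = seq(f, identity_actor)   # f ⋄ 1A
# Both g and h agree with f, as the identity law demands.
```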
7.4.2 Concurrent Composition of Actors
Given two Scott-continuous actor functions f1 and f2 ∈ A, the concurrent com-
position of f1 and f2, i.e., f1 + f2, is defined as (f1 + f2)(~s) = f1(~s) ∪ f2(~s),
∀~s ∈ SI × SO where ~s = (σ1(I), σ2(O)) for some σ1(I) ∈ SI and σ2(O) ∈ SO.
The outputs produced by the two concurrent actors are merged by a merge
network. Merging is by union so that all the possible signals due to both f1 and
f2 are preserved, leading to fairness among nondeterministic choices. For any
output port v, if v is driven by both f1 and f2, (f1 + f2)(~s)|v = f1(~s)|v ∪ f2(~s)|v.
Note that the execution semantics as defined above can be interpreted as a
non-deterministic choice among the actors f1 and f2, regarding which is really
going to act (either f1 or f2 but not both). This models the case for interleaved
execution as shown in Fig 7.3.
Figure 7.3: Concurrent composition of actors modeling interleaved execution.
Lemma 7.4.9. The set A is closed under concurrent composition operation ‘+’.
Proof. The fact that ‘+’ preserves continuity can be proved as follows. First we
need to prove that ‘+’ preserves monotonicity. Let (σ1(I), σ2(O)), (σ3(I), σ4(O)) ∈
SI × SO such that (σ1(I), σ2(O)) ⊑ (σ3(I), σ4(O)); then by continuity of f1
and f2 we have, f1(σ1(I), σ2(O)) ⊑p f1(σ3(I), σ4(O)) and f2(σ1(I), σ2(O)) ⊑p
f2(σ3(I), σ4(O)). Thus, by corollary 7.4.4, f1(σ1(I), σ2(O)) ∪ f1(σ3(I), σ4(O)) ≡
f1(σ3(I), σ4(O)) and f2(σ1(I), σ2(O))∪f2(σ3(I), σ4(O)) ≡ f2(σ3(I), σ4(O)). Thus,
f1(σ1(I), σ2(O)) ∪ f1(σ3(I), σ4(O)) ∪ f2(σ1(I), σ2(O)) ∪ f2(σ3(I), σ4(O)) ≡
f1(σ3(I), σ4(O)) ∪ f2(σ3(I), σ4(O)). Hence,
(f1 + f2)(σ1(I), σ2(O)) ∪ (f1 + f2)(σ3(I), σ4(O)) ≡ (f1 + f2)(σ3(I), σ4(O))
⇒ (f1 + f2)(σ1(I), σ2(O)) ⊑p (f1 + f2)(σ3(I), σ4(O)).
Now we prove that + preserves the suprema of chains. Consider S to be
a chain in SI × SO and f1, f2 ∈ A. We have,
(f1 + f2)(∨S) = f1(∨S) ∪ f2(∨S) = (∨f1(S)) ∪ (∨f2(S))
= ( ∪_{~s∈S} f1(~s)) ∪ ( ∪_{~s∈S} f2(~s)) = ∪_{~s∈S} (f1(~s) ∪ f2(~s))
= ∪_{~s∈S} (f1 + f2)(~s) = ∨(f1 + f2)(S).
Also note that, (f1 + f2)(σ⊥(I), σ⊥(O)) =
f1(σ⊥(I), σ⊥(O)) ∪ f2(σ⊥(I), σ⊥(O)) = {σ⊥(O)} ∪ {σ⊥(O)} = {σ⊥(O)}. Thus,
(f1 + f2) is again a member of A.
We conceive an empty actor as an actor which always produces the completely
undefined (bottom) output situation for any given input situation. Hence, the
empty actor ⊥A is defined as follows. For any input situation 〈σ(I), σ(O)〉 ∈ SI × SO,
⊥A(σ(I), σ(O)) = {σ⊥(O)}
Observe that ⊥A is the additive identity for all elements of A (under ‘+’).
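A runnable reading of ‘+’ and of the additive identity (the encoding is ours, with None playing the role of ⊥): merging is plain set union, and f + ⊥A agrees with f only up to the equivalence ‘≡’, exactly as the surrounding discussion requires.

```python
def concur(f1, f2):
    """f1 + f2: union of the possible outputs of the two actors."""
    return lambda inp, out: f1(inp, out) | f2(inp, out)

def prefix_leq(x, y):
    # Simplified signal order: None (⊥) lies below every value.
    return len(x) == len(y) and all(
        xi == yi or xi is None for xi, yi in zip(x, y))

def smyth_leq(X, Y):
    # X ⊑p Y (Definition 13).
    return all(any(prefix_leq(x, y) for y in Y) for x in X)

def equiv(X, Y):
    # The equivalence '≡': mutual ⊑p.
    return smyth_leq(X, Y) and smyth_leq(Y, X)

BOTTOM = (None, None)               # σ⊥(O) for two-tag signals

def empty_actor(inp, out):          # ⊥A: only the bottom output situation
    return {BOTTOM}

def f(inp, out):                    # a sample actor
    return {(inp[0], None), (inp[0], inp[0])}

lhs = concur(f, empty_actor)((7, 7), BOTTOM)   # (f + ⊥A)(s)
rhs = f((7, 7), BOTTOM)
# lhs and rhs differ as sets but are equivalent under '≡'.
```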
Lemma 7.4.10. For any a, b ∈ (A,⊑A), a ⊑A b iff a + b ≡ b.
Proof. Consider any a, b ∈ A, such that a ⊑A b. Thus, for any input situation
~s ∈ SI × SO we have a(~s) ⊑p b(~s). Hence, (a + b)(~s) = a(~s) ∪ b(~s) ≡ b(~s) and
the backward implication is trivial.
Lemma 7.4.11. For any a, b ∈ (A,⊑A), a + b = ∨{a, b}.
Proof. First of all, note that for any ~s ∈ SI × SO,
a(~s) ⊑p a(~s) ∪ b(~s) = (a + b)(~s) ∧ b(~s) ⊑p a(~s) ∪ b(~s) = (a + b)(~s).
Thus, for any a, b ∈ (A,⊑A), a ⊑A a + b ∧ b ⊑A a + b;
hence a + b is an upper bound of {a, b}.
For any other upper bound c of {a, b}, we can easily observe that
a ⊑A c ∧ b ⊑A c ⇒ a + b ⊑A c. Thus, a + b = ∨{a, b}.
Note that the proof of lemma 7.4.11 can be easily generalized to any collection
of elements from A by the associativity of + which in turn follows from the
associativity of ∪.
7.4.3 The Star Operation on Actors
The fact that an actor f is conceived to be a mapping, f : SI×SO → P(SO) does
not mean that every actor, per se, needs to have its output(s) fed back as inputs.
It is possible that an actor f has inputs i ∈ I and o ∈ O, where o is the output of
some other actor, g say, and f has some output o′ ∈ O. The member(s) (tuples) of
SO which will be fed as input(s) to f will have signal components corresponding
to o′ with values designated as ⊥ whereas the output(s) produced by f will be
such members of SO which will have the signal components corresponding to o
with values designated as ⊥. However, from the definition of actors as given
in our framework, it is entirely possible that some of the variables in O which
are connected to the input of f are also the same variables on which the actor
produces meaningful outputs. Thus, for an actor function f and a situation ~s, let
f∗(~s) denote the set of possible output signals when the actor chooses to act any
finite number of times until it ceases to produce more defined output signals. In
other words, f∗(~s) = fⁿ(~s) for some n ∈ N such that f(fⁿ(~s)) = fⁿ(~s), i.e., fⁿ(~s)
is a fixed point of f, provided it exists. In general, f∗(~s) = ∪{fⁿ(~s) | n ∈ N}.
Note that, the situation resembles an actor function f with feedback and we have
the following lemma.
Lemma 7.4.12. For any f ∈ A, f∗ = 1A + f + f² + f³ + · · · .
Proof. For any ~s ∈ SI × SO,
f∗(~s) = ∪{fⁿ(~s) | n ∈ N} = f⁰(~s) ∪ f¹(~s) ∪ f²(~s) ∪ · · ·
= (1A + f + f² + · · · )(~s), by definition of ‘+’.
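Lemma 7.4.12's reading of f∗ as ∪{fⁿ(~s) | n ∈ N} suggests a direct iteration. The sketch below (our encoding; termination is simply assumed, which the surrounding text guarantees only when f and 1A are comparable) accumulates outputs until f produces nothing new.

```python
# Iterating a single actor on a fixed situation: f*(s) = union of f^n(s).
def star(f, inp, out):
    seen = {out}              # f^0 contributes the unchanged output (1A)
    frontier = {out}
    while frontier:           # stop once f adds no more output signals
        new = set()
        for o in frontier:
            new |= f(inp, o) - seen
        seen |= new
        frontier = new
    return seen

def f(inp, out):
    """Toy actor: append one input value per firing, up to length 3."""
    return {out + (inp[0],)} if len(out) < 3 else {out}

result = star(f, (9,), ())
# result collects the outputs of f^0, f^1, f^2 and f^3.
```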
Lemma 7.4.13. ∀f ∈ A, ∀~s ∈ SI × SO, if 1A ⊑A f ∨ f ⊑A 1A, then f∗(~s) is
well defined.
Proof. For 1A ⊑A f, we have ∀~s ∈ SI × SO, 1A(~s) = {~s|O} ⊑p f(~s). Thus we
have,
f(~s) = (1A ⋄ f)(~s) = ∪_{σ(O)∈1A(~s)} f(~s|I, σ(O)) ⊑p ∪_{σ(O)∈f(~s)} f(~s|I, σ(O)) = f²(~s), by
lemma 7.4.5.
Proceeding this way we have, {~s|O} ⊑p f(~s) ⊑p f²(~s) ⊑p · · · ⊑p fⁿ(~s) ⊑p · · · .
For f ⊑A 1A, we will get the reverse chain,
· · · ⊑p fⁿ(~s) ⊑p · · · ⊑p f²(~s) ⊑p f(~s) ⊑p {~s|O}.
By corollary 7.4.3, the lub of such chains in P(SO), given by
∨{fⁿ(~s) | n ∈ N} = ∪{fⁿ(~s) | n ∈ N}, exists.
Now, from continuity of f, f(∨_{n∈N} fⁿ(~s)) = ∨_{n∈N} fⁿ⁺¹(~s) = ∨_{n∈N} fⁿ(~s).
Hence, ∨_{n∈N} fⁿ(~s) is a fixed point of f.
Thus, f∗(~s) is well defined and f∗(~s) = ∨_{n∈N} fⁿ(~s).
From the reverse chain we may note that, for any f ∈ A, ~s ∈ SI × SO,
f ⊑A 1A ⇒ f ∗ ≡ 1A.
For any TSM actor f, a structural (schematic) interpretation of f∗ is shown
in Fig 7.4.
Figure 7.4: Actor with feedback.
The merge networks connected to the input and output of f merge
the older signal present on the output line with the newer ones due to f, and the
actor chooses to act any finite number of times until it ceases to produce more
defined output signals (merging is by ∪). Thus, we may denote an actor function
f with feedback lines by an equivalent actor f∗ in general. Note that we will keep ignoring the
merge networks in our future drawings of feedback networks for simplicity (i.e.
assuming the dotted box as the actor itself with the networks being its structural
component). The definition of ‘∗’ further leads to the following lemma.
Lemma 7.4.14. The set A is closed under ‘∗’.
Proof. Closure of ‘∗’ in A follows directly from the closure of A under ‘+’ and
‘⋄’.
The proponents of the theory of tagged signals defined the causality of discrete-event
actor functions in terms of the Cantor metric and the causality of untimed
actor functions in terms of Scott-continuity (Lee and Sangiovanni-Vincentelli,
1998). Causality guarantees that computations flow from the past to the future.
Later in (Liu and Lee, 2008), it was shown that for causality of all kinds of TSM
actors, Scott-continuity is a necessary condition and along with some added
clauses it becomes a sufficient condition. However, by augmenting the tagged
signal model with the CPO construction of D⊥, it has been shown in (Caspi
et al., 2009) that Scott-continuity is a sufficient condition for causality of all
kinds of TSM actors. In the present work, we use D⊥ (a CPO) as the domain
from which signals assume values (instead of D), similar to (Caspi et al., 2009).
Hence, we may infer that the operators ⋄,+ and ∗ also preserve causality due to
the fact that they preserve continuity as proved in the lemmas 7.4.8, 7.4.9 and
7.4.14 respectively.
In this regard it is to be noted that in the original TSM formulation, certain
primitive actors like switch, select, merge, look-ahead, etc., have been defined.
These actors of the original tagged signal model formulation can be included in
A provided they are continuous. Thus, select, merge and switch are members
of A since they are continuous, which, however, is not the case with the
look-ahead actor (Liu and Lee, 2008).
In the next section we establish that the set (A,⊑A) of TSM actor functions
equipped with the operations ‘⋄’, ‘+’ and ‘∗’ forms a Kleene algebra. We
discuss the axioms of Kleene algebra and subsequently prove that the structure
(A,+, ⋄, ∗,⊥A, 1A,⊑A) satisfies all such axioms.
7.5 A Kleene Semantics for TSM actors
Recall that a Kleene algebra (KA) is an algebraic structure (K,+, ·, ∗, 0, 1,≤)
with the multiplicative identity 1 and the additive identity 0 (Kozen, 1997). The
relation ≤ is the natural order on K defined by, a ≤ b iff a+ b = b. The axioms
of Kleene algebra are as follows: ∀p, q, r ∈ K,
p + (q + r) = (p + q) + r (7.1)
p + q = q + p (7.2)
p + 0 = p (7.3)
p + p = p (7.4)
p(qr) = (pq)r (7.5)
1p = p (7.6)
p1 = p (7.7)
p(q + r) = pq + pr (7.8)
(p + q)r = pr + qr (7.9)
0p = 0 (7.10)
p0 = 0 (7.11)
1 + pp∗ = p∗ (7.12)
1 + p∗p = p∗ (7.13)
q + pr ≤ r → p∗q ≤ r (7.14)
q + rp ≤ r → qp∗ ≤ r (7.15)
In short, (K,+, ·, 0, 1) is an (additively) idempotent semiring (i-semiring)
and ∗ is a unary operation defined by the star unfold axioms 1 + pp∗ = p∗
and 1 + p∗p = p∗, and the star induction axioms q + pr ≤ r → p∗q ≤ r and
q + rp ≤ r → qp∗ ≤ r, for p, q, r ∈ K. The order of precedence of the operations
is ‘+’ ≺ ‘·’ ≺ ‘∗’.
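Before checking the axioms on A itself, it is instructive to exercise them in a small concrete Kleene algebra: languages under union, concatenation and star. Truncating all words at a fixed length L keeps every set finite; the truncation is our own device, and it does not disturb the identities checked below for these particular elements.

```python
# A finite approximation of the Kleene algebra of languages (our device).
L = 4

def plus(a, b):                       # '+'
    return a | b

def dot(a, b):                        # '·', truncated at length L
    return {x + y for x in a for y in b if len(x + y) <= L}

def star(a):                          # '∗' by iterated concatenation
    s, frontier = {""}, {""}
    while frontier:
        frontier = dot(frontier, a) - s
        s |= frontier
    return s

ZERO, ONE = set(), {""}               # additive and multiplicative identities
p, q, r = {"a"}, {"b"}, {"c", "ab"}
```

With these definitions, distributivity, the identity and annihilation laws, and the star unfold axioms can all be confirmed by direct computation.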
The operations of KA can be viewed in the context of TSM actors as follows.
1. ‘·’ represents sequential composition (⋄) of actor functions4.
4We omit the operator ⋄ in expressions and write pq instead of p ⋄ q
2. ‘+’ represents concurrent composition (+).
3. ‘∗’ represents finite iteration.
4. The identity element represents the identity actor 1A.
5. The zero element represents the empty actor ⊥A.
6. The relation ≤ is the ordering relation ⊑A.
Note that for KA, ≤ is defined using + and equality as a ≤ b iff a+ b = b. Since
the relation ⊑A is a pre-order on A, we define it using + and the equivalence
relation ‘≡’ characterized by lemma 7.4.10. With the above definitions in place,
consider the algebraic structure, KA = (A,+, ⋄, ∗,⊥A, 1A,⊑A). The identity
element of KA (w.r.t. ⋄) is given by the identity actor 1A and the 0 element
(w.r.t. ‘+’) is given by the empty actor ⊥A as discussed previously. Note that
the empty actor ⊥A is the least element of A w.r.t. ⊑A since ⊥A⊑A a, ∀a ∈ A.
Observe that strict equality among actors also implies equivalence but the reverse
is not true in general. On the set A,
1. the associativity, commutativity and idempotency of ‘+’, given by axioms
7.1, 7.2 and 7.4 respectively, are satisfied in terms of strict equality.
2. the fact that ⊥A is the zero element w.r.t. ‘+’ (axiom 7.3) is satisfied in
terms of equivalence since ∀a ∈ A, (⊥A⊑A a)⇔ (⊥A + a ≡ a).
3. the associativity of ‘⋄’ (axiom 7.5) is satisfied in terms of strict equality.
4. 1A being the left and the right identity w.r.t. ‘⋄’ (axioms 7.6 and 7.7) is
satisfied in terms of strict equality.
5. ⊥A being the left and the right annihilator w.r.t. ‘⋄’ i.e.,
∀p ∈ A, (⊥A ⋄ p) = (p ⋄ ⊥A) =⊥A (axioms 7.10 and 7.11) is satisfied in
terms of strict equality.
Next we check the left and the right distributivity of ⋄ through ‘+’ (axioms 7.8
and 7.9).
For any ~s = (σ1(I), σ2(O)) ∈ SI × SO with A,B,C ∈ A,
(A ⋄ C + B ⋄ C)(~s)
= (A ⋄ C + B ⋄ C)(σ1(I), σ2(O))
= (A ⋄ C)(σ1(I), σ2(O)) ∪ (B ⋄ C)(σ1(I), σ2(O))
= ∪_{σ′(O)∈A(σ1(I),σ2(O))} C(σ1(I), σ′(O)) ∪ ∪_{σ′(O)∈B(σ1(I),σ2(O))} C(σ1(I), σ′(O))
= ∪_{σ′(O)∈(A(σ1(I),σ2(O)) ∪ B(σ1(I),σ2(O)))} C(σ1(I), σ′(O))
= ∪_{σ′(O)∈(A+B)(σ1(I),σ2(O))} C(σ1(I), σ′(O))
= ((A + B) ⋄ C)(σ1(I), σ2(O))
= ((A + B) ⋄ C)(~s)
Also,
(C ⋄ (A + B))(~s)
= (C ⋄ (A + B))(σ1(I), σ2(O))
= ∪_{σ′(O)∈C(σ1(I),σ2(O))} (A + B)(σ1(I), σ′(O))
= ∪_{σ′(O)∈C(σ1(I),σ2(O))} A(σ1(I), σ′(O)) ∪ ∪_{σ′(O)∈C(σ1(I),σ2(O))} B(σ1(I), σ′(O))
= (C ⋄ A)(σ1(I), σ2(O)) ∪ (C ⋄ B)(σ1(I), σ2(O))
= (C ⋄ A)(~s) ∪ (C ⋄ B)(~s)
= (C ⋄ A + C ⋄ B)(~s)
Thus, we have the left and the right distribution of ‘⋄’ through ‘+’ defined in
terms of strict equality. In this context it is worthwhile to observe that in certain
works towards creating a Kleene algebraic closure (Kot and Kozen, 2005), only
the distribution of ⋄ from left to right is satisfied, giving a left-handed Kleene
algebra. In such cases the other case of distribution of ⋄ from right to left is only
pre-distributive, i.e., ba + ca ≤ (b + c)a.⁵
The importance of Scott-continuity for forming the algebra of actor functions
can be understood as follows. Consider TSM actors A,B,C ∈ A with domain
D = {1, 2} and T = {τ1, τ2, · · · } where,
1. Actor A takes an input signal with n consecutive 1 values, given by
{(τ1, 1), (τ2, 1), · · · , (τn, 1), (τn+1,⊥), (τn+2,⊥), · · · } and written as 1ⁿ in
shorthand, and produces the signal {(τ1, 1), (τ2, 1), · · · , (τ2n, 1), (τ2n+1,⊥),
(τ2n+2,⊥), · · · }, written as 1²ⁿ in shorthand.
2. Actor B takes an input signal 1ⁿ and produces the signal 1ⁿ.
3. Actor C takes an input signal 1ⁿ and produces the signal
{(τ1, 1), (τ2, 1), · · · , (τn, 1), (τn+1, 2), (τn+2,⊥), · · · }, written as 1ⁿ2 in shorthand.
Further, C may also take an input signal 1²ⁿ and produce the signal
{(τ1, 1), (τ2, 1), · · · , (τ2n, 1), (τ2n+1, 2), (τ2n+2,⊥), · · · }, written as 1²ⁿ2 in
shorthand.
⁵We intend to skip the explicit use of ‘⋄’ whenever it is clear from the context.
It may be noted that the distributivity axiom does not hold in this case, as shown
in Fig 7.5, where (A + B)C ≠ AC + BC.
Figure 7.5: Violation of distributivity.
Now, for actor C with inputs i1 = 1ⁿ ⊑ i2 = 1²ⁿ, it is not the case that
C(i1) = {1ⁿ2} ⊑p C(i2) = {1²ⁿ2}. Hence the actor C, as defined above, is not
monotone and thus not a weakly Scott-continuous map. Therefore, C ∉ A, the
set of weakly Scott-continuous TSM actors closed under the KA operations.
Following lemma 7.4.12, we next check that the star unfold axioms hold in
KA.
Lemma 7.5.1. For all a ∈ A, 1A + aa∗ = a∗ and 1A + a∗a = a∗.
Proof of lemma 7.5.1 follows directly from lemma 7.4.12 and left and right
distributivity of ‘⋄’ through ‘+’. We now show that the star induction axioms
(axioms 7.14 and 7.15) hold in KA.
Lemma 7.5.2. For all p, q, r ∈ A, q + pr ⊑A r → p∗q ⊑A r and q + rp ⊑A r →
qp∗ ⊑A r.
Proof. For p, q ∈ A, consider the map f : A → A such that f(x) = q + px. Note
that for any c, d ∈ A such that c ⊑A d ⇔ c + d ≡ d, we have f(c) + f(d) =
(q + pc) + (q + pd) = q + pc + pd = q + p(c + d) ≡ q + pd = f(d). Thus,
f(c) + f(d) ≡ f(d) ⇔ f(c) ⊑A f(d) by lemma 7.4.10; hence, f is order preserving.
Also,
∨_{n≥0} fⁿ(⊥A) = ∨{⊥A, f(⊥A), f²(⊥A), · · · }
= ∨{⊥A, q + p⊥A, f(q + p⊥A), · · · } = ∨{⊥A, ⊥A + q, f(⊥A + q), · · · }
= ∨{⊥A, ⊥A + q, ⊥A + q + pq, f(⊥A + q + pq), f²(⊥A + q + pq), · · · }
= · · · = ∨{⊥A, ⊥A + q, ⊥A + q + pq, ⊥A + q + pq + p²q, · · · }
= ∪{⊥A, ⊥A + q, ⊥A + q + pq, ⊥A + q + pq + p²q, · · · }
= ⊥A + q + pq + p²q + · · · = ⊥A + p∗q.
Note that f(⊥A + p∗q) = q + p(⊥A + p∗q) = ⊥A + p∗q.
Hence, ⊥A + p∗q is a fixed point of f.
Now let β be any other fixed point of f. Hence, ∀n ≥ 0, fⁿ(β) = β. Since
⊥A ⊑A β and f is order preserving, ∀n ≥ 0, fⁿ(⊥A) ⊑A fⁿ(β) = β, which implies
∪_{n≥0} fⁿ(⊥A) ⊑A β. Thus,
⊥A + p∗q = ∨_{n≥0} fⁿ(⊥A) = ∪_{n≥0} fⁿ(⊥A) ⊑A β, and we have ⊥A + p∗q as the
least fixed point of f. Finally we have, if q + pr ⊑A r then r is a prefix-point of
f. Since a least fixed point is also a least prefix-point, ⊥A + p∗q ⊑A r. Again,
since ⊥A ⊑A p∗q ⇔ ⊥A + p∗q ≡ p∗q, we have p∗q ⊑A r.
Similar reasoning holds for q + rp ⊑A r → qp∗ ⊑A r.
Hence, we have the following theorem.
Theorem 7.5.3. For a given tagged system 〈Σ, V = I ∪ O, T 〉, the set A of
all possible TSM actor functions forms a Kleene algebra given by the structure
KA = (A,+, ⋄, ∗,⊥A, 1A,⊑A).
Proof. As discussed previously, it can be easily checked that the axioms (7.1 -
7.11) of an i-semiring are satisfied in KA. Axioms (7.12 - 7.13) hold by lemma
7.5.1 and axioms (7.14 - 7.15) hold by lemma 7.5.2.
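The least-fixed-point argument at the heart of lemma 7.5.2 can be replayed in the truncated language model of a Kleene algebra (our illustrative device, not the actor algebra itself): Kleene iteration of f(x) = q + px from the least element converges to p∗q.

```python
# Kleene iteration for x = q + px in the finite language model (words
# truncated at length L; the model and names are ours).
L = 4

def dot(a, b):
    return {x + y for x in a for y in b if len(x + y) <= L}

def star(a):
    s, frontier = {""}, {""}
    while frontier:
        frontier = dot(frontier, a) - s
        s |= frontier
    return s

def least_fixpoint(p, q):
    x = set()                    # start from the least element (0)
    while True:
        nxt = q | dot(p, x)      # one application of f(x) = q + px
        if nxt == x:
            return x
        x = nxt

p, q = {"a"}, {"b"}
lfp = least_fixpoint(p, q)
# The iteration stabilizes at the language p*q.
```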
Note that the set A contains certain actors which exhibit unbounded non-
determinism. This happens because we neither have a bound on the length of
tagged signals nor have any restriction on the domain from where the variables
assume values at different tags. However, it can be easily proved that a network
of a finite number of actor functions, where each actor produces a finite number
of possible outputs as nondeterministic choices, will always yield an overall actor
adhering to the same property. Next we provide certain well known rules of KA
which are derivable from the set of basic axioms.
7.5.1 Kleene Algebra: Some Derived Rules
Note that the following lemma always holds in any Kleene algebra (and hence in KA).
Lemma 7.5.4. For any a, b ∈ K, a ≤ a+ b.
Proof. Note that for any a, b ∈ K, a+a+b = a+b by idempotency of elements in
K with respect to +. From the definition of ‘≤’, a+a+b = a+b⇒ a ≤ a+b.
Lemma 7.5.5. For a, b, c ∈ K, a+ b ≤ c⇔ a ≤ c ∧ b ≤ c.
Proof. For a + b ≤ c, a ≤ a + b ≤ c following lemma 7.5.4 and similarly b ≤
a + b ≤ c. Thus, a + b ≤ c→ a ≤ c ∧ b ≤ c. Also, a ≤ c ∧ b ≤ c→ a+ b ≤ c by
idempotency of elements in K with respect to +.
Corollary 7.5.6. For a, b, c ∈ K, a ≤ b⇒ ac ≤ bc ∧ ca ≤ cb.
Proof. By definition of + and distributivity (left and right).
Lemma 7.5.7. For any p, r ∈ K, pr ≤ r → p∗r ≤ r and rp ≤ r → rp∗ ≤ r.
Proof. Let us consider the star induction axioms (axioms 7.14 and 7.15) with
q = r. Thus we have r + pr ≤ r → p∗r ≤ r and r + rp ≤ r → rp∗ ≤ r. From
definition of ≤, pr ≤ r ⇒ r + pr = r. Thus pr ≤ r → p∗r ≤ r. Similarly, we can
prove the other statement.
Apart from the given axioms of KA, two identities which are very useful in
the context of equational reasoning are,
1. p(qp)∗ = (pq)∗p and 2. p∗(qp∗)∗ = (p+ q)∗
which are known as the sliding rule and the denesting rule respectively (Kozen,
1997). A Kleene algebra is said to be ∗-continuous if it satisfies the infinitary
condition
pq∗r = ∨_{n≥0} pqⁿr
where the supremum is w.r.t. the order ≤. The condition for ∗-continuity can be
seen as a conjunction of infinitely many axioms, that is, ∧_{n≥0} (pqⁿr ≤ pq∗r), and
the infinitary Horn formula as given in (Kozen, 1997),
( ∧_{n≥0} pqⁿr ≤ s ) → pq∗r ≤ s
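The sliding and denesting rules can also be confirmed by direct computation in the truncated language model used above (truncation at length L is our device; for the elements chosen here it does not disturb either identity).

```python
# Checking p(qp)* = (pq)*p and p*(qp*)* = (p+q)* on truncated languages.
L = 3

def dot(a, b):
    return {x + y for x in a for y in b if len(x + y) <= L}

def star(a):
    s, frontier = {""}, {""}
    while frontier:
        frontier = dot(frontier, a) - s
        s |= frontier
    return s

p, q = {"a"}, {"b"}
sliding_lhs = dot(p, star(dot(q, p)))              # p(qp)*
sliding_rhs = dot(star(dot(p, q)), p)              # (pq)*p
denest_lhs = dot(star(p), star(dot(q, star(p))))   # p*(qp*)*
denest_rhs = star(p | q)                           # (p+q)*
```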
7.6 Conclusion
In this chapter, we have developed and presented an algebra of tagged system
actors. For doing so, we defined certain operations over the set of actor func-
tions and proved the closure of such operations over the set. Next we showed
that the axioms of Kleene algebra hold for the set of actor functions. The subse-
quent chapters will focus on exhibiting how the axioms of KA and its extensions
facilitate reasoning mechanisms over such actors. For that purpose, we will
demonstrate the application of KA based reasoning mechanisms for behavioural
transformation verification through equivalence checking and for property veri-
fication of heterogeneous embedded systems.
Chapter 8
Equivalence Checking of Actor Networks
8.1 Introduction
The Kleene algebraic axiomatization of TSM actors as discussed in the previous
chapter may help in reasoning about such actors in order to perform both be-
havioural and property verification. In this chapter we discuss how the axioms of
Kleene algebra (KA) and its extensions can help in behavioural transformation
verification by equivalence checking of actor networks.
We discuss an extension of Kleene algebra known as “Kleene algebra
with Tests” in section 8.2 and demonstrate the application of the algebra towards
equivalence checking of TSM actor networks in section 8.3. Then we demonstrate
the modeling of synchronous reactive (SR) systems as TSM actors in section 8.4.
Further, we model two different specifications of a Reflex game as SR actors and
prove their equivalence. Finally, we conclude the discussion of behavioural
transformation verification of TSM actors in section 8.5.
8.2 Kleene Algebra with Test
A Kleene algebra with test (Kozen, 1997) is a two-sorted structure given by
(K,B,+, ·, ∗, 0, 1,≤, −), where B ⊆ K and − is a unary operator defined only on
B such that (K,+, ·, ∗, 0, 1) is a Kleene algebra and (B,+, ·, −, 0, 1) is a Boolean
algebra. In general, B is only a subalgebra of all the elements below 1 (w.r.t.
≤) in K (Moller and Struth, 2004, 2006). We refer to the elements of B as tests
and write test(K) instead of B. We have, ∀p ∈ test(K), p∗ = 1. The class of all
Kleene algebras with tests is denoted by KAT. Also, the operation ‘·’ over tests
is idempotent. From now on, we will consistently use a, b, c, · · · for Kleenean
elements and p, q, r, · · · for Boolean/test elements. Some important identities of
KAT are given below.
bp = pb ⇔ bp̄ = p̄b ⇔ pbp̄ + p̄bp = 0 (8.1)
pa ≤ aq ⇔ aq̄ ≤ p̄a ⇔ paq̄ ≤ 0 ⇔ pa = paq (8.2)
qb = bq ⇒ qb∗ = (qb)∗q = b∗q = q(bq)∗ (8.3)
Detailed discussion on KAT with the proofs of the identities given above can be
found in (Kozen, 1997, 2000). The paper also discusses many instances of more
complex identities frequently used for checking program equivalences.
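The commutativity identity (8.1), relating bp = pb to the vanishing of pbp̄ + p̄bp, can be spot-checked in the standard relational model of KAT, where Kleenean elements are binary relations, tests are subsets of the identity relation, + is union and · is relational composition. The model and the random sampling below are illustrative assumptions.

```python
import random
from itertools import product

S = range(4)                     # a small state set
ID = {(s, s) for s in S}         # the identity relation, playing the role of 1

def comp(a, b):
    # relational composition: the '.' of the algebra
    return {(x, z) for (x, y1) in a for (y2, z) in b if y1 == y2}

random.seed(1)
for _ in range(300):
    b = {pr for pr in product(S, S) if random.random() < 0.4}   # Kleenean element
    p = {pr for pr in ID if random.random() < 0.5}              # a test (subidentity)
    p_bar = ID - p                                              # its complement
    # identity (8.1): bp = pb  iff  pbp̄ + p̄bp = 0
    lhs = comp(b, p) == comp(p, b)
    rhs = (comp(comp(p, b), p_bar) | comp(comp(p_bar, b), p)) == set()
    assert lhs == rhs
```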
In the context of TSM actors, for an actor p ∈ test(A), p ⊑A 1A ⇒ p + p̄ ≡ 1A
and p∗ ≡ 1A (vide proof of lemma 7.4.13). All standard laws of Boolean algebra
hold for test elements. A test element can be thought of as a property over the
input signals which makes the element act as 1A if satisfied and as ⊥A
otherwise. Thus, for any p, q ∈ test(A), a KAT expression of the form p ⊑A q
can be treated as a logical implication ”⇒” where the left hand side of ⊑A is
the antecedent and the right hand side is the consequent. For example, let p
be a test condition signifying that ‘the last defined value for a tagged signal is
even’ and q be a test condition signifying that ‘the last defined value for a tagged
signal is divisible by 4’. Obviously, q ⊑A p and we may write q ⇒ p. Similarly,
q ≡A p is the same as q ⇔ p. Another way to encode q ⇒ p is q̄ + p ≡A 1A,
where q̄ is an actor such that q + q̄ ≡A 1A. Thus, for test elements, the following
encodings are equivalent to each other. For any p, q ∈ test(A),
• q ⇒ p
• q ⊑A p
• q̄ + p ≡A 1A
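The three encodings above can be checked directly in a model where tests are predicates, i.e. subsets of a state set, with + as union, · as intersection, and complement taken relative to the full set (playing the role of 1A). The state set below, using the even/divisible-by-4 example from the text, is an illustrative assumption.

```python
S = set(range(12))                      # states: possible last defined values
p = {s for s in S if s % 2 == 0}        # test p: "the value is even"
q = {s for s in S if s % 4 == 0}        # test q: "the value is divisible by 4"
q_bar = S - q                           # complement of q within 1 (= S)

# the three equivalent encodings of q => p:
assert q <= p                           # q ⊑ p (subset inclusion)
assert (q_bar | p) == S                 # q̄ + p ≡ 1
assert all(s in p for s in q)           # logical implication, state by state

# tests are idempotent under '.', modeled here as intersection
assert (p & p) == p
```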
The composite actor pa represents the restriction of actor a to p. In other
words, it is a restriction of a on inputs which satisfy the assertion p on the set
of possible inputs and aborts otherwise. Similarly, ap represents the restriction
of the outputs of a to only those which satisfy p.
For example, consider a ‘multiply by 2’ actor (a say) which acts on tagged
signals with T = N. Actor a, with a single input i and a single output o and
behaviours defined upto tag τ given by 〈στ(i), στ(o)〉, may be described as follows.1

a(στ(i), στ(o)) = στ+1(o) where
σ(o)(τ + 1) :=
  ‘init’          if τ = 0    // some initial value
  2 × σ(o)(τ)     if τ > 0 ∧ σ(o)(τ) ≠ ⊥
  ⊥               if τ > 0 ∧ σ(o)(τ) = ⊥

It may be observed that for an empty input σ⊥(o), the actor meets the third
clause in order to produce the empty signal again. Similarly, we may have a
‘divide by 2’ actor (b say) described as follows.

b(στ(i), στ(o)) = στ+1(o) where
σ(o)(τ + 1) :=
  ‘init’          if τ = 0    // some initial value
  σ(o)(τ)/2       if τ > 0 ∧ σ(o)(τ) ≠ ⊥
  ⊥               if τ > 0 ∧ σ(o)(τ) = ⊥

We follow the convention that the value of a signal, if not specified at any tag,
should be taken as ⊥. We may also define a test (sub-identity actor) p as follows.
p(στ(i), στ(o)) :=
  στ(o)    if σ(o)(τ) ≤ σ(i)(τ)
  σ⊥(o)    otherwise

Its complement test p̄ is defined as follows.

p̄(στ(i), στ(o)) :=
  στ(o)    if σ(o)(τ) > σ(i)(τ)
  σ⊥(o)    otherwise
Thus pa is an actor which extends the signal on variable o by a value which is
twice the value of o at tag τ in case the value of o at tag τ is less than or equal
to the value of external input i at tag τ. Similarly, p̄b is an actor which extends
the signal on variable o by a value which is half the value of o at tag τ in case the
value of o at tag τ is greater than the value of external input i at tag τ. Hence,
the actor (pa + p̄b)∗ will continuously track the last defined value on input signal
i and try to produce an approximately equal value on o.

1Recall that, for any variable v, στ(v) denotes the prefix of the signal σ(v) up to tag τ. By
a prefix upto tag τ of a signal, we mean the same signal upto tag τ and undefined (= ⊥) for
all succeeding tags.
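The behaviour of (pa + p̄b)∗ described above can be sketched as a simple simulation: at each tag, the test p selects the ‘multiply by 2’ actor a when the last output does not exceed the input, and its complement p̄ selects the ‘divide by 2’ actor b otherwise. The initial output value and the input stream below are illustrative assumptions.

```python
def step(i, o):
    # pa branch: double o when the test o <= i holds; p̄b branch: halve o otherwise
    return 2 * o if o <= i else o / 2

o, trace = 1.0, []
for i in [100, 100, 100, 100, 100, 100, 100, 100]:   # a constant input signal
    o = step(i, o)
    trace.append(o)

# o climbs 2, 4, ..., 128, then oscillates back to 64: the guarded iteration
# tracks the last defined input value approximately
assert trace == [2, 4, 8, 16, 32, 64, 128, 64]
```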
8.3 Equivalence of Actor Networks
KAT provides a complete deductive system with Hoare-style inference rules for
partial correctness assertions (Hofner and Struth, 2007). The equational theory
of KA is decidable (Antimirov and Mosses, 1995); for the equational theory of
KAT, coinductive proof techniques exist (Chen and Pucella, 2003). An immedi-
ate application of our algebra of TSM actors can be the equivalence checking of
actor networks using the axioms of KA/KAT. Since TSM actors can effectively
capture heterogeneous embedded systems, the equivalence of two such different
implementations can be checked by deriving the corresponding KAT expressions
and checking their equivalence using proof assistants equipped with axioms and
proof tactics specific to KA/KAT. As an example, consider two actor networks
with external input variables I, output variables O, actors a, b and test elements
p, q. The networks are shown in Fig 8.1. Using the axioms of KAT, we arrive at
Figure 8.1: Equivalent actor networks.
the KAT expression corresponding to the second implementation from the KAT
expression corresponding to the first one as follows.
Implementation 1
≡ (pa(qb)∗q̄)∗p̄
= (1A + pa(qb)∗q̄(pa(qb)∗q̄)∗)p̄                  by unfold axiom
= p̄ + pa(qb)∗q̄(pa(qb)∗q̄)∗p̄                     by distributivity
= p̄ + pa(qb)∗(q̄pa(qb)∗)∗q̄p̄                     by sliding rule of KA (Kozen, 1997)
= p̄ + pa(qb + q̄pa)∗q̄p̄                          by denesting rule of KA (Kozen, 1997)
= p̄ + pa((p + 1A)qb + q̄pa + qq̄a)∗q̄p̄           [∵ p + 1A ≡ 1A, qq̄ ≡ ⊥A]
= p̄ + pa(pqb + qb + pq̄a + qq̄a)∗q̄p̄             [pq = qp for p, q ∈ B]
= p̄ + pa((p + q)(qb + q̄a))∗q̄p̄
= pa((p + q)(qb + q̄a))∗q̄p̄ + p̄
≡ Implementation 2
The proof was mechanized using the KAT-ML theorem prover equipped with proof
tactics specific to KAT as discussed in (Aboul-Hosn and Kozen, 2003). The above
reasoning is independent of the functionalities implemented by a and b and of the
assertions p and q, since we need not decompose them into further atomic expressions
for the proof. The only constraint is that the actors a, b, p and q have to
conform to the definitions of actors and tests as given in our framework. Next
we introduce the modeling of the synchronous reactive MoC as TSM actors and
model two equivalent specifications of a reflex game accordingly.
8.4 Equivalence of Synchronous Reactive Systems
A synchronous reactive (SR) model of a system is composed of several communi-
cating blocks. The synchrony hypothesis (Berry, 1996) assumes that the input at
each instant arrives as a sequence of discrete values and each block’s computation
produces the corresponding output sequence instantaneously. The advantages of
the SR MoC for modeling heterogeneous systems have been described in (Edwards,
1998). We choose the tag set T as N×N. The first component denotes the global
time and the second component denotes the ordering of events happening inside
the same clock cycle. The zero-delay feedback loops which are common in the
semantics of such systems are modeled using the ‘∗’ operation. Such a situation
can be demonstrated in general by considering the input-output scenario of a
typical TSM actor modeling an SR system as shown in Fig 8.2 and 8.3.
The actor F has input variables I = {i1, i2} and output variables O = {o}. A firing
instance of F is defined as follows.
F (σ(t,1)(i1), σ(t,2)(i2), σ⊥(o)) = σ(t,3)(o)
where σ(i1)(t, 1) = 1, σ(i2)(t, 2) = 0 and σ(o)(t, 3) = 1. The situation implies
that for the input signal on i1 assuming value ‘1’ at tag (t, 1), for the input signal
on i2 assuming value ‘0’ at tag (t, 2) and for an empty output signal (σ⊥(o)), the
signal on o assumes value ‘1’ at tag (t, 3). A second firing instance is shown in
Figure 8.2: A typical SR actor.
Fig. 8.3. In this case,
F (σ(t,1)(i1), σ(t,2)(i2), σ(t,3)(o)) = σ(t,4)(o)
where σ(o)(t, 4) = 1 and all other signals are as defined previously. The situation
implies that for the input signal on i1 assuming value ‘1’ at tag (t, 1), for the
input signal on i2 assuming value ‘0’ at tag (t, 2) and for the output signal on o
assuming value ‘1’ at tag (t, 3), the signal on o assumes value ‘1’ at tag (t, 4).
Figure 8.3: The SR actor’s second firing instance.
The SR actor cited in the above example extends the output signals in the
second component of the tags only. However, in general, an SR actor takes as
input an overall input situation defined upto some tag τ = (m,n) and extends
the output signals upto some tag τ ′ = (m+ i, n+ j), where m,n, i, j ∈ N. Since
TSM actors can capture the semantics of SR systems with appropriate choice of
tag sets, we must also be able to perform equational reasoning over networks of
SR actors using KAT. As an example, we discuss different implementations of
a reflex game where the component actors are modeled using the SR semantics
and prove their equivalence.
8.4.1 The Reflex Game
We take the example of a reflex game played between a computer and a human
player. The game begins with the PC making a move on the dice board (of
size l × l) shown on a touchscreen. The PC’s move corresponds to highlighting a
specific co-ordinate 〈i, j〉 and setting a timer for s units of time within which the
human has to respond to that move. The human player attempts to hit this
co-ordinate by selecting a co-ordinate, (u, v) say, on the touch-screen. The screen
acts as a transducer which selects the human specified co-ordinate along with 3
adjacent co-ordinates ((u, v + 1), (u+ 1, v + 1), (u+ 1, v)) thus creating a square
which is four times the area of each place in the dice board. The situation is
demonstrated in Fig 8.4. The touch-screen generates the selection at a clock
speed at least four times that of the PC; it may therefore complete its entire
selection inside a single clock cycle of the PC and generate events which may be
ordered by the second component of the tags. The human player pushes another
button to confirm the move, so that the screen stops sampling the user selection
corresponding to the
move made by the PC. The PC then checks whether the co-ordinate generated by
itself is matched by any of the co-ordinates in the selection information sent by the
touch-screen within the time set for that specific move. If so, then the score of
the human player is incremented by one; otherwise, the score is kept the same2.
2Obviously, the PC will process a signal sent by the touch-screen of finite arity which is
four in this case.
Figure 8.4: The reflex game board
8.4.2 TSM Actors for the Reflex Game
The set I of input variables for the game is given by I = {start, u, cnf} where
start is the variable corresponding to the game being turned on by the human
player, u is the variable corresponding to the player’s input on the touchscreen
and cnf is the variable corresponding to the player’s confirmation of a move. The
set O of output variables is given by O = {sel, ts, score} where sel denotes the
co-ordinate generated by the PC, ts denotes the co-ordinates generated by the
touchscreen corresponding to the player’s input and score denotes the current
score of the player. The variable ts is basically a collection of four variables
given as ts = 〈ts1, ts2, ts3, ts4〉. The overall system comprises three actors.
The actor init produces the initial values of the signals on the output variables.
The actor a models the PC and the actor b models the touchscreen transducer.
We describe the functionalities of the TSM actors using if-then constructs where
the if conditions denote certain input signal situations and the then parts denote
the corresponding output signal situations. For the variables start and cnf , the
domain D is Boolean. For score, it is N and for sel, u, tsi, 1 ≤ i ≤ 4, it is N×N.
The actor init initializes the variables score, sel and ts in the first clock cycle.
Note that for any tag value, if an output variable is not assigned a meaningful
value by any of the if-then conditions of any actor, then the value of the
variable for that tag is ⊥. The functional description of actor init is as follows.

init() = σ(1,0)(O) where
  σ(score)(1, 0) := 0
  σ(sel)(1, 0) := (0, 0)
  σ(ts)(1, 0) := 〈(0, 0), (0, 0), (0, 0), (0, 0)〉
Actor init does not take any tagged signal as input argument. For the out-
put signals, the value of each of the tsi-s (and sel) being (0, 0) at tag (1, 0)
denotes that none of the co-ordinates in the touchscreen has been selected. For
every tick of global clock, actor a does the following.
1. generates (non-deterministically) a selection co-ordinate (i, j) which is high-
lighted in green in the touchscreen by associating the value (i, j) with the
variable sel.
2. resets the older selections on the touchscreen by the human input.
3. checks for a nearest match of its own selection in step 1 with any human
input provided within s time units of its own selection in step 1. In case of
a match, a updates the score by 1; otherwise, if no match is found within
s time units, the score remains the same.
The functional description of actor a modeling the PC is given below3.

a(σ(m,n)(O)) = σ(m+s,0)(O) where
  σ(sel)(m + s, 0) := (i, j), 1 ≤ i ≤ l, 1 ≤ j ≤ l    // chosen non-deterministically
  σ(ts)(m + s, 0) := 〈(0, 0), (0, 0), (0, 0), (0, 0)〉
  If ∃i, 1 ≤ i ≤ 4, ∃j ≤ s [σ(sel)(m, 0) = σ(tsi)(m + j, i)]
  then σ(score)(m + s, 0) := σ(score)(m, 0) + 1
  else σ(score)(m + s, 0) := σ(score)(m, 0)
The actor chooses a value for sel at tag (m+ s, 0) non-deterministically. The
variable ts is assigned the value 〈(0, 0), (0, 0), (0, 0), (0, 0)〉 at tag (m+ s, 0) thus
indicating that all previous selections are de-selected. The functional description
of actor b modeling the touchscreen transducer is given below.
3Actors a and b take the set O of tagged signals as input argument.
b(σ(m,n)(O)) = σ(m,n+1)(O) where

If n = 0 ∧ σ(u)(m, n) = (i, j) then
  σ(sel)(m, n + 1) := σ(sel)(m, n)
  σ(score)(m, n + 1) := σ(score)(m, n)
  σ(ts)(m, n + 1) := 〈(i, j), (0, 0), (0, 0), (0, 0)〉

If n = 1 ∧ σ(ts1)(m, n) = (i, j) then
  σ(sel)(m, n + 1) := σ(sel)(m, n)
  σ(score)(m, n + 1) := σ(score)(m, n)
  σ(ts)(m, n + 1) := 〈(i, j), (i, j + 1), (0, 0), (0, 0)〉

If n = 2 ∧ σ(ts2)(m, n) = (i, j + 1) then
  σ(sel)(m, n + 1) := σ(sel)(m, n)
  σ(score)(m, n + 1) := σ(score)(m, n)
  σ(ts)(m, n + 1) := 〈(i, j), (i, j + 1), (i + 1, j + 1), (0, 0)〉

If n = 3 ∧ σ(ts3)(m, n) = (i + 1, j + 1) then
  σ(sel)(m, n + 1) := σ(sel)(m, n)
  σ(score)(m, n + 1) := σ(score)(m, n)
  σ(ts)(m, n + 1) := 〈(i, j), (i, j + 1), (i + 1, j + 1), (i + 1, j)〉
The actions by actor b can be explained as follows. For any user input co-
ordinate ((i, j) say) given by the value of the variable u for the tag (m, 0) say,
actor b assigns the same co-ordinate value (as that of u) to the variable ts1 for
tag (m, 1). The assignment is performed in the first ‘if’ clause by
σ(ts)(m,n+ 1) := 〈(i, j), (0, 0), (0, 0), (0, 0)〉 which results in
σ(ts1)(m, 1) = (i, j). In the subsequent firing, the second ‘if’ clause is satisfied
with n = 1 so that the assignment
σ(ts)(m,n + 1) := 〈(i, j), (i, j + 1), (0, 0), (0, 0)〉 is performed thus resulting in
σ(ts2)(m, 2) = (i, j + 1). Subsequently, in the next two iterations, b generates
two more co-ordinates given by values assigned to the variable ts3 and ts4 for
the tags (m, 3) and (m, 4) respectively. In this way, b completes the approximate
user selection on the touchscreen for a given user input (u).
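The staged construction of ts by actor b can be sketched as a function from the user co-ordinate (i, j) and the micro-step n to the value assigned to ts at tag (m, n + 1). The function name and encoding are illustrative assumptions; the tuple layout follows the if-clauses above.

```python
def touch_step(i, j, n):
    # value assigned to ts at tag (m, n + 1), one stage per firing of b
    stages = [
        ((i, j), (0, 0), (0, 0), (0, 0)),
        ((i, j), (i, j + 1), (0, 0), (0, 0)),
        ((i, j), (i, j + 1), (i + 1, j + 1), (0, 0)),
        ((i, j), (i, j + 1), (i + 1, j + 1), (i + 1, j)),
    ]
    return stages[n]

# four firings of b complete the 2x2 square around the user input (3, 5)
assert touch_step(3, 5, 3) == ((3, 5), (3, 6), (4, 6), (4, 5))
```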
Reflex Game: implementation and equivalence
We model two different implementations of the reflex game, both of which have
the SR modules a and b. The module a models the moves by the PC and the
module b takes input from a human player and produces four co-ordinates. The
SR modules, as discussed above, are modeled as TSM actors which conform to
the semantics of Kleene algebra. Along with the actors a and b, we incorporate
two test conditions p and q which respectively check whether the value of ‘start’
is 1 and whether the value of ‘cnf ’ is 0. The functional descriptions of p and q
are as follows.
p(στ (I), στ (O)) =
{
στ (O) If σ(start)(τ) = 1
σ⊥(O) If σ(start)(τ) 6= 1
q(στ (I), στ (O)) =
{
στ (O) If σ(cnf)(τ) = 0
σ⊥(O) If σ(cnf)(τ) 6= 0
The English language specification of the first implementation is as follows.
If start is asserted, then a acts, i.e. the PC makes a selection and initializes/updates
score. Then, as long as the user does not confirm his/her selection
by pressing the switch corresponding to the variable cnf, b keeps on acting, i.e.
for each of the human player’s moves, a sequence of four adjacent co-ordinates
keeps getting produced. Once cnf is ‘1’, b ceases to act, and the whole behaviour
repeats until start is zero.
The English language specification of the second implementation is as follows.
If start is asserted, then a acts, i.e. the PC makes a selection and initializes/updates
score. After that, as long as either start is ‘1’ or cnf is ‘0’: if cnf
is ‘0’ then b acts, i.e. the human player’s move is processed; else, if cnf is
‘1’, actor a fires. Actor a or b thus keeps firing repeatedly (depending on
the outcome of the test on cnf) until start is ‘0’ and cnf is ‘1’.
The above specifications reduce to the following programs and the corre-
sponding actor network implementations as shown in Fig. 8.5 and 8.6.
1. While (start==1)
   {
     a;
     While (cnf==0)
       b;
   }

2. If (start==1)
   {
     a;
     While (start==1 or cnf==0)
       if (cnf==0) then b else a;
   }
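The equivalence of the two programs above can also be checked experimentally: interpreting the tests start==1 (the test p) and cnf==0 (the test q) as pure reads of an environment that changes only when an actor fires, both implementations should produce identical firing sequences. The environment encoding below is an illustrative assumption, not part of the thesis framework.

```python
import random

def impl1(p, q, fire):
    # While (start==1) { a; While (cnf==0) b; }
    while p():
        fire('a')
        while q():
            fire('b')

def impl2(p, q, fire):
    # If (start==1) { a; While (start==1 or cnf==0) if (cnf==0) then b else a; }
    if p():
        fire('a')
        while p() or q():
            if q():
                fire('b')
            else:
                fire('a')

def run(impl, env):
    # env[k] = (start, cnf) after k actor firings; tests are pure reads.
    # A final entry (0, 1) makes every test false eventually, so loops terminate.
    state = {'k': 0, 'trace': []}
    def cur():
        return env[min(state['k'], len(env) - 1)]
    def fire(x):
        state['trace'].append(x)
        state['k'] += 1
    impl(lambda: cur()[0] == 1, lambda: cur()[1] == 0, fire)
    return state['trace']

random.seed(0)
for _ in range(200):
    env = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(8)]
    env.append((0, 1))   # force termination
    assert run(impl1, env) == run(impl2, env)
```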
Figure 8.5: Reflex game : Implementation 1.
Figure 8.6: Reflex game : Implementation 2.
Thus, the KAT expression for the first implementation becomes (pa(qb)∗q̄)∗p̄ and
that for the second implementation becomes pa((p + q)(qb + q̄a))∗q̄p̄ + p̄, the
trailing test q̄p̄ being the complement of p + q by De Morgan’s law. Note
that we have previously shown (vide section 8.3) these two expressions to be
equivalent. Hence, the two different specifications given for the reflex game lead
to equivalent implementations.
8.5 Conclusion
In this chapter we have sketched an outline of an actor based modeling method
for SR systems and shown how design transformations of such systems may
be verified using the axioms of KA and KAT. We have arrived at two different
actor network implementations from the specification of a reflex game and its
transformed version. From the work reported in this chapter, it may be observed that
behavioural verification of heterogeneous embedded systems may be performed
using the deduction rules of KA and KAT by modeling such systems as networks of
TSM actors. In the next chapter, we attempt to perform property verification
of such actor networks.
Chapter 9
Property Verification of Actor
Networks
9.1 Introduction
Property verification is considered to be a very important aspect of any embedded
system design methodology. The complex nature of such systems often leads to
huge state spaces which render automated model checking techniques ineffective.
Semi-automated deductive techniques have been developed for many classes of
specification formalisms. In this chapter, we attempt, in particular, to verify safety
properties of a heterogeneous system specified as a network of TSM actors.
In section 9.2, we introduce the notions of discrete and continuous TSM actors.
In section 9.3, we describe the European Train Control System (ETCS) protocol
and in section 9.4 we model the protocol as a network of discrete and continuous
TSM actors. We verify a safety property of the protocol using the axioms of KA
and KAT in section 9.5 and conclude our discussion in section 9.6.
9.2 Discrete and Continuous TSM actors
A discrete TSM actor uses the tag set T = N whereas a continuous TSM
actor uses the tag set T = R. Thus, in a heterogeneous system having both
discrete and continuous actors, the tagged signals will be defined for the tag set
T = R×N. For some tag τ = (r, n) ∈ R×N, we denote the first projection r by
τ |R and the second projection n by τ |N. The discrete actors keep the real parts
of the tagged signals unchanged and the continuous actors keep the integer parts
of the tagged signals unchanged. We elaborate the description mechanisms of
such actors, which are later used for property verification of heterogeneous
systems.
9.2.1 Discrete TSM actors
Let a be a discrete actor with input variables {i1, · · · , im} and output variables
{o1, · · · , on}. For any variable v, let στ (v) denote the prefix of the signal σ(v)
up to tag τ . Recall that by a prefix upto tag τ of a signal we mean the same
signal upto tag τ and undefined (= ⊥) for all succeeding tags. The discrete TSM
actor a acts on the prefixes of input and output signals up to some tag τ and
produces output signals which are basically extensions of the already present
output signals (before the actor acts) upto the tag τ ′ = (τ |R, τ |N + 1). We
exemplify the situation as follows.
Recall that any TSM actor a is a map given as a : SI × SO → P(SO).
Let σ¹ ∈ ΣI, σ² ∈ ΣO be the input and output behaviour functions respectively,
where σ¹τ = 〈σ¹τ(i1), · · · , σ¹τ(im)〉 ∈ SI and σ²τ = 〈σ²τ(o1), · · · , σ²τ(on)〉 ∈ SO.
Let it be the case that actor a acts on the overall input situation 〈σ¹τ, σ²τ〉 =
〈σ¹τ(i1), · · · , σ¹τ(im), σ²τ(o1), · · · , σ²τ(on)〉 ∈ SI × SO in order to produce the
output situation {σ²τ′} = {〈σ²τ′(o1), · · · , σ²τ′(on)〉} ∈ P(SO). Thus we may write
a(σ¹τ, σ²τ) = σ²τ′ when the input and output variables are clear from the context.
The value of σ² at tag τ′ can be determined as follows. For each output
variable ol, 1 ≤ l ≤ n, there exists a polynomial expression θl over the input
and output tagged signals such that σ²(ol)(τ′) = θl(σ¹τ, σ²τ). Note that the tag
τ′ = (τ|R, τ|N + 1) is immediately greater than the tag τ in the order ≤τ for the
tag set T = R × N. This scenario models the fact that the outputs of discrete
actors have to be instantaneous w.r.t. the inputs (no increment in the real parts
of the tags; the increments in the second component keep track of the number
of events and their ordering). We represent the actors as functional relations
between the input and the output signals. In the functional description, we only
show the extension of the output signal values. The value of the output signals
at the previous tag values remain the same as discussed previously. Since our
actors are equipped to express nondeterministic behaviours, we follow a specific
convention for denoting the possible set of outputs. When the domain values of
an output event can be anything from a range of values, a to b say, we denote it
as ‘[a, b]’. In case there is no such range but only a set of possible values, it is
simply a list of such values. Thus,
a(σ¹τ, σ²τ) = σ²τ′ where
  σ²(o1)(τ′) := θ1(σ¹τ, σ²τ)
  ...
  σ²(on)(τ′) := θn(σ¹τ, σ²τ)
  where τ′ = (τ|R, τ|N + 1)          (9.1)
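A minimal sketch of equation (9.1): a discrete actor maps signal prefixes at tag (r, n) to an extension at tag (r, n + 1), the real part staying unchanged, with each new output value computed by a polynomial expression θ over the current values. The dictionary encoding of signals and the concrete θ below are illustrative assumptions.

```python
# signals are encoded as dictionaries from tags (r, n) to values
def discrete_fire(inputs, outputs, tau, thetas):
    r, n = tau
    new_tag = (r, n + 1)         # zero delay: the real part is unchanged
    for name, theta in thetas.items():
        outputs[name][new_tag] = theta(inputs, outputs, tau)
    return new_tag

i_sig = {'i1': {(0.0, 0): 3}}
o_sig = {'o1': {}}
# hypothetical theta: o1 at the next micro-step is twice i1 at the current tag
tau2 = discrete_fire(i_sig, o_sig, (0.0, 0),
                     {'o1': lambda I, O, t: 2 * I['i1'][t]})
assert tau2 == (0.0, 1) and o_sig['o1'][(0.0, 1)] == 6
```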
9.2.2 Continuous TSM actors
In practice, the continuous actors model some physical system and discrete actors
represent some outputs based on computation carried out on certain inputs from
the environment (including human operators). For example, in an automotive,
the acceleration, the highest (second) order derivative of the car position, is
computed (by a discrete actor) from the pressure exerted on the accelerator
by the driver depending on other environmental inputs like the distance of the
preceding car, etc. As long as these conditions do not change, the lower order
derivatives remain the same.
Let a be a continuous actor with input variables I = {i1, · · · , im} and output
variables O = {o1, · · · , on}. For any output variable ol, let the highest time
derivative (which does not vary in the time interval of computation of one round
of execution of the actor) be of order q, given by an expression θl, say, over the
input tagged signals. The continuous actor requires that for any output variable
ol ∈ O of order q, all the time derivatives o′l^(q−i), i = 0, · · · , q (where
o′l^(0) = ol), should also belong to O. If the system knows the expression θl, it can
compute the expression for any lower order derivative o′l^(i), for i = q − 1 down to 0,
using the initial value by applying the Taylor series expansion: ∀ǫ′ ≤ ǫ,1
σ(o′l^(q))(kǫ + ǫ′, j) = θl(σ¹(kǫ,j), σ²(kǫ,j))
σ(o′l^(q−1))(kǫ + ǫ′, j) = σ(o′l^(q−1))(kǫ, j) + σ(o′l^(q))(kǫ + ǫ′, j) · ǫ′
...
σ(o′l^(q−i))(kǫ + ǫ′, j) = σ(o′l^(q−i))(kǫ, j) + σ(o′l^(q−i+1))(kǫ, j) · ǫ′ + · · · + (1/i!) · σ(o′l^(q))(kǫ + ǫ′, j) · (ǫ′)^i
...
σ(ol)(kǫ + ǫ′, j) = σ(ol)(kǫ, j) + σ(o′l)(kǫ, j) · ǫ′ + · · · + (1/q!) · σ(o′l^(q))(kǫ + ǫ′, j) · (ǫ′)^q

1The ǫ-interval is decided by the sampling rate of the fastest varying input.
Using these time derivative expressions, a continuous TSM actor may be defined
as follows.
a(σ¹(kǫ,j), σ²(kǫ,j)) = σ²(kǫ+ǫ,j) where
  ∀ǫ′ ≤ ǫ, 1 ≤ l ≤ n,
  σ²(ol)(kǫ + ǫ′, j) := σ²(ol)(kǫ, j) + σ²(o′l)(kǫ, j) · ǫ′ + · · · + (1/i!) · σ²(o′l^(i))(kǫ, j) · (ǫ′)^i + · · · + (1/q!) · σ²(o′l^(q))(kǫ, j) · (ǫ′)^q
  where σ²(o′l^(q))(kǫ, j) = θl(σ¹(kǫ,j), σ²(kǫ,j))          (9.2)
The continuous time actors, which we are defining this way, are basically piecewise
continuous, where the value of ǫ depends on our choice of piecewise continuity.
The smaller the value of ǫ, the better we approximate the real time
behaviour guided by the input variations with time. Thus, the ǫ-interval is decided
by the sampling rate of the fastest varying input. Inside the interval,
at every point of time, the output signal values are computed by Taylor series
expansion.2
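A sketch of equation (9.2) for a single output of order q = 2 (position, velocity, and a constant acceleration θ over one ǫ-interval), extended by the truncated Taylor expansion. The numeric values and the number of sample points are illustrative assumptions.

```python
def continuous_fire(o, o1, theta, eps, steps=10):
    # extend (o, o') across the interval [0, eps] at 'steps' sample points,
    # with the highest derivative theta held constant over the interval
    samples = []
    for s in range(1, steps + 1):
        e = eps * s / steps                                # epsilon' in (0, eps]
        samples.append((o + o1 * e + 0.5 * theta * e * e,  # o(kε + ε')
                        o1 + theta * e))                   # o'(kε + ε')
    return samples

traj = continuous_fire(o=0.0, o1=10.0, theta=-2.0, eps=1.0)
o_end, v_end = traj[-1]
# closed form at the end of the interval: o = 10·1 − 1² = 9, o' = 10 − 2 = 8
assert o_end == 9.0 and v_end == 8.0
```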
2We intend to model hybrid systems, i.e. finite state controllers interacting with a con-
tinuously evolving environment as compositions of discrete and continuous TSM actors. The
discrete actors are meant to model the finite state controllers and the continuously evolving
environment parameters are modeled using the continuous actors. Note that the finite state
controller components in a real-world hybrid system have their own time delays of operation
which may actually be unknown. In an actor based semantics, we conceive all the component
actors to be working in parallel with their interconnections defining the order in which they
manipulate the data streams (tagged signals). We may thus think of a worst case real-time
interval (ǫ say) inside which the fastest finite state controllers may be assumed to have acted
once. Thus in our simulation model, we assume that the discrete actors take zero real time to
act, i.e., their outputs are instantaneous, thus only extending the signals in the natural parts
of the tags. The finite delay (inside which no discrete actor is assumed to act and extend an
output signal) is captured as continuous evolution of real-time in the continuous actors.
9.3 The European Train Control System
The European Train Control System (ETCS) (eu, 2002; Meyer et al., 2006) is a
standard to ensure safe and collision-free operation as well as high throughput
of trains at speeds up to 320 km/h. Correct functioning of ETCS is highly
safety-critical, because the upcoming installation of ETCS level 3 will replace
all previous track-side safety measures in order to achieve its high throughput
objectives.
ETCS is an international train control system by the European Commission
that shall replace the existing train control systems in future to ensure cross-
border interoperability and improve railway safety and track utilization. In the
final ETCS implementation level 3, the currently existing systems for detection
of train speed, location, and integrity will not be used anymore. Instead, the current
parameters for a moving train are ascertained in cooperation between the train’s on-board
ETCS controller unit and an appropriate radio block controller (RBC),
which controls the traffic in a well-defined area and grants movement authorities
(MA) to the trains. One of the main objectives of ETCS is increasing the possible
traffic density, for which the MAs are always given up to the safe rear end of the
preceding train. This allows the trains to minimize the gaps between them.
9.3.1 Overview of ETCS protocol
We start with an abstract view of the protocol along the lines of (Platzer and Quesel,
2009). The railway tracks are dynamically segmented into entities called blocks.
ETCS level 3 follows the moving block principle, i.e., “movement authorities”
are not fixed statically; they are assigned to the trains from time to time by an
RBC based on the current track situation. An MA stands for a single block of a
track segment. Trains are allowed to move only within their respective current
MAs; after the MA, there could be open gates, other trains or bridges/tunnels
imposing additional speed restrictions. The train controller, however, seeks fresh
permission from the RBC at a suitable point in its MA for extension of the same.
The permission may or may not be granted by the RBC; in the latter case, the
train has to bring itself to a halt by applying brakes at a suitable point in its
current MA; if, however, extension is granted, then the train is free to move again
within certain speed limit. The automatic train protection unit (atp) in a train
tr dynamically determines a safety envelope around it, within which it considers
driving safe and adjusts the train acceleration tr.a accordingly.
An MA comprises three regions far, neg (negotiation) and cor (correction).
The scenario is depicted in Fig 9.1. The ETCS controller switches according
Figure 9.1: ETCS train co-operation protocol overview
to the protocol pattern given in the simplified version of (Damm et al., 2007).
(Let us ignore the labels associated with the transition arcs for the moment.) In
the far region, the train is permitted to move with any speed up to a certain
limit. In the neg region, negotiation with the RBC is initiated by the train
controller seeking extension of the MA. The process starts from the point ST
(start talking) lying at a distance ahead of the end point of the MA. The cor
region comes into existence only if the negotiation fails to obtain an extension of
the MA explicitly. The region starts at the point SB (start braking) where the
train controller applies brakes to stop the train at the end point of the MA. The
distances of the points ST and SB from the end point of the MA are decided by
the train controller dynamically depending upon its present speed and position
with respect to the end point of the MA. In contrast, if the MA is extended later,
the controller may return to far mode. Emergency messages announced by the RBC
can also put the controller into cor mode. In that case, the train switches to a
failsafe state (fsa) after coming to a full stop in cor mode and awaits manual
clearance by the train operator.
9.3 The European Train Control System 157
9.3.2 A Formal Model for ETCS
A train tr is designated by the triple tr = 〈p, v, a〉, where the entities tr.p,
tr.v and tr.a denote the position, the velocity and the acceleration of the train,
respectively. An MA m is designated by a triple m = 〈vd, d, v〉. Inside the MA
m, the train should not exceed the recommended speed m.v. Thus, the first
safety criterion may be expressed by the formula :
tr.v ≤ m.v
Also, the train must not have a velocity tr.v greater than m.vd if its position tr.p
lies beyond the point m.d from the start of MA. This second safety criterion for
the train is expressed by the formula :
tr.p ≥ m.d→ tr.v ≤ m.vd (S)
Remember that MAs are allocated up to a safe distance from the rear end of
the train ahead. This safe rear-end distance is chosen such that the train
may leave the MA with velocity m.vd and then decelerate under full brakes
to zero velocity without colliding with the train ahead of it. For satisfying S,
the braking distance SB may be computed using the graphical method as shown
in Fig 9.2. The length of the MA is given by m.d.

Figure 9.2: Braking distance SB for ensuring safety

In this figure, note that in
trajectory 1, for the train velocity tr.v1 at distance SB1 from the end of the MA,
the train can decelerate with full brakes to reduce the speed to m.vd when it
reaches the distance m.d (the end of MA). Similar is the case of train velocity
tr.v2 at distance SB2 from the end of the MA as shown for trajectory 2. For
trajectory 3, however, the train has crossed the distance m.d with velocity well
below m.vd. As the figure suggests, the braking distance SB3 then extends beyond
the end of the MA and is hence negative in value.
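The graphical construction reduces to the standard kinematic relation: braking from speed v down to m.vd at constant deceleration b consumes a distance of (v² − m.vd²)/(2b). A minimal Python sketch of this computation (the function name and numeric values are ours, purely for illustration):

```python
def braking_distance(v: float, vd: float, b: float) -> float:
    """Distance SB needed to slow from speed v to the exit speed vd under
    constant full-brake deceleration b (kinematics: v^2 - vd^2 = 2*b*SB).
    A negative result is the trajectory-3 case: the train is already
    slower than vd, so the braking point lies beyond the end of the MA."""
    return (v * v - vd * vd) / (2.0 * b)

print(braking_distance(30.0, 10.0, 1.0))  # 400.0 (trajectory-1/2 style case)
print(braking_distance(5.0, 10.0, 1.0))   # -37.5 (trajectory-3 style case)
```

A negative SB, as in the second call, corresponds exactly to trajectory 3 above.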
Another possible situation, highlighted in Fig. 9.3, demonstrates a scenario
where the train cannot both reach the maximum recommended velocity m.v inside
the MA and leave the MA with velocity less than m.vd. Observe in the dotted
trajectory that even if the train applies full brakes immediately after reaching
the velocity m.v with acceleration ‘A’, the safety criterion S gets violated. In
that case the train will leave the MA with velocity well over m.vd.
Figure 9.3: A trajectory with speed well under m.v for ensuring safety.
The overall ETCS system has two top-level components acting in parallel,
namely Tr (for train) and rbc (for RBC). The output of the component Tr is
the triple tr and the output of the component rbc is the triple m as defined. The
component Tr can be conceived as a sequential composition of components spd
(speed monitoring unit), atp (automatic train protection unit) and drive (the
engine powertrain).
The component spd checks whether the current train velocity tr.v is less than
the recommended velocity m.v in the current MA (m). If so, it allows
the train to have any acceleration/deceleration value (tr.a) ranging from −b to
A. On the other hand, if spd finds the current train velocity tr.v to be greater
than the recommended velocity m.v, it forces the train to decelerate by choosing
tr.a to be anything between −b and 0. The component spd also performs the
job of asking the rbc for MA extensions whenever the train is within a distance
ST from the end of its MA.
The component atp checks whether the train has reached the safe braking
distance (for the present velocity) from the end of the MA. If so, the atp
applies full brakes by setting tr.a to '−b'.
Depending on the acceleration value set by spd and atp, the component drive
updates the distance and the velocity parameters (tr.p and tr.v, respectively) of
the train for ǫ units of time.
9.4 A TSM Actor Model for ETCS
A TSM model for the ETCS system is shown in Fig 9.4.

Figure 9.4: TSM model for a) the ETCS system and b) the component Train

Let the TSM actors for
the train and the rbc be respectively denoted by Tr and rbc (same as the com-
ponent names given previously). The overall Kleene expression for the system is
given by
sys = (Tr + rbc)∗ (9.3)
where Tr = spd · atp · drive. The actors spd and atp are discrete actors while
drive is a continuous actor. The Kleene expression (Tr + rbc) is referred to as
one round of execution of sys; it means application of concurrent composition
of the actors Tr and rbc once and their completion. Completion of the actor Tr
involves completion of the constituent actors spd, atp and drive in sequence.
For the actor sys, there is no input signal (i.e. a closed system) and the
set O of output variables is given by O = {tr, tr1, tr2, m, rbc.msg, talk, talk1}.
However, for representational convenience, we will use tri to denote all three
variables tri.p, tri.v and tri.a together, 1 ≤ i ≤ 2, and similarly, for tr. Also, all
the MA related variables, i.e., m.vd, m.d and m.v are denoted together as m.
The discrete actor spd acts as an identity actor for the velocity component
tr.v and the position component tr.p (both continuous signals) of tr and keeps
them same in tr1. The acceleration component tr1.a in tr1 is randomly assigned
some value between the maximum deceleration −b and the maximum acceleration
A, provided the current speed tr.v is not more than the recommended speed m.v.
In case the current speed exceeds m.v, the actor spd chooses to apply brakes
by assigning tr1.a an acceleration value between −b and 0. Another important
task which spd performs is that when the train position is within a pre-defined
distance ST from the end of the MA (at distance m.d from the start of the MA),
it sets a Boolean signal talk1 to '1'. We treat the parameter ST as an abstraction
in the present model.
Before delving into the functional description of the component actors, we
define a predicate last-defined (ld in short) of arity 4 as follows.
last-defined(σ, τ, v, τ1) =
τ1 ≤ τ
∧ σ(v)(τ1) ≠ ⊥
∧ ∀τ′[τ1 < τ′ ≤ τ ⇒ σ(v)(τ′) = ⊥]
The predicate last-defined(σ, τ, v, τ1) indicates that the behaviour σ extends
up to τ and the variable v is defined in σ up to and including τ1 ≤ τ. Note
that for a given σ, v and τ, the predicate ld(σ, τ, v, τ1) will always be satisfied
by a unique tag τ1. For variable triplets like tr, m etc., the predicate will actu-
ally denote a conjunction of predicates involving the component variables e.g.,
ld(σ, τ, tr, τ1) = ld(σ, τ, tr.p, τ1) ∧ ld(σ, τ, tr.v, τ1) ∧ ld(σ, τ, tr.a, τ1). The functional
description of spd is as follows³.
spd(στ ) =
σ(tr1.p)(τ) := σ(tr.p)(τ)
σ(tr1.v)(τ) := σ(tr.v)(τ)
∀τ1, τ2
If [ld(σ, τ, tr.v, τ1) ∧ ld(σ, τ,m.v, τ2)]
then
If [σ(tr.v)(τ1) ≤ σ(m.v)(τ2)]
then σ(tr1.a)(τ |R, τ |N + 1) := [−b, A] //assigned nondeterministically
else
σ(tr1.a)(τ |R, τ |N + 1) := [−b, 0] //assigned nondeterministically
∀τ1, τ2
If [ld(σ, τ, tr.p, τ1) ∧ ld(σ, τ, m.d, τ2) ∧ {(σ(m.d)(τ2) − σ(tr.p)(τ1)) ≤ ST}]
then σ(talk1)(τ |R, τ |N + 1) := 1⁴
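The last-defined lookup and the decision logic of spd above can be cross-checked with a small executable sketch. Python is used purely for illustration; the dictionary encoding of a behaviour σ (variable → tag → value), the function names and the numeric values are our own simplification of the tagged-signal machinery, not part of the thesis model:

```python
import random

def last_defined(sigma, v, tau):
    """ld(σ, τ, v, τ1): the unique largest tag τ1 <= τ at which signal v
    is defined in behaviour σ; tags are (real, index) pairs compared
    lexicographically."""
    tags = [t for t in sigma[v] if t <= tau]
    return max(tags) if tags else None

def spd(sigma, tau, b, A, ST):
    """One firing of spd at tag τ = (r, n): pass tr.p, tr.v through to
    tr1, choose tr1.a non-deterministically at (r, n + 1), and raise
    talk1 once the train is within ST of the end of the MA."""
    r, n = tau
    sigma["tr1.p"][tau] = sigma["tr.p"][tau]
    sigma["tr1.v"][tau] = sigma["tr.v"][tau]
    t1, t2 = last_defined(sigma, "tr.v", tau), last_defined(sigma, "m.v", tau)
    if sigma["tr.v"][t1] <= sigma["m.v"][t2]:
        sigma["tr1.a"][(r, n + 1)] = random.uniform(-b, A)    # any value in [-b, A]
    else:
        sigma["tr1.a"][(r, n + 1)] = random.uniform(-b, 0.0)  # forced to decelerate
    t1, t2 = last_defined(sigma, "tr.p", tau), last_defined(sigma, "m.d", tau)
    if sigma["m.d"][t2] - sigma["tr.p"][t1] <= ST:
        sigma["talk1"][(r, n + 1)] = 1

sigma = {x: {} for x in
         ["tr.p", "tr.v", "tr1.p", "tr1.v", "tr1.a", "m.v", "m.d", "talk1"]}
sigma["tr.p"][(0.0, 0)] = 90.0   # 10 units from the end of a 100-unit MA
sigma["tr.v"][(0.0, 0)] = 20.0
sigma["m.v"][(0.0, 0)] = 25.0
sigma["m.d"][(0.0, 0)] = 100.0
spd(sigma, (0.0, 0), b=1.0, A=2.0, ST=50.0)
```

After this firing, tr1.p and tr1.v are copies of tr.p and tr.v at the same tag, tr1.a is defined at the next index with a value in [−b, A], and talk1 is raised since the remaining distance (10) is within ST.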
Thus, the actor spd decides (i) the acceleration ensuring that the resulting ve-
locity is not above m.v and (ii) when to negotiate (“talk”) with the rbc for MA
extension. From our functional description, two constraints may be noted. The
actor spd expects to identify a unique tag τ1 such that the signals on both
tr.v and tr.p are defined exactly up to τ1. Since the tr inputs of spd are provided
by drive, we will later need to design the actor drive accordingly. Similarly, the
actor spd expects to identify a unique tag τ2 such that the signals on both m.v
and m.d are defined exactly up to τ2. Since the m inputs of spd are provided by
rbc, we will later need to design the actor rbc accordingly.
The discrete actor atp acts on the modified train information (signal tuple)
³ Since the system is closed, none of the actors have any external inputs. Thus, all the component actors of sys will have only one behaviour function as input argument.
⁴ Also observe that our specification of spd is a bit different from the initial schema given in section 9.2.1 for specifying discrete actors, the discrepancy being in the 'if' clauses which were absent in the initial schema. Note that we can formulate appropriate test conditions using each of the 'if' clauses (and their complements using the 'else' clauses) and decompose spd as spd = p1 · spd1 + p2 · spd2 + · · · where the spdi's are exactly of the form of discrete actors as specified. This kind of decomposition has been carried out later for property verification where we will elaborate on this.
tr1 in order to produce a further modified train information tr2. It acts as an
identity actor for signals tr1.p and tr1.v. Depending upon tr.v and m.vd, atp
decides the last point (i.e. the distance m.d ± SB) up to which the maximum
braking force can be deferred from being applied. Unlike ST, SB is dynamically
computed by the atp using tr1.p and tr1.v as depicted in Fig. 9.2. In the ini-
tial model, we abstract out the details of computing SB as a function of other
variables. The crossing of the point m.d ± SB with respect to the current MA
is denoted by the condition σ(m.d)(τ2) − σ(tr1.p)(τ1) ≤ SB, where the variable
tr1.p is last defined at tag τ1 and the variable m.d is last defined at tag τ2. The
other condition for applying the maximum braking force is when the rbc signals
an emergency condition denoted by rbc.msg. The functional description of actor
atp is as follows.
atp(στ ) =
σ(tr2.p)(τ) := σ(tr1.p)(τ)
σ(tr2.v)(τ) := σ(tr1.v)(τ)
∀τ1, τ2
If [ld(σ, τ, tr1.p, τ1) ∧ ld(σ, τ,m.d, τ2) ∧ ld(σ, τ, rbc.msg, τ2)]
then
If
[{(σ(m.d)(τ2)− σ(tr1.p)(τ1)) ≤ SB} ∨ {σ(rbc.msg)(τ2) = emergency}]
then σ(tr2.a)(τ |R, τ |N + 1) := −b
else
σ(tr2.a)(τ) := σ(tr1.a)(τ)
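Stripped of the tagged-signal bookkeeping, the decision rule of atp reduces to a one-line choice over last-defined values. A hedged sketch (function name and numbers are ours, for illustration only):

```python
def atp_accel(dist_to_ma_end: float, SB: float, a_spd: float,
              b: float, emergency: bool) -> float:
    """Acceleration after atp: full brakes (-b) once the remaining
    distance to the end of the MA has fallen to SB, or on an rbc
    emergency message; otherwise spd's choice a_spd is passed through."""
    if dist_to_ma_end <= SB or emergency:
        return -b
    return a_spd

print(atp_accel(400.0, 450.0, 1.5, 1.0, False))  # -1.0: braking point crossed
print(atp_accel(400.0, 350.0, 1.5, 1.0, False))  # 1.5: spd's choice kept
print(atp_accel(400.0, 350.0, 1.5, 1.0, True))   # -1.0: emergency from the rbc
```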
The continuous actor drive acts on the modified train information tr2 and
produces signals on the triple tr by generating signals on the variables tr.p and
tr.v which get defined up to the tag (r + ǫ, n) assuming that they were defined
up to the tag (r, n) beforehand (∀n ∈ N). The actor drive essentially acts as
a delay actor for the variables tr2.a (assigned new values by spd and atp) and
talk1 (assigned new values by spd), thereby asserting that their values remain
the same throughout the ǫ real-time interval. Furthermore,
every instance of “talk” needs to be passed through drive in order to update the
(real) tags so that the rbc can attach proper tags to the newly issued MAs. The
functional description of the TSM actor for drive is given as,
drive(σ(kǫ,n)) =
∀ǫ′ ≤ ǫ,
σ(tr.a)(kǫ+ ǫ′, n) := σ(tr2.a)(kǫ, n)
σ(talk)(kǫ+ ǫ′, n) := σ(talk1)(kǫ, n)
σ(tr.v)(kǫ+ ǫ′, n) := σ(tr.v)(kǫ, n) + σ(tr2.a)(kǫ, n) · ǫ′
σ(tr.p)(kǫ + ǫ′, n) := σ(tr.p)(kǫ, n) + σ(tr2.v)(kǫ, n) · ǫ′ + (1/2) · σ(tr2.a)(kǫ, n) · ǫ′²
Observe that for variable tr.p, the highest order of derivative w.r.t. time is 2
(which is tr.a). Its value at the beginning of the interval [(kǫ, n), · · · , ((k+1)ǫ, n)]
is given by the value of tr2.a at (kǫ, n). Further, the value is asserted to remain
the same throughout the interval. Similarly, for the variable tr.v, the highest
order of derivative w.r.t. time is 1 (which is tr.a again).
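For scalar values, one ǫ-round of drive is exactly the constant-acceleration kinematic update described above. A minimal sketch (function name ours):

```python
def drive_step(p: float, v: float, a: float, eps: float):
    """One ε-round of drive: the acceleration chosen by spd/atp is held
    constant over the real interval of length ε, and position and
    velocity follow the constant-acceleration kinematic equations."""
    v_next = v + a * eps
    p_next = p + v * eps + 0.5 * a * eps ** 2
    return p_next, v_next

p, v = drive_step(100.0, 10.0, 2.0, 0.5)
print(p, v)  # 105.25 11.0
```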
Whenever the component rbc finds the signal talk to be '1', it may (1) assign
the train an MA extension or (2) send an emergency message to the train.
We do not incorporate the necessary conditions for these cases and keep them as
non-deterministic choices of the component rbc. We denote the domain of possible
meaningful⁵ values of m.vd, m.d and m.v by D1, D2 and D3, respectively. The
functional description of the actor rbc is as follows.
rbc(σ(r,n)) =
If
σ(talk)(r, n) = 1
then
σ(m.vd)(r, n+ 1) := {d1|d1 ∈ D1}
σ(m.d)(r, n+ 1) := {d2|d2 ∈ D2}
σ(m.v)(r, n+ 1) := {d3|d3 ∈ D3}
σ(rbc.msg)(r, n+ 1) := {emergency, !emergency}
9.4.1 The Notion of Controllability
Recall that the actor atp has to decide the distance SB ahead of m.d where it
must apply the full brakes so that the actor drive (train dynamics) can bring
down the train velocity to m.vd when the train position tr.p equals m.d.

⁵ In the refined model later on, we shall see how rbc can provide "meaningful" values for m.

A
state (dynamics) of the train is captured by the triple tr = 〈p, v, a〉. The state
〈m.d,m.vd,−b〉 = trl say, is a controllable state because from this state the train
can be brought to a halt before the rear of the train ahead. Also, any state from
which the train can be brought to the state trl is a controllable state. Of the
three state variables p, v and a, the position p and the velocity v are determined
by the actor drive depending upon the state variable a (acceleration) decided
by the actors spd and atp; of these last two actors, only atp uses the parameter
SB. So, in order to restrict the state space to controllable states, SB needs to
be constrained; the constraint on SB, termed accordingly as the controllability
constraint, is used by the atp to decide the acceleration over and above spd’s
decision. There are two aspects of controllability : (i) whether the present state is
controllable, i.e. if full brakes are applied, whether the state trl = 〈m.d,m.vd,−b〉
can be reached, and (ii) whether yet another round of drive can be permitted
with maximum acceleration A and still only a controllable state is reached. In
order to understand the need of the first aspect more clearly, let us consider two
consecutive rounds of the concatenated actor (atp · drive) · (spd · atp · drive).
The question that arises is if in the first round, atp has already examined that
even if the actor drive is permitted to work with maximum acceleration, the
first round will not reach an uncontrollable state, then why is it necessary to
examine the first aspect further? This is because before the second round of atp,
the rbc may have changed the m triple, thereby changing trl = 〈m.d,m.vd,−b〉
and consequently, the entire controllable state space. Hence, we designate the
first aspect as the rbc-controllability constraint and the second one as the drive-
controllability constraint.
9.4.2 KAT-based Encoding of Safety Properties
In our ETCS specification, ST and SB are present only as un-initialized pa-
rameters and the choice of the triple m by rbc is kept non-deterministic as an
abstract specification. For arriving at a model which guarantees safe opera-
tion of the overall system, one needs to appropriately constrain the two design
parameters; while the constraint on ST depends upon various parameters like
communication overhead between spd and rbc etc., SB is entirely concerned with
the train dynamics. In the following, we discuss how the safety property S given
by
tr.p ≥ m.d→ tr.v ≤ m.vd (S)
is used to derive the controllability constraints for SB.
The controllable region of an MA is depicted in Fig. 9.5. In the ETCS system
specification, the point where the train trajectory cuts the line PR of
slope −b drawn through the co-ordinates (m.d,m.vd) (point M) should lie at a
distance SB from the end of MA. Depending on the train dynamics, SB ∈ QM1
(i.e. SB ≥ 0) or SB ∈ M1R (SB < 0). The controllable region for the MA
m is the shaded region OLPR, where O is the point on the Y-axis denoting the
train’s velocity at the start of the snapshot, L is the point where the maximum
acceleration line (with slope = A) starting from O cuts the tr.v = m.v line, P is
the point where the maximum deceleration line RM (with slope = -b) cuts the
tr.v = m.v line. If the train is at any co-ordinate inside this region, it will be
able to come to a stop by applying full brakes before it crosses the point R which
is the safe rear-end of the preceding train.
Figure 9.5: Controllable region in m
As a starting point, the initial conditions for the train dynamics are identi-
fied such that the train never violates S under full braking (rbc-controllability).
For doing so, the following test conditions are formulated. Before going into a
detailed discussion of the test conditions, let us first fix our description style
for the test elements. Observe that the tests are all (sub-identity) actors having
actor-like representations. The outputs provided by them, however, have only
two possibilities: either the input signals are identically mapped to the output,
or the output lines contain only empty signals. Hence, in the test element
descriptions, we only provide the logical relation among the input signals.
1. The test p1 (also referred to as non-neg) denotes that the train has non-
negative velocity with tag τ1 and the rbc has assigned a non-negative value
to m.vd with tag τ2, and both the signals are undefined for all further tags
up to τ. In other words, when observed at τ, both tr.v and m.vd are
non-negative. Formally speaking, p1(στ ) = ∃τ1, τ2
[ld(σ, τ, tr.v, τ1) ∧ ld(σ, τ,m.vd, τ2) ∧ {σ(tr.v)(τ1) ≥ 0} ∧ {σ(m.vd)(τ2) ≥ 0}]
In general, for a test p(στ ), if ∃τ1, τ2 such that p(στ ) is true, we denote the
situation by (τ1, τ2) |= p(στ ).
2. The test p2 (or, not-yet) denotes the fact that the train position is less than
the distance m.d for applying maximum braking force i.e., p2(στ ) =
∃τ1, τ2 [ld(σ, τ, tr.p, τ1) ∧ ld(σ, τ,m.d, τ2) ∧ {σ(tr.p)(τ1) < σ(m.d)(τ2)}]
3. The test p3 (or, full-brake) denotes the application of full brakes i.e.,
p3(στ ) = ∃τ1 [ld(σ, τ, tr2.a, τ1) ∧ {σ(tr2.a)(τ1) = −b}]
4. The test p4 (or, surely-now) denotes the fact that the train position is not
less than the recommended distance m.d for applying maximum braking
force which means that the antecedent of S is satisfied i.e., p4(στ ) =
∃τ1, τ2 [ld(σ, τ, tr.p, τ1) ∧ ld(σ, τ,m.d, τ2) ∧ {σ(tr.p)(τ1) ≥ σ(m.d)(τ2)}]
5. The test p5 (or, speed-below) denotes the fact that the speed of the train
is not greater than the recommended speed m.vd which effectively means
that the consequent of S is satisfied i.e., p5(στ ) =
∃τ1, τ2 [ld(σ, τ, tr.v, τ1) ∧ ld(σ, τ,m.vd, τ2) ∧ {σ(tr.v)(τ1) ≤ σ(m.vd)(τ2)}]
Further, the assertion that every run of the train with full braking satisfies S
can be encoded in KAT as,
∀σ ∈ ΣO, ∀τ ∈ T [{p1 · ∀n ∈ N [(p2 · p3 · drive)n] · (p4 ⊑A p5)} (στ )] (S ′)
The assertion S′ translates to the following logical statement. If the train has
a non-negative velocity and the rbc assigns a non-negative value to m.vd (p1)
and the train position is still less than m.d (p2) and (still) full brake is applied
consistently (p3), then (eventually, i.e. after n rounds of execution) when the
train position is more than m.d (p4), the train speed is ensured to be less than or
equal to m.vd (p5).⁶ It may be observed that KAT expressions having the same
structure as S′ comprise two subparts: a context (C say), which in this case is
p1 · ∀n ∈ N[(p2 · p3 · drive)n], and a post-condition p+, which in this case is
p4 ⊑A p5. Thus, S′ = C · p+. Next we attempt to discover the parameter
constraint which needs to hold for assertion S′ to be true.
The next proposition states that the velocity σ(tr.v)(τ1) at position σ(tr.p)(τ1)
is reducible to σ(m.vd)(τ2) within the distance σ(m.d)(τ2)− σ(tr.p)(τ1) with full
brakes. This will be used later to characterize the controllable states of the train.
Proposition 9.4.1. Let p6 be the test condition given by,
p6(στ ) =
∃τ1, τ2{
ld(σ, τ, tr, τ1) ∧ ld(σ, τ, m, τ2)
∧ σ(tr.v)(τ1)² − σ(m.vd)(τ2)² ≤ 2b(σ(m.d)(τ2) − σ(tr.p)(τ1))}.
We have,
∀σ ∈ ΣO, τ ∈ T [{p6 ⊑A (p1 · ∀n ∈ N[(p2 · p3 · drive)n] · (p4 ⊑A p5))} (στ )]
or, p6 ⇒ S′.⁷
Proof. Let (τ1 = (r, k), τ2) |= p1(στ ). After n rounds of execution of drive, the
signals over tr.v and tr.p get defined up to the tag (r + nǫ, k) = τ′ say, and for
all the n rounds, the last defined value of acceleration tr2.a is '−b'. Thus, after
n rounds of execution of drive,
σ(tr.v)(r + nǫ, k) = σ(tr.v)(r, k) − bnǫ (9.4)
σ(tr.p)(r + nǫ, k) = σ(tr.p)(r, k) + σ(tr.v)(r, k) · nǫ − (1/2)bn²ǫ² (9.5)
⁶ In other words, from the definition of sequential composition (left to right) of TSM actors, note that any signal στ, which satisfies p1, p2 and p3 and is then acted upon by drive to generate some σ1τ′, will again be acted upon by drive, if it satisfies p2, p3. If this continues for n iterations and then the resulting signal satisfies p4, then it satisfies p5.
⁷ In later propositions, we may simply write: p6 ⊑A (p1 · ∀n ∈ N[(p2 · p3 · drive)n] · (p4 ⊑A p5))
Now, following the context C = p1 · (p2 · p3 · drive)n,
((r + nǫ, k), τ2) |= p4(στ′) ⇒ σ(tr.p)(r + nǫ, k) ≥ σ(m.d)(τ2) (9.6)
((r + nǫ, k), τ2) |= p5(στ′) ⇒ σ(tr.v)(r + nǫ, k) ≤ σ(m.vd)(τ2) (9.7)
From Eq. 9.6,
σ(tr.p)(r + nǫ, k) ≥ σ(m.d)(τ2)
or, σ(tr.p)(r, k) + σ(tr.v)(r, k) · nǫ − (1/2)bn²ǫ² ≥ σ(m.d)(τ2)
· · · from eq. 9.4 and eq. 9.5
or, (σ(tr.v)(r, k) − bnǫ)² ≤ σ(tr.v)(r, k)² + 2bσ(tr.p)(r, k) − 2bσ(m.d)(τ2)
· · · multiplying both sides by −2b and then adding σ(tr.v)(r, k)² to both sides
or, (σ(tr.v)(r, k) − bnǫ)² ≤ σ(m.vd)(τ2)² + (σ(tr.v)(r, k)² − σ(m.vd)(τ2)²)
− 2b(σ(m.d)(τ2) − σ(tr.p)(r, k))
· · · add and subtract σ(m.vd)(τ2)² in the R.H.S. of ≤
or, (σ(tr.v)(r, k) − bnǫ)² − σ(m.vd)(τ2)² ≤ σ(tr.v)(r, k)² − σ(m.vd)(τ2)²
− 2b(σ(m.d)(τ2) − σ(tr.p)(r, k)) (9.8)
From Eq. 9.7,
σ(tr.v)(r + nǫ, k) ≤ σ(m.vd)(τ2)
or, (σ(tr.v)(r, k) − bnǫ) − σ(m.vd)(τ2) ≤ 0
· · · by applying eq. 9.4 and shifting σ(m.vd)(τ2) to the L.H.S. of ≤ (9.9)
Hence, for S′ = C · p+ = C · (p4 ⊑A p5) to hold, we require (9.8 ⇒ 9.9) i.e.,
(σ(tr.v)(r, k) − bnǫ)² − σ(m.vd)(τ2)² ≤ σ(tr.v)(r, k)² − σ(m.vd)(τ2)²
− 2b(σ(m.d)(τ2) − σ(tr.p)(r, k))
⇒ (σ(tr.v)(r, k) − bnǫ) − σ(m.vd)(τ2) ≤ 0 (9.10)
Also note that,
(τ1 = (r, k), τ2) |= p6(στ ) ⇒ (σ(tr.v)(r, k)² − σ(m.vd)(τ2)²)
≤ 2b(σ(m.d)(τ2) − σ(tr.p)(r, k))
⇒ (σ(tr.v)(r, k)² − σ(m.vd)(τ2)²)
− 2b(σ(m.d)(τ2) − σ(tr.p)(r, k)) ≤ 0 (9.11)
Thus, for (τ1 = (r, k), τ2) |= p6(στ ), we have,
(σ(tr.v)(r, k)² − σ(m.vd)(τ2)²) − 2b(σ(m.d)(τ2) − σ(tr.p)(r, k)) ≤ 0 from Eq. 9.11.
Substituting this inequality in the antecedent of Eq. 9.10, we indeed arrive at the
consequent given by (σ(tr.v)(r, k) − bnǫ) − σ(m.vd)(τ2) ≤ 0.
Hence, the truth of p6 implies the truth of S′, i.e. Eq. 9.10.
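Proposition 9.4.1 lends itself to a numeric sanity check: pick a state, test p6, then iterate Eqs. (9.4)/(9.5) under full braking and confirm that p4 never holds without p5. The following sketch does this under assumed numeric values (ǫ = 0.01; all names are ours, for illustration only):

```python
def satisfies_p6(v, vd, p, d, b):
    """Test p6: v^2 - vd^2 <= 2b(d - p)."""
    return v * v - vd * vd <= 2.0 * b * (d - p)

def check_full_braking(v, vd, p, d, b, eps=0.01, max_n=10**6):
    """Iterate Eqs. (9.4)/(9.5) under full brakes (-b) and check that
    whenever p4 holds (position >= m.d), p5 holds too (velocity <= m.vd)."""
    for n in range(1, max_n):
        v_n = v - b * n * eps                             # Eq. (9.4)
        p_n = p + v * n * eps - 0.5 * b * (n * eps) ** 2  # Eq. (9.5)
        if p_n >= d and v_n > vd:
            return False        # p4 holds but p5 fails: S is violated
        if v_n <= 0.0:
            return True         # train stopped without violating S
    return True

# A p6-satisfying state stays safe; a p6-violating one does not.
print(satisfies_p6(20.0, 10.0, 0.0, 200.0, 1.0),
      check_full_braking(20.0, 10.0, 0.0, 200.0, 1.0))  # True True
print(satisfies_p6(30.0, 10.0, 0.0, 200.0, 1.0),
      check_full_braking(30.0, 10.0, 0.0, 200.0, 1.0))  # False False
```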
Note that from proposition 9.4.1, we have the following corollary.
Corollary 9.4.2. p6 · p1 · [∀n ∈ N(p2 · p3 · drive)n] · (p4 ⊑A p5).
In other words, starting with the sufficient condition p6 of S ′ followed by the
context p1 · [∀n ∈ N(p2 · p3 · drive)n], the post-condition (p4 ⊑A p5) of S ′ is
satisfied.
Proposition 9.4.1 provides a condition that tr must satisfy vis-à-vis m, thus
leading to safe train dynamics for every run of tr in full braking mode. Thus, we
define the controllable state of a train (i.e., a combination of component values of
tr and m) as one in which both the test conditions p1 and p6 are satisfied. Hence,
the test condition pc characterizing the controllable state is given by pc = p1 · p6.
Thus,
pc(στ ) =
∃τ1, τ2
ld(σ, τ, tr, τ1) ∧ ld(σ, τ, m, τ2)
∧ σ(tr.v)(τ1) ≥ 0 ∧ σ(m.vd)(τ2) ≥ 0
∧ σ(tr.v)(τ1)² − σ(m.vd)(τ2)² ≤ 2b(σ(m.d)(τ2) − σ(tr.p)(τ1))
(pc)
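Over last-defined values, pc is a simple arithmetic predicate. A sketch (function name and numbers are ours):

```python
def pc(v, vd, p, d, b):
    """pc = p1 · p6 on last-defined values: non-negative speeds and
    v^2 - vd^2 <= 2b(d - p), i.e. full brakes from (p, v) reach vd
    no later than the end of the MA at distance d."""
    return v >= 0 and vd >= 0 and v * v - vd * vd <= 2.0 * b * (d - p)

print(pc(20.0, 10.0, 0.0, 200.0, 1.0))  # True:  300 <= 400
print(pc(30.0, 10.0, 0.0, 200.0, 1.0))  # False: 800 >  400
```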
A desirable property for rbc would be that it must only issue such MAs
which retain the train dynamics within the controllable range. A situation is
demonstrated in Fig 9.6.

Figure 9.6: Assignment of new MA (m′ and m′′).

Let the currently assigned MA for the train be m, which
may get changed to either m′ or m′′ at co-ordinate O′. Observe from the figure
that the assignment of a new MA m′ keeps the train inside the controllable region
by ensuring the prevention of collision by applying full brakes the moment the
train crosses the distance m′.d with velocity m′.vd at co-ordinate M ′. However,
consider the case where the rbc assigns the MA m′′. In such a case, after the
change of MA to m′′ at O′, the train may traverse the dotted trajectory up to
the point L′′ and attain the maximum recommended speed m′′.v and then apply
full brakes to ensure that the train crosses the distance m′′.d with velocity m′′.vd
at co-ordinate M ′′. The situation eventually leads to a collision as is evident
from the figure.
The above observation necessitates a requirement that, given that the pre-
vious m set by the rbc was controllable, the newly asserted m should preserve
controllability, i.e., even with the new m, the test condition pc needs to be sat-
isfied. The following proposition provides such a constraint which ensures this
rbc-controllability criterion.
Proposition 9.4.3. (rbc controllability): Let pm be a test condition defined as
follows.
pm(στ ) =
∃τ1, ∃τ2 ≤ τ
ld(σ, τ, m, τ2) ∧ τ1 < τ2 ∧ σ(m.vd)(τ1) ≠ ⊥ ∧ σ(m.d)(τ1) ≠ ⊥
∧ ∀τ′[τ1 < τ′ < τ2 ⇒ σ(m.vd)(τ′) = ⊥ ∧ σ(m.d)(τ′) = ⊥]
∧ σ(m.vd)(τ1) ≥ 0 ∧ σ(m.vd)(τ2) ≥ 0
∧ σ(m.vd)(τ1)² − σ(m.vd)(τ2)² ≤ 2b(σ(m.d)(τ2) − σ(m.d)(τ1))
(pm)
where τ2 is the tag value at which a new MA has been made available by the
rbc to tr and τ1 is the tag value where the previous MA was made available.
Using pm, the rbc controllability can be defined as follows: If the train is residing
in a controllable state (thus satisfying pc), then even with a change of MA due to
the rbc, the train dynamics stays in a controllable state (again satisfying pc with
the new MA information) iff the test condition pm is satisfied.
This is encoded by the KAT formula, (pc · rbc) · (pm ≡A pc).
Proof. The forward implication, pc · rbc · (pm ⊑A pc) can be proved as follows.
Let (τ, τ2) |= pc(στ ), which implies,
σ(tr.v)(τ ) ≥ 0
∧ σ(m.vd)(τ2) ≥ 0
∧ σ(tr.v)(τ )² − σ(m.vd)(τ2)² ≤ 2b(σ(m.d)(τ2) − σ(tr.p)(τ ))
(9.12)
Recall that pc = p1 · p6 and note that the first two conjuncts in 9.12 originate from
(τ, τ2) |= p1(στ ) and the last one from (τ, τ2) |= p6(στ ). Let the rbc generate new
MA information for some tag τ′ such that (τ2, τ′) |= pm(στ′) where στ′ = rbc(στ ).
Hence from pm(στ′),
τ2 < τ′
∧ σ(m.vd)(τ2) ≥ 0
∧ σ(m.vd)(τ′) ≥ 0
∧ σ(m.vd)(τ2)² − σ(m.vd)(τ′)² ≤ 2b(σ(m.d)(τ′) − σ(m.d)(τ2))
(9.13)
We need to prove that if the train dynamics satisfies pc followed by a new choice
of MA by rbc satisfying pm, then the train dynamics also satisfies pc for the new
MA information. In other words,
(τ, τ2) |= pc(στ ) ∧ (τ2, τ′) |= pm(rbc(στ )) ⇒ (τ, τ′) |= pc(rbc(στ )).
Figure 9.7: The rbc assigns a new MA at tag τ ′
From the last conjunct of 9.12 we have,
σ(tr.v)(τ )² − σ(m.vd)(τ2)² ≤ 2b(σ(m.d)(τ2) − σ(tr.p)(τ ))
or, σ(tr.v)(τ )² + 2bσ(tr.p)(τ ) ≤ 2bσ(m.d)(τ2) + σ(m.vd)(τ2)²
Similarly, from the last conjunct of 9.13 we have,
σ(m.vd)(τ2)² − σ(m.vd)(τ′)² ≤ 2b(σ(m.d)(τ′) − σ(m.d)(τ2))
or, σ(m.vd)(τ2)² + 2bσ(m.d)(τ2) ≤ σ(m.vd)(τ′)² + 2bσ(m.d)(τ′)
Combining both, we get by transitivity of ≤,
σ(tr.v)(τ )² + 2bσ(tr.p)(τ ) ≤ σ(m.vd)(τ′)² + 2bσ(m.d)(τ′)
or, σ(tr.v)(τ )² − σ(m.vd)(τ′)² ≤ 2b(σ(m.d)(τ′) − σ(tr.p)(τ ))
or, (τ, τ′) |= p6(rbc(στ ))
or, (τ, τ′) |= pc(rbc(στ )) · · · since pc = p1 · p6 and p1 is always true after
initialization.
The backward implication, pc · rbc · (pc ⊑A pm), can be proved similarly by
proving, (τ, τ2) |= pc(στ ) ∧ (τ, τ′) |= pc(rbc(στ )) ⇒ (τ2, τ′) |= pm(rbc(στ )).
Note that,
(τ, τ2) |= pc(στ ) ⇒ σ(tr.v)(τ )² − σ(m.vd)(τ2)² ≤ 2b(σ(m.d)(τ2) − σ(tr.p)(τ ))
(τ, τ′) |= pc(rbc(στ )) ⇒ σ(tr.v)(τ )² − σ(m.vd)(τ′)² ≤ 2b(σ(m.d)(τ′) − σ(tr.p)(τ ))
Subtracting the first inequality from the second, we have,
σ(m.vd)(τ2)² − σ(m.vd)(τ′)² ≤ 2b(σ(m.d)(τ′) − σ(m.d)(τ2)) ⇒ (τ2, τ′) |= pm(rbc(στ ))
since the other necessary conditions are trivially satisfied.
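The transitivity argument of the proof can be replayed numerically: if the current state satisfies pc and a candidate MA change satisfies pm, the state satisfies pc for the new MA as well. A sketch with assumed values (all names and numbers are ours):

```python
def pc(v, vd, p, d, b):
    """Controllable state: p1 (non-negative speeds) and p6 (enough
    braking distance to reach vd before the end of the MA)."""
    return v >= 0 and vd >= 0 and v * v - vd * vd <= 2.0 * b * (d - p)

def pm(vd_old, d_old, vd_new, d_new, b):
    """MA-change constraint of Proposition 9.4.3."""
    return (vd_old >= 0 and vd_new >= 0
            and vd_old ** 2 - vd_new ** 2 <= 2.0 * b * (d_new - d_old))

b, v, p = 1.0, 20.0, 0.0
vd_old, d_old = 10.0, 200.0     # current MA: the state is controllable
vd_new, d_new = 5.0, 260.0      # candidate MA extension from the rbc

print(pc(v, vd_old, p, d_old, b))           # True
print(pm(vd_old, d_old, vd_new, d_new, b))  # True
print(pc(v, vd_new, p, d_new, b))           # True: controllability preserved
print(pm(vd_old, d_old, 5.0, 210.0, b))     # False: this extension is rejected
```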
The constraint pm encoded in proposition 9.4.3 characterizes the controlla-
bility of the train after one round of execution of the rbc. Similarly, we need to
find a suitable constraint which will characterize the controllability of the train
after one round of execution of drive. For a worst-case estimate, we assume that,
starting in a controllable state, the train is running with the maximum acceleration
A. In such a scenario, the condition for preservation of controllability is
encoded by the following proposition.
Proposition 9.4.4. (drive controllability): Let pd be a test condition given by,
pd(στ ) =
∃τ1, τ2
ld(σ, τ, tr, τ1) ∧ ld(σ, τ, m, τ2)
∧ σ(m.d)(τ2) − σ(tr.p)(τ1) ≥ (σ(tr.v)(τ1)² − σ(m.vd)(τ2)²)/(2b) + (A/b + 1)((A/2)ǫ² + ǫ · σ(tr.v)(τ1))
(pd)
Using pd, the drive controllability can be defined as follows: Starting from a
controllable region with maximum acceleration (A), an iteration of drive gets the
train into a controllable region again if the test condition pd is satisfied preceding
the iteration of drive. This is encoded by the KAT formula,
[pd ⊑A ((pc · p · drive) · (1A ⊑A pc))], where p is a test condition given by,
p(στ ) = ∃τ1[ld(σ, τ, tr.v, τ1) ∧ σ(tr.a)(τ1) = A]
which ensures maximum acceleration.
Proof. The situation has been depicted in Fig. 9.8. Note that the train at
position P has velocity σ(tr.v)(τ1) at the distance σ(tr.p)(τ1), denoted by tr.v1
and tr.p1 respectively.

Figure 9.8: Drive controllability: worst case estimate

Before proceeding with the proof, let us note that
the R.H.S. of the inequality in the last conjunct of pd is basically the total
distance traveled by the train for reducing the velocity from σ(tr.v)(τ1) to the
safe velocity σ(m.vd)(τ2) for leaving the MA. In the trajectory as shown in Fig
9.8, we consider an extra cycle of drive starting at P with the maximum possible
acceleration ending at Q so that the distance traveled in the process is P1Q1. At
Q, drive should be able to apply full brakes and reach the velocity m.vd within
the distance Q1S1, where S1 (the end of MA) is at a distance m.d from the start
of MA (O1). Thus,
O1P1 + P1S1 ≤ σ(m.d)(τ2)
or, σ(tr.p)(τ1) + P1S1 ≤ σ(m.d)(τ2)
or, σ(m.d)(τ2)− σ(tr.p)(τ1) ≥ P1S1 (9.14)
where the equality takes care of the case with maximum acceleration. Thus,
comparing Eq. 9.14 with the R.H.S. of the inequality in the last conjunct of pd,
i.e.,
σ(m.d)(τ2) − σ(tr.p)(τ1) ≥ (σ(tr.v)(τ1)² − σ(m.vd)(τ2)²)/(2b) + (A/b + 1)((A/2)ǫ² + ǫ · σ(tr.v)(τ1)),
we get a desired value of P1S1. Next we show how P1S1 is actually the
R.H.S. of the inequality in the last conjunct of pd, which consists of the two terms
(σ(tr.v)(τ1)² − σ(m.vd)(τ2)²)/(2b) and (A/b + 1)((A/2)ǫ² + ǫ · σ(tr.v)(τ1)).
The term (A/b + 1)((A/2)ǫ² + ǫ · σ(tr.v)(τ1)) has the following components.
1) The term ((A/2)ǫ² + ǫ · σ(tr.v)(τ1)) denotes the segment P1Q1.
2) The term (A/b)((A/2)ǫ² + ǫ · σ(tr.v)(τ1)) accounts for the additional distance
traveled (with deceleration 'b') in order to reduce the speed down to σ(tr.v)(τ1)
after accelerating with 'A' for ǫ time units. This distance is denoted in the figure
by the segment Q1R1. Let the train velocity at Q be tr.vQ. Then we have,
tr.vQ² = tr.v1² + 2A · P1Q1 = tr.v1² + 2b · Q1R1. Thus,
Q1R1 = (A/b)P1Q1 = (A/b)((A/2)ǫ² + ǫ · σ(tr.v)(τ1)).
The other term (σ(tr.v)(τ1)² − σ(m.vd)(τ2)²)/(2b) is the distance traversed in
reaching the velocity σ(m.vd)(τ2) from the velocity σ(tr.v)(τ1), which is basically
the segment R1S1. Thus we have established the motivation behind test pd.
Next, we proceed with the proof of the proposition as follows.
with the proof of the proposition as follows.
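As an aside, the distance decomposition P1S1 = P1Q1 + Q1R1 + R1S1 motivating pd can be checked numerically; the sketch below is illustrative only, and all parameter values (A, b, ε, velocities) are chosen arbitrarily rather than taken from the model.

```python
# Numeric sanity check of the worst-case distance decomposition behind the
# test pd: P1S1 = P1Q1 + Q1R1 + R1S1.  All parameter values are illustrative.
A, b, eps = 0.5, 1.2, 0.25     # max acceleration, max deceleration, cycle time
v1, vd = 10.0, 3.0             # sigma(tr.v)(tau1) and sigma(m.vd)(tau2)

P1Q1 = A * eps**2 / 2 + eps * v1        # one cycle at full acceleration A
vQ = v1 + A * eps                        # velocity reached at Q
Q1R1 = (vQ**2 - v1**2) / (2 * b)         # brake back down to v1
R1S1 = (v1**2 - vd**2) / (2 * b)         # brake from v1 down to vd

# R.H.S. of the last conjunct of pd
bound = (v1**2 - vd**2) / (2 * b) + (A / b + 1) * (A * eps**2 / 2 + eps * v1)

assert abs((P1Q1 + Q1R1 + R1S1) - bound) < 1e-9   # P1S1 equals the bound
assert abs(Q1R1 - (A / b) * P1Q1) < 1e-9          # Q1R1 = (A/b) * P1Q1
print("P1S1 =", P1Q1 + Q1R1 + R1S1)
```

The two assertions confirm exactly the two facts derived above: the bound is the sum of the three segments, and Q1R1 is (A/b) times P1Q1.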
Let it be the case that pc · p · drive · (1A ⊑A pc) holds. For any input στ ,
let there be tags τ = (r, k), r ∈ R, k ∈ N and τ2 such that pc holds initially, i.e.,
((r, k), τ2) |= pc(στ ). In that case, after drive acts under full acceleration A, pc
has to be true for tags τ′1 = (r + ε, k) and τ2. In other words,

((r, k), τ2) |= pc(στ ) ∧ ((r, k), τ2) |= p(στ ) ⇒ ((r + ε, k), τ2) |= pc(drive(στ ))
Figure 9.9: The drive actor extends the tagged signal for tr up to τ′1. In the figure, the input signal στ has tr last defined at tag τ = (r, n) and rbc last defined at τ2; the output signal στ′1 = drive(στ ) newly defines tr at τ′1 = (r + ε, n).
where drive(στ ) = σ(r+ε,k). The consequent (of ⊑A) implies,

σ(tr.v)(r + ε, k) ≥ 0 ∧ σ(m.vd)(τ2) ≥ 0
∧ σ(tr.v)(r + ε, k)² − σ(m.vd)(τ2)² ≤ 2b(σ(m.d)(τ2) − σ(tr.p)(r + ε, k)) (9.15)
The proof requires that,

((r, k), τ2) |= pd(στ ) ⇒ [ {((r, k), τ2) |= pc(στ ) ∧ ((r, k), τ2) |= p(στ )}
⇒ ((r + ε, k), τ2) |= pc(drive(στ ))]
The truth of the antecedent ((r, k), τ2) |= pd(στ ) implies,
σ(m.d)(τ2) − σ(tr.p)(r, k) ≥ [σ(tr.v)(r, k)² − σ(m.vd)(τ2)²]/(2b) + (A/b + 1)(Aε²/2 + ε·σ(tr.v)(r, k)) (9.16)
Expanding Eq. 9.16 we have,

[σ(tr.v)(r, k)² − σ(m.vd)(τ2)²]/(2b) + (A/b + 1)(Aε²/2 + ε·σ(tr.v)(r, k)) ≤ σ(m.d)(τ2) − σ(tr.p)(r, k)

or, σ(tr.v)(r, k)² − σ(m.vd)(τ2)² + A²ε² + 2Aε·σ(tr.v)(r, k) + 2bε·σ(tr.v)(r, k) + Abε²
≤ 2b·σ(m.d)(τ2) − 2b·σ(tr.p)(r, k) · · · (multiplying both sides by 2b)

or, σ(tr.v)(r, k)² − σ(m.vd)(τ2)² + A²ε² + 2Aε·σ(tr.v)(r, k)
≤ 2b·σ(m.d)(τ2) − 2b·σ(tr.p)(r, k) − 2bε·σ(tr.v)(r, k) − Abε²
· · · (subtracting 2bε·σ(tr.v)(r, k) + Abε² from both sides)

or, σ(tr.v)(r, k)² + A²ε² + 2Aε·σ(tr.v)(r, k) − σ(m.vd)(τ2)²
≤ 2b·σ(m.d)(τ2) − 2b·σ(tr.p)(r + ε, k)
· · · (substituting σ(tr.p)(r + ε, k) = σ(tr.p)(r, k) + ε·σ(tr.v)(r, k) + (1/2)Aε²)

or, σ(tr.v)(r + ε, k)² − σ(m.vd)(τ2)² ≤ 2b(σ(m.d)(τ2) − σ(tr.p)(r + ε, k))
· · · (substituting σ(tr.v)(r + ε, k) = σ(tr.v)(r, k) + Aε)

or, ((r + ε, k), τ2) |= pc(drive(στ )).
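The implication this chain establishes — that pd at tag (r, k) guarantees pc at (r + ε, k) after one drive cycle at full acceleration — can also be spot-checked numerically. The sketch below samples random states satisfying pd; the parameter values and sampling ranges are illustrative, not part of the model.

```python
import random

# Monte Carlo spot-check of the chain above: whenever pd holds at tag (r, k),
# pc holds at (r + eps, k) after drive acts with full acceleration A.
A, b, eps = 0.5, 1.2, 0.25
random.seed(0)
for _ in range(10000):
    v = random.uniform(0.0, 50.0)       # sigma(tr.v)(r, k)
    vd = random.uniform(0.0, 20.0)      # sigma(m.vd)(tau2)
    p = random.uniform(0.0, 1000.0)     # sigma(tr.p)(r, k)
    margin = (v**2 - vd**2) / (2*b) + (A/b + 1) * (A*eps**2/2 + eps*v)
    md = p + margin + random.uniform(0.0, 100.0)   # chosen so that pd holds
    p2 = p + v*eps + 0.5*A*eps**2       # position after one drive cycle
    v2 = v + A*eps                      # velocity after one drive cycle
    assert v2**2 - vd**2 <= 2*b*(md - p2) + 1e-9   # pc at (r + eps, k)
print("pd => pc after one drive step: 10000 random cases pass")
```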
Note that, similar to the implication of corollary 9.4.2 from proposition 9.4.1,
we have the following corollary in this case as an implication from proposition 9.4.4.

Corollary 9.4.5. pd · pc · p · drive · (1A ⊑A pc).

Since pd · pc · p · drive · (pc ⊑A 1A) is obvious in KAT⁸, we have,

pd · pc · p · drive · (1A ≡A pc). (9.17)
Also note that p is a test element indicating maximum acceleration (the worst
case for controllability), and it can easily be proved that even without enforcing
p (because no actor (spd or atp) assigns an acceleration greater than A) we may
have,

Corollary 9.4.6. pd · pc · drive · (1A ≡A pc)
Proof. From the proof of proposition 9.4.4, we have that
if (τ = (r, k), τ2) |= pd(στ ), then the inequality

σ(tr.v)(r, k)² − σ(m.vd)(τ2)² + A²ε² + 2Aε·σ(tr.v)(r, k)
≤ 2b·σ(m.d)(τ2) − 2b·σ(tr.p)(r, k) − 2bε·σ(tr.v)(r, k) − Abε²

holds, where the symbols are as defined previously. Since p is not enforced, the
train may have any acceleration A′ with the constraint that A′ ≤ A. Thus,

σ(tr.v)(r, k)² − σ(m.vd)(τ2)² + A²ε² + 2Aε·σ(tr.v)(r, k)
≤ 2b·σ(m.d)(τ2) − 2b·σ(tr.p)(r, k) − 2bε·σ(tr.v)(r, k) − Abε²

or, σ(tr.v)(r, k)² − σ(m.vd)(τ2)² + A′²ε² + (A² − A′²)ε² + 2A′ε·σ(tr.v)(r, k) + 2(A − A′)ε·σ(tr.v)(r, k)
≤ 2b·σ(m.d)(τ2) − 2b{σ(tr.p)(r, k) + ε·σ(tr.v)(r, k) + (1/2)A′ε²} + A′bε² − Abε²
· · · (adding and subtracting A′²ε² and 2A′ε·σ(tr.v)(r, k) in the L.H.S. and
adding and subtracting A′bε² in the R.H.S.)

or, σ(tr.v)(r + ε, k)² − σ(m.vd)(τ2)² + (A² − A′²)ε² + 2(A − A′)ε·σ(tr.v)(r, k)
≤ 2b{σ(m.d)(τ2) − σ(tr.p)(r + ε, k)} − (A − A′)bε²
· · · (substituting σ(tr.v)(r + ε, k)² = (σ(tr.v)(r, k) + εA′)²
= σ(tr.v)(r, k)² + 2A′ε·σ(tr.v)(r, k) + A′²ε² in the L.H.S. and substituting
σ(tr.p)(r + ε, k) = σ(tr.p)(r, k) + ε·σ(tr.v)(r, k) + (1/2)A′ε² in the R.H.S.)
⁸ Any test element is below 1A w.r.t. ⊑A.
or, σ(tr.v)(r + ε, k)² − σ(m.vd)(τ2)² ≤ 2b{σ(m.d)(τ2) − σ(tr.p)(r + ε, k)}
· · · (since A − A′ ≥ 0).

Hence, ((r + ε, k), τ2) |= pc(drive(στ )) and we have pd · pc · drive · (1A ⊑A pc).
Since pd · pc · drive · (pc ⊑A 1A) holds in KAT in general, we have corollary 9.4.6
proved.
Observe that in the above proof the fact that controllability (pc) holds initially
does not come into play. Hence we have,
Corollary 9.4.7. pd · drive · (1A ≡A pc)
We will use the lower bound (given by Eq. 9.16) for setting SB dynamically
by actor atp in a refined model as discussed below.
9.4.3 A Refined TSM Model for ETCS
Based on the parametric constraints defined previously, the refined TSM models
for atpr and rbcr actors are as follows. The suffix r is used for denoting the
refined actors. The actor atpr is a refinement over the actor atp (described in
section 9.4) in the sense that the quantity SB, which was simply represented
as an abstraction, is now replaced by a polynomial over the train parameters.
Also, the actor rbcr is a refinement over the actor rbc in the sense that rbcr is
constrained to assign MAs which preserve rbc-controllability.
atpr(στ ) =
σ(tr2.p)(τ) := σ(tr1.p)(τ)
σ(tr2.v)(τ) := σ(tr1.v)(τ)
∀τ1, τ2
If [ld(σ, τ, tr1.p, τ1) ∧ ld(σ, τ,m.d, τ2) ∧ ld(σ, τ, rbc.msg, τ2)]
then
If [ {σ(m.d)(τ2) − σ(tr1.p)(τ1) < [σ(tr1.v)(τ1)² − σ(m.vd)(τ2)²]/(2b) + (A/b + 1)(Aε²/2 + ε·σ(tr1.v)(τ1))}
∨ {σ(rbc.msg)(τ2) = emergency} ]
then σ(tr2.a)(τ|R, τ|N + 1) := −b
else σ(tr2.a)(τ) := σ(tr1.a)(τ)
rbcr(σ(r,n)) =
∀τ1
If [ld(σ, τ,m, τ1) ∧ σ(talk)(r, n) = 1]
then
〈σ(m.vd)(r, n + 1), σ(m.d)(r, n + 1), σ(m.v)(r, n + 1)〉 := {〈d1, d2, d3〉 |
d1 ∈ D1 ∧ d2 ∈ D2 ∧ d3 ∈ D3 ∧ σ(m.vd)(τ1)² − d1² ≤ 2b(d2 − σ(m.d)(τ1))}
σ(rbc.msg)(r, n + 1) := {emergency, !emergency}
Note that rbcr = rbc · pm.
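For intuition, the braking decision made inside atpr can be written as executable pseudocode. The function name, return convention and all parameter values below are illustrative only; returning None stands for the identity branch that passes the spd-chosen acceleration through.

```python
def atpr_decision(md, vd, p, v, rbc_msg, A, b, eps):
    """Acceleration assigned by the refined atp for the next cycle, or None
    to keep the acceleration chosen upstream (the identity branch)."""
    # R.H.S. of the braking test in atpr
    margin = (v**2 - vd**2) / (2 * b) + (A / b + 1) * (A * eps**2 / 2 + eps * v)
    if (md - p) < margin or rbc_msg == "emergency":
        return -b          # force full service braking
    return None            # pass the spd-chosen acceleration through

# Far from the end of the MA the upstream choice passes through;
# close to it (or on an emergency message) braking is forced.
print(atpr_decision(md=1000, vd=3, p=10, v=10, rbc_msg="ok", A=0.5, b=1.2, eps=0.25))
print(atpr_decision(md=50, vd=3, p=10, v=10, rbc_msg="ok", A=0.5, b=1.2, eps=0.25))
```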
Controllability: some derived results
With n = 0 in corollary 9.4.2 we have,

p6p1 · (p4 ⊑A p5)
or, pc · (p4 ⊑A p5) · · · (∵ p6p1 = p1p6; commutativity holds for test elements) (9.18)

Similarly, for n = 1,

p6p1p2p3 · drive · (p4 ⊑A p5)
or, pcp2p3 · drive · (p4 ⊑A p5) (9.19)

and in general, pc · (p2p3 · drive)ⁿ · (p4 ⊑A p5).
Some important laws of KA/KAT
We now discuss some important laws of KA/KAT which will be used later for
safety verification of ETCS.
Weakening Law : ∀a, a1 ∈ B and ∀b, d ∈ K, ab ≤ ad ∧ a1 ≤ a⇒ a1b ≤ a1d.
Multiplying both sides of ‘≤’ in ab ≤ ad by a1 we get (a1ab ≤ a1ad) = (a1b ≤ a1d)
since a1 and a are both tests and a1 ≤ a⇒ a1a = a1.
Monotonicity of ‘·’ over ‘≤’ : ∀a, b, c ∈ K, b ≤ c⇒ ab ≤ ac.
Decomposition Law : a ≤ c ∧ b ≤ d ⇒ a + b ≤ c + d, i.e., the simultaneous
truth of clauses a ≤ c and b ≤ d implies the truth of a+ b ≤ c + d.
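These laws hold in any KAT. As an illustration (not part of the thesis development), they can be checked concretely in the relational model of KAT, where elements are binary relations over a finite set, '+' is union, '·' is relational composition, tests are the subrelations of the identity, and x ≤ y means x ⊆ y; the relations b_rel and d_rel are arbitrary stand-ins for b and d.

```python
from itertools import combinations, product

U = range(3)
ID = frozenset((x, x) for x in U)          # the identity relation; tests live below it

def comp(r, s):
    """Relational composition r . s"""
    return frozenset((x, z) for (x, y) in r for (y2, z) in s if y == y2)

def powerset(s):
    s = list(s)
    return [frozenset(c) for n in range(len(s) + 1) for c in combinations(s, n)]

tests = powerset(ID)                        # tests = subrelations of the identity
b_rel = frozenset({(0, 1), (1, 2)})         # arbitrary relations standing in for b, d
d_rel = b_rel | {(2, 0)}

# Weakening law: ab <= ad and a1 <= a imply a1b <= a1d
for a in tests:
    for a1 in tests:
        if comp(a, b_rel) <= comp(a, d_rel) and a1 <= a:
            assert comp(a1, b_rel) <= comp(a1, d_rel)

# Decomposition law: a <= c and b <= d imply a + b <= c + d
rels = [frozenset(), b_rel, d_rel, ID]
for a, c, b2, d2 in product(rels, repeat=4):
    if a <= c and b2 <= d2:
        assert (a | b2) <= (c | d2)

print("weakening and decomposition laws hold in the relational model")
```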
9.5 Formal Property Verification using Kleene Algebra
The following proposition discusses the safety of the refined ETCS model.
Proposition 9.5.1. Starting from a controllable state, after a round of execution
of the refined ETCS system, if the train position exceeds the end of the MA
(m.d), then the train velocity is less than m.vd, i.e., pc · (Trr + rbcr) · (p4 ⊑A p5),
where Trr = spd · atpr · drive and the test conditions pc, p4 and p5 are as defined
previously. In other words, starting from a controllable state, a round of execution
of the refined ETCS system is safe.
Proof.
pc · (Trr + rbcr) · (p4 ⊑A p5)
⇐ pc · Trr · (p4 ⊑A p5) ∧ pc · rbcr · (p4 ⊑A p5) · · ·by decomposition law
(C1)
The second conjunct can be proved as follows.
pc · rbcr · (p4 ⊑A p5)
⇔ pc · rbc · pm · (p4 ⊑A p5) · · · since rbcr = rbc · pm
⇔ pc · rbc · pc · (p4 ⊑A p5) · · ·by proposition 9.4.3
⇐ pc · (p4 ⊑A p5) · · ·by monotonicity of ‘·’
which is true due to 9.18. Thus, the second conjunct in C1 holds true.
The proof of the first conjunct is as follows. Note that the actor spd may be
decomposed as spd = spd0 + c1 · spd1 + c2 · spd2 + c3 · spd3, where
spd0(στ ) =
{
σ(tr1.p)(τ) := σ(tr.p)(τ)
σ(tr1.v)(τ) := σ(tr.v)(τ)
}
c1(στ ) = ∃τ1, τ2 [ld(σ, τ, tr.v, τ1) ∧ ld(σ, τ,m.v, τ2) ∧ {σ(tr.v)(τ1) ≤ σ(m.v)(τ2)}]
spd1(στ ) = { σ(tr1.a)(τ|R, τ|N + 1) := [−b, A] } //assigned nondeterministically
c2(στ ) = ∃τ1, τ2 [ld(σ, τ, tr.v, τ1) ∧ ld(σ, τ,m.v, τ2) ∧ {σ(tr.v)(τ1) > σ(m.v)(τ2)}]
spd2(στ ) = { σ(tr1.a)(τ|R, τ|N + 1) := [−b, 0] } //assigned nondeterministically
c3(στ ) = ∃τ1, τ2 [ld(σ, τ, tr.p, τ1) ∧ ld(σ, τ,m.d, τ2) ∧ {σ(m.d)(τ2) − σ(tr.p)(τ1) ≤ ST}]
spd3(στ ) = { σ(talk1)(τ|R, τ|N + 1) := 1 }
Observe how the decomposition of spd works out. The identity maps are captured
by actor spd0, and each of the 'If' clauses leads to a test condition (ci, 1 ≤ i ≤ 3)
which acts as a guard (in sequential composition) for an actor (spdi, 1 ≤ i ≤ 3)
which does the tagged-value assignment. Further, it is worthwhile to
note that the 'else' clauses lead to test conditions which are complementary.
In the present situation, we have tests c1 and c2 which may appear to be non-
complementary since both are existentially quantified. However, note that for
any non-empty στ , there exist unique τ1, τ2 which satisfy one of c1 or c2. Thus,
∀στ ≠ σ⊥, [c1 + c2 ≡ 1A]. For στ = σ⊥, neither c1 nor c2 is satisfied. Thus, for
στ = σ⊥, c1 + c2 ≡ ⊥A. However, for στ = σ⊥, 1A(στ ) = ⊥A(στ ) = σ⊥. Thus, in
general, c1 + c2 ≡ 1A and hence c̄1 = c2.
Based on the above fact, we decompose actor atpr as,

atpr = atpr0 + d1 · atpr1 + d̄1 · atpr2, where,

atpr0(στ ) =
{
σ(tr2.p)(τ) := σ(tr1.p)(τ)
σ(tr2.v)(τ) := σ(tr1.v)(τ)
σ(talk2)(τ) := σ(talk1)(τ)
}
atpr1(στ ) = { σ(tr2.a)(τ|R, τ|N + 1) := −b }
atpr2(στ ) = { σ(tr2.a)(τ) := σ(tr1.a)(τ) }
d1(στ ) = ∃τ1, τ2, τ3
[ ld(σ, τ, tr1.p, τ1) ∧ ld(σ, τ,m.d, τ2) ∧ ld(σ, τ, rbc.msg, τ3)
∧ [ {σ(m.d)(τ2) − σ(tr1.p)(τ1) < [σ(tr1.v)(τ1)² − σ(m.vd)(τ2)²]/(2b) + (A/b + 1)(Aε²/2 + ε·σ(tr1.v)(τ1))}
∨ {σ(rbc.msg)(τ3) = emergency} ] ]
d̄1(στ ) = ∃τ1, τ2, τ3
[ ld(σ, τ, tr1.p, τ1) ∧ ld(σ, τ,m.d, τ2) ∧ ld(σ, τ, rbc.msg, τ3)
∧ {σ(m.d)(τ2) − σ(tr1.p)(τ1) ≥ [σ(tr1.v)(τ1)² − σ(m.vd)(τ2)²]/(2b) + (A/b + 1)(Aε²/2 + ε·σ(tr1.v)(τ1))}
∧ {σ(rbc.msg)(τ3) ≠ emergency} ]
Using this decomposed KAT expression, we have

atpr = atpr0 + d1 · atpr1 + d̄1 · atpr2
= (d1 + d̄1) · atpr0 + d1 · atpr1 + d̄1 · atpr2
= d1 · (atpr0 + atpr1) + d̄1 · (atpr0 + atpr2)
= d1 · atp′r1 + d̄1 · atp′r2

where atp′r1 = atpr0 + atpr1 and atp′r2 = atpr0 + atpr2 = 1A since all the inputs
(tr1 and talk1) of atp′r2 are identity mapped. The first conjunct in C1 may now
be decomposed as,

pc · Trr · (p4 ⊑A p5)
⇔ pc · spd · atpr · drive · (p4 ⊑A p5)
⇔ pc · spd · (d1 · atp′r1 + d̄1 · atp′r2) · drive · (p4 ⊑A p5)
⇔ pc · spd · (d1 · atp′r1 + d̄1 · 1A) · drive · (p4 ⊑A p5)
⇐ pc · spd · d1 · atp′r1 · drive · (p4 ⊑A p5) ∧ pc · spd · d̄1 · drive · (p4 ⊑A p5)
· · · by decomposition law (C2)
Thus we have the following clauses which are required to be proved :

pc · spd · d1 · atp′r1 · drive · (p4 ⊑A p5) (G1)
pc · spd · d̄1 · drive · (p4 ⊑A p5) (G2)
Proof of G1 :
Observe that controllability is an invariant of actors spd and atpr (which leave
tr.v, tr.p unchanged) since the tagged signals (tr.v, tr.p and m) involved in pc
are modified by actors drive and rbc only. Thus, we have

pc · spd · pc = pc · spd (9.20)
pc · atp′r1 · pc = pc · atp′r1 (9.21)

Also observe that,

d1 · atp′r1 = d1 · atp′r1 · p3 (9.22)

where p3 has been defined previously as
p3(στ ) = ∃τ1[ld(σ, τ, tr2, τ1) ∧ σ(tr2.a)(τ1) = −b]. Observe that Eq. 9.22 holds
because, whenever it is the case that d1 holds, the actor atpr1 will define the
variable tr2.a with the value −b and satisfy p3. Thus, considering G1, we have,
pc · spd · d1 · atp′r1 · drive · (p4 ⊑A p5)
= pc · spd · pc · d1 · atp′r1 · drive · (p4 ⊑A p5) · · · by 9.20
= pc · spd · d1 · pc · atp′r1 · drive · (p4 ⊑A p5) · · · by commutativity of tests
= pc · spd · d1 · pc · atp′r1 · pc · drive · (p4 ⊑A p5) · · · by 9.21
= pc · spd · pc · d1 · atp′r1 · pc · drive · (p4 ⊑A p5) · · · by commutativity of tests
= pc · spd · pc · d1 · atp′r1 · p3 · pc · drive · (p4 ⊑A p5) · · · by 9.22
= (pc · spd · pc · d1 · atp′r1) · pc · p3 · drive · (p4 ⊑A p5) · · · by commutativity of tests
⇐ pc · p3 · drive · (p4 ⊑A p5) · · ·by monotonicity of ‘·’
Hence, for proving clause G1, it will be sufficient to prove,
pc · p3 · drive · (p4 ⊑A p5) (9.23)
We have,
pc · p3 · drive · (p4 ⊑A p5)
⇔ pc · (p2 + p̄2) · p3 · drive · (p4 ⊑A p5)
⇐ pc · p2 · p3 · drive · (p4 ⊑A p5) ∧ pc · p̄2 · p3 · drive · (p4 ⊑A p5)
⇔ pc · p2 · p3 · drive · (p4 ⊑A p5) ∧ pc · p4 · p3 · drive · (p4 ⊑A p5) · · · since p̄2 = p4
(9.24)
The left conjunct in 9.24 is true due to 9.19 obtained previously from corollary
9.4.2 (with n = 1). For the right conjunct in 9.24, we proceed as follows. Let
(τ1 = (r, k), τ2) |= pc(στ ) ∧ p4(στ ). Thus, we have,
(τ1 = (r, k), τ2) |= pc(στ )
⇒σ(tr.v)(r, k)2 − σ(m.vd)(τ2)2 ≤ 2b(σ(m.d)(τ2)− σ(tr.p)(r, k)) (9.25)
(τ1 = (r, k), τ2) |= p4(στ )⇒ σ(tr.p)(r, k) ≥ σ(m.d)(τ2) (9.26)
For the right conjunct in 9.24 we need to prove,

(τ1 = (r, k), τ2) |= pc(στ ) ∧ p4(στ ) ∧ p3(στ ) ⇒
[(τ1 = (r + ε, k), τ2) |= p4(drive(στ )) ⇒ (τ1 = (r + ε, k), τ2) |= p5(drive(στ ))]

Eq. 9.25 and 9.26 together imply that σ(tr.v)(r, k) ≤ σ(m.vd)(τ2). Since p3
ensures negative acceleration, after drive acts we will always have
σ(tr.v)(r + ε, k) ≤ σ(tr.v)(r, k) ≤ σ(m.vd)(τ2), thus satisfying p5.
Hence, the right conjunct in 9.24 holds. With this we have equation 9.23 proved
thus implying the truth of G1.
Proof of G2 :

(pc · spd) · d̄1 · drive · (p4 ⊑A p5)
⇐ d̄1 · drive · (p4 ⊑A p5) · · · monotonicity of '·'
⇐ pd · drive · (p4 ⊑A p5) · · · weakening rule, since d̄1 ⊑A pd⁹
⇔ pd · drive · pc · (p4 ⊑A p5) · · · by corollary 9.4.7
⇐ pc · (p4 ⊑A p5) · · · monotonicity of '·'

which is true by 9.18 · · · from corollary 9.4.2 with n = 0.
Thus we have successfully verified the safety of the refined ETCS protocol using
the Kleene algebra of TSM actors.
9.6 Conclusion
Deductive techniques for verifying embedded system specifications have been at-
tempted previously. However, in such works, the notion of heterogeneity has been
limited to certain classes of hybrid systems only. Although our example speci-
fication is a simple hybrid system (which could have been modeled and verified
using techniques specific to hybrid automata), it clearly shows that Kleene alge-
braic reasoning over TSM actor networks is a potential method for performing
property verification of heterogeneous embedded systems. Since TSMs capture
heterogeneity in a more general form, we envisage that much more general cases
of heterogeneous specifications can be captured using TSMs. Hence it seemed
worthwhile to explore deductive methods for property verification of TSM actor
networks.
⁹ Observe that d̄1 is basically pd along with an additional conjunct. Thus d̄1 ⊑A pd.
Chapter 10
Conclusion
In the present work we have tried to provide some insight into possible techniques
for asymptotic performance evaluation and correctness verification using TSMs.
The primary motivation for the work has been that TSMs are advocated as a
generalized metamodel of computation which can capture the commonly used
MoCs in embedded system specification. It has been used for providing sound
execution semantics to heterogeneous system simulation tools like Ptolemy (Eker
et al., 2003). Using TSM as an underlying MoC, we have proposed a method-
ology for performance evaluation of schedules for job-shops modeled using tag
machines. Comparing the method with existing ones revealed that the proposed
method has no dependence on schedule length in terms of modeling efficiency
and it shares the same order of complexity with the existing approaches. Moreover,
the proposed method has been shown to bear promise of applicability to
performance evaluation of systems specified using other models of computation
(MoCs), by applying the method to an SDFG specification and a heterogeneous
system model having multiple constituent models.
For correctness verification of TSM based system models we have formulated
a Kleene algebra of TSM actors thereby permitting systems and their properties
to be encoded as Kleene algebraic expressions. We further illustrated mechanisms
for both behavioural verification through equivalence checking and property ver-
ification of embedded systems based on this algebraic representation.
Next, we briefly summarize the contributions of the present work and follow
it up with a discussion of the future scope of the work reported.
10.1 Summarizing the contributions
We addressed the problem of asymptotic performance evaluation of job-shop
schedules using TSMs. We demonstrated that the tag structure Tdep, commonly
used for dependency based modeling, is inadequate for computing the asymptotic
performance of job-shop schedules since Tdep does not capture direct dependencies
among the tasks. Accordingly, we constructed a new tag structure Timm. We
proposed an algorithm for performance evaluation of job-shop schedules modeled
using tag machines and analysed its complexity to show that our algorithm for
performance evaluation does not suffer from any extra computational overhead
compared to the most efficient method known till date (Gaubert and Mairesse,
1999) for the same. More specifically, we showed that our approach is free from
any dependency on the schedule length similar to the approach using heaps of
pieces (Gaubert and Mairesse, 1999) and do not incur the overhead of computing
a new event graph for every schedule specification as in (Hillion and Proth, 1989).
We demonstrated how concurrent snapshots of executions in different MoCs
can be captured using tag pieces so that execution scenarios, which are sequences
of such snapshots, can be conceived as tag vectors. We proposed such formu-
lations for the self-timed execution of an SDFG and a heterogeneous system
comprising a discrete event component and a data-flow component. For the het-
erogeneous system, we showed how the corresponding tagged system can be de-
rived using product of tag structures. Since our performance evaluation method
computes the asymptotic performance by using dependency informations cap-
tured by tag vectors, we perceive that our performance evaluation mechanism
may thus be applicable for multiple MoCs and their compositions.
Embedded systems are often specified using finitary MoCs like timed au-
tomata which however do not explicitly model the dependency information among
tasks as required for asymptotic performance evaluation of task sequences. As
an exercise to demonstrate that it is possible to provide correct-by-construction
methodologies for translating an MoC with finitary representation to a tag machine,
we provided a method for structural translation from a timed automaton
to a tag machine. Subsequently, we defined a notion of equivalence among the
configurations of timed automaton and those of the corresponding tag machine.
We then verified the correctness of our translation method by proving that for
any configuration reachable by a timed run in the automaton, there exists a cor-
responding tag trace in the tag machine leading to an equivalent configuration.
For the purpose of formal correctness verification of tagged systems, we mod-
eled such systems as networks of TSM actor functions. We conceived TSM actor
functions as partial maps from one set of tagged signals (sequences of events) to
another. We introduced the notion of weakly Scott-continuous nondeterministic
actors and their equivalence. We defined the operations of sequential and con-
current composition and finite iteration over the set A of such actor functions
and proved the closure of the operations in A. Through order theoretic analysis,
the present work revealed that the set of TSM actor functions, equipped with
sequential and concurrent composition and finite iteration operations, forms a
Kleene Algebra (KA).
Our Kleene algebraic formulation of TSM actors (described in functional
forms) revealed that any such actor networks can be represented by Kleene ex-
pressions defined over the component actors. We applied the axioms of KA and
its extension KAT for behavioural verification through equivalence checking and
property verification of actor networks. We proved the equivalence of two dif-
ferent specifications of a Reflex game by constructing the corresponding actor
networks and proved their equivalence using the axioms of KA and KAT. We
further defined the notion of discrete and continuous TSM actors and modeled
the European Train Control System (ETCS) as a TSM actor network implementation.
Thereafter, we proved a safety property of the ETCS protocol using
KA and KAT based reasoning.
10.2 Future scope
The contributions arrived at in the present work point to certain future
directions of research, which we briefly discuss below.
TSMs can capture different MoCs by identifying the set of variables in the
underlying tagged system and the possible dependency relations among events
on the variables, i.e., the tag pieces. In the case of MoCs with finitary
representations, e.g., timed automata, we identified the corresponding set of states in
a tag machine and the transition relation labeled with tag pieces. Since het-
erogeneous system modeling involves many popular MoCs, we need to identify
translation mechanisms from many other such MoCs to TSMs. In that way, a
software tool-set can be created which supports system specification using such
MoCs, automatically constructs the TSM representations and then identifies the
asymptotic performance, given a periodic execution in the specification.
We proposed a Kleene algebra of TSM actor functions which may be used for
reasoning about such actor networks. We modeled discrete-time systems as dis-
crete actors and continuous-time systems as continuous actors both defined over
the tag set T = R×N. A possible future work in this regard is to find the TSM
actor oriented representations for different MoCs (with appropriate choice of tag
set) so that heterogeneous embedded systems may be modeled as a network of
such actors. In such a scenario, it will be possible to automatically construct
such actor networks from some initial heterogeneous specification involving mul-
tiple MoCs and provide the network with an execution semantics. Reasoning
over such specifications may then be performed by any theorem prover which is
enabled with the axioms of KA and KAT.
In the present work, we proved the safety property of the ETCS protocol
using our method of Kleene algebraic reasoning over TSM actor networks. In
future, we intend to explore encoding and verification of liveness properties using
our methodology. A major impediment in the course of the present work has been
that we could not get hold of any real-life heterogeneous benchmark suite from
domains like automotive, control, etc. Although the theoretical foundations
appear sound, we intend to model more complex specifications as TSM actor
networks and perform correctness verification via algebraic reasoning as part of
our future research efforts.
The primary concern of the present work has been to evolve methodologies for
asymptotic performance evaluation and algebraic verification of system specifica-
tions modeled using TSMs. For demonstrating the applicability of such methods
in the context of heterogeneous embedded systems, we have touched upon
different popular MoCs and their corresponding TSM representations. For certain
MoCs, we have derived the corresponding operational models in TSM, while for
certain other MoCs, we have derived the corresponding denotational models in
TSM. However, each of the MoCs deserves independent and more thorough
investigation regarding the applicability of TSM based modeling and formal analysis
techniques. For example, it seems worthwhile to model SDFGs using the for-
malism of TSM actor functions and investigate the scope of property verification
in such a context. MoC specific exploration of modeling and formal analysis
techniques based on TSMs and evolving possible refinements to our proposed
techniques are worthy of future research attention.
Bibliography
(2002). ERTMS User Group, UNISIG: ERTMS/ETCS System requirements
specification.
URL: http://www.era.europa.eu
Aarts, C., Backhouse, R. C., Boiten, E. A., Doornbos, H., van Gasteren, N., van
Geldrop, R., Hoogendijk, P. F., Voermans, E. and van der Woude, J. (1995).
Fixed-Point Calculus, Inf. Process. Lett., vol. 53, no. 3, pp. 131–136.
Abdeddaïm, Y., Asarin, E. and Maler, O. (2006). Scheduling with Timed Au-
tomata, Theoretical Computer Science, vol. 354, pp. 272–300.
Abdi, S. and Gajski, D. (2006). Verification of System Level Model Transfor-
mations, International Journal of Parallel Programming, vol. 34, no. 1, pp.
29–59.
Aboul-Hosn, K. and Kozen, D. (2003). KAT-ML: An interactive theorem prover
for Kleene Algebra with Tests, Journal of Applied Non-Classical Logics, vol. 16,
no. 1-2, pp. 9–34.
Abraham-Mumm, E., Steffen, M. and Hannemann, U. (2001). Verification of
Hybrid Systems: Formalization and Proof Rules in PVS, in ICECCS, pp. 48–
57, IEEE Computer Society, ISBN 0-7695-1159-7.
URL: http://csdl.computer.org/comp/proceedings/iceccs/2001
/1159/00/11590048abs.htm
Abramsky, S. and Jung, A. (1995). Domain theory, Handbook of logic in com-
puter science, Oxford University Press, Oxford, UK, vol. 3, pp. 1–168.
Agha, G. and Hewitt, C. (1987). Object-oriented concurrent programming, chap.
Concurrent programming using actors, pp. 37–53, MIT Press, Cambridge, MA,
USA.
Allsop, K. (1994). Cossap stream driven simulator integration with AT&T
DSP1610 LFS, in Computer-Aided Modeling, Analysis, and Design of Commu-
nication Links and Networks, 1994. (CAMAD ’94) Fifth IEEE International
Workshop on, p. 0 87.
Alur, R. (1999). Timed Automata, in Computer Aided Verification, pp. 8–22.
URL: citeseer.ist.psu.edu/alur99timed.html
Alur, R., Courcoubetis, C. and Dill, D. L. (1990). Model-Checking for Real-Time
Systems, in LICS, pp. 414–425.
Alur, R., Courcoubetis, C., Halbwachs, N., Henzinger, T. A., Ho, P.-H., Nicollin,
X., Olivero, A., Sifakis, J. and Yovine, S. (1995). The algorithmic analysis of
hybrid systems, Theor. Comput. Sci., vol. 138, pp. 3–34.
URL: http://portal.acm.org/citation.cfm?id=202379.202381
Alur, R., Dang, T., Esposito, J., Hur, Y., Ivancic, F., Kumar, R. V., Lee, I.,
Mishra, P., Pappas, G. J. and Sokolsky, O. (2003). Hierarchical Modeling and
Analysis of Embedded Systems, Proc. IEEE, vol. 91, no. 1, pp. 11–28.
Alur, R. and Dill, D. L. (1994). A theory of timed automata, Theoretical Com-
puter Science, vol. 126, no. 2, pp. 183–235.
URL: citeseer.ist.psu.edu/alur94theory.html
Anai, H. and Weispfenning, V. (2001). Reach Set Computations Using Real
Quantifier Elimination, in Proceedings of the 4th International Workshop on
Hybrid Systems: Computation and Control, HSCC ’01, pp. 63–76, Springer-
Verlag, London, UK, ISBN 3-540-41866-0.
URL: http://portal.acm.org/citation.cfm?id=646881.710601
Antimirov, V. M. and Mosses, P. D. (1995). Rewriting extended regular expres-
sions, Theoretical Computer Science, vol. 143, no. 1, pp. 51–72.
Asarin, E., Dang, T. and Girard, A. (2003). Reachability Analysis of Nonlin-
ear Systems Using Conservative Approximation, in Maler, O. and Pnueli, A.
(Eds.), HSCC, vol. 2623 of Lecture Notes in Computer Science, pp. 20–35,
Springer, ISBN 3-540-00913-2.
URL: http://link.springer.de/link/service/series/0558
/bibs/2623/26230020.htm
Ashar, P., Bhattacharya, S., Raghunathan, A. and Mukaiyama, A. (1998). Ver-
ification of RTL generated from scheduled behavior in a high-level synthesis
flow, in ICCAD, pp. 517–524.
URL: http://doi.acm.org/10.1145/288548.289080
Baccelli, F., Cohen, G., Olsder, G. J. and Quadrat, J.-P. (1992). Synchronization
and Linearity, John Wiley & Sons.
Baeten, J. C. M. and Bergstra, J. A. (1997). Process algebra with propositional
signals, Theoretical Computer Science, vol. 177, no. 2, pp. 381–405.
Baeten, J. C. M. J. and Middelburg, C. A. K. (2002). Process Algebra with
Timing, EATCS Monographs, Springer-Verlag, Berlin, Germany.
Balarin, F., Watanabe, Y., Hsieh, H., Lavagno, L., Passerone, C. and
Sangiovanni-Vincentelli, A. (2003). Metropolis: an Integrated Electronic Sys-
tem Design Environment, IEEE Computer, vol. 36, no. 4, pp. 45–52.
Balluchi, A., Benvenuti, L., Benedetto, M. D. D., Pinello, C. and Sangiovanni-
Vincentelli, A. L. (2000). Automotive Engine and
Power-Train Control: A Comprehensive Hybrid Model, in Mediterranean Con-
ference on Control and Automation.
Balluchi, A., Di Benedetto, M., Pinello, C. and Sangiovanni-Vincentelli, A.
(2001). Mixed models of computation in the design of automotive engine con-
trol, in Proceedings of the 40th IEEE Conference on Decision and Control,
2001., vol. 4, pp. 3308 –3313 vol.4.
Beckert, B., Hahnle, R. and Schmitt, P. H. (2007). Verification of object-oriented
software: The KeY approach, Springer-Verlag, Berlin, Heidelberg, ISBN 3-540-
68977-X, 978-3-540-68977-5.
Beckert, B. and Platzer, A. (2006). Dynamic logic with non-rigid functions: A ba-
sis for object-oriented program verification, in IJCAR, volume 4130 of LNCS,
pp. 266–280, Springer.
Benveniste, A. and Berry, G. (2002). Readings in hardware/software co-design,
chap. The synchronous approach to reactive and real-time systems, pp. 147–
159, Kluwer Academic Publishers, Norwell, MA, USA, ISBN 1-55860-702-1.
URL: http://portal.acm.org/citation.cfm?id=567003.567015
Benveniste, A., Caillaud, B., Carloni, L., Caspi, P. and Sangiovanni-Vincentelli,
A. (2003). Causality and Scheduling Constraints in Heterogeneous Reactive
Systems Modeling, in FMCO, pp. 1–16.
Benveniste, A., Caillaud, B., Carloni, L., Caspi, P. and Sangiovanni-Vincentelli,
A. (2008). Composing Heterogeneous Reactive Systems, ACM TECS, vol. 7,
no. 4.
Benveniste, A., Caillaud, B., Carloni, L. P., Caspi, P. and Sangiovanni-
Vincentelli, A. L. (2004). Heterogeneous reactive systems modeling: capturing
causality and the correctness of loosely time-triggered architectures (LTTA),
in EMSOFT, pp. 220–229, ACM, New York, NY, USA, ISBN 1-58113-860-1.
Benveniste, A., Caillaud, B., Carloni, L. P. and Sangiovanni-Vincentelli, A. L.
(2005). Tag machines, in EMSOFT, pp. 255–263.
Bergamaschi, R. A., O’Connor, R. A., Stok, L., Moricz, M. Z., Prakash, S.,
Kuehlmann, A. and Rao, D. S. (1995). High-level synthesis in an industrial
environment, IBM Journal of Research and Development, vol. 39, no. 1.2, pp.
131 –148.
Bergstra, J. A. and Middelburg, C. A. (2005). Process algebra for hybrid systems,
Theor. Comput. Sci., vol. 335, pp. 215–280.
URL: http://portal.acm.org/citation.cfm?id=1085667.1085672
Berry, G. (1996). Constructive Semantics of Esterel: From Theory to Prac-
tice (Abstract), in Proceedings of the 5th International Conference on Alge-
braic Methodology and Software Technology, AMAST ’96, pp. 225–, Springer-
Verlag, London, UK, ISBN 3-540-61463-X.
URL: http://portal.acm.org/citation.cfm?id=646057.678195
Berry, G. (2000). The Foundations of Esterel, in Plotkin, G., Stirling, C. and
Tofte, M. (Eds.), Proof, Language and Interaction: Essays in Honour of Robin
Milner, The MIT Press, Cambridge, Mass.
Berry, G. and Gonthier, G. (1992). The ESTEREL synchronous programming
language: design, semantics, implementation, Sci. Comput. Program., vol. 19,
pp. 87–152.
URL: http://portal.acm.org/citation.cfm?id=147276.147279
Bishop, R. H. (1996). Modern Control Systems Analysis and Design Using MAT-
LAB and SIMULINK, Addison-Wesley Longman Publishing Co., Inc., Boston,
MA, USA, 1st edn., ISBN 0201498464.
Blumenroehr, C., Eisenbiegler, D. and Kumar, R. (1996). Applicability of formal
synthesis illustrated via scheduling.
URL: http://digbib.ubka.uni-karlsruhe.de/volltexte/339596
Bornot, S., Sifakis, J. and Tripakis, S. (1998). Modeling Urgency in Timed Sys-
tems, vol. 1536 of LNCS, pp. 103–129.
URL: citeseer.ist.psu.edu/bornot97modeling.html
Borrione, D., Dushina, J. and Pierre, L. (2000). A compositional model for the
functional verification of high-level synthesis results, IEEE Trans. Very Large
Scale Integr. Syst., vol. 8, pp. 526–530.
URL: http://dx.doi.org/10.1109/92.894157
Brauer, W., Reisig, W. and Rozenberg, G. (1987). Petri nets: Central Models
and Their Properties, LNCS, vol. 254.
Breslau, L., Estrin, D., Fall, K., Floyd, S., Heidemann, J., Helmy, A., Huang, P.,
McCanne, S., Varadhan, K., Xu, Y. and Yu, H. (2000). Advances in Network
Simulation, Computer, vol. 33, pp. 59–67.
URL: http://portal.acm.org/citation.cfm?id=619051.621475
Buck, J. and Lee, E. (1993). Scheduling dynamic dataflow graphs with bounded
memory using the token flow model, in Acoustics, Speech, and Signal Pro-
cessing, 1993. ICASSP-93., 1993 IEEE International Conference on, vol. 1, pp.
429–432.
Burch, J. (2001). Overcoming heterophobia: Modeling concurrency in heteroge-
neous systems, in Application of Concurrency to System Design, pp. 13–32.
Campos, S. and Clarke, E. (1994). Real Time Symbolic Model Checking for
Discrete Time Systems, AMAST series in computing.
Camposano, R. (1991). Path-based scheduling for synthesis, IEEE Trans. on
CAD of Integrated Circuits and Systems, vol. 10, no. 1, pp. 85–93.
URL: http://doi.ieeecomputersociety.org/10.1109/43.62794
Caspi, P., Benveniste, A., Lublinerman, R. and Tripakis, S. (2009). Actors with-
out Directors: A Kahnian View of Heterogeneous Systems, in HSCC ’09, pp.
46–60, Springer-Verlag, Berlin, Heidelberg, ISBN 978-3-642-00601-2.
Chang, W., Ha, S. and Lee, E. (1997). Heterogeneous Simulation : Mixing
Discrete-Event Models with Dataflow, The Journal of VLSI Signal Process-
ing, vol. 15, no. 1, pp. 127–144.
URL: http://dx.doi.org/10.1023/A:1007930622942
Chaochen, Z., Ji, W. and Ravn, A. P. (1996). A formal description of hybrid
systems, Lecture Notes in Computer Science, vol. 1066, pp. 511–??
Chaochen, Z., Ravn, A. P. and Hansen, M. R. (1993). An Extended Duration Cal-
culus for Hybrid Real-Time Systems, in Hybrid Systems, pp. 36–59, Springer-
Verlag, London, UK, ISBN 3-540-57318-6.
URL: http://portal.acm.org/citation.cfm?id=646874.709980
Chapman, R., Brown, G. and Leeser, M. (1992). Verified high-level synthesis in
BEDROC, in Design Automation, 1992. Proceedings., [3rd] European Confer-
ence on, pp. 59–63.
Chen, H. and Pucella, R. (2003). A Coalgebraic Approach to Kleene Algebra
with Tests, ENTCS, vol. 82, no. 1, pp. 94–109.
Cheng, A. and Nielsen, M. (1998). Open maps, behavioural equivalences, and
congruences, Theoretical Computer Science, vol. 190, no. 1, pp. 87–112.
URL: http://www.sciencedirect.com/science/article/pii/S0304397597000856
Chutinan, A. and Krogh, B. (2003). Computational techniques for hybrid system
verification, Automatic Control, IEEE Transactions on, vol. 48, no. 1, pp. 64–
75.
Clarke, E., Fehnker, A., Han, Z., Krogh, B., Ouaknine, J., Stursberg, O. and
Theobald, M. (2003). Abstraction and counterexample-guided refinement in
model checking of hybrid systems, International Journal of Foundations of
Computer Science, vol. 14, no. 4, pp. 583–604.
URL: http://dx.doi.org/10.1142/S012905410300190X
Clarke, E. M., Grumberg, O. and Peled, D. A. (1999). Model Checking, The
MIT Press, ISBN 0262032708.
Clinger, W. (1981). Foundations of Actor Semantics, Ph.D. thesis, MIT.
Cuninghame-Green, R. (1979). Minimax Algebra, Lect. Notes in Economics and
Math. Syst., no. 166.
Mathaikutty, D., Patel, H. and Shukla, S. (2004). A functional programming
framework of heterogeneous model of computations for system design, in Forum
on Specification and Design Languages (FDL).
Damm, W., Mikschl, A., Oehlerking, J., Olderog, E., Pang, J., Platzer, A.,
Segelken, M. and Wirtz, B. (2007). Formal methods and hybrid real-time sys-
tems, chap. Automating verification of cooperation, control, and design in
traffic applications, pp. 115–169, Springer-Verlag, Berlin, Heidelberg, ISBN
3-540-75220-X, 978-3-540-75220-2.
URL: http://portal.acm.org/citation.cfm?id=1793874.1793880
Dasdan, A., Dasdan, A., Gupta, R. K. and Gupta, R. K. (1998). Faster Maximum
and Minimum Mean Cycle Algorithms for System-Performance Analysis, IEEE
Transactions on Computer-Aided Design of Integrated Circuits and Systems,
vol. 17, pp. 889–899.
Davey, B. A. and Priestley, H. A. (2002). Introduction to Lattices and Order,
Cambridge University Press, ISBN 0521784514.
Davies, J. et al. (1992). Timed CSP: Theory and Practice, in de Bakker, J. W.,
Huizing, C., de Roever, W. P. and Rozenberg, G. (Eds.), Proceedings REX
Workshop on Real-Time: Theory in Practice, Mook, The Netherlands, June
1991, vol. 600 of Lecture Notes in Computer Science, pp. 640–675, Springer-
Verlag.
Davoren, J. (1999). On Hybrid Systems and the Modal μ-Calculus, in Antsaklis
et al. (Eds.), pp. 38–69, Springer-Verlag.
Davoren, J. and Nerode, A. (2000). Logics for hybrid systems, Proceedings of
the IEEE, vol. 88, no. 7, pp. 985–1010.
De Alfaro, L., Kapur, A. and Manna, Z. (1997). Hybrid Diagrams: A Deductive-
Algorithmic Approach to Hybrid System Verification, Lecture Notes in Com-
puter Science, vol. 1200, pp. 153–??
Desel, J. and Esparza, J. (1995). Free choice Petri nets, Cambridge University
Press, New York, NY, USA, ISBN 0-521-46519-2.
Desharnais, J., Edalat, A. and Panangaden, P. (2002). Bisimulation for Labelled
Markov Processes, Information and Computation, vol. 179, no. 2, pp. 163–193.
URL: http://www.sciencedirect.com/science/article/pii/S0890540101929621
Edwards, S., Lavagno, L., Lee, E. A. and Sangiovanni-Vincentelli, A. (1997).
Design of Embedded Systems: Formal Models, Validation, and Synthesis, Proc.
of the IEEE, vol. 85, no. 3.
Edwards, S. A. (1998). The specification and execution of heterogeneous syn-
chronous reactive systems, Ph.D. thesis, Berkeley, CA, USA.
Eker, J., Janneck, J. W., Lee, E. A., Liu, J., Liu, X., Ludvig, J., Neuendorffer, S.,
Sachs, S. and Xiong, Y. (2003). Taming heterogeneity - the Ptolemy approach,
Proceedings of the IEEE, pp. 127–144.
Emerson, E. A. and Clarke, E. M. (1982). Using branching time temporal logic
to synthesize synchronization skeletons, Science of Computer Programming,
vol. 2, no. 3, pp. 241–266.
URL: http://www.sciencedirect.com/science/article/B6V17-45F5SH9-5/2/af7ed0c7531bb991e4f964814d4429a8
Emerson, E. A. and Halpern, J. Y. (1986). Sometimes and not never revisited:
on branching versus linear time temporal logic, J. ACM, vol. 33, pp. 151–178.
URL: http://doi.acm.org/10.1145/4904.4999
Emerson, E. A., Mok, A. K., Sistla, A. P. and Srinivasan, J. (1991). Quantitative
Temporal Reasoning, in Proceedings of the 2nd International Workshop on
Computer Aided Verification, CAV ’90, pp. 136–145, Springer-Verlag, London,
UK, ISBN 3-540-54477-1.
URL: http://portal.acm.org/citation.cfm?id=647759.735030
Ernst, R. and Bhasker, J. (1991). Simulation-based verification for high-level
synthesis, Design & Test of Computers, IEEE, vol. 8, no. 1, pp. 14–20.
Fränzle, M. (1999). Analysis of hybrid systems: An ounce of realism can save an
infinity of states, in Flum, J. and Rodriguez-Artalejo, M. (Eds.), Computer
Science Logic (CSL’99), vol. 1683 of Lecture Notes in Computer Science, pp.
126–140, Springer Verlag.
URL: http://www.imm.dtu.dk/ mf/CSL99.ps.Z
Frehse, G. (2008). PHAVer: algorithmic verification of hybrid systems past
HyTech, Int. J. Softw. Tools Technol. Transf., vol. 10, pp. 263–279.
URL: http://portal.acm.org/citation.cfm?id=1388708.1388710
Fujita, M. (2005). Equivalence checking between behavioral and RTL descrip-
tions with virtual controllers and datapaths, ACM Trans. Des. Autom. Elec-
tron. Syst., vol. 10, pp. 610–626.
URL: http://doi.acm.org/10.1145/1109118.1109121
Gaubert, S. and Mairesse, J. (1999). Modeling and analysis of timed Petri nets
using heaps of pieces, IEEE Transactions on Automatic Control, vol. 44, pp.
683–697.
Gaubert, S. and Plus, M. (1997). Methods and applications of (max,+) linear
algebra, in STACS’97, number 1200 in LNCS, Lubeck, pp. 261–282, Springer.
Ghamarian, A. (2008). Timing Analysis of Synchronous Data Flow Graphs,
Ph.D. thesis, Technische Universiteit Eindhoven.
Ghamarian, A., Geilen, M., Stuijk, S., Basten, T., Theelen, B., Mousavi, M.,
Moonen, A. and Bekooij, M. (2006). Throughput Analysis of Synchronous
Data Flow Graphs, in ACSD, pp. 25–36.
Govindarajan, R. and Gao, G. (1995). Rate-optimal schedule for multi-rate DSP
computations, J. VLSI Signal Process. Syst., vol. 9, no. 3, pp. 211–232.
Guernic, P., Benveniste, A., Gautier, T. and Bournai, P. (1985). Signal:
a data flow oriented language for signal processing, Publication interne,
Institut national de recherche en informatique et en automatique.
URL: http://books.google.co.in/books?id=FFAzPwAACAAJ
Gunter, C. A. (1992). Semantics of programming languages: structures and tech-
niques, MIT Press, Cambridge, MA, USA, ISBN 0-262-07143-6.
Hagen, G. and Tinelli, C. (2008). Scaling Up the Formal Verification of Lustre
Programs with SMT-Based Techniques, in Cimatti, A. and Jones, R. B. (Eds.),
FMCAD, pp. 1–9, IEEE, ISBN 978-1-4244-2735-2.
URL: http://dx.doi.org/10.1109/FMCAD.2008.ECP.19
Haghverdi, E., Tabuada, P. and Pappas, G. J. (2005). Bisimulation relations for
dynamical, control, and hybrid systems, Theor. Comput. Sci., vol. 342, pp.
229–261.
URL: http://portal.acm.org/citation.cfm?id=1150850.1150852
Halbwachs, N., Caspi, P., Raymond, P. and Pilaud, D. (1991). The synchronous
data flow programming language LUSTRE, Proceedings of the IEEE, vol. 79,
no. 9, pp. 1305–1320.
Harel, D. (1979). First-Order Dynamic Logic, Springer-Verlag New York, Inc.,
Secaucus, NJ, USA, ISBN 0387092374.
Harel, D., Kozen, D. and Tiuryn, J. (1984). Dynamic Logic, in Handbook of
Philosophical Logic, pp. 497–604, MIT Press.
Harel, E., Lichtenstein, O. and Pnueli, A. (1990). Explicit clock temporal logic,
in Logic in Computer Science, 1990. LICS ’90, Proceedings., Fifth Annual
IEEE Symposium on, pp. 402–413.
He Jifeng (1994). From CSP to Hybrid Systems, in Roscoe, A. W. (Ed.), A
Classical Mind: Essays in Honour of C.A.R. Hoare, pp. 171–189, Prentice Hall
International Series in Computer Science.
Hennessy, M. (1988). Algebraic theory of processes, MIT Press, Cambridge, MA,
USA, ISBN 0-262-08171-7.
Henzinger, T. (1996). The theory of hybrid automata, in Logic in Computer
Science, 1996. LICS ’96. Proceedings., Eleventh Annual IEEE Symposium on,
pp. 278–292.
Henzinger, T., Manna, Z. and Pnueli, A. (1992). What Good are Digital Clocks?,
vol. 623 of LNCS, pp. 545–558.
Henzinger, T. A., Nicollin, X., Sifakis, J. and Yovine, S. (1994). Symbolic model
checking for real-time systems, Inf. Comput., vol. 111, pp. 193–244.
URL: http://portal.acm.org/citation.cfm?id=191349.184659
Hillion, H. P. and Proth, J. M. (1989). Performance evaluation of job-shop sys-
tems using timed event-graphs, IEEE Transactions on Automatic Control,
vol. 34, no. 1, pp. 3–9.
Hoare, C. A. R. (1978). Communicating sequential processes, Commun. ACM,
vol. 21, pp. 666–677.
URL: http://doi.acm.org/10.1145/359576.359585
Höfner, P. and Möller, B. (2009). An algebra of hybrid systems, Journal of Logic
and Algebraic Programming, vol. 78, no. 2, pp. 74–97.
Höfner, P. and Struth, G. (2007). Automated Reasoning in Kleene Algebra, in
Proceedings of the 21st international conference on Automated Deduction:
Automated Deduction, CADE-21, pp. 279–294, Springer-Verlag, Berlin, Hei-
delberg, ISBN 978-3-540-73594-6.
URL: http://dx.doi.org/10.1007/978-3-540-73595-3_19
Hutter, D., Langenstein, B., Sengler, C., Siekmann, J. H., Stephan, W. and
Wolpers, A. (1996). Deduction in the Verification Support Environment (VSE),
in Proceedings of the Third International Symposium of Formal Methods Eu-
rope on Industrial Benefit and Advances in Formal Methods, pp. 268–286,
Springer-Verlag, London, UK, ISBN 3-540-60973-3.
URL: http://portal.acm.org/citation.cfm?id=647537.729674
Jacobs, B. and Rutten, J. (2002). Coalgebraic methods in computer science,
Theoretical Computer Science, vol. 280, no. 1-2.
Jain, R., Mujumdar, A., Sharma, A. and Wang, H. (1991). Empirical evaluation
of some high-level synthesis scheduling heuristics, in Proceedings of the 28th
ACM/IEEE Design Automation Conference, DAC ’91, pp. 686–689, ACM,
New York, NY, USA, ISBN 0-89791-395-7.
URL: http://doi.acm.org/10.1145/127601.127751
Jantsch, A. and Sander, I. (2005). Models of computation and languages for
embedded system design, IEE Proceedings, vol. 152, no. 2.
Joyal, A., Nielsen, M. and Winskel, G. (1996). Bisimulation from Open Maps,
Information and Computation, vol. 127, no. 2, pp. 164–185.
URL: http://www.sciencedirect.com/science/article/pii/S0890540196900577
Kahn, G. (1974a). The semantics of a simple language for parallel programming,
in Rosenfeld, J. L. (Ed.), Information processing, pp. 471–475, North Holland,
Amsterdam, Stockholm, Sweden.
Kahn, G. (1974b). The Semantics of Simple Language for Parallel Programming,
in IFIP Congress, pp. 471–475.
Karfa, C., Sarkar, D., Mandal, C. and Kumar, P. (2008). An Equivalence-
Checking Method for Scheduling Verification in High-Level Synthesis, IEEE
Trans. on CAD of Integrated Circuits and Systems, vol. 27, no. 3, pp. 556–569.
Karp, R. M. (1978). A characterization of the minimum cycle mean in a digraph,
Discrete Mathematics, vol. 23.
Kesten, Manna and Pnueli (2000). Verification of Clocked and Hybrid Systems,
ACTAINF: Acta Informatica, vol. 36.
Kim, Y., Kopuri, S. and Mansouri, N. (2004). Automated Formal Verification of
Scheduling Process Using Finite State Machines with Datapath (FSMD), in
ISQED, pp. 110–115, IEEE Computer Society, ISBN 0-7695-2093-6.
URL: http://csdl.computer.org/comp/proceedings/isqed/2004/2093/00/20930110abs.htm
Knijnenburg, P. (1993). Algebraic Domains, Chain Completion and the Plotkin
Powerdomain Construction, Tech. rep.
Kopetz, H. and Bauer, G. (2003). The Time-Triggered Architecture, in
Proceedings of the IEEE.
Kot, L. and Kozen, D. (2005). Kleene Algebra and Bytecode Verification, Elec-
tron. Notes Theor. Comput. Sci., vol. 141, pp. 221–236.
URL: http://dx.doi.org/10.1016/j.entcs.2005.02.028
Kozen, D. (1997). Kleene Algebra with Tests, ACM Transactions on Program-
ming Languages and Systems, vol. 19, no. 3, pp. 427–443.
Kozen, D. (2000). On Hoare logic and Kleene algebra with tests, ACM Trans.
Comput. Logic, vol. 1, no. 1, pp. 60–76.
Kozen, D. (2008). Nonlocal Flow of Control and Kleene Algebra with Tests,
Logic in Computer Science, Symposium on, pp. 105–117.
Kumar, R., Blumenröhr, C., Eisenbiegler, D. and Schmid, D. (1996). Formal
Synthesis in Circuit Design - A Classification and Survey, in Proceedings of
the First International Conference on Formal Methods in Computer-Aided
Design, pp. 294–309, Springer-Verlag, London, UK, ISBN 3-540-61937-2.
URL: http://portal.acm.org/citation.cfm?id=646184.682926
Kunkel, J. (1991). COSSAP: A Stream Driven Simulator, in IEEE International
Workshop on Microelectronics in Communications, Interlaken, Switzerland.
Lafferriere, G., Pappas, G. J. and Sastry, S. (1998). O-Minimal Hybrid Systems,
Tech. Rep. UCB/ERL M98/29, University of California at Berkeley, Berkeley,
CA.
Lafferriere, G., Pappas, G. J. and Yovine, S. (1999). A New Class of Decidable
Hybrid Systems, Lecture Notes in Computer Science, vol. 1569, pp. 137–??
URL: http://link.springer-ny.com/link/service/series/0558/papers/1569/15690137.pdf
Larsen, K. G. and Skou, A. (1991). Bisimulation through probabilistic testing,
Inf. Comput., vol. 94, pp. 1–28.
URL: http://portal.acm.org/citation.cfm?id=117588.117589
Lauwereins, R., Engels, M., Ade, M. and Peperstraete, J. (1995). Grape-II: a
system-level prototyping environment for DSP applications, Computer, vol. 28,
no. 2, pp. 35–43.
Lavagno, L., Sangiovanni-Vincentelli, A. and Sentovich, E. (1999). Models of
computation for embedded system design, pp. 45–102, Kluwer Academic Pub-
lishers, Norwell, MA, USA, ISBN 0-7923-5748-5.
URL: http://portal.acm.org/citation.cfm?id=351069.351079
Lee, E. and Messerschmitt, D. (1987). Synchronous data flow, Proceedings of the
IEEE, vol. 75, no. 9, pp. 1235–1245.
Lee, E. and Sangiovanni-Vincentelli, A. (1998). A framework for comparing mod-
els of computation, Computer-Aided Design of Integrated Circuits and Sys-
tems, IEEE Transactions on, vol. 17, no. 12, pp. 1217–1229.
Lee, E. A. and Parks, T. M. (2002). Readings in hardware/software co-design,
chap. Dataflow process networks, pp. 59–85, Kluwer Academic Publishers,
Norwell, MA, USA, ISBN 1-55860-702-1.
URL: http://portal.acm.org/citation.cfm?id=567003.567010
Lee, E. A. and Seshia, S. A. (2011). Introduction to Embedded Systems, A
Cyber-Physical Systems Approach, ISBN 978-0-557-70857-4.
URL: http://LeeSeshia.org
Lee, E. A. and Xiong, Y. (2000). System-level types for component-based design,
pp. 8–10, Springer.
Lee, J.-H., Hsu, Y.-C. and Lin, Y.-L. (1989). A new integer linear programming
formulation for the scheduling problem in data path synthesis, in Computer-
Aided Design, 1989. ICCAD-89. Digest of Technical Papers., 1989 IEEE In-
ternational Conference on, pp. 20–23.
Liu, X. (2005). Semantic Foundation of the Tagged Signal Model, Ph.D. thesis,
EECS Department, University of California, Berkeley.
URL: http://www.eecs.berkeley.edu/Pubs/TechRpts/2005/EECS-2005-31.html
Liu, X. and Lee, E. A. (2008). CPO semantics of timed interactive actor networks,
Theor. Comput. Sci., vol. 409, no. 1, pp. 110–125.
Mansouri, N. and Vemuri, R. (1998). A Methodology for Automated Verification
of Synthesized RTL Designs and Its Integration with a High-Level Synthesis
Tool, in Proceedings of the Second International Conference on Formal Meth-
ods in Computer-Aided Design, FMCAD ’98, pp. 204–221, Springer-Verlag,
London, UK, ISBN 3-540-65191-8.
URL: http://portal.acm.org/citation.cfm?id=646185.683062
Maraninchi, F. (1991). The Argos Language: Graphical Representation of Au-
tomata and Description of Reactive Systems, in IEEE Workshop on Visual
Languages.
Moszkowski, B. and Manna, Z. (1983). Reasoning in Interval Temporal Logic.
Mathaikutty, D. A., Patel, H. D., Shukla, S. K. and Jantsch, A. (2008). SML-
Sys: a functional framework with multiple models of computation for modeling
heterogeneous system, Design Automation for Embedded Systems, vol. 12, no.
1-2, pp. 1–30.
Mendías, J. M., Hermida, R., Molina, M. C. and Peñalba, O. (2002). Efficient
Verification of Scheduling, Allocation and Binding in High-Level Synthesis,
in Proceedings of the Euromicro Symposium on Digital Systems Design, pp.
308–, IEEE Computer Society, Washington, DC, USA, ISBN 0-7695-1790-0.
URL: http://portal.acm.org/citation.cfm?id=789098.790859
Meyer, R., Faber, J. and Rybalchenko, A. (2006). Model Checking Duration
Calculus: A Practical Approach, in Barkaoui, K., Cavalcanti, A. and Cerone,
A. (Eds.), Theoretical Aspects of Computing - ICTAC 2006, vol. 4281 of Lec-
ture Notes in Computer Science, pp. 332–346, Springer Berlin / Heidelberg,
10.1007/11921240_23.
URL: http://dx.doi.org/10.1007/11921240_23
Milner, R. (1982). A Calculus of Communicating Systems, Springer-Verlag New
York, Inc., Secaucus, NJ, USA, ISBN 0387102353.
Milner, R. (1989). Communication and Concurrency, Prentice-Hall, Inc., Upper
Saddle River, NJ, USA, ISBN 0131149849.
Milner, R. (1999). Communicating and Mobile Systems: The π-calculus, Cam-
bridge University Press.
Milner, R. and Sangiorgi, D. (1992). Barbed Bisimulation, in Proceedings of the
19th International Colloquium on Automata, Languages and Programming,
ICALP ’92, pp. 685–695, Springer-Verlag, London, UK, ISBN 3-540-55719-9.
URL: http://portal.acm.org/citation.cfm?id=646246.684864
Möller, B. and Struth, G. (2004). Modal Kleene Algebra and Partial Correctness,
in AMAST, pp. 379–393.
Möller, B. and Struth, G. (2006). Algebras of modal operators and partial cor-
rectness, Theor. Comput. Sci., vol. 351, no. 2, pp. 221–239.
Möller, M. (2002). Parking Can Get You There Faster: Model Augmentation
to Speed up Real-Time Model Checking, Electronic Notes in Theoretical
Computer Science, vol. 65, no. 6, pp. 202–217, Theory and Practice of Timed
Systems (Satellite Event of ETAPS 2002).
URL: http://www.sciencedirect.com/science/article/B75H1-4DD879K-B5/2/dfb05527f9d9f239f1b92b383174e34d
Murata, T. (1989). Petri nets: Properties, analysis and applications, Proc. IEEE,
vol. 77, no. 4, pp. 541–580.
Mysore, V. P. (2006). Algorithmic algebraic model checking: hybrid automata
and systems biology, Ph.D. thesis, New York, NY, USA, AAI3221980.
Nadjm-Tehrani, S. and Akerlund, O. (1999). Combining theorem proving and
continuous models in synchronous design, in Wing, J., Woodcock, J. and
Davies, J. (Eds.), FM99 Formal Methods, vol. 1709 of Lecture Notes in Com-
puter Science, pp. 722–722, Springer Berlin / Heidelberg.
Nagel, L. W. (1975). SPICE2: A Computer Program to Simulate Semiconductor
Circuits, Ph.D. thesis, EECS Department, University of California, Berkeley.
URL: http://www.eecs.berkeley.edu/Pubs/TechRpts/1975/9602.html
Navabi, Z. (1997). VHDL: Analysis and Modeling of Digital Systems, McGraw-
Hill, Inc., New York, NY, USA, 2nd edn., ISBN 0070464790.
Norström, C., Wall, A. and Yi, W. (1999). Timed Automata as Task Models
for Event-Driven Systems, in Proceedings of RTCSA'99, IEEE Computer
Society, Hong Kong.
Ostroff, J. S. (1992). Formal methods for the specification and design of real-time
safety critical systems, J. Syst. Softw., vol. 18, pp. 33–60.
URL: http://portal.acm.org/citation.cfm?id=145180.145184
Pappas, G. J. (2003). Bisimilar linear systems, Automatica, vol. 39, no. 12, pp.
2035–2047.
URL: http://www.sciencedirect.com/science/article/pii/S0005109803002553
Platzer, A. (2008). Differential Dynamic Logic for Hybrid Systems, J. Autom.
Reason., vol. 41, pp. 143–189.
URL: http://portal.acm.org/citation.cfm?id=1409455.1409462
Platzer, A. and Clarke, E. M. (2007). The Image Computation Problem in Hy-
brid Systems Model Checking., in Bemporad, A., Bicchi, A. and Buttazzo,
G. (Eds.), Hybrid Systems: Computation and Control, 10th International
Conference, HSCC 2007, Pisa, Italy, Proceedings, vol. 4416 of LNCS, pp.
473–486, Springer-Verlag, http://dx.doi.org/10.1007/978-3-540-71493-4_37.
URL: http://symbolaris.com/pub/happroximation.pdf
Platzer, A. and Quesel, J.-D. (2009). European Train Control System: A Case
Study in Formal Verification, in Proceedings of the 11th International Con-
ference on Formal Engineering Methods: Formal Methods and Software Engi-
neering, ICFEM ’09, pp. 246–265, Springer-Verlag, Berlin, Heidelberg, ISBN
978-3-642-10372-8.
Plotkin, G. D. (1976). A Powerdomain Construction, SIAM Journal on Comput-
ing, vol. 5, no. 3, pp. 452–487.
Pnueli, A. (1977). The temporal logic of programs, in Proceedings of the 18th
Annual Symposium on Foundations of Computer Science, pp. 46–57, IEEE
Computer Society, Washington, DC, USA.
URL: http://portal.acm.org/citation.cfm?id=1382431.1382534
Radhakrishnan, R., Teica, E. and Vemuri, R. (2000). An approach to high-level
synthesis system validation using formally verified transformations, in Pro-
ceedings of the IEEE International High-Level Validation and Test Workshop
(HLDVT’00), HLDVT ’00, pp. 80–, IEEE Computer Society, Washington, DC,
USA, ISBN 0-7695-0786-7.
URL: http://portal.acm.org/citation.cfm?id=518914.822280
Rahmouni, M. and Jerraya, A. A. (1995). Formulation and evaluation of schedul-
ing techniques for control flow graphs, in EURO-DAC, pp. 386–391, IEEE
Computer Society, ISBN 0-8186-7156-4.
URL: http://doi.acm.org/10.1145/224270.224352
Raudvere, T., Sander, I. and Jantsch, A. (2008). Application and Verification
of Local Nonsemantic-Preserving Transformations in System Design, IEEE
Trans. on CAD of Integrated Circuits and Systems, vol. 27, no. 6, pp. 1091–
1103.
URL: http://dx.doi.org/10.1109/TCAD.2008.923249
Rönkkö, M., Ravn, A. P. and Sere, K. (2003). Hybrid action systems,
Theor. Comput. Sci., vol. 290, no. 1, pp. 937–973.
Rounds, W. C. and Song, H. (2003). The Phi-Calculus: A Language for Dis-
tributed Control of Reconfigurable Embedded Systems, in Maler, O. and
Pnueli, A. (Eds.), HSCC, vol. 2623 of Lecture Notes in Computer Science,
pp. 435–449, Springer, ISBN 3-540-00913-2.
URL: http://link.springer.de/link/service/series/0558/bibs/2623/26230435.htm
Rutten, J. J. (1996). Universal coalgebra: a theory of systems, Tech. rep.,
Amsterdam, The Netherlands.
Sander, I. and Jantsch, A. (2004). System Modeling and Design Refinement in
ForSyDe, IEEE Trans. on Computer-Aided Design of Integrated Circuits and
Systems, vol. 23, no. 1, pp. 17–32.
Seceleanu, T. (2000). Systematic design of synchronous digital circuits, Ph.D.
thesis, University of Turku.
Singh, S. (2003). System Level Specification in Lava, in DATE, pp. 10370–10375,
IEEE Computer Society, ISBN 0-7695-1870-2.
URL: http://csdl.computer.org/comp/proceedings/date/2003/1870/01/187010370abs.htm
Sriram, S. and Bhattacharya, S. (2000). Embedded Multiprocessors : Scheduling
and Synchronization, Marcel Dekker, Inc., New York, USA.
Tabuada, P. and Pappas, G. J. (2004). Bisimilar control affine systems, Systems
& Control Letters, vol. 52, no. 1, pp. 49–58.
URL: http://www.sciencedirect.com/science/article/pii/S0167691103002962
Thomas, D. E. and Moorby, P. R. (1991). The VERILOG Hardware De-
scription Language, Kluwer Academic Publishers, Norwell, MA, USA, ISBN
0792391268.
Tiwari, A. (2003). Approximate Reachability for Linear Systems, in Maler, O.
and Pnueli, A. (Eds.), HSCC, vol. 2623 of Lecture Notes in Computer Science,
pp. 514–525, Springer, ISBN 3-540-00913-2.
URL: http://link.springer.de/link/service/series/0558/bibs/2623/26230514.htm
Viennot, G. (1986). Heaps of Pieces, I: Basic definitions and combinatorial lem-
mas, in Labelle and Leroux (Eds.), Combinatoire énumérative, Lect. Notes in
Math., vol. 1234, pp. 321–350.
Wang, F. (2004). Formal verification of timed systems: a survey and perspective,
Proceedings of the IEEE, vol. 92, no. 8, pp. 1283–1305.