
AN END-TO-END SYSTEM FOR MODEL CHECKING

OVER CONTEXT-SENSITIVE ANALYSES

A Dissertation

Presented to the Faculty of the Graduate School

of Cornell University

in Partial Fulfillment of the Requirements for the Degree of

Doctor of Philosophy

by

James Randall Ezick

August 2004


© James Randall Ezick 2004

ALL RIGHTS RESERVED


AN END-TO-END SYSTEM FOR MODEL CHECKING OVER

CONTEXT-SENSITIVE ANALYSES

James Randall Ezick, Ph.D.

Cornell University 2004

Model checking based on validating temporal logic formulas has proven practical

and effective for numerous software engineering applications, including program veri-

fication and code comprehension. A context-sensitive analysis is an analysis in which

program elements are assigned sets of properties that depend upon the context in which

they occur. For analyses on imperative languages, this often refers to considering the

behavior of elements in a called procedure with respect to the call-stack that generated

the procedure invocation. Algorithms for performing or approximating these types of

analyses make up the core of interprocedural program analysis and are pervasive, having

applications in program comprehension, optimization, and policy enforcement.

This dissertation introduces Carnauba, a practical and effective end-to-end system

that extends model checking of temporal logic formulas to information generated by

context-sensitive analyses. The most novel features of Carnauba include:

• An optimizing compiler, based on the design of a traditional programming lan-

guage compiler, that is capable of eliminating redundancy and other unnecessary


computation from large batches of potentially diverse temporal logic queries.

• A model checking engine that operates by treating context-sensitive analyses as

abstract objects that can be unified, projected, and reduced.

• A post-query component that takes refinement constraints on the solution and

admissible stack-contexts and returns a concise representation of the valid stack-

contexts meeting those constraints.

Carnauba operates over aspects of ANSI C programs abstracted or analyzed as un-

restricted hierarchical state machines. In contrast to many other systems, Carnauba

is capable of checking formulas ranging over the full modal mu-calculus. Further, Car-

nauba was developed as an end-to-end solution to the model checking problem, bringing

together solutions to a number of disparate model checking challenges under a single,

unified framework.

This dissertation presents the design objectives of Carnauba, the theory underlying

its implementation, and a discussion of practical experience derived from putting it to

use. A variety of applications are discussed and empirical data generated from these

applications is presented.


BIOGRAPHICAL SKETCH

James Randall Ezick was born in Clifton Park, New York on October 1, 1975. His first

five years of formal education were spent under the auspices of St. Marie’s Catholic

School in Cohoes, New York. He first experienced the joy of programming at the age

of 8, spending his after-school hours entering listings from Run Magazine into his fam-

ily’s Commodore 64. In June 1993 he graduated with honors from Shenendehowa High

School in Clifton Park, New York.

He attended the State University of New York at Buffalo, graduating summa cum

laude with a B.S. in computer science and applied mathematics in May 1997. He was

the Department of Computer Science Outstanding Senior for the Class of 1997. In the

fall of that year he entered graduate school as a student in the Department of Computer

Science at Cornell University, from which he received an M.S. in May 2000 and a Ph.D. in

August 2004.

He was wed to Amanda Holland-Minkley, whom he met and fell in love with while at

Cornell, in June 2004. They plan to have children and encourage them to be part of the

next generation of Cornell computer scientists.


To my parents


ACKNOWLEDGEMENTS

I am deeply indebted to my advisor, Keshav Pingali. It has been a pleasure and a priv-

ilege working with him. His consistently steady leadership, pursuit of simplicity in

research, and emphasis on striving for a deep understanding of a problem have provided

me with many valuable lessons.

I would like to thank GrammaTech, Inc. for supporting my initial ventures into

model checking and for generously making their software available throughout my grad-

uate work. Special thanks to Tim Teitelbaum for always asking challenging questions, to

David Melski and Paul Anderson for many valuable conversations, and to Radu Gruian

for his excellent prototype of Carnauba’s graphical user interface.

Additional thanks to Dexter Kozen for serving as my first advisor, and to Andrew

Myers and Radu Rugina for providing insightful comments and suggestions that helped

greatly in steering this research and preparing this dissertation.

I gratefully acknowledge Bill Locke of Bettis Laboratory who taught me that the

compiler is the least important reader of my code.

Jon Kleinberg and Lillian Lee have been tremendous role models, both profession-

ally and socially. As friends, I have enjoyed our many wonderful evenings of conversa-

tion, food, sporting events, and movies.


Thanks to Becky Stewart and Stephanie Meik for always being around to listen and

for letting me waste countless hours of their time chatting in the graduate office.

My fondest appreciation goes to our department ice hockey group, the MegaHurtz.

They introduced me to a sport I have come to have a deep passion for playing. In my

time at Cornell, nothing has given me more joy than the late nights I spent at Lynah Rink

stopping their shots.

As a student I have always believed that each new academic year is the continuation

of a story built on the foundation of all of the lessons that have come before it. I have

been extremely fortunate to have had many outstanding educators contribute to my story.

I would like to thank Carol Leighton who trained me not to write filler and Charles

Vaccaro who trained me how to write content. Together they taught me 98% of what

I know about writing. The rest I learned from listening to John Facenda narrate NFL

Films productions.

I never had a more influential teacher than Phyllis Yudikaitis. When I was a student in her

ninth grade math class, she inspired in me a genuine love of the study of mathematics

that persists to this day. Her high standards and demand for precision and excellence

taught me to take pride in my work and constantly challenged me to do better. It was in

her class that I first considered the idea of working in math and science professionally.

Much of the work contained in this dissertation is a direct extension of the lessons on

propositional logic that I learned in her class.

A collective thanks to the University Honors Program, Department of Mathemat-

ics, and Department of Computer Science at SUNY Buffalo. I received an outstanding

education at UB and was always made to feel welcome by the faculty and staff.


Thanks to Adam Fass, the friend from college that I kept. Through innumerable

phone conversations he was always able to remind me that graduate school is hard for

everyone.

Thanks to Martha Coolidge for “Real Genius” which made being a serious student

look cool and to James Cameron for “Aliens” which I watched religiously as my reward

for finishing each semester.

I especially want to thank my parents, Randall and Carol Ezick. From my earliest

years they ingrained in me the importance of a good education. They made many, many

sacrifices to see that I got one. This dissertation is dedicated to them. Thanks Mom &

Dad.

Words are not adequate to express my gratitude to my wife, Amanda Holland-

Minkley. Amanda was one of the first people I met at Cornell; she conducted the vis-

iting student tour on Good Friday, 1997. She has been a constant source of strength

and guidance for me throughout this writing process and through much of my graduate

experience. She believed in me when things were toughest and that made it easier to

keep going.


TABLE OF CONTENTS

1 Preliminaries
  1.1 Unrestricted Hierarchical State Machines
      1.1.1 Basic Structures
      1.1.2 Stack-Contexts
      1.1.3 An Example Machine
  1.2 Context-Sensitive Analyses
      1.2.1 Analysis Abstraction
      1.2.2 An Example Analysis
      1.2.3 Demand-Driven Analyses
      1.2.4 Context-Sensitive Atomic Propositions
      1.2.5 Reduced Context-Sensitive Analyses
  1.3 The Modal Mu-Calculus
      1.3.1 Syntax
      1.3.2 Semantics
      1.3.3 An Example Model Checking Query
  1.4 Implementation Platform
  1.5 Other Approaches

2 Optimizing Compiler
  2.1 The Compiler Framework
      2.1.1 Front End
      2.1.2 Intermediate Language
      2.1.3 Symbol Table
      2.1.4 Back End
  2.2 The Optimizer
      2.2.1 Operating Modes
      2.2.2 Optimization Routines
      2.2.3 Dynamic Scheduler
      2.2.4 A Comprehensive Example
  2.3 Experimental Results
  2.4 Conclusions and Future Work

3 Resolving Queries
  3.1 Operating on Context-Sensitive Analyses
      3.1.1 Unifying Context-Sensitive Analyses
      3.1.2 Projecting Context-Sensitive Analyses
      3.1.3 Live Context Analysis
      3.1.4 Reducing Context-Sensitive Analyses
  3.2 Generating the Environment
      3.2.1 Processing a Strongly Connected Component
      3.2.2 Resolving an Example Query
      3.2.3 Implementation Considerations
  3.3 Reducing the Model

4 Post-Query Analysis
  4.1 Example Generator
      4.1.1 Environment Dependence Graphs
      4.1.2 A Complete Example
      4.1.3 A Second Complete Example
      4.1.4 Generating Examples in Practice
      4.1.5 Semantic Path Analysis
  4.2 Constraint Queries
      4.2.1 Valid Stack-Contexts
      4.2.2 Stack-Context Constraints
      4.2.3 Solution Constraints
      4.2.4 Resolving a Constraint Query
      4.2.5 Optimizing Constraint Queries
      4.2.6 Application to Code Comprehension
      4.2.7 Conclusions and Future Work

5 Additional Applications
  5.1 Application-Level Checkpointing
      5.1.1 Analysis
      5.1.2 Optimizations and Code Transformations
      5.1.3 An Example
      5.1.4 Experimental Results
      5.1.5 Experience with C3
      5.1.6 Conclusions and Future Work
  5.2 Enforcing Policies
      5.2.1 Format String Vulnerabilities
      5.2.2 Policy Enforcement
      5.2.3 An Example
      5.2.4 Experimental Results
      5.2.5 Related and Future Work

6 Conclusions
  6.1 Using the Modal Mu-Calculus
  6.2 Models with Context-Sensitive Labeling
  6.3 Directions for Future Work
      6.3.1 Code Obfuscation
      6.3.2 Predicate Determination

BIBLIOGRAPHY


LIST OF FIGURES

1.1  Sample Program example1
1.2  Procedure Graphs for example1
1.3  Kripke Structure Expansion of Unrestricted Hierarchical State Machine for example1
1.4  Equations for Generating a Context-Sensitive Live Variable Analysis
1.5  Least Fixed Point of Live Variable Data-Flow Equations for example1
1.6  Solution to Live Variable Equations over Expansion of example1
1.7  A Context-Sensitive Analysis over example1 for Computing the Liveness of Variables G and H
1.8  Labeling on UHSM Representation of example1 Induced by Context-Sensitive Atomic Proposition D
1.9  A Reduced Context-Sensitive Analysis over example1 for Property D
1.10 Solution to Modal Mu-Calculus Formula φ = µX.[(R′ ∧ ◇X) ∨ µY.(D ∨ □Y)] over the Expansion of example1

2.1  Programming Language and Temporal Logic Compiler Analogy
2.2  Translation Function T from CTL to Lµ
2.3  Algorithm for Resolving an Equation Block System, BVar, against a Kripke Structure, K = (S, R, L)
2.4  Procedures for Translating an Lµ Formula to Equation List Form
2.5  Example Translation Sequence from CTL to Equation Block Form
2.6  Initial Symbol Table Record for Variable z3 from the Translation Sequence Example
2.7  Example Translation to Remove a Cyclic Unguarded Dependence
2.8  Temporal Logic and Programming Language Optimizations Analogy
2.9  Sequence of Optimization Passes on a Two-Query Batch Example
2.10 Evaluation Tables for Equation Forms in Generic Solve
2.11 Tests and Reductions for Simplify Redundant Junctions
2.12 Example of Perform Peephole Substitutions
2.13 Interdependencies between Optimization Passes
2.14 Query Templates Used to Generate Test Batches
2.15 Performance of Optimizing Compiler in Model-Independent Mode
2.16 Running Time Analysis of Optimizations in Model-Independent Mode
2.17 Performance of Optimizing Compiler in Model-Dependent Mode
2.18 Running Time Analysis of Optimizations in Model-Dependent Mode
2.19 Optimized Atomic Propositions in Model-Dependent Mode

3.1  Sample Program example1
3.2  Context-Sensitive Analyses AG and AH over example1 for Computing the Liveness of G and H Separately
3.3  Context-Sensitive Analysis AGH over example1 for Computing the Liveness of G and H Together
3.4  Context-Sensitive Analysis AD over example1 for Computing the Projected Property D
3.5  Equations for Performing Live Context Analysis
3.6  Reduced Context-Sensitive Analysis A′D over example1 for Computing Property D
3.7  Optimized Equation Block Form Partitioned into Strongly Connected Components for the Modal Mu-Calculus Formula φ = µX.[(R′ ∧ ◇X) ∨ µY.(D ∨ □Y)]
3.8  Context-Sensitive Analyses Encoding the Solution to SCC 1 and SCC 2
3.9  Context-Sensitive Analysis Encoding the Solution to SCC 3
3.10 Context-Sensitive Analysis Encoding the Solution to SCC 4
3.11 Reduced Context-Sensitive Analysis Encoding the Solution to SCC 4
3.12 Performance of the Query Resolution System after Model-Dependent Optimizations
3.13 Optimized Equation Block Form Partitioned into Strongly Connected Components for the Modal Mu-Calculus Formula φ = µX.(p ∨ ◇◇X)
3.14 UHSM for the Sample Model Reduction Query φ
3.15 Context-Sensitive Analysis Encoding the Solution to SCC 1 of the Sample Model Reduction Query φ

4.1  Sample Program example2 for the Example Generator
4.2  Equation Block Form of Sample Query φ = µX.(Def(G[1]) ∨ ◇X)
4.3  Environment for Sample Query φ over example2 Cast as a Context-Sensitive Analysis
4.4  Fragment of the Environment Dependence Graph for Sample Query φ over example2 Reachable from ((ε, v6), x0)
4.5  Equation Block Form of Sample Query ψ = νX.µY.◇[(Def(G[1]) ∧ X) ∨ Y]
4.6  Environment for Sample Query ψ over example2 Cast as a Context-Sensitive Analysis
4.7  Fragment of the Environment Dependence Graph for Sample Query ψ over example2 Reachable from ((ε, v6), z0)
4.8  Performance of the Example Generation Heuristic
4.9  Sample C Program postfix.c for the Semantic Path Analyzer
4.10 Equation Block Form of the CTL Query E[¬SP+ U SP−]
4.11 Sample Program mutual Exhibiting Mutual Recursion
4.12 A Context-Sensitive Analysis over mutual where p = EF v11
4.13 Automaton Accepting Valid Stack-Context Constraint Language Lv
4.14 Automaton Accepting Solution Constraint Language L∆
4.15 Minimal Automaton Accepting L = Lv ∩ Lc ∩ L∆, the Language of Solutions to the Constraint Query
4.16 Examples of Constraint Queries for Program Comprehension

5.1  Equations for Generating a Context-Sensitive Live Variable Analysis
5.2  Sample Program ckpnt-example with Context-Sensitive Checkpoint
5.3  Context-Sensitive Live Variable Analysis over ckpnt-example
5.4  Minimal Automaton Accepting Contexts where H can be Excluded
5.5  Transformed Sample Program ckpnt-example-trans Modified to Include the Result of the Constraint Query
5.6  Running Times for Application-Level Checkpointing Analyses
5.7  Variable Partitioning and Tier 3 Variable Constraint Query Averages Derived from Checkpointing Analysis
5.8  Static Memory State Size Reduction Resulting from Variations on Live Variable Analysis
5.9  Program Execution Time by Type of Analysis
5.10 Liveness Result for Non-trivial Static Allocations of xmppi
5.11 Average Checkpoint Execution Time as a Function of Call-Stack Depth
5.12 Equations for Generating a Context-Sensitive Incremental Analysis
5.13 Context-Sensitive Format String Vulnerability Analysis Equations
5.14 Sample C Program fsv-example.c Exhibiting a FSV
5.15 Context-Sensitive FSV Analysis over fsv-example.c
5.16 Minimal Automaton Accepting Contexts where Vulnerability Exists at v1 in fsv-example.c
5.17 Transformed Sample C Program fsv-example-trans.c Modified to Include the Result of the Constraint Query
5.18 Format String Vulnerability Safety Policy Benchmarks


Chapter 1

Preliminaries

The origin of computer science as an independent field of theoretical inquiry can be

traced to the discovery, in 1936, that there exist broad and interesting classes of questions

about the behavior of programs that cannot be answered by any program [Tur36]. De-

spite this, as programs become increasingly complex, there is a growing and inescapable

need to be able to ask questions and receive the best possible answers about the behav-

ior of a program. Model checking attempts to provide a way around this dilemma by

providing a core set of techniques for validating a formal specification of a property

against an abstraction of a system. This approach has proven practical and effective

for the automatic verification of complex hardware systems [CES83]. Applied to soft-

ware, for the specification to be decidable it is essential that the model abstract away a

sufficient amount of complexity, as even the simplest logics can encode questions that

have long been known to be otherwise undecidable. These abstractions generally come

from encapsulating the behavior of a program into a collection of states that transition

from one to another, coupled with a set of predicates that can distinguish one type of

state from another. The best abstractions are those which naturally capture the inherent

structure of the system. Model checking can then be seen as the process of reasoning

about the set of possible patterns of transitions between the distinguishable state types.

When the behavior of a program can be mapped to a set of states for which a robust

set of predicates exists, model checking provides a way to ask and answer sophisticated

questions with practical applications.

Context-sensitive analyses are program analyses in which the elements of similar

program abstractions are interpreted with respect to the context in which they occur. For

imperative programs, the most common instances of these analyses are ones in which

the effect of elements of a called procedure are considered in the context of their calling

stack. Numerous program analyses rely on these techniques to aggregate information

over the precise set of paths that the execution of a program may take. The end result is

a mapping of information that associates with each statement, for each possible context,

a set of facts that hold at that statement in that context. It is not atypical for a single

analysis to span hundreds or thousands of such facts. Frequently, these analyses are

highly specialized, using knowledge specific to a problem domain to controllably trade

off precision for tractability. Algorithms for performing or approximating these types of

analyses make up the core of interprocedural program analysis and are pervasive, having

applications in program comprehension, optimization, and policy enforcement.

Taken separately, both model checking and program analysis provide a means of

answering questions about the behavior of a program. Context-sensitive analyses oper-

ate by determining which subset of a set of properties hold at some state of an explicit

program abstraction. This set is usually defined by a set of equations that relate the set


of properties that hold at one state to the set of properties that hold at the immediate

predecessors and successors of that state. Model checking, in contrast, seeks to deter-

mine whether a formula, cast in some formal logic, is valid at some state. In global

checking systems, the process for computing the validity of the formula returns the

complete set of states in the abstraction where the formula is valid. In many cases, this

formula can ask a complex question about the possible sequences of events that can take

place arbitrarily many states in the future. The connection between these techniques

has been studied and a number of problems have been found to exist in their intersec-

tion [Pod00, SS98, Ste91].

To date, the primary application of model checking has been the autonomous verifi-

cation of the correctness of programs. However, this is not the only application to which

it is well suited. Indeed, model checking provides an ideal basis for an interactive tool

in which questions are cast into formal logic, answered by validation, and then reported

through an example generation system that explains the result. Experience among soft-

ware engineers in the field suggests that there exists a real demand [DN90] for a tool to

address the need for programmers to digest the complexity of modern software systems.

This dissertation introduces Carnauba, an end-to-end system for model checking

over context-sensitive analyses for ANSI C, a language that has raised incomprehensi-

ble complexity to an art form [IOC]. Rather than view model checking and program

analysis as competing approaches in which problems in one domain can sometimes be

cast into the other, the goal of Carnauba is to treat these as complementary approaches

in which information aggregated by program analysis serves as the source of predicates

to a model checking system. In this way, the specialized information generated by cus-


tom analyses can be leveraged by a model checking system capable of asking general-

purpose questions framed in a formal logic. Carnauba is novel in this respect: it defines

an abstract notion of a context-sensitive analysis robust enough to include many “off-

the-shelf” analyses and then provides an extension of the model checking capability to

those analyses.

In keeping with the objective of providing an interactive tool, the subsystems of Car-

nauba have been designed to take maximal advantage of a programmer’s knowledge. In

Carnauba this knowledge is incorporated before, during, and after the model check-

ing process. The system is referred to as “end-to-end” since it not only incorporates

a unique subsystem to handle batches of queries with sophistication before the model

checking process begins, but it also contains multiple types of post-query analysis to

further refine the output. Some of these post-query techniques are novel and capitalize

on Carnauba’s ability to unify the output of the model checking process with the abstrac-

tion for general context-sensitive analyses. Carnauba is intended as a complete solution

that addresses the complex engineering challenge of integrating approaches to disparate

hard problems into a single unified framework. The pay-off of this approach is a sys-

tem that allows model checking techniques to be applied to problems for which model

checking solutions have not previously been considered. Examples of these problems

are provided in the later chapters of this dissertation.

This dissertation is organized as six chapters. Chapter 1 introduces the preliminary

concepts on which this work is based. This includes a formal description of the mod-

els, analyses, and formulas over which the Carnauba model checking system operates as

well as a detailed description of the implementation platform for Carnauba. This chapter


concludes with a description of other approaches to the model checking problem. Chap-

ter 2 presents an optimizing compiler for reducing the complexity of batches of temporal

logic formulas over a common model. Applying the same framework common to tra-

ditional programming language compilers, the core of this compiler is a collection of

eight independent optimization routines that exploit redundancy, logical implications,

and in some cases, model-dependent properties to reduce the load placed on the sub-

sequent query resolution system. Empirical data is provided that favorably compares

the performance of the compiler with the optimal strategy for single query optimization.

The technique for resolving modal mu-calculus queries over context-sensitive analyses

is described in Chapter 3. This chapter presents the basic algorithm for resolving mu-

calculus formulas over hierarchical models, extended to context-sensitive predicates, in

terms of manipulating abstract analyses and then demonstrates a general paradigm for

reducing the complexity of the model over which the formulas are resolved. Chapter 4

illustrates two techniques for distilling the output of the model checker to provide more

detailed information about the behavior of a program. Section 4.1 demonstrates how the

global information compiled by the query resolution system can be used to explore the

example space of any individual query. Additionally, a practical heuristic is provided

for generating meaningful source-level examples. Section 4.2 follows by introducing

the notion of constraint queries. Constraint queries are a novel means of solving the

dual problem for context-sensitive analyses — given a program location and a desired

solution, what is the set of contexts for that location that map to the desired solution?

In this section, constraint queries are used to solve meaningful program comprehension

queries. Empirical data is provided from applying both techniques. Chapter 5 outlines


how constraint queries can be applied to address two additional emerging software en-

gineering challenges. First, constraint queries are generated to detect opportunities to

reduce the size of the state saved while performing application-level checkpointing. Sec-

ond, a technique is described for incorporating the output of a set of constraint queries

into compiled code to enforce a policy decidable by a context-sensitive analysis. An

application of this technique to the circumvention of potential format string vulnerabil-

ities in C is discussed. Again, empirical data is provided for both applications. Finally,

Chapter 6 summarizes the contributions of this dissertation, discusses observations from

the development process, and suggests directions for future research.

1.1 Unrestricted Hierarchical State Machines

Formulas are validated over properties of programs modeled as Kripke structures [CE81].

1.1.1 Basic Structures

Definition 1. An atomic proposition is a predicate that is either true or false at each

state in a Kripke structure. When applied to a Kripke structure an atomic proposition is

interpreted as the set of states where it is true.

Definition 2. A Kripke structure over a set of atomic propositions, AP, is an ordered

triple (S, R, L) where S is a (possibly infinite) set of states, R ⊆ S × S is a transition

relation, and L : S → 2^AP is a labeling function that associates with each state in the

structure the set of atomic propositions that are true in that state.
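As a concrete reading of Definitions 1 and 2, the sketch below renders a (finite) Kripke structure as a small data type and realizes the convention that an atomic proposition is identified with the set of states at which it is true. The Python rendering, the class name Kripke, and the method interpret are illustrative choices made here; they are not part of the system described in this dissertation.

class Kripke:
    """A finite Kripke structure (S, R, L) over a set of atomic propositions."""
    def __init__(self, states, transitions, labeling):
        self.states = set(states)            # S
        self.transitions = set(transitions)  # R, a set of (s, s') pairs
        self.labeling = dict(labeling)       # L : S -> set of atomic propositions

    def interpret(self, p):
        # An atomic proposition is interpreted as the set of states where it is true.
        return {s for s in self.states if p in self.labeling.get(s, set())}

# A two-state structure in which proposition "G" holds only at state v0.
K = Kripke({"v0", "v1"}, {("v0", "v1"), ("v1", "v1")}, {"v0": {"G"}})
assert K.interpret("G") == {"v0"}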

The difficulty in using Kripke structures to model the behavior of aspects of imper-


ative programs arises from the fact that, while the program is finitely represented as an

interdependent set of procedures, a Kripke structure that explicitly models the behavior

of transitions from one statement to the next in a program can be infinite. For this reason,

program models are generated as unrestricted hierarchical state machines [BGR01].

These machines provide a finite representation for a class of potentially infinite

Kripke structures by introducing the notion of hierarchy on a set, {P0, . . . , Pn}, of la-

beled procedures.

Definition 3. A procedure, P, is an ordered four-tuple (V, P^entry, P^exit, R), where V is

a set of vertices, P^entry ∈ V is a designated entry-vertex, P^exit ∈ V is a designated

exit-vertex, and R ⊆ V × {λ, P0, . . . , Pn} × V is a transition relation, where each Pi

is a distinct procedure label.

The term procedure replaces the more general but less intuitive term component

structure sometimes used in the literature. A procedure in an unrestricted hierarchical

state machine is not required to exactly correspond to a procedure in a program.

Intuitively, the second component of the transition relation distinguishes between

intra-procedure and inter-procedure transitions. A transition of the form (v0, λ, v1) is

an intra-procedure transition. Vertex v1 is a successor of v0 and an element of the set

denoted succs(v0). The vertex v0 is a predecessor of v1 and an element of the set denoted

preds(v1). Given an inter-procedure transition of the form (v0, Pi, v1), v0 is referred to

as a call-vertex and the implied transition (v0, Pi^entry) as a call-edge. Likewise, v1 is

referred to as a return-vertex and the implied transition (Pi^exit, λ, v1) as a return-edge.

Given this transition in a procedure P, it is said that P calls Pi. The relation induced by

this set of labeled transitions on the set of procedures is referred to as the call graph of


the model.

It is assumed that in any collection of procedures, the vertex sets are pairwise dis-

joint. Further, it is assumed without loss of generality that each vertex is the source

of at most one call-edge. Thus, a call-edge can be uniquely identified by its source.

Throughout the remainder of this dissertation, call-edges are referred to by their unique

call-vertex source.

The inter-procedure transitions of a procedure can be resolved to produce expanded

procedure structures.

Definition 4. Given P, a collection of procedures closed under the calls relation, the

expansion of P with respect to P0 ∈ P, E(P0, P), is the procedure produced by taking

the closure of the process of substituting a fresh copy of Pi for each inter-procedure

transition of the form (v0, Pi, v1) and making explicit the transitions (v0, λ, Pi^entry) and

(Pi^exit, λ, v1), starting from P0. The entry- and exit-vertex of the initial instance of P0 is

the entry- and exit-vertex, respectively, of the resulting procedure.
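Definition 4 can be read as a recursive substitution. The sketch below is one illustrative rendering; the function name expand and the toy two-procedure input are mine, not the dissertation's. As written it terminates only when the calls relation is acyclic, which mirrors the observation above that the explicit expansion of a recursive program can be infinite.

def expand(procedures, name):
    """Expansion of Definition 4. `procedures` maps a procedure name to
    (vertices, entry, exit, transitions), where each transition is (v0, label, v1)
    and label is either "lambda" or the name of a called procedure. A fresh copy
    of a callee is created by prefixing its vertices with the (unique) call-vertex."""
    V, entry, exit_, R = procedures[name]
    vs, ts = set(V), set()
    for (v0, label, v1) in R:
        if label == "lambda":
            ts.add((v0, "lambda", v1))            # intra-procedure transitions are kept
        else:
            cV, cEntry, cExit, cR = expand(procedures, label)
            fresh = {u: (v0, u) for u in cV}      # fresh copy of the (expanded) callee
            vs.update(fresh.values())
            ts.update((fresh[a], lab, fresh[b]) for (a, lab, b) in cR)
            ts.add((v0, "lambda", fresh[cEntry])) # explicit call-edge
            ts.add((fresh[cExit], "lambda", v1))  # explicit return-edge
    return vs, entry, exit_, ts

# A toy collection: main calls R once; R is a single entry-to-exit transition.
procs = {
    "R":    ({"r_in", "r_out"}, "r_in", "r_out", {("r_in", "lambda", "r_out")}),
    "main": ({"m_in", "call", "ret", "m_out"}, "m_in", "m_out",
             {("m_in", "lambda", "call"), ("call", "R", "ret"), ("ret", "lambda", "m_out")}),
}
_, _, _, transitions = expand(procs, "main")
assert ("call", "lambda", ("call", "r_in")) in transitions
assert (("call", "r_out"), "lambda", "ret") in transitions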

The notion of an expansion is necessary for describing the labeling function of an

unrestricted hierarchical state machine.

Definition 5. An unrestricted hierarchical state machine (UHSM) over a set of atomic

propositions, AP, is an ordered triple, (P, P0, L), where P is a finite set of procedures

that are closed under the calls relation, P0 ∈ P is a designated main procedure that is

not called by any other procedure, and L : VE(P0,P) → 2^AP is a labeling function that

associates with each vertex of the expansion of P with respect to P0, E(P0, P), the set

of atomic propositions that are true at that vertex.


The term unrestricted in the definition refers to the fact that the procedures, with the

exception of the designated procedure main, are allowed to call one another in a cyclic

fashion. The above definition describes single-entry, single-exit machines since each

procedure is limited to having a unique entry- and exit-vertex.

Intuitively, unrestricted hierarchical state machines mirror the call and return struc-

ture of imperative programs. The designated procedure main corresponds to the initial

procedure executed by a program. By convention, the exit-vertex of this procedure con-

tains a self-transition, (P0^exit, λ, P0^exit). Also, for an UHSM, U, modeling the control

flow of a C program, for each procedure vertex, v, with no control flow successor there

is also a self-transition, (v, λ, v). The set of such vertices is referred to as Uhalt. These

self-transitions ensure that every path in the model has an infinite extension over the

supplied transition relation. This property is assumed by some temporal logics such as

CTL∗ and its sub-logics.

An unrestricted hierarchical state machine, U = (P, P0, L), is equivalent to the

Kripke structure produced by applying the labeling function, L, over the vertex set and

transition relation of the expansion E(P0, P). Every transition in this expansion is an

intra-procedure transition, and (v0, λ, v1) in the expansion corresponds to a transition

(v0, v1) in the Kripke structure. This Kripke structure is referred to as K(U). In this

representation, each vertex in a procedure can expand to multiple states in the corre-

sponding Kripke structure. This definition is a refinement of the original definition. In

the original definition, the labeling function only mapped vertices from the procedures

to sets of atomic propositions. That is, the labeling function could not distinguish be-

tween different sets of atomic propositions holding at distinct copies of a specific vertex.


This capability will be useful for expressing atomic propositions derived from context-

sensitive analyses. The relation between standard unrestricted hierarchical machines

and these extended machines is discussed in more detail in Section 6.2. To distinguish

between copies of a procedure vertex in the full Kripke structure expansion, the notion

of a stack-context is introduced.

1.1.2 Stack-Contexts

Given an UHSM, U = (P, P0, L), let V refer to the union of the disjoint vertex sets in

P and let Σ refer to the union of all of the call-edges in P.

Definition 6. A stack-context, σ, is a finite sequence of the call-edges comprising Σ. A

stack-context is a valid stack-context for v ∈ V if and only if σ = ε and v is a vertex in

P0, or σ = σ0 . . . σn, the source of σ0 is a call-vertex in P0, for each i : 0 ≤ i < n the

target of σi is the entry-vertex of the procedure containing the source of σi+1, and the

target of σn is the entry-vertex of the procedure containing the vertex v.

There is a one-to-one correspondence between valid stack-contexts for a vertex and

a copy of that vertex in the full Kripke structure expansion; a valid stack-context for

a vertex uniquely identifies a copy of a vertex in the full expansion, K(U). Like the

number of copies of a vertex produced by the full expansion, the number of valid stack-

contexts for a vertex can be infinite.

Definition 7. A state in an unrestricted hierarchical state machine is an ordered pair,

(σ, v), where v ∈ V and σ is a valid stack-context for v in the machine.

Given this definition, the labeling function for an unrestricted hierarchical state ma-

chine maps states to sets of atomic propositions that hold at the state. As with Kripke structures, an atomic proposition, p, labeled by an UHSM can be interpreted as the set of states, s, such that p ∈ L(s).

example1: Global Integer G, H

v0:  Procedure P()             v12: Procedure R()
v1:    while(...cond_1...)     v13:   H := G + 2;
v2:      G := G + 1;           v14:   print(G);
s3:      R();                  v15:
v4:      H := H + 1;
v5:                            v16: Procedure main()
                               v17:   G := 0;
v6:  Procedure Q()             v18:   H := 1;
v7:    H := 7;                 v19:   if(...cond_2...)
s8:    R();                    s20:     P();
s9:    P();                           else
v10:   G := 2;                 s21:     Q();
v11:                           v22:   print(G);
                               v23:

Figure 1.1: Sample Program example1

1.1.3 An Example Machine

Figure 1.1 introduces a sample imperative program, example1, with four procedures. Figure 1.2 illustrates how the control flow of each procedure can be modeled as a procedure. Notice that procedure main contains a self-transition, (v23, λ, v23), for the terminal statement of the program. The labeled statements correspond to vertices in the procedures, with the s-designated labels corresponding to the call-edges by their unique source. These edges comprise Σ. Taken together with the designated procedure main, an unrestricted hierarchical state machine U = ({main, P, Q, R}, main, L) is defined over a set of four atomic propositions, {G, H, G′, H′}. Intuitively, G and H hold at states where global variables G and H are used, respectively, and G′ and H′ hold at states where G and H are defined, respectively. Figure 1.3 illustrates the Kripke structure expansion of U, K(U), with each state labeled by its unique stack-context, vertex pair.

As an example, the expansion produces three copies of vertex v14 from procedure R. These copies translate into three distinct states that are distinguished by their stack-context, (s20 s3, v14), (s21 s8, v14), and (s21 s9 s3, v14). These are the only three valid stack-contexts for this vertex. The atomic proposition G holds at each copy. The absence of recursion in example1 allows K(U) to be finitely represented in this case.

[graph drawing not reproduced: one control-flow graph per procedure (P, Q, R, main), with λ-labeled intra-procedure transitions and inter-procedure transitions labeled by the called procedure]

Figure 1.2: Procedure Graphs for example1

[graph drawing not reproduced: the expanded Kripke structure, containing one copy of each procedure for each of the valid stack-contexts ε, s20, s21, s20 s3, s21 s8, s21 s9, and s21 s9 s3, with states labeled by the atomic propositions G, H, G′, H′]

Figure 1.3: Kripke Structure Expansion of Unrestricted Hierarchical State Machine U = ({main, P, Q, R}, main, L) over Atomic Propositions {G, H, G′, H′} for example1
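The validity condition of Definition 6 can be checked mechanically. The sketch below does so for the stack-contexts of v14 just discussed; the two dictionaries are an illustrative, partial encoding of example1 (only the entries needed for these checks are listed), and the function name is_valid is my own.

# Which procedure owns each vertex of interest, and which procedure each call-edge invokes.
proc_of_vertex = {"s3": "P", "s8": "Q", "s9": "Q", "s20": "main", "s21": "main",
                  "v14": "R", "v22": "main"}
callee_of_edge = {"s3": "R", "s8": "R", "s9": "P", "s20": "P", "s21": "Q"}

def is_valid(stack_context, v, main="main"):
    """stack_context is a tuple of call-edges, each named by its unique call-vertex source."""
    if not stack_context:
        return proc_of_vertex[v] == main                       # the σ = ε case
    if proc_of_vertex[stack_context[0]] != main:
        return False                                           # σ0 must originate in P0
    for edge, next_edge in zip(stack_context, stack_context[1:]):
        if callee_of_edge[edge] != proc_of_vertex[next_edge]:  # σi must call the procedure
            return False                                       # containing σi+1's source
    return callee_of_edge[stack_context[-1]] == proc_of_vertex[v]

assert is_valid(("s20", "s3"), "v14") and is_valid(("s21", "s8"), "v14")
assert is_valid(("s21", "s9", "s3"), "v14")       # the three valid stack-contexts for v14
assert not is_valid(("s20", "s8"), "v14")         # s8 is not a call-vertex of procedure P
assert is_valid((), "v22") and not is_valid((), "v14")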

1.2 Context-Sensitive Analyses

Context-sensitive analyses are analyses in which program elements are assigned sets of

properties that depend upon the context in which they occur. For imperative programs,

the most common instances of these analyses are ones in which the effect of statements

within a procedure are considered in the context of the procedure’s invoking call-stack,

its stack-context. One popular approach for generating analyses of this type is the so-

called second-order approach [SP81]. In this approach, transfer functions are derived

for each vertex and procedure that translate a set of data-flow facts that hold before the

relevant procedure is considered into a set of facts that hold after the vertex or procedure

has been considered. This technique allows a meet-over-all-paths solution [Kil73] to a

data-flow analysis problem to be found in such a way that only those paths that respect

the natural call and return structure of the program are considered. Given a vertex and

a stack-context, it is possible to use the transfer functions generated for the vertex and


stack elements to obtain the precise set of properties that hold at that vertex for that

stack-context. Analyses that fit this framework have numerous applications to program

comprehension, optimization, and policy enforcement.

1.2.1 Analysis Abstraction

Context-sensitive analyses are generally performed over interprocedural control-flow

graphs. These graphs can be regarded as instances of unrestricted hierarchical state

machines representing control flow over an empty set of atomic propositions. Thus, it

is straightforward to define a context-sensitive analysis over an unrestricted hierarchical

state machine, U, with vertex set V and call-edge set Σ.

Definition 8. A context-sensitive analysis over an unrestricted hierarchical state ma-

chine is an ordered quintuple (C, X, Γ, Φ, κ), where C is a finite set of contexts, X is

a set of properties, Γ : Σ → (γ : C → C) is a collection of context transform-

ers that associate to each element of Σ a function, γ, mapping a context to a context,

Φ : V → (φ : C → 2^X) is a collection of property transformers that associate to each

element of V a function, φ, mapping a context to a subset of the set of properties, and

κ ∈ C is the initial context.

Given a context-sensitive analysis, (C, X, Γ, Φ, κ), the analysis is said to span the

property set X. The notion of a context introduced by this definition is separate from the

previous notion of a stack-context. A context in the scope of a context-sensitive analysis

corresponds to a set of assumptions that hold prior to considering the effect of sequences

of statements in a procedure.

In essence, the set of contexts defines an equivalence relation over the set of copies

of each vertex in V. The property transformer associated with each vertex then takes

a context and returns the set of properties associated with that equivalence class. The

context transformers are used to translate one context to the next at each call transition

between the procedures. This makes it possible to translate a stack-context into a context

of the analysis.

Definition 9. Given a context-sensitive analysis, the stack-context transformer, Γ∗ :

Σ∗ → C, induced by the analysis maps a stack-context to a context of the analysis,

Γ∗(σ0 . . . σn) = [Γ(σn) ◦ . . . ◦ Γ(σ0)](κ).

The transformer is well-defined and hence trivially induces an equivalence relation

on the set of all possible stack-contexts for the UHSM over which the analysis is defined.

Thus, for each vertex in V, it induces an equivalence relation on the set of copies of that

vertex.

Definition 10. Given a context-sensitive analysis, v ∈ V, and σ, a stack-context for v,

the solution for vertex v in stack-context σ is the subset of properties, X,

ρ(σ, v) = [Φ(v)](Γ∗(σ)).
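Read operationally, Definitions 8 through 10 give a two-step evaluation: a stack-context is first folded through the context transformers (outermost call-edge first) down to a context of the analysis, and the property transformer of the vertex is then applied to that context. The sketch below illustrates the two steps on a deliberately tiny, made-up analysis; the class name and every value in it are hypothetical.

class ContextSensitiveAnalysis:
    """(C, X, Γ, Φ, κ) per Definition 8, with Γ and Φ given as Python dictionaries."""
    def __init__(self, contexts, properties, gamma, phi, initial):
        self.contexts = contexts      # C
        self.properties = properties  # X
        self.gamma = gamma            # Γ : call-edge -> (context -> context)
        self.phi = phi                # Φ : vertex -> (context -> subset of X)
        self.initial = initial        # κ

    def stack_context_transformer(self, sigma):
        # Γ*(σ0 . . . σn) = [Γ(σn) ◦ . . . ◦ Γ(σ0)](κ): Γ(σ0) is applied to κ first.
        context = self.initial
        for edge in sigma:
            context = self.gamma[edge](context)
        return context

    def solution(self, sigma, v):
        # ρ(σ, v) = [Φ(v)](Γ*(σ)) per Definition 10.
        return self.phi[v](self.stack_context_transformer(sigma))

# A made-up analysis: context c1 is only reachable through call-edge "s", and
# property "p" holds at vertex "v" exactly in that context.
A = ContextSensitiveAnalysis(
    contexts={"c0", "c1"}, properties={"p"},
    gamma={"s": lambda c: "c1"},
    phi={"v": lambda c: {"p"} if c == "c1" else set()},
    initial="c0")
assert A.solution(("s",), "v") == {"p"} and A.solution((), "v") == set()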

The transformer does not require that the stack-context be valid for any particular

vertex. The solution maps a valid stack-context for a particular vertex to the set of

properties that the analysis has determined hold at that vertex for a given stack-context.

The solution is found by first mapping the stack-context down to a context of the analysis

and then applying the property transformer associated with the vertex to that context.

Notice that if the solution function is restricted to the set of states, (σ, v), where σ is


a valid stack-context for v, the result is a non-trivial labeling function for the UHSM

where the properties are the atomic propositions. This idea is explored in greater detail

in Section 1.2.4.

The purpose of abstracting context-sensitive analyses in this way is twofold. First,

this definition separates the means of generating the analysis from the representation

of its output. In so doing, it provides a uniform representation for both forward- and

backward-flow analyses while also abstracting away many popular optimizations for

demand-driven analyses. Second, by separating the notion of contexts from the sets of

properties that hold at vertices it is easier to describe certain transformations that will

later be necessary on the analysis. That analyses can be abstracted in this way is a

simple but crucial insight that affords both improvements to the general model checking

algorithm, presented in Section 3.2, in the form of context collection, as well as a new

type of post-query analysis, the constraint query, presented in Section 4.2.

1.2.2 An Example Analysis

A live variable analysis associates with each point in a program the set of memory

locations (variables) that may be read before being overwritten in the continuation of

the program from before that point. That is, it determines the set of locations whose

value is necessary to the continuation of the program from each point.

The analysis requires knowledge of the set of variables that must be defined and may

be used at each program point. Given these sets, the live variable analysis is performed

by computing the least fixed point of the second-order distributive equations shown in

Figure 1.4 over an UHSM assuming the usual lattice of functions induced by the subset

relation.

Data-Flow Equations:

    φv  =  Fv                                  if v is P^exit for some procedure P
    φv  =  Fv ◦ φP^entry ◦ φv^ret              if v is a call to procedure P
    φv  =  Fv ◦ (⊔ v′∈succs(v) φv′)            otherwise

    where Fv = λX. Use(v) ∪ (X − Def(v))

Operations on Data-Transforming Functions:

    Initial function:     ⊥ = (∅, U), where U is the universal set of locations
    Identity function:    id = (∅, ∅)
    Data-flow functions:  Fv = (Use(v), Def(v))
    Application:          (G, K)(X) = G ∪ (X − K)
    Union confluence:     (G1, K1) ⊔ (G2, K2) = ((G1 ∪ G2), (K1 ∩ K2))
    Composition:          (G1, K1) ◦ (G2, K2) = (G1 ∪ (G2 − K1), K1 ∪ K2)
    Canonical form:       ⟨(G, K)⟩ = (G, K − G)

Figure 1.4: Equations for Generating a Context-Sensitive Live Variable Analysis

Figure 1.5 provides the least fixed point of these equations over example1.

Given a vertex, v, and a valid stack-context for v, σ = σ0 . . . σn, where each σi is a call-edge, the set of live variables for the statement associated with v in stack-context σ is

    LV(σ, v) = [φv ◦ φσn^ret ◦ . . . ◦ φσ0^ret](∅),

where σi^ret refers to the return-vertex corresponding to the call-vertex source of call-edge σi. For example, the solution associated with vertex v14 in stack-context s21 s8 is

    LV(s21 s8, v14) = [φv14 ◦ φs9 ◦ φv22](∅) = [({G}, ∅) ◦ ({G}, ∅) ◦ ({G}, ∅)](∅) = {G}.
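The (Use, Def) pairs of Figure 1.4 can be manipulated directly as (gen, kill) pairs. The sketch below encodes the application, composition, and union-confluence rules and re-derives the value LV(s21 s8, v14) = {G} computed above; the function names are mine, and the three φ values are those listed in Figure 1.5.

def apply(f, X):                     # (G, K)(X) = G ∪ (X − K)
    gen, kill = f
    return gen | (X - kill)

def compose(f1, f2):                 # (G1, K1) ◦ (G2, K2) = (G1 ∪ (G2 − K1), K1 ∪ K2)
    (g1, k1), (g2, k2) = f1, f2
    return (g1 | (g2 - k1), k1 | k2)

def join(f1, f2):                    # union confluence: ((G1 ∪ G2), (K1 ∩ K2))
    (g1, k1), (g2, k2) = f1, f2
    return (g1 | g2, k1 & k2)

# Least fixed point values from Figure 1.5 for the three functions involved.
phi_v14 = phi_s9 = phi_v22 = (frozenset({"G"}), frozenset())

assert apply(compose(phi_v14, compose(phi_s9, phi_v22)), set()) == {"G"}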

Figure 1.6 illustrates the solution for each state to the live variable analysis equations

over the expansion of example1 introduced in Figure 1.1.

     v    Fv           φv                   v    Fv           φv
    v0   (∅, ∅)       ({G}, ∅)             v12  (∅, ∅)       ({G}, {H})
    v1   (∅, ∅)       ({G}, ∅)             v13  ({G}, {H})   ({G}, {H})
    v2   ({G}, ∅)     ({G}, {H})           v14  ({G}, ∅)     ({G}, ∅)
    s3   (∅, ∅)       ({G}, {H})           v15  (∅, ∅)       (∅, ∅)
    v4   ({H}, ∅)     ({G, H}, ∅)          v16  (∅, ∅)       (∅, {G, H})
    v5   (∅, ∅)       (∅, ∅)               v17  (∅, {G})     (∅, {G, H})
    v6   (∅, ∅)       ({G}, {H})           v18  (∅, {H})     ({G}, {H})
    v7   (∅, {H})     ({G}, {H})           v19  (∅, ∅)       ({G}, ∅)
    s8   (∅, ∅)       ({G}, {H})           s20  (∅, ∅)       ({G}, ∅)
    s9   (∅, ∅)       ({G}, ∅)             s21  (∅, ∅)       ({G}, {H})
    v10  (∅, {G})     (∅, {G})             v22  ({G}, ∅)     ({G}, ∅)
    v11  (∅, ∅)       (∅, ∅)               v23  (∅, ∅)       (∅, ∅)

Figure 1.5: Least Fixed Point of Live Variable Data-Flow Equations for example1

[graph drawing not reproduced: the Kripke structure expansion of example1 from Figure 1.3 with each state annotated by its set of live variables]

Figure 1.6: Solution to Live Variable Equations over Expansion of example1

    Context Transformers:
       σ     [Γ(σ)](α0)   [Γ(σ)](α1)   [Γ(σ)](α2)   [Γ(σ)](α3)
      s3     α3           α3           α3           α3
      s8     α1           α1           α3           α3
      s9     α0           α0           α2           α2
      s20    α1           α1           α3           α3
      s21    α1           α1           α3           α3

    Property Transformers:
       v     [Φ(v)](α0)   [Φ(v)](α1)   [Φ(v)](α2)   [Φ(v)](α3)
      v0     G            G            G, H         G, H
      v1     G            G            G, H         G, H
      v2     G            G            G            G
      s3     G            G            G            G
      v4     G, H         G, H         G, H         G, H
      v5     ∅            G            H            G, H
      v6     G            G            G            G
      v7     G            G            G            G
      s8     G            G            G            G
      s9     G            G            G, H         G, H
      v10    ∅            ∅            H            H
      v11    ∅            G            H            G, H
      v12    G            G            G            G
      v13    G            G            G            G
      v14    G            G            G, H         G, H
      v15    ∅            G            H            G, H
      v16    ∅            ∅            ∅            ∅
      v17    ∅            ∅            ∅            ∅
      v18    G            G            G            G
      v19    G            G            G, H         G, H
      s20    G            G            G, H         G, H
      s21    G            G            G            G
      v22    G            G            G, H         G, H
      v23    ∅            G            H            G, H

The s-designated vertices also identify the elements of Σ by call-edge source.

Figure 1.7: A Context-Sensitive Analysis, ({α0, α1, α2, α3}, {G, H}, Γ, Φ, α0), over example1 for Computing the Liveness of Variables G and H

For analyses derived as the product of a second-order fixed point computation within a distributive framework, the translation to this abstraction is trivial. The contexts are

simply a renaming of the property sets that are needed as domains to the various property

transformers. Figure 1.7 presents the least fixed point solution to the data flow equations

as a context-sensitive analysis, ({α0, α1, α2, α3}, {G, H}, Γ, Φ, α0). In this case the re-

naming function, r, is as follows: ∅ ↦ α0, {G} ↦ α1, {H} ↦ α2, and {G, H} ↦ α3.

Given this renaming, [Φ(v)](r(x)) = φv(x) and [Γ(σ)](r(x)) = r(φσ^ret(x)). Had this been

a forward-flow analysis, then [Γ(σ)](r(x)) = r(φσ^call(x)), where σ^call would be the source

of call-edge σ ∈ Σ.

The properties spanned by this analysis correspond to the liveness of the individual

variables. Since this analysis was generated as a backward-flow analysis, the initial

context, α0, corresponds to the renaming of the empty set, the set of variables necessary

to the continuation from the end of the program.

Repeating the above example using this abstraction, the solution associated with


vertex v14 in stack-context s21 s8 is

    ρ(s21 s8, v14) = [Φ(v14) ◦ Γ(s8) ◦ Γ(s21)](α0) = {G}.

The solution associated with the vertex is dependent on the choice of valid stack-

context. The solution associated with vertex v14 in the alternate valid stack-context

s21 s9 s3 is

    ρ(s21 s9 s3, v14) = [Φ(v14) ◦ Γ(s3) ◦ Γ(s9) ◦ Γ(s21)](α0) = {G, H}.

It is in this sense that the analysis is referred to as context-sensitive.
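The same two solutions can be reproduced mechanically from the (Γ, Φ) abstraction. The sketch below hard-codes only the entries of Figure 1.7 that these two stack-contexts actually touch (transcribed here for illustration) and folds each stack-context through them, exactly as in Definitions 9 and 10.

# Fragment of Figure 1.7: Γ entries for the call-edges on the two stack-contexts,
# and the property transformer for v14.
gamma = {("s21", "α0"): "α1", ("s8", "α1"): "α1",
         ("s9", "α1"): "α0", ("s3", "α0"): "α3"}
phi_v14 = {"α1": {"G"}, "α3": {"G", "H"}}

def rho(sigma):
    context = "α0"                      # the initial context κ
    for edge in sigma:                  # Γ*(σ) applies Γ(σ0) to κ first
        context = gamma[(edge, context)]
    return phi_v14[context]             # [Φ(v14)](Γ*(σ))

assert rho(("s21", "s8")) == {"G"}
assert rho(("s21", "s9", "s3")) == {"G", "H"}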

1.2.3 Demand-Driven Analyses

While it is necessary that both Γ and Φ be total functions, it is not strictly necessary

that the context-sensitive analysis provide a total function for each element in their re-

spective ranges. Specifically, given σ ∈ Σ and α ∈ C, the analysis need only provide

a result for [Γ(σ)](α) if there is some valid stack-context, σ′σ, for the target of σ such

that Γ∗(σ′) = α. Likewise, given v ∈ V, Φ(v) need only be provided for [Φ(v)](α) if

there is some valid stack-context, σ, for v such that Γ∗(σ) = α. This is referred to as

the live context set, LC(v), for a vertex. A technique for efficiently computing these

sets is the subject of Section 3.1.3. The relaxation of the totality requirement allows this

abstraction to represent demand-driven analyses where transformers are only defined for

vertex, context pairs for which there exists some corresponding valid stack-context for

the vertex. However, throughout the remainder of this dissertation it is assumed that the

functions are total, with the assumptions that γ(α) = κ and φ(α) = ∅ if γ or φ is not

defined on α.


As an example of where this is useful, consider the context-sensitive pointer analysis

for C of Wilson and Lam [WL95]. In this analysis, properties are elements of the point-

to relation that may be assumed to hold at the entry to a function. Contexts in the analysis

are then the points-to relations that hold at the entry to some function. Because of the

size of the possible points-to space, it is infeasible to construct a property transformer

for each vertex that maps every possible points-to relation to the relation that holds at

that vertex under that assumed initial relation. The analysis only generates points-to sets

for a vertex for the specific possible points-to relations that can hold at the entry of the

function containing this vertex, the contexts in the analysis corresponding to some valid

stack-context. This significantly optimizes the generation of the analysis. By relaxing

the totality requirement, the abstraction of context-sensitive analyses presented in this

section can represent the output of analyses that rely on this sort of demand-driven

approach within the distributive framework.

1.2.4 Context-Sensitive Atomic Propositions

The labeling function associated with an unrestricted hierarchical state machine maps

states to sets of atomic propositions that hold at that state. The solution function induced

by a context-sensitive analysis can be seen as a labeling function that maps states to sets

of properties from the analysis that hold at that state. Sometimes, however, an analysis

spans more properties than are of interest to the model checking system. Other times,

what is of interest is not the set of properties that hold at each particular state, rather, it

is a property of the set of properties that hold at each state, a meta-property that can be

decided by the analysis. The notion of a context-sensitive atomic proposition addresses


these considerations.

Definition 11. A property, p, is decidable by a context-sensitive analysis spanning property set X if there exists a total function θp : 2^X → {true, false} that decides p.

Definition 12. Given a context-sensitive analysis (C, X, Γ, Φ, κ) over an unrestricted hierarchical state machine with vertex set V and a property, p, decidable by the analysis via θp, the context-sensitive atomic proposition, p, derived from the context-sensitive analysis is the solution induced by the new context-sensitive analysis (C, {p}, Γ, Φ′, κ), where [Φ′(v)](α) = θp([Φ(v)](α)), for all v ∈ V and α ∈ C.

A context-sensitive atomic proposition is the solution to a context-sensitive analysis that has been projected down to a single property decidable by the analysis. These single properties can then be regarded as atomic propositions that can be used to label an unrestricted hierarchical state machine. Notice that each property, p, spanned by the analysis is trivially a context-sensitive atomic proposition decided by

    θp(X) =  true,   if p ∈ X
             false,  otherwise.

Properties decidable by a context-sensitive analysis include not only the presence or

absence of specific properties in the solution associated with a state but can also include

aggregate properties that are derived from the presence or absence of multiple elements

of the property set associated with the state.

As an example, consider the live variable context-sensitive analysis presented in

Figure 1.7. That analysis spans two properties, G and H, corresponding to the liveness of the two variables G and H, respectively. A property, D, decidable by the analysis,


[Figure: the UHSM representation of example1, one copy of each procedure for the stack-contexts ε, s20, s21, s20s3, s21s8, s21s9, and s21s9s3, with the states at which D holds marked.]

Figure 1.8: Labeling on UHSM Representation of example1 Induced by Context-Sensitive Atomic Proposition D

could be defined by θD,

    θD(X) =  true,   if |X| ≥ 2
             false,  otherwise.

This is an example of an aggregate property decidable by the analysis. Figure 1.8

illustrates the labeling of the unrestricted hierarchical state machine for example1 that is induced by the context-sensitive atomic proposition, D, derived from the solution

function of the live variable analysis.
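The projection of Definition 12 is mechanical: every property transformer is post-composed with the deciding function θ. The following Python sketch is illustrative only; the dictionaries stand in for Φ and the transformer fragment is hypothetical. It derives the aggregate proposition D from a liveness-style analysis.

    # Illustrative sketch of Definition 12: [Phi'(v)](alpha) = theta([Phi(v)](alpha)).

    def theta_D(props):
        """Aggregate property D: at least two variables are live."""
        return len(props) >= 2

    def derive_proposition(phi, theta):
        """phi maps each vertex to {context: property set}; returns Phi'."""
        return {v: {alpha: theta(props) for alpha, props in by_ctx.items()}
                for v, by_ctx in phi.items()}

    # Hypothetical fragment of Phi for two vertices of example1.
    PHI = {"v14": {"a1": {"G"}, "a3": {"G", "H"}},
           "v2":  {"a1": set(), "a3": {"H"}}}

    PHI_D = derive_proposition(PHI, theta_D)
    # PHI_D["v14"] == {"a1": False, "a3": True}: D labels v14 only in those
    # stack-contexts whose accumulated context is a3.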

Recall that an atomic proposition can be regarded as a set of states in an unrestricted

hierarchical state machine. For atomic propositions derived from a context-sensitive


analysis, an alternate means of representing the proposition is as a set of (context, vertex) pairs, (α, v), where α is a context of the analysis and v is a vertex from the machine. In this representation, the context, α, stands for the equivalence class of stack-contexts, σ, such that Γ*(σ) = α. This representation provides a way to finitely represent any

context-sensitive atomic proposition explicitly.

Note that not every labeling function for an unrestricted hierarchical state machine

can be represented as a set of context-sensitive atomic propositions. For example, con-

sider an UHSM consisting of procedure main and one other procedure, F, that recursively calls itself. A labeling could be defined for a single atomic proposition p such that p ∈ L(σ0 . . . σn, Fentry) ↔ n is prime. This property cannot be represented by a context-sensitive analysis since the definition allows for only a finite number of contexts.

Only those labeling functions for which underlying analyses exist are considered.

1.2.5 Reduced Context-Sensitive Analyses

The process of projecting a context-sensitive analysis to an analysis encoding a single

context-sensitive atomic proposition often results in an analysis that encodes more context information than is necessary to specify the property. In these cases it is sometimes

possible to reduce the analysis to one that is equivalent with respect to the property, but

requires a smaller set of contexts.

Definition 13. Two context-sensitive analyses, A1 and A2, over a common unrestricted hierarchical state machine with set of states S are equivalent with respect to property set X if and only if ρA1(s)|X = ρA2(s)|X for all s ∈ S.

Figure 1.9 provides a reduced equivalent context-sensitive analysis requiring only


two contexts for encoding the context-sensitive atomic proposition D. This analysis

produces the same solution as the one illustrated as a labeling in Figure 1.8.

Any atomic proposition that can be encoded by a context-sensitive analysis consist-

ing of only a single context is said to be context-insensitive. The atomic propositions G, H, G′, and H′ from Figure 1.3 are examples of context-insensitive atomic propositions: they hold or do not hold at a vertex irrespective of its corresponding stack-context.

Techniques for producing reduced equivalent context-sensitive analyses are pre-

sented in Section 3.1.

1.3 The Modal Mu-Calculus

Model checking queries are posed in the modal mu-calculus [Koz83]. The modal mu-

calculus, denoted Lµ, is a temporal logic based on fixed points for specifying properties

of sequences of states (paths) in a Kripke structure.

1.3.1 Syntax

Given a Kripke structure, (S, R, L), over a set of atomic propositions, AP, a set of variables, Var, each capable of representing a subset of S, and a set of modalities (or actions), Act, each capable of representing a subset of R, a modal mu-calculus formula has the form

φ := p | X | φ ∧ φ | φ ∨ φ | ¬φ | [a]φ | 〈a〉φ | µX.φ | νX.φ

where p ∈ AP, X ∈ Var, and a ∈ Act.


                Context Transformers           Property Transformers
v               [Γ(v)](β0)    [Γ(v)](β1)       [Φ(v)](β0)    [Φ(v)](β1)
v0                                             ∅             D
v1                                             ∅             D
v2                                             ∅             ∅
s3              β1            β1               ∅             ∅
v4                                             D             D
v5                                             ∅             D
v6                                             ∅             ∅
v7                                             ∅             ∅
s8              β0            β1               ∅             ∅
s9              β0            β1               ∅             D
v10                                            ∅             ∅
v11                                            ∅             D
v12                                            ∅             ∅
v13                                            ∅             ∅
v14                                            ∅             D
v15                                            ∅             D
v16                                            ∅             ∅
v17                                            ∅             ∅
v18                                            ∅             ∅
v19                                            ∅             D
s20             β0            β1               ∅             D
s21             β0            β1               ∅             ∅
v22                                            ∅             D
v23                                            ∅             D

The s-designated vertices also identify the elements of Σ by call-edge source.

Figure 1.9: A Reduced Context-Sensitive Analysis ({β0, β1}, {D}, Γ, Φ, β0) over example1 for Property D


The symbols [a] and 〈a〉 are referred to as modal operators over the modality a. As a subset of R, this modality is denoted Ra. The symbols □ and ◇ are used to denote modal operators over the full modality, R. The symbols µ and ν are fixed point operators. Given a fixed point expression of the form µX.φ(X) or νX.φ(X), instances of X inside φ(X) are bound to the fixed point operator. These variables are assumed to be unique. Instances of variables inside a formula that are not bound to a fixed point operator are free in the formula. Formulas without any occurrence of a free variable are

closed. Queries are always posed as closed formulas.

Given a mu-calculus formula, the formula can be put into positive normal form by recursively applying the function pnf.

pnf (φ) =

p if φ = p

X if φ = X

pnf (φ1) ∧ pnf (φ2) if φ = φ1 ∧ φ2

pnf (φ1) ∨ pnf (φ2) if φ = φ1 ∨ φ2

neg(φ1) if φ = ¬φ1

[a]pnf (φ1) if φ = [a]φ1

〈a〉pnf (φ1) if φ = 〈a〉φ1

µX.pnf (φ1) if φ = µX.φ1

νX.pnf (φ1) if φ = νX.φ1

The function pnf is mutually dependent on the recursive function neg that pushes


negations inward.

neg(φ) =

¬p if φ = p

¬X if φ = X

neg(φ1) ∨ neg(φ2) if φ = φ1 ∧ φ2

neg(φ1) ∧ neg(φ2) if φ = φ1 ∨ φ2

pnf (φ1) if φ = ¬φ1

〈a〉neg(φ1) if φ = [a]φ1

[a]neg(φ1) if φ = 〈a〉φ1

νX.neg(φ1[¬X/X]) if φ = µX.φ1

µX.neg(φ1[¬X/X]) if φ = νX.φ1

Formulas that are in positive normal form can then be put in fixed point normal form by the recursive function fnf.

fnf (φ) =

p if φ = p

X if φ = X

fnf (φ1) ∧ fnf (φ2) if φ = φ1 ∧ φ2

fnf (φ1) ∨ fnf (φ2) if φ = φ1 ∨ φ2

¬fnf (φ1) if φ = ¬φ1

[a]fnf (φ1) if φ = [a]φ1

〈a〉fnf (φ1) if φ = 〈a〉φ1

µX.fnf (φ1) if φ = µX.φ1 and X occurs free in φ1

νX.fnf (φ1) if φ = νX.φ1 and X occurs free in φ1

fnf (φ1) if φ = µX.φ1 and X does not occur free in φ1

fnf (φ1) if φ = νX.φ1 and X does not occur free in φ1
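The three normalization functions are purely syntax directed and are straightforward to realize over any term representation of Lµ. The sketch below gives pnf and neg in Python over a hypothetical tuple encoding of formulas (fnf is analogous); it is illustrative only and is not the representation used by Carnauba.

    # Illustrative sketch of pnf/neg over a tuple encoding of L_mu formulas:
    # ("ap", p), ("var", X), ("and", f, g), ("or", f, g), ("not", f),
    # ("box", a, f), ("dia", a, f), ("mu", X, f), ("nu", X, f); a = None is the
    # full modality.  Bound variables are assumed to be unique.

    def pnf(phi):
        tag = phi[0]
        if tag in ("ap", "var"):
            return phi
        if tag in ("and", "or"):
            return (tag, pnf(phi[1]), pnf(phi[2]))
        if tag == "not":
            return neg(phi[1])
        if tag in ("box", "dia", "mu", "nu"):
            return (tag, phi[1], pnf(phi[2]))
        raise ValueError(tag)

    def neg(phi):
        tag = phi[0]
        if tag in ("ap", "var"):
            return ("not", phi)
        if tag == "and":
            return ("or", neg(phi[1]), neg(phi[2]))
        if tag == "or":
            return ("and", neg(phi[1]), neg(phi[2]))
        if tag == "not":
            return pnf(phi[1])
        if tag == "box":
            return ("dia", phi[1], neg(phi[2]))
        if tag == "dia":
            return ("box", phi[1], neg(phi[2]))
        if tag in ("mu", "nu"):        # neg(mu X.f) = nu X.neg(f[-X/X]), dually for nu
            other = "nu" if tag == "mu" else "mu"
            return (other, phi[1], neg(subst_neg(phi[2], phi[1])))
        raise ValueError(tag)

    def subst_neg(phi, X):
        """Replace occurrences of variable X by its negation."""
        tag = phi[0]
        if tag == "var":
            return ("not", phi) if phi[1] == X else phi
        if tag == "ap":
            return phi
        if tag == "not":
            return ("not", subst_neg(phi[1], X))
        if tag in ("and", "or"):
            return (tag, subst_neg(phi[1], X), subst_neg(phi[2], X))
        return (tag, phi[1], subst_neg(phi[2], X))   # box/dia/mu/nu

    # Example: pnf(not mu X.(p or <>X)) = nu X.(not p and []X)
    f = ("mu", "X", ("or", ("ap", "p"), ("dia", None, ("var", "X"))))
    print(pnf(("not", f)))

Running the example reproduces the familiar duality ¬µX.(p ∨ ◇X) = νX.(¬p ∧ □X).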


Fixed point normal form removes unnecessary fixed point operators from the formula. A closed formula in fixed point normal form is well-formed if it contains no sub-expressions of the form φ = ¬X. If it is assumed that the set of atomic propositions

is closed under negation, then a well-formed formula is a closed mu-calculus formula

without any negations.

1.3.2 Semantics

A well-formed modal mu-calculus formula, φ, is interpreted as the subset of the set of states in a Kripke structure where φ is true. This set of states is denoted ⟦φ⟧Me, where M = (S, R, L) is a Kripke structure and e : Var → 2^S is an environment that maps variables to sets of states. The notation e[Q ← W] signifies a new environment that is the same as e except that e[Q ← W](Q) = W. The semantics of a well-formed formula, φ, is defined recursively as follows.

• ⟦p⟧Me = {s | p ∈ L(s)}

• ⟦¬p⟧Me = {s | p ∉ L(s)}

• ⟦Q⟧Me = e(Q)

• ⟦φ1 ∧ φ2⟧Me = ⟦φ1⟧Me ∩ ⟦φ2⟧Me

• ⟦φ1 ∨ φ2⟧Me = ⟦φ1⟧Me ∪ ⟦φ2⟧Me

• ⟦〈a〉φ1⟧Me = {s | ∃t. (s, t) ∈ Ra and t ∈ ⟦φ1⟧Me}

• ⟦[a]φ1⟧Me = {s | ∀t. (s, t) ∈ Ra implies t ∈ ⟦φ1⟧Me}


• ⟦µQ.φ1⟧Me = ⋃i τ^i(∅), where τ : 2^S → 2^S is defined by τ(W) = ⟦φ1⟧Me[Q←W]

• ⟦νQ.φ1⟧Me = ⋂i τ^i(S), where τ : 2^S → 2^S is defined by τ(W) = ⟦φ1⟧Me[Q←W]

That the semantics is well-defined is ensured by the absence of negation among the

variable expressions. This guarantees that each predicate transformer, τ, is monotone, which ensures convergence to the prescribed fixed point [Tar55] over the usual lattice of state set inclusion. For a closed formula, the initial environment can be assumed to contain no mappings. This environment is denoted e∅. A closed formula, φ, is valid (or holds) at a state, s, denoted s ⊢ φ, if and only if s ∈ ⟦φ⟧Me∅.
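Over a finite Kripke structure both fixed-point clauses can be computed by naive Kleene iteration, which is also the intuition behind the resolution algorithm of Chapter 2. The following minimal Python sketch assumes a small, hypothetical finite structure; it is purely illustrative and is not how Carnauba evaluates formulas over UHSM expansions.

    # Minimal sketch: mu/nu as Kleene iteration of a monotone transformer tau.
    # mu iterates up from the empty set; nu iterates down from S.

    def fix(tau, start):
        """Iterate tau from `start` until a fixed point is reached."""
        W = set(start)
        while True:
            nxt = tau(W)
            if nxt == W:
                return W
            W = nxt

    # Hypothetical finite Kripke structure.
    S = {0, 1, 2, 3}
    R = {(0, 1), (1, 2), (3, 3)}                        # full modality
    P = {2}                                             # states where p holds
    dia = lambda W: {s for (s, t) in R if t in W}       # exists-successor in W
    box = lambda W: {s for s in S
                     if all(t in W for (u, t) in R if u == s)}

    print(fix(lambda W: P | dia(W), set()))   # [[mu X.(p or <>X)]]  -> {0, 1, 2}
    print(fix(lambda W: P & box(W), S))       # [[nu X.(p and []X)]] -> {2}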

The modal operators, 〈a〉 and [a], define a temporal property through the successor relation. Specifically, 〈a〉φ1 holds at a state if and only if there exists a successor state under the a transition, an a-successor, such that φ1 holds at that state. Likewise, [a]φ1 holds at a state if and only if φ1 holds at every a-successor of that state. Note that [a]φ1 trivially holds at any state for which there are no a-successors.

For a Kripke structure represented as an UHSM with vertex set V, each modality is assumed to be encoded as a subset of V × V. That is, given two vertices v1 and v2 from the original machine, every instance of the edge (v1, v2) in the expansion has the

same modality. Put another way, for this presentation, the modalities are assumed to be

context-insensitive.

The fixed point operators, µ and ν, define least and greatest fixed points, respectively.

The intuition for their function is that they perform closures over the successor relation.

This functionality allows the mu-calculus to reason about properties of entire paths. For

example, the formula φ = µX.(p ∨ ◇X) holds at precisely those states from which there is a path to a state where p holds. In this case, the least fixed point operator closes over


the existential modal operator. In contrast, the formula φ = νX.(p ∧ □X) holds at those states for which the property p always holds at every state reachable from that state.

1.3.3 An Example Model Checking Query

Given the context-sensitive atomic proposition D over the Kripke structure expansion of the sample program example1 illustrated in Figure 1.8, let R′ be the context-sensitive atomic proposition that holds everywhere except instances of v12, the entry to procedure R. Figure 1.10 shows the solution to the modal mu-calculus formula

φ = µX.[(R′ ∧ ◇X) ∨ µY.(D ∨ □Y)]

for each state. Intuitively, this formula holds at states from which there exists a path where the procedure R is not called until an occurrence of a state from which on all paths in the future D holds.

The challenge in resolving a mu-calculus formula against an UHSM is that, unlike

the example used throughout this chapter, its expansion cannot, in general, be finitely

represented. This complicates computing the fixed points necessary for determining the

semantics of the formula. This issue is compounded by the presence of context-sensitive

atomic propositions. Chapter 3 describes the algorithm used to globally resolve these

formulas.

As Figure 1.10 illustrates, the formula, φ, is valid at (ε, v16) = P0entry. A path

fragment that illustrates the behavior prescribed by the formula can be represented as a

finite sequence of states,

(ε, v16), (ε, v17), (ε, v18), (ε, v19), (ε, s20), (s20, v0),


[Figure: the expansion of example1, one copy of each procedure for the stack-contexts ε, s20, s21, s20s3, s21s8, s21s9, and s21s9s3, with the states at which φ is valid marked.]

Figure 1.10: Solution to Modal Mu-Calculus Formula φ = µX.[(R′ ∧ ◇X) ∨ µY.(D ∨ □Y)] over the Expansion of example1


(s20, v1), (s20, v2), (s20, s3), (s20s3, v12), (s20s3, v13), (s20s3, v14).

While this path alone is not sufficient to confirm the validity of the formula (since a state where D holds must be reachable on every path from (s20, v2)) it does serve as a meaningful example. Section 4.1 presents a general purpose heuristic for generating examples such as this that takes full advantage of the global information generated by

the query resolution system.

Notice that, like the atomic proposition D, the solution to the mu-calculus formula is also dependent on the occurring context. For example, (s20s3, v12) ⊢ φ and (s21s9s3, v12) ⊢ φ, but (s21s8, v12) ⊬ φ. A technique for producing a concise descrip-

tion of the set of stack-contexts associated with a vertex for which a query is valid is

presented in Section 4.2. Additional applications of this capability are the subject of

Chapter 5.

Finally, the modal mu-calculus formula φ contains a non-trivial closed sub-expression, µY.(D ∨ □Y), that is valid at a state precisely when D holds at a state

in the future on every path from that state. This sub-expression requires some amount of

effort to resolve and could conceivably appear as a sub-expression of other queries over

this same model. Resolving such queries independently leads to redundant computation

that in many cases can be eliminated before model checking begins. Chapter 2 describes

a novel approach to inter-query optimization for temporal logic formulas that eliminates

this and other types of unnecessary computation over a batch of interrelated queries.


1.4 Implementation Platform

Carnauba is wholly implemented as a plug-in to CodeSurfer1, version 1.9 patchlevel 2

(January 2004) [Gra]. CodeSurfer is a commercially licensed product of GrammaTech,

Inc. and was originally developed as a platform for marketing the system dependence

graph based program slicing [HRB90] technology invented by the company.

The choice to implement Carnauba as a plug-in to CodeSurfer was made so as to

allow Carnauba to use its libraries and repositories of collected program information.

Specifically, Carnauba relies upon CodeSurfer to parse ANSI C programs and decom-

pose them into collections of vertices and associated normalized abstract syntax trees.

The normalized syntax trees represent the decomposition of complex expressions into

a consistent, factored representation. The process involves the introduction of some

additional temporary variables and implies that in some cases there is a many-to-one

relationship between vertices and source-level C expressions. When the UHSM for a

program is a control-flow graph, these vertices comprise the vertex set of the machine.

Because program slicing requires true whole-program knowledge and not just sum-

mary information, CodeSurfer facilitates interprocedural program analysis, a capabil-

ity essential to this research but, unfortunately, lacking from most ANSI C compilers. CodeSurfer also generates the DEF/USE sets associated with each program point,

as well as MOD/REF analysis [Muc97] information for each procedure. In addition,

CodeSurfer provides a context-insensitive pointer analysis [SH97]. The output of these

analyses is used to generate atomic information about the program for the model

checker. This information is also the basis of the various abstract interpretations [CC92]

1CodeSurfer® is a registered trademark of GrammaTech, Inc.


used to generate many of the context-sensitive analyses discussed in this dissertation.

Beyond program analysis, CodeSurfer provides an extensive library of routines for

manipulating sets of program points and abstract memory locations. Many of the al-

gorithms that comprise Carnauba utilize these routines. The CodeSurfer libraries also

include a complete set of widgets for displaying graphical information. This includes

routines for displaying sets of program points and for building interactive windows.

Carnauba’s graphical user interface was constructed using these libraries.

To support the construction of plug-ins, CodeSurfer contains a built-in STk Scheme

interpreter. Scheme [KCE98] is a statically scoped and properly tail-recursive dialect

of the Lisp programming language. It was designed to have an exceptionally clear and

simple semantics with only a few different ways to form expressions. A wide vari-

ety of programming paradigms, including imperative, functional, and message passing

styles, can be conveniently expressed in Scheme. STk Scheme [Gal99] is a compliant

implementation of the Scheme programming language specification, providing a full in-

tegration with the Tk user interface toolkit. STk Scheme also provides built-in support

for hash tables, regular expressions, and pattern matching. Finally, STk provides a prim-

itive object system, STKlos, developed as a mirror to CLOS, the Common Lisp Object

System [Ste90]. All of Carnauba’s components are implemented in STk Scheme.

With the exception of some specifically identified data in Section 5.1, all of the

empirical data presented in this dissertation was generated running Carnauba on a 2.0

GHz Pentium 4 PC with 1.0 GB of RAM under Microsoft Windows2 XP Professional, Service Pack 1.

2Windows® is a registered trademark of Microsoft Corporation.


1.5 Other Approaches

The term “model checking” encompasses a vast panoply of algorithms and query tech-

niques [Eme97, MSS99]. The two most fundamental distinctions to be made are “ex-

plicit vs. implicit” state representation and “global vs. local” validation. Carnauba

is an explicit state, global model checking system, which means that the full abstrac-

tion is constructed before model checking takes place and each query is globally solved

over the entire abstraction. Implicit state systems use a transition relation on states to

search a state space without ever completely constructing the abstraction. These are

primarily useful for autonomous verification systems that attempt to solve deep prop-

erties about the specific behavior of certain variables and other program elements. The

design objective of integrating with existing context-sensitive analyses precludes using

an implicit state system, which includes constraint solvers that perform bounded model

checking [Fla04].

Among explicit state systems, global checking is the more computationally inten-

sive approach. The desire to perform extensive post-query analysis drove the decision

to adopt this strategy. For systems intended strictly for verification or policy enforce-

ment, it is often sufficient to solve the query only at the origin of the program, perhaps

providing a single example when the desired property is violated. When this is the ob-

jective, other query resolution methods exist. What follows is a discussion of general

approaches. Specific projects with features related to those of Carnauba are discussed

at relevant points throughout the remainder of this dissertation.

Automata-theoretic approaches are mainly used for linear time logics interpreted as

languages of words. These approaches work by reducing the model checking problem


to the problem of determining whether an automaton accepts the empty language. An automaton, Aφ, is constructed from the formula φ; Aφ accepts the paths satisfying φ. A second automaton, AM, is constructed from the model, M, and accepts the paths exhibited by the model. M then satisfies φ if and only if L(AM) ∩ L(Aφ)′ = ∅. This

non-emptiness problem is solvable by explicitly constructing the product automaton

and then performing reachability analysis. Originally conceived for use on traditional

Kripke structures, this approach has been extended to unrestricted hierarchical state ma-

chines [YA98]. A similar approach can be used for checking the satisfiability of infinite

paths [VW94] by substituting automata accepting languages of infinite words, such as Buchi automata [Tho90]. The principal drawback of these approaches in prac-

tice is the need to construct an explicit product automaton for each query. Besides being

both time and space intensive, these “use once” intersection automata are optimized to

each specific query and tend to frustrate meaningful example space exploration.

MOPS [CW02] is a system for finding bugs in security-relevant software that re-

lies on an automata-theoretic framework. Security properties are modeled as finite state

automata and a program as a pushdown automaton (PDA). Model checking then deter-

mines whether certain states representing a violation of the security property are reach-

able in the program automaton. The authors demonstrate how temporal properties of

non-local jumps, resulting from uses of setjmp and longjmp, that can only be

conservatively approximated by interprocedural control-flow graphs can be modeled ef-

ficiently in the security automaton. This approach to local model checking is sound

with respect to the properties being validated (the absence of a detectable violation im-

plies the property holds) and scales to operating system sized codes. However, because


it is a local model checking technique it offers neither the possibilities for inter-query

optimization nor post-query refinement that Carnauba offers.

The tableau method is a local model-checking approach that attempts to solve the

model checking problem by solving subgoals; it is a classic divide and conquer strategy. Essentially, it is a search algorithm that works by attempting to construct a proof that a given state has the specified logical property. For the modal mu-calculus over a finite model, the search involves traversing (state, formula) pairs and always terminates [SW91].

Failure to find a proof after exhaustive search is taken as evidence that the property fails

to hold at the given state. The local nature of the problem makes this approach an excel-

lent candidate for tackling “large model” problems. Furthermore, since this approach

works by search, it generates a natural example over the course of its execution, when it succeeds. However, although these algorithms tend to be very efficient in practice at finding proofs when they exist, they also tend to be extremely slow to conclude that a

proof does not exist. Furthermore, when a proof does not exist, they are not in a po-

sition to provide a counter-example. Also, generation of additional examples requires

re-running the algorithm.

As a variant on the tableau method, some temporal logics can be resolved by reacha-

bility computations. These methods seek to find a single path that proves the validity of

the formula. These approaches sometimes assume that strongly connected components

within the model have already been computed. This aids the detection of loops contain-

ing instances of specific atomic propositions. For hierarchical models, they sometimes

also assume the precomputation of interprocedural edges that summarize the effect of

a procedure invocation. The precomputation of such auxiliary structures is not unrea-


sonable for a system that will be used to answer multiple queries. Such techniques have

proven effective for quickly resolving queries at the starting point of a program. For

CTL∗, an algorithm exists that achieves linear time for single-entry, single-exit mod-

els [BGR01]. However, this assumes a strictly context-insensitive labeling function, not

the context-sensitive labeling functions used by Carnauba.

Beyond formal model checking, there are other approaches to making inquiries about

the behavior of a program. The Stanford Systems Lab has a compiler based tool de-

signed for finding bugs in systems code that uses techniques similar to those used in

model checking [ECCH00]. Implemented as an extension to the GNU gcc compiler, their system (xgcc) takes specifications written in a simple but effective state machine language called Metal, and then analyzes execution paths in the code at compile time

with respect to the automaton. Simple type- and expression-based patterns are used

to drive the transitions of the Metal automata. Violations result in textual compiler

messages that the user can then resolve. While this system is entirely ad hoc from an

algorithmic standpoint, it has been very successful at finding real bugs in actual systems

code. The authors report finding hundreds of bugs in the Linux kernel alone. They

also have an inference-based system that uses statistical analysis to attempt to uncover

rules about the system [ECH+01] leading to patterns of deviant behavior. Although Car-

nauba generates global information, a statistical inference module is a subject for future

work. Despite being integrated with the gcc compiler, xgcc does not utilize any inde-

pendently derived program analyses, nor does it allow significant per-query refinement

after the compilation process.


Chapter 2

Optimizing Compiler

Model checking systems that validate temporal logic formulas with respect to a model

are pervasive [HP96, God97, EHRS00]. These systems have numerous applications to

software engineering problems including program verification and code comprehension.

When presented with multiple formulas, most of these systems process them individu-

ally in a serial fashion. This is despite the fact that, for many problem domains, batches

of temporal formula queries are generated that share some degree of commonality. For

these systems, commonality can translate into redundant computation and storage that

can be expensive when the model is large. This suggests a need for a preprocessing tool

that can detect and utilize this commonality to reduce the workload handed off to the

model checking engine.

This chapter makes the following contributions. It explains how the framework

common to programming language compilers can be used to organize an optimization

system for batches of temporal logic formulas. It demonstrates how some core opti-

mizations for these temporal logic formulas can be partitioned and sequenced as inde-


pendent optimization routines. It illustrates how an intermediate language can be used

to unify optimization techniques for many practical logics. Finally, it presents experi-

mental evidence that such a system is capable of reducing the complexity of batches of

formulas useful for program analysis, and that this reduction translates into improved

model checking performance that is orthogonal to any further improvements achieved

by reducing the state space over which the formulas are later checked.

The modern programming language optimizing compiler provides an ideal paradigm

for organizing a tool designed to manipulate batches of temporal logic formulas. Op-

timizing compilers work by translating a range of programming languages into a com-

mon intermediate language that is both the input and output format for an interchange-

able collection of optimization routines. In this framework, each routine performs a

specific, well-defined, optimization within the intermediate representation. Passes may

interact by opening opportunities for others to make progress, but in general they do

not exchange information beyond that which is included in the current incarnation of

the program and what is stored in a small globally accessible symbol table. Such an

organization has been shown to provide maximum flexibility with minimal operational

overhead [ASU86, Muc97, CT04].

This chapter presents an optimizing compiler for temporal logic formulas based on

such a framework. The compiler can handle formulas in any logic for which a front

end can be supplied to translate the logic into the modal mu-calculus [Koz83]. This

includes almost every temporal logic known to be useful for performing software model

checking. Internally, the compiler manipulates mu-calculus formulas in equation block

form [CKS92, CS91, CS92] and uses a slightly extended version of this form as its


intermediate representation. Optimization then occurs over this logic language.

The optimizer is organized into eight independent, dynamically scheduled optimiza-

tion routines. Each routine performs a separate optimization designed to reduce the

complexity of the equation block system as measured by the time and space required

to subsequently resolve it. Many of these optimizations mirror traditional programming

language optimizations. Each pass is capable of operating in either a model-dependent

or model-independent mode. In the former, knowledge of the model is used to refine the

notion of equivalence among modalities and atomic propositions, and thus give a more

complete result. In the latter, these constructs are treated as abstract sets and only model-

invariant equivalence and implication are detectable. The model-dependent mode can also incorporate some axiomatic knowledge of the successor relation into the optimiza-

tion process. These distinctions are transparent to the individual passes. The result is a

system in which generic batches of queries can be optimized before a specific model is

supplied and then further refined afterwards.

The modal mu-calculus is an exponential-time decidable logic [EJ99]. This has

been demonstrated by the existence of both automata-theoretic [Var98] and automata-

free [SY00] algorithms. Despite these results, the goal of the optimizer is not com-

pleteness. Rather, it is to provide a framework for a tractable set of optimizations over

potentially very large query sets that take advantage of query boundaries and, optionally,

some model-specific knowledge. Some of the optimizations use sufficient though not

necessary conditions to perform certain reductions. These conditions can, in some cases,

be replaced by the aforementioned decidability algorithmsto provide a more complete,

though significantly more time-intensive, optimization. Numerous other approaches to


query optimization exist [FW02, GO01, GPVW95, Mad97, SB00]. Some perform sin-

gle query optimizations beyond what is demonstrated here. For applications for which

these additional optimizations are desirable, this framework could be augmented with

additional passes or, alternatively, could be used as a preprocessing tool for performing

certain optimizations before passing to a system capable of performing these additional,

domain-specific optimizations.

This compiler is a fully integrated part of the Carnauba model checking system, a

system for model checking ANSI C programs represented as unrestricted hierarchical

state machines [BGR01]. Recall from the previous chapter that these are collections of

ordinary state machines organized to reflect the call and return structure of imperative

programs. Experiments with batches of queries of the sort used in software verification

show that an optimizing compiler framework is both effective and efficient at reducing

the total complexity of the workload passed to the model checking engine. Further, the

collection of passes provided subsumes many logic-specific optimizations and reduc-

tions. In some cases, the optimized forms have no pre-image in the native logic from

which they originated. This indirectly eliminates the need for “extended operators” that

are sometimes used to represent these forms [CDH+00].

The remainder of this chapter is organized as follows: Section 2.1 discusses the

details of the framework as well as introduces the semantics of the intermediate representation. In Section 2.2, the specific optimizations are enumerated and described.

Finally, Section 2.3 presents empirical results from applying the compiler to batches

of formulas generated from several benchmark programs. In Section 2.4, observations,

conclusions, and directions for continuing work are discussed.


Aspect                  PL Compiler                         TL Compiler
Front End               Syntax driven from favorite         Syntax driven from favorite
                        PL to tokens                        TL to Lµ
Base Language           Compiler dependent                  Modal Mu-Calculus
                        language of tokens
Intermediate Language   Compiler dependent                  Equation Block Form
                        pseudo-language
Symbol Table            Stores type information,            Stores source formulas,
                        base addresses, etc.                base (top) variables, etc.
Optimization Structure  Independent passes with             Independent passes with
                        interdependent effects              interdependent effects
Assembly Language       Machine dependent                   Engine dependent
                        (SPARC, x86, etc.)                  (Extended EBF)

Figure 2.1: Programming Language and Temporal Logic Compiler Analogy

2.1 The Compiler Framework

The framework of this compiler is designed to mirror that of a modern programming

language compiler [ASU86, CT04]. It has a well-defined intermediate language. A

front end converts batches of temporal logic formulas through a common base language

(the modal mu-calculus) and into the intermediate language for optimization, while a back end converts code in the intermediate language into a form that can be fed into the Carnauba model checking engine. Independent optimization passes have interdependent

effects on the intermediate representation. An external symbol table records peripheral

information from the front end translation and feeds this information, on demand, to

the optimizer. The optimizer, in turn, updates this information as transformations are

performed. Figure 2.1 more fully illustrates this analogy with programming language

compilers. These components, except for the optimizer, are described next. A full

discussion of the routines and scheduling comprising the optimizer is left for Section 2.2.


2.1.1 Front End

The front end unifies batches of temporal logic formulas, expressed as a list of queries,

into the base language and then translates them into the intermediate language. Each

individual query consists of a minimum of two pieces of information: a formula and a

tag denoting its source logic. The front end processes each query in turn, using the tag

to select the appropriate routine for translating the formula into the modal mu-calculus,

Lµ. Translations [Dam94, EL86] are known for many logics that have proven to be

useful for model checking, including CTL [CE81], FCTL [EL85], CTL∗ [EH86], and

PDL-∆ [Str81]. Alternatively, queries can also be expressed in the raw mu-calculus.

As the formula components of the queries are aggregated, hooks to the translation from

the original query are added to a symbol table. In this way a potentially heterogeneous

batch of formulas is unified into a single language.

Figure 2.2 provides an example of a typical front-end translation. In this case, com-

putation tree logic (CTL) is translated into the modal mu-calculus [CE81]. This trans-

lation is an entirely syntax-directed process in which closed formulas headed by CTL operators, which are combinations of path quantifiers (A and E) with temporal oper-

ators (X, F , G, U , R), spawn recursive invocations of the translation routine on the

sub-expressions of the formula. Each sub-expression is translated in place, with expres-

sions involving the extended CTL operators first being translated into the base operators

(EX, EG, EU). This translation assumes that every path in the model has an infinite

extension over the default modality and requires time linear in the length of the source

formula.

Since the translation from some logics such as CTL∗, which subsumes LTL, can


CTL: φ        English Description                Lµ Translation: T(φ)

Propositional & Base CTL Operators
p             Atomic proposition                 p
¬f            Logical negation                   ¬T(f)
f ∧ g         Logical conjunction                T(f) ∧ T(g)
f ∨ g         Logical disjunction                T(f) ∨ T(g)
EX f          Exists successor f                 ◇T(f)
EG f          Exists path globally f             νX.(T(f) ∧ ◇X)
E[f U g]      Exists path f until eventually g   µX.((T(f) ∧ ◇X) ∨ T(g))

Extended CTL Operators (Translated to Base CTL Operators)
AX f          All successors f                   T(¬EX ¬f)
EF f          Exists path in the future f        T(E[true U f])
AG f          All paths globally f               T(¬EF ¬f)
AF f          All paths in the future f          T(¬EG ¬f)
A[f U g]      All paths f until eventually g     T(¬E[¬g U (¬f ∧ ¬g)] ∧ ¬EG ¬g)
A[f R g]      All paths f releases g             T(¬E[¬f U ¬g])
E[f R g]      Exists path f releases g           T(¬A[¬f U ¬g])

Figure 2.2: Translation Function T from CTL to Lµ

require exponential time in the worst case, the front end is augmented with a subsystem

that remembers the translation of query forms instantiated multiple times over differ-

ent choices of atomic propositions. When the structure of a closed sub-expression is

repeated, the system is capable of recalling the translation to the intermediate language

and simply re-instantiating it with the current choice of atomic propositions. In Car-

nauba this process is always demand driven.
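Read alongside Figure 2.2, the base-operator cases of T are a direct structural recursion. The sketch below is a Python illustration only; it reuses the hypothetical tuple encoding of Lµ formulas from Section 1.3, covers only the base CTL operators, and generates fresh fixed-point variables per translated operator.

    # Illustrative sketch of the syntax-directed translation T of Figure 2.2,
    # base CTL operators only.  CTL nodes: ("ap", p), ("not", f), ("and", f, g),
    # ("or", f, g), ("EX", f), ("EG", f), ("EU", f, g).

    import itertools
    _fresh = itertools.count()

    def fresh_var():
        return "X%d" % next(_fresh)

    def T(phi):
        tag = phi[0]
        if tag == "ap":
            return phi
        if tag == "not":
            return ("not", T(phi[1]))
        if tag in ("and", "or"):
            return (tag, T(phi[1]), T(phi[2]))
        if tag == "EX":                 # EX f      ->  <>T(f)
            return ("dia", None, T(phi[1]))
        if tag == "EG":                 # EG f      ->  nu X.(T(f) and <>X)
            X = fresh_var()
            return ("nu", X, ("and", T(phi[1]), ("dia", None, ("var", X))))
        if tag == "EU":                 # E[f U g]  ->  mu X.((T(f) and <>X) or T(g))
            X = fresh_var()
            return ("mu", X, ("or", ("and", T(phi[1]), ("dia", None, ("var", X))),
                              T(phi[2])))
        raise ValueError(tag)

    # Example (extended operators reduce to the base ones first):
    # AG EF p = not E[true U not E[true U p]]
    EF = lambda f: ("EU", ("ap", "true"), f)
    print(T(("not", EF(("not", EF(("ap", "p")))))))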

2.1.2 Intermediate Language

Following unification, the queries are then translated and merged into the intermediate

language. Called equation block form, this is the language over which the optimizer,

and subsequently the model checking engine, operates. Equation block form is well-


understood and can be regarded as a factored representation of the mu-calculus formulas

in the same way that a pseudo-assembly language program can be considered a factored

representation of an imperative program. Intuitively, each equation in a block system

can be regarded as a single logical operation and each block in the system as the encap-

sulation of a single (possibly trivial) fixed-point computation. In this section, the syntax

and semantics of equation block systems are first defined along with some supporting

terms and concepts. The translation from the modal mu-calculus to equation block form

is then presented.

Syntax and Semantics

The structure of equation block form is described by the following sequence of defini-

tions.

Definition 14. Given Var, a finite set of variables each capable of representing a set of model states, AP, a set of atomic propositions that is closed under negation, and Act, a set of actions (or modalities), a simple formula, Φ, is an expression of the form

Φ ::= p | X1 | X1 ∨ X2 | X1 ∧ X2 | 〈a〉X1 | [a]X1

where p ∈ AP, Xi ∈ Var for i ∈ {1, 2}, and a ∈ Act.

For the remainder of this dissertation, variables defined by the first form are referred to as atomic-defined, variables of the second form as variable-defined, variables of the third and fourth forms as junction-defined, and variables of the final two forms as modal-defined. As with full modal mu-calculus formulas, the absence of an explicit action implies the full modality. Variables that appear in a simple formula are said to be used

by the formula.


Definition 15. An equation block has one of two types, min E or max E, where E is a non-empty set of equations,

{X1 = Φ1, . . . , Xn = Φn},

each Xi is a distinct element of Var, and each Φi is a simple formula.

Over a collection of equation blocks in which no variable is multiply defined, if an

equation defines a variable Xi that is used in the simple formula that defines Xj, then Xj depends on Xi, denoted Xi → Xj. If a chain of dependence occurs through zero or more intermediate variables then Xj transitively depends on Xi. This is denoted Xi →* Xj. Variables Xi and Xj are cyclically dependent if each transitively depends

on the other. These relations extend to equations by considering the equation as the

variable it defines. These relations also extend in the obvious way to describe types of

dependence between equation blocks and are crucial to describing the execution of the

various optimizations.

A variable is said to occur free in an equation block if the variable is used by a

formula of the block but is not defined by any of the equations contained in the block.

A set of equation blocks is closed if every variable used by an equation in the block set

is defined by an equation in the block set.

Definition 16. An equation block system, BVar , is a closed, ordered list of equation

blocks,

〈B0, . . . , Bn〉,

in which every element of Var is defined by precisely one simple formula.


The index, I(X), of a variable, X, in an equation block system is the index of the block in which it is defined. It is assumed that the block order always respects the transitive variable-wise dependence relation except where prohibited by cyclic dependences. More precisely, if Xi → Xj, then I(Xi) ≤ I(Xj) unless Xj →* Xi. If Xi and Xj are cyclically dependent then no relationship between I(Xi) and I(Xj) can be enforced without changing the semantics of the equation block system. This assumption is never violated by the standard translation and is preserved throughout the optimization process. For the remainder of this dissertation, this is referred to as the block ordering assumption. Explicitly stating this assumption makes it easier to reason about the

correctness of certain optimizations.

The semantics of an equation block system follow from the semantics for mu-

calculus formulas [CS91]. They can also be inferred from the algorithm in Figure 2.3

that resolves an equation block system over a Kripke structure [CE81]. This figure will

be referred to again in the next section when discussing how certain optimization passes

work. Recall that given a set, AP, of atomic propositions, a Kripke structure, K, is a tuple, (S, R, L), where S is a (possibly infinite) set of states, R ⊆ S × S is a transition relation, and L : S → 2^AP is a labeling function that associates with each state the set of atomic propositions that are true in that state. For equation block systems with multiple modalities, each modality, a, is defined by a distinct transition relation, Ra. Recall that

hierarchical state machines can be thought of as finite representations of possibly infi-

nite Kripke structures. In this case, the semantics are defined to be compatible with the

Kripke structure semantics over the full expansion. When the algorithm terminates, each

variable is bound to the set of states at which the variable is valid. Termination is guar-


1. Initialize a workset with the set of all blocks and bind each element of Var according to its defining block type, max ↦ S, min ↦ ∅.

2. Remove the least indexed block, Bi, from the workset and process it as follows:

   (a) Save the current bindings for each variable defined in the block.

   (b) Update the bindings by iteratively solving the block equations until a fixed point is reached, using the current bindings for the variables that occur free in the block. Each equation X = Φ is evaluated as follows: s ∈ X iff

       Φ = p ∈ AP     and p ∈ L(s)
       Φ = X1         and s ∈ X1
       Φ = X1 ∨ X2    and s ∈ X1 ∪ X2
       Φ = X1 ∧ X2    and s ∈ X1 ∩ X2
       Φ = 〈a〉X1      and ∃t ∈ X1. Ra(s, t)
       Φ = [a]X1      and ∀t ∈ S. Ra(s, t) → t ∈ X1

   (c) For each k ≠ i such that there is a variable defined in Bk that transitively depends on a variable defined in Bi whose binding has changed from its saved binding (a), add Bk to the workset and, if k < i, reset each defined variable in Bk according to its block type, max ↦ S, min ↦ ∅.

3. Repeat the previous step (2) until the workset is empty.

4. Return the variable bindings as the solution.

Figure 2.3: Algorithm for Resolving an Equation Block System, BVar, against a Kripke Structure, K = (S, R, L)


anteed by the absence of negation among the simple formulas and the block ordering

assumption, which ensures monotonicity.
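A compact executable reading of Figure 2.3 is given below as a Python sketch over a finite, explicit Kripke structure, with blocks represented as (type, {variable: simple formula}) pairs in the hypothetical tuple encoding used in the earlier sketches. It is illustrative only; the Carnauba engine resolves equation blocks over unrestricted hierarchical state machines with context-sensitive labelings, which this sketch does not attempt.

    # Illustrative sketch of the algorithm of Figure 2.3.  Simple formulas:
    # ("ap", p), ("var", X), ("or", X1, X2), ("and", X1, X2),
    # ("dia", a, X1), ("box", a, X1); a block is (block_type, {X: formula}).

    def solve(blocks, S, R, L):
        """S: states; R[a]: set of (s, t) pairs; L[s]: atomic propositions at s."""
        uses = {X: ({phi[1]} if phi[0] == "var"
                    else {phi[1], phi[2]} if phi[0] in ("or", "and")
                    else {phi[2]} if phi[0] in ("dia", "box") else set())
                for _, eqs in blocks for X, phi in eqs.items()}
        dep = {X: set(us) for X, us in uses.items()}      # transitive "depends on"
        changed = True
        while changed:
            changed = False
            for X in dep:
                new = dep[X].union(*(dep[Y] for Y in dep[X]))
                if new != dep[X]:
                    dep[X], changed = new, True

        reset = lambda i: set(S) if blocks[i][0] == "max" else set()
        env = {X: reset(i) for i, (_, eqs) in enumerate(blocks) for X in eqs}

        def holds(s, phi):
            tag = phi[0]
            if tag == "ap":  return phi[1] in L[s]
            if tag == "var": return s in env[phi[1]]
            if tag == "or":  return s in env[phi[1]] or s in env[phi[2]]
            if tag == "and": return s in env[phi[1]] and s in env[phi[2]]
            if tag == "dia": return any(t in env[phi[2]] for (u, t) in R[phi[1]] if u == s)
            if tag == "box": return all(t in env[phi[2]] for (u, t) in R[phi[1]] if u == s)
            raise ValueError(tag)

        workset = set(range(len(blocks)))                 # step 1
        while workset:                                    # step 3
            i = min(workset)                              # step 2
            workset.discard(i)
            eqs = blocks[i][1]
            saved = {X: set(env[X]) for X in eqs}         # step 2(a)
            stable = False
            while not stable:                             # step 2(b)
                stable = True
                for X, phi in eqs.items():
                    new = {s for s in S if holds(s, phi)}
                    if new != env[X]:
                        env[X], stable = new, False
            delta = {X for X in eqs if env[X] != saved[X]}
            if delta:                                     # step 2(c)
                for k, (btype, eqs_k) in enumerate(blocks):
                    if k != i and any(dep[Y] & delta for Y in eqs_k):
                        workset.add(k)
                        if k < i:
                            for Y in eqs_k:
                                env[Y] = reset(k)
        return env                                        # step 4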

From the semantics, it is evident that the evaluation of a block can result in pre-

vious, dependent, blocks being returned to the workset. This behavior occurs within

components of blocks that are strongly connected [CLR01] under the depends relation

and continues until a simultaneously consistent solution is reached for all of the blocks

of the component.

The final index, F(X), of a variable, X, in an equation block system is the maximal index among the set of blocks that are cyclically dependent on the block defining X. Intuitively, once the block indexed F(X) + 1 is reached by the semantic algorithm, the set of states bound to X is finalized. If a block is not cyclically dependent with any other block besides itself then the block is referred to as a singleton. Singletons are guaranteed to be processed exactly once by the semantic algorithm. The notion of singletons makes it easier to perform and reason about the correctness of certain optimizations. Singletons

may depend on blocks that have previously been finalized. Non-singleton blocks oc-

cur as a consequence of the translation of mu-calculus formulas with alternating fixed

point operators. Formulas without alternating fixed point operators are referred to as

alternation-free and their corresponding equation block systems as hierarchical.

Translation to Intermediate Language

The translation of a modal mu-calculus formula into equation block form proceeds in

two phases. First, the formula is translated into equation list form. Equation list form is

a simplified version of equation block form in which each block contains only a single


/* Given sub-formula φ, block type τ, and next variable index i, return an ordered
   triple, (X, E, n), where X is the top variable, E is a typed list of equations,
   and n is the variable count. */
procedure proc(φ, τ, i)
  case φ is
    p            ⇒ return (Xi, 〈τ Xi = p〉, i+1)
    X            ⇒ if X is Xj for some j ≤ i then return (X, 〈〉, i)
                    else return (Xi, 〈τ Xi = X〉, i+1)
    φ1 binop φ2  ⇒ (Y, E1, i1) := proc(φ1, τ, i+1);
                    (Z, E2, i2) := proc(φ2, τ, i1);
                    return (Xi, 〈τ Xi = Y binop Z :: E1 :: E2〉, i2)
    modop φ1     ⇒ (Y, E1, i1) := proc(φ1, τ, i+1);
                    return (Xi, 〈τ Xi = modop Y :: E1〉, i1)
    σX.φ1        ⇒ if σ = µ then τ′ = min else τ′ = max;
                    (Y, E1, i1) := proc(φ1[Xi/X], τ′, i+1);
                    return (Xi, 〈τ′ Xi = Y :: E1〉, i1)

/* Given formula φ, return an equivalent top variable, typed equation list pair. */
procedure translate(φ)
  φ′ := fnf(pnf(φ));
  (Xφ, E, n) := proc(φ′, min, 0);
  return (Xφ, reverse(E))

Figure 2.4: Procedures for Translating a Lµ Formula to Equation List Form


equation. Intuitively, each resulting variable definition corresponds to the validity of some sub-expression of the original mu-calculus formula. The variable whose validity corresponds to the entire formula is referred to as the top variable in translation. It is this top variable whose validity is of primary concern — the set of states ultimately bound to this variable is the set of states for which that mu-calculus formula is valid. It is in this

sense that the mu-calculus and equation block form are considered equivalent logics.

The translation that takes a mu-calculus formula to its corresponding equation list form,

top variable pair is a straightforward, syntax-directed process [CS91]. The procedure,

translate, is reprinted in Figure 2.4. On termination, the algorithm of Figure 2.3 binds

to this top variable, Xφ, the same set of states that are defined by the semantics of Section 1.3.2 on the input modal mu-calculus formula φ.

In the second phase, blocks are merged to form the initial equation block sys-

tem. This is a three step process. First, the strongly connected components of the

equations are computed with respect to the depends relation and topologically sorted.

Second, within each component the equations are partitioned into equivalence classes

based on their nesting depth. These classes are sorted in increasing order. The nesting depth [CKS92] of an equation within a component is the maximal number of block type alternations that occur on a simple cycle in the depends relation, restricted to the com-

ponent, from the equation to itself consisting only of equations whose index is less than

or equal to the index of the block containing the equation whose nesting depth is being

computed. Finally, within each nesting depth equivalence class, the strongly connected

components with respect to the depends relation are again computed and topologically

sorted. By the translation of Figure 2.4, each component must consist of equations all


CTL:                            AG EF p

CTL (Base Operators):           ¬EF ¬EF p

Modal Mu-Calculus:              ¬µY.[(¬µX.(p ∨ ◇X)) ∨ ◇Y]

Positive Normal Form (PNF):     νY.[(µX.(p ∨ ◇X)) ∧ □Y]

Fixed Point Normal Form (FNF):  νY.[(µX.(p ∨ ◇X)) ∧ □Y]

Equation List Form:             top = z0
    max z6 = □z0
    min z5 = ◇z2
    min z4 = p
    min z3 = z4 ∨ z5
    min z2 = z3
    max z1 = z2 ∧ z6
    max z0 = z1

Initial Equation Block Form:    top = z0
    min {z4 = p}
    min {z2 = z3, z3 = z4 ∨ z5, z5 = ◇z2}
    max {z0 = z1, z1 = z2 ∧ z6, z6 = □z0}

Figure 2.5: Example Translation Sequence from CTL to Equation Block Form

of the same original block type. These components become the blocks and are assigned

that type. The equation block system is then the ordered concatenation of these blocks.

The time to construct the equation block system is bounded by the time to compute

the nesting depth of each variable. Computing the set of nesting depths requires time

quadratic in the number of equations.

Merging equations into blocks reduces the number of times each block must be

reprocessed by the semantic algorithm of Figure 2.3. Consequently, more singletons

are created, which makes certain optimizations more straightforward to perform. Con-

ceptually, each block encapsulates a single fixed point computation. The order of the

equations within each block is thus irrelevant to the semantics of the system.


An Example

Figure 2.5 shows an example of the sequence of translations from a CTL formula to the initial equation block form. The CTL query, AG EF p, can be read: On all paths it

globally holds that there exists a path where in the future p holds.

Notice that the resulting equation block system is not generally a unique represen-

tation of a mu-calculus formula. In particular, blocks can sometimes be merged, parti-

tioned, and re-ordered without changing the semantics. Likewise, individual equations

can often be simplified or re-expressed. The optimizer will be seen to exploit these facts

in the next section.

Finally, notice that for some blocks, the block type does not affect the semantics of the equation block system. If Bi is a block and for each block Bj, Bi → Bj implies that i < j, then the initial values of the variables defined in Bi are irrelevant to the semantics and the block is said to have neutral type. Although not introduced by the standard

translation, when this condition is discovered then extending the syntax to represent this

block type acts as a hint to the optimizer that certain, otherwise unsound, operations

are permissible. Semantically, these blocks can be treated as min type blocks. In the example, the block defining z4 could be retyped as a neutral block.
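As a small illustration of the neutral-type condition, the hypothetical Python sketch below checks it over the block representation used in the earlier sketches.

    # Block Bi may be retyped as neutral if no block Bj with j <= i (including
    # Bi itself) uses a variable defined in Bi.

    def is_neutral(blocks, i):
        defined = set(blocks[i][1])
        for j, (_, eqs) in enumerate(blocks[:i + 1]):
            for phi in eqs.values():
                used = ({phi[1]} if phi[0] == "var"
                        else {phi[1], phi[2]} if phi[0] in ("or", "and")
                        else {phi[2]} if phi[0] in ("dia", "box") else set())
                if used & defined:
                    return False
        return True

    # For the system of Figure 2.5, the first block, min {z4 = p}, is neutral.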

2.1.3 Symbol Table

During the translation process, the equation block systems arising from a batch of for-

mulas are concatenated into a single list of equation blocks, with the variables from

each suitably renamed to ensure uniqueness and avoid conflicts. Since each original

formula is closed, this does not violate the block ordering assumption. During the com-


Field       Value      Field         Value
Index       1          Final Index   1
Type        min        Singleton?    true
Formula     z4 ∨ z5    Top?          false
Uses        z2         Same SCC      1

Figure 2.6: Initial Symbol Table Record for Variable z3 from the Translation Sequence Example

pilation process the equation block system is rearranged. A symbol table is used to track information about each variable. The information recorded for each variable consists of:

• Index                  • Final Index
• Defining Block Type    • Defining Block a Singleton?
• Defining Formula       • Top Variable?
• List of Uses           • Indexes of Blocks in Same SCC

This information is initialized during the translation of each formula and is updated

with each transformation. Additionally, each top variable is bound to a supplemental

record containing information from the original query, including its source logic. This

information can also include a textual description of the query, a list of parameters, and

possibly, instructions for post-processing. The list of top variables is used throughout the

compilation process to decide what information it is necessary to retain in the equation

block system and what information it is safe to discard. After compilation, information

stored in the table is used to generate additional executionand post-processing instruc-

tions for the model checking engine.

In the example of Figure 2.5, variable z0 is the top variable and would be bound to a

record containing the original formula (AG EF p) and logic type (CTL). As an exam-


Fixed Point Normal Form (FNF):  µX.[p ∨ µY.(X ∨ ◇Y)]

Initial Equation Block Form:    top = z0
    min {z2 = p}
    min {z0 = z1, z1 = z2 ∨ z3, z3 = z4, z4 = z0 ∨ z5, z5 = ◇z3}

Cyclic Unguarded Dependence:    z0 → z1 → z3 → z4 → z0

Expanded and Re-factored:       z0 = z2 ∨ z0 ∨ z5

Initial Equation Block Form without Cyclic Unguarded Dependences:  top = z0
    min {z2 = p}
    min {z0 = z2 ∨ y, y = z5, z1 = z2 ∨ z3, z3 = z4, z4 = z0 ∨ z5, z5 = ◇z3}

Figure 2.7: Example Translation to Remove a Cyclic Unguarded Dependence

ple, the initial symbol table entries for the variable z3 are given in Figure 2.6. Since the

query asks if a property holds globally over all paths, an expected set of post-processing

instructions might include generating an example if and only if the query returnsfalse

for a particular state of interest, for example the initial state of the program.
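To make the shape of these records concrete, the following minimal Python sketch models one symbol table entry. The class and field names are hypothetical, chosen only to mirror the list of fields above; they are not the data structures of the actual implementation.

    from dataclasses import dataclass, field
    from typing import List, Optional

    # Hypothetical record mirroring the fields listed above; names are illustrative only.
    @dataclass
    class SymbolRecord:
        index: int                    # index I(X) of the defining block
        final_index: int              # final index F(X) after which X is immutable
        block_type: str               # "min", "max", or "neutral"
        singleton: bool               # is the defining block a singleton?
        formula: str                  # right-hand side defining the variable
        top: bool                     # does X correspond to an original query?
        uses: List[str] = field(default_factory=list)      # variables in the formula
        same_scc: List[int] = field(default_factory=list)  # blocks in the same SCC
        query_info: Optional[dict] = None  # supplemental record bound only to top variables

    # The record for z3 from Figure 2.6, as initialized during translation.
    z3 = SymbolRecord(index=1, final_index=1, block_type="min", singleton=True,
                      formula="z4 ∨ z5", top=False, uses=["z2"], same_scc=[1])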

2.1.4 Back End

The Carnauba model checking engine is capable of checking raw equation blocks over unrestricted hierarchical state machines, so only minimal additional back end translation is required. Specifically, any cyclic dependences among the variable-defined and junction-defined variables, referred to as cyclic unguarded dependences, are eliminated. While such cyclic dependences can occur as a consequence of the standard translation, a system that is equivalent with respect to the set of top variables can always be produced without these cyclic dependences [BS92]. The process of eliminating a cyclic unguarded dependence involves expanding the greatest indexed variable in the cycle, substituting the


atomic constant corresponding to the defining block type for each instance of that variable in the expansion, and then re-factoring the expansion back into simple formulas. Figure 2.7 illustrates an example of a naturally occurring cyclic unguarded dependence and the equivalent system with the cycle removed¹. After removing any unguarded dependence cycles, the symbol table is used to generate a script instructing the engine which variables to report and how to use the result to determine for which variables to generate an example in the model.

More sophisticated back ends are possible, including those that produce specialized forms, such as Buchi automata [HKT00], which accept paths containing the infinite repetition of some state. While equation block form is strictly more expressive than some common temporal logics, many block patterns can be translated back into other logics. The ability to substitute optimization passes that preserve these forms makes the framework compatible with other model checking systems based on specific logics, such as LTL. In this way, the framework is compatible with other optimization techniques.

2.2 The Optimizer

The logic compiler optimizer is organized as a system of independent optimization rou-

tines. As in a traditional compiler, each routine embodies a single optimization concept.

¹The routine to eliminate these dependences is also run as part of the front end. This is done because eliminating these dependences can result in dead and/or trivial equations that can be optimized away. None of the optimization routines described in Section 2.2, with the possible exception of Perform Peephole Substitutions, introduces new cyclic unguarded dependences. The routine is included as part of the back end to ensure correctness.


TL Compiler Optimization            PL Compiler Optimization
Dead Equation Elimination           Dead Code Elimination
Perform Generic Solve               Constant Propagation
Unify Atomic Propositions           none
Normalize Equation Block System     none
Remove Trivial Equations            Copy Propagation
Simplify Redundant Junctions        Partial-Redundancy Elimination
Sever Isomorphic Expressions        Common Sub-Expression Elimination
Perform Peephole Substitution       Peephole Optimization

Figure 2.8: Temporal Logic and Programming Language Optimizations Analogy

In fact, as Figure 2.8 demonstrates, many of the optimizations applicable to equation block systems have traditional programming language analogs. The goal of each optimization is to reduce the total complexity of the equation block system as measured by the time and space required to resolve it. In this section each of the eight basic optimizations is presented in some detail. This is followed by a discussion of the pass scheduler. Finally, a comprehensive example is presented to illustrate how the passes come together to produce an effective result.

2.2.1 Operating Modes

One novelty of this compiler is that it is capable of operating in either a model-independent

or model-dependent mode. The choice of mode affects the operation of the conflu-

ence and comparison operations over the modalities and atomic propositions. In the

model-independent mode, modalities and atomic propositions are treated as abstract.

In this mode, the optimizer is restricted to the syntactic notion of equality. Compar-

isons between distinct abstract modalities or atomic propositions are otherwise always inconclusive. Union and intersection operations generate new abstractions that embody


information about how they were constructed. In this way, equality can still be inferred if two abstract objects have identical constructions. This abstraction is augmented by the inclusion of absolute constants. The atomic propositions true and false are used to denote the universal and empty set of model states, respectively. The modality true, also denoted in the syntax by the absence of an explicit modality, likewise refers to the full set of model transitions.
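The following minimal Python sketch illustrates one way the model-independent abstraction could behave. The AbstractProp class and its methods are hypothetical, and inequality of two constructions stands in here for the "inconclusive" outcome described above.

    from dataclasses import dataclass
    from typing import Tuple

    # Hypothetical abstraction for the model-independent mode: a proposition is either a
    # named leaf or an operator applied to previously constructed abstractions.  Equality
    # holds only for identical names or identical constructions; in this sketch, "not
    # equal" stands in for the inconclusive comparison result.
    @dataclass(frozen=True)
    class AbstractProp:
        op: str        # "atom", "union", or "intersection"
        args: Tuple    # the name for an atom, or the operand pair otherwise

        def union(self, other):
            return AbstractProp("union", (self, other))

        def intersection(self, other):
            return AbstractProp("intersection", (self, other))

    TRUE = AbstractProp("atom", ("true",))    # absolute constants: the full state set
    FALSE = AbstractProp("atom", ("false",))  # and the empty state set

    p = AbstractProp("atom", ("p",))
    q = AbstractProp("atom", ("q",))
    assert p.union(q) == p.union(q)   # identical constructions compare equal
    assert p.union(q) != q.union(p)   # distinct constructions remain inconclusive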

In the model-dependent mode, comparisons between modalities or atomic propositions involve a complete inspection of the set of transitions or states that are represented. Context-sensitive atomic propositions are represented as finite sets of (analysis context, vertex) pairs. Comparisons between different contexts, whether derived from the same analysis or not, are always inconclusive unless explicit external information is provided that one context has a relationship to another context with respect to certain vertices. When live context information exists, the comparisons are restricted to (context, vertex) pairs where the vertex is in the domain of the context². The model-dependent mode also records whether every model state has at least one successor under the true modality and uses this during the optimization process to ensure the validity of certain reductions. This mode, therefore, subsumes the optimizations performed in the model-independent mode.

²Live contexts and the notion of the domain of a context will be described in detail in Section 3.1.


2.2.2 Optimization Routines

There are eight distinct optimization routines:

1. Dead Equation Elimination

2. Perform Generic Solve

3. Unify Atomic Propositions

4. Normalize Equation Block System

5. Remove Trivial Equations

6. Simplify Redundant Junctions

7. Sever Isomorphic Expressions

8. Perform Peephole Substitutions

The overarching goal of the optimization routines is to reduce the complexity of the equation block system without altering its semantics. Complexity can be approximated by the number of variables occurring in the system and their formula type. Modal-defined variables tend to increase the running time of a model checker, since they drive the non-trivial fixed point computations. Atomic-defined variables tend to increase the space required. The optimizations are constructed so that the semantics of each variable is preserved, including non-top variables, unless the variable is completely eliminated from the system. In select cases, new variables are added to the system to take the place of more complex subsystems of variables. The optimizations are guaranteed to retain each of the top variables in the system, the variables corresponding to the original query formulas. Optimizations that preserve the semantics of each of the top variables are said to be correct. Finally, notice that the number of edges in the depends relation is


[Query 1] FCTL: EX EF p ↦ x0 in
    min x3 = p
    min x5 = true
    min x1 = x2  x2 = x3 ∨ x4  x4 = x5 ∧ x6  x6 = ◇x1
    min x0 = ◇x1

[Query 2] FCTL: EX EF EF p ↦ z0 in
    min z5 = p
    min z7 = true
    min z10 = true
    min z3 = z4  z4 = z5 ∨ z6  z6 = z7 ∧ z8  z8 = ◇z3
    min z1 = z2  z2 = z3 ∨ z9  z9 = z10 ∧ z11  z11 = ◇z1
    min z0 = ◇z1

[Pass 1] Perform Generic Solve: x4 ↦ x6, z6 ↦ z8, and z9 ↦ z11
    min x3 = p
    min x5 = true
    min x1 = x2  x2 = x3 ∨ x4  x4 = x6  x6 = ◇x1
    min x0 = ◇x1
    min z5 = p
    min z7 = true
    min z10 = true
    min z3 = z4  z4 = z5 ∨ z6  z6 = z8  z8 = ◇z3
    min z1 = z2  z2 = z3 ∨ z9  z9 = z11  z11 = ◇z1
    min z0 = ◇z1

[Pass 2] Dead Equation Elimination: x5, z7, z10 eliminated
    min x3 = p
    min x1 = x2  x2 = x3 ∨ x4  x4 = x6  x6 = ◇x1
    min x0 = ◇x1
    min z5 = p
    min z3 = z4  z4 = z5 ∨ z6  z6 = z8  z8 = ◇z3
    min z1 = z2  z2 = z3 ∨ z9  z9 = z11  z11 = ◇z1
    min z0 = ◇z1

[Pass 3] Unify Atomic Propositions: x3, z5 ↦ x3
    min x3 = p
    min x1 = x2  x2 = x3 ∨ x4  x4 = x6  x6 = ◇x1
    min x0 = ◇x1
    min z5 = x3
    min z3 = z4  z4 = z5 ∨ z6  z6 = z8  z8 = ◇z3
    min z1 = z2  z2 = z3 ∨ z9  z9 = z11  z11 = ◇z1
    min z0 = ◇z1

[Pass 4] Normalize Equation Block System
    neutral x3 = p
    min x1 = x2  x2 = x3 ∨ x4  x4 = x6  x6 = ◇x1
    neutral x0 = ◇x1
    neutral z5 = x3
    min z3 = z4  z4 = z5 ∨ z6  z6 = z8  z8 = ◇z3
    min z1 = z2  z2 = z3 ∨ z9  z9 = z11  z11 = ◇z1
    neutral z0 = ◇z1

[Pass 5] Remove Trivial Equations: x1, x4, z1, z3, z5, z6, z9 trivially defined
    neutral x3 = p
    min x2 = x3 ∨ x6  x6 = ◇x2
    neutral x0 = ◇x2
    min z4 = x3 ∨ z8  z8 = ◇z4
    min z2 = z4 ∨ z11  z11 = ◇z2
    neutral z0 = ◇z2

[Pass 6] Simplify Redundant Junctions: z11 → z4 so z2 = z4 ∨ z11 ↦ z2 = z4
    neutral x3 = p
    min x2 = x3 ∨ x6  x6 = ◇x2
    neutral x0 = ◇x2
    min z4 = x3 ∨ z8  z8 = ◇z4
    min z2 = z4  z11 = ◇z2
    neutral z0 = ◇z2

Figure 2.9: Sequence of Optimization Passes on a Two-Query Batch Example


Figure 2.9 (Continued)

[Pass 7] Dead Equation Elimination: z11 eliminated
[Pass 8] Normalize Equation Block System
[Pass 9] Remove Trivial Equations: z2 trivially defined
    neutral x3 = p
    min x2 = x3 ∨ x6  x6 = ◇x2
    neutral x0 = ◇x2
    min z2 = x3 ∨ z8  z8 = ◇z2
    neutral z0 = ◇z2

[Pass 10] Sever Isomorphic Expressions: x0, z0, x6, z8 ↦ x6
    neutral x3 = p
    min x2 = x3 ∨ x6  x6 = ◇x2
    neutral x0 = x6
    min z2 = x3 ∨ z8  z8 = x6
    neutral z0 = x6

[Pass 11] Dead Equation Elimination: z2, z8 eliminated
[Pass 12] Normalize Equation Block System
[Pass 13] Remove Trivial Equations: x0, z0 trivially defined
    neutral x3 = p
    min x2 = x3 ∨ x0  x0 = ◇x2
    neutral z0 = x0

bounded above by twice the number of variables defined in the system³. Therefore, all of

the asymptotic complexity measures can be described in terms of numbers of variables.

Dead Equation Elimination

This is a straightforward optimization based on dead code elimination [Ken81] in a

traditional compiler. In a pass of this routine, equations that do not contribute to the

solution of one or more of the top variables in the equation block system are eliminated.

This pass is used to sweep away parts of the equation block system that have been found

to be redundant or unnecessary by other passes in the optimizer.

The optimization starts by marking the equations defining the top variables and proceeds by depth-first search along the reverse of the depends relation. Equations that define variables needed in the definition of the top variables are ultimately marked. When

³The definition of each variable requires the use of at most two other variables.


Binary Operators

    ∨     | true   false   q       0
    true  | true   true    true    true
    false | true   false   q       0
    p     | true   p       p ∪ q   0
    0     | true   0       0       0

    ∧     | true   false   q       0
    true  | true   false   q       0
    false | false  false   false   false
    p     | p      false   p ∩ q   0
    0     | 0      false   0       0

Atomic Propositions, Variables, & Unary Operators

    v     | q   v       ◇v      〈a〉v    □v      [a]v
    true  | q   true    true    0       true    true
    false | q   false   false   false   false   0
    p     | q   p       0       0       0       0
    0     | q   0       0       0       0       0

Assume that a ≠ true.

Figure 2.10: Evaluation Tables for Equation Forms in Generic Solve

the search terminates, any unmarked equations are eliminated in place from the system.

A final sweep then eliminates any empty equation blocks and updates the indexes stored

in the symbol table. The preservation of the semantics is guaranteed by the nature of the

depends relation; no variable that is transitively required by the algorithm of Figure 2.3 in the evaluation of a top variable is eliminated. This optimization does not disturb the ordering of the remaining variables and requires time linear in the number of variables.
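A minimal sketch of the mark-and-sweep, assuming the system is represented simply as a mapping from each variable to the variables used in its defining formula; the function name and representation are illustrative, not those of the actual implementation.

    def dead_equation_elimination(uses, top_vars):
        """Mark every equation reachable from a top variable through the uses
        relation, then sweep the rest.

        uses     -- dict mapping each variable to the variables occurring in its
                    defining formula (a stand-in for traversing the depends
                    relation in reverse)
        top_vars -- iterable of variables corresponding to original queries
        Returns the retained equations and the set of marked variables.
        """
        marked = set()
        stack = list(top_vars)
        while stack:                      # iterative depth-first search
            x = stack.pop()
            if x in marked:
                continue
            marked.add(x)
            stack.extend(uses.get(x, ()))
        # Sweep: everything unmarked is eliminated in place.
        return {x: rhs for x, rhs in uses.items() if x in marked}, marked

    # Example: z9 does not contribute to the top variable z0 and is swept away.
    system = {"z0": ["z1"], "z1": ["z2", "z3"], "z2": [], "z3": ["z1"], "z9": ["z2"]}
    kept, marked = dead_equation_elimination(system, ["z0"])
    assert "z9" not in kept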

Perform Generic Solve

This optimization is partially inspired by the notion of constant propagation [WZ91]

for imperative programs. The goal is to propagate inward the absolute atomic con-

stants while using them to simplify modal and junction formulas. Additionally, vari-

ables defined entirely by atomic propositions with no intervening modal expressions

are redefined as atomic-defined variables via unions and intersections of those atomic

propositions. In the model-dependent case, these are constructed explicitly while in the


model-independent case, the operators work abstractly.

Propagation is accomplished by running the algorithm described in Figure 2.3, but

with the evaluation rules in step 2b replaced with those of Figure 2.10. Under the new

evaluation rules, a value of zero (0) can be interpreted as “will never know”. Similar to the third value in a three-valued logic, it denotes a variable whose solution cannot be

simplified to an atomic proposition without evaluating a non-trivial modal expression.

Essentially, these rules generate the most complete solution available without knowing

the precise transition relation of the model. The rules in the figure utilize the assumption

that every state has a successor under the true modality. If this assumption is invalid or cannot be determined, then the pass proceeds by using the rules for other modalities even in the a = true case.
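The rules of Figure 2.10 can be pictured with the following Python sketch, in which None plays the role of the value 0 ("will never know"), atomic propositions are modeled as frozensets of states, and the successor assumption is passed in explicitly. This is an illustration of the table, not the engine's actual code.

    # Values: True, False, a frozenset of model states (an atomic proposition in
    # the model-dependent case), or None, which plays the role of the 0 entry in
    # Figure 2.10.
    def ev_or(a, b):
        if a is True or b is True:
            return True
        if a is False:
            return b
        if b is False:
            return a
        if a is None or b is None:
            return None
        return a | b                      # union of two atomic propositions

    def ev_and(a, b):
        if a is False or b is False:
            return False
        if a is True:
            return b
        if b is True:
            return a
        if a is None or b is None:
            return None
        return a & b                      # intersection of two atomic propositions

    def ev_diamond(v, modality_is_true, every_state_has_successor=True):
        # <a>false is false for any modality; <true>true is true under the
        # successor assumption; everything else needs the transition relation.
        if v is False:
            return False
        if v is True and modality_is_true and every_state_has_successor:
            return True
        return None

    def ev_box(v, modality_is_true, every_state_has_successor=True):
        # [a]true is true for any modality; [true]false is false when every state
        # has a successor; everything else is left unresolved.
        if v is True:
            return True
        if v is False and modality_is_true and every_state_has_successor:
            return False
        return None

    assert ev_or(frozenset({1}), frozenset({2})) == frozenset({1, 2})
    assert ev_and(True, frozenset({1})) == frozenset({1})
    assert ev_diamond(True, modality_is_true=False) is None   # <a>true stays open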

When the generic solve terminates, the non-zero bindings are used to transform the equation block system. Variables are considered in order of increasing index. For a variable, X, defined in a singleton block found to have non-zero binding p, if p = true then equations of the form Z = X ∧ Y become Z = Y. If p = false then equations of the form Z = X ∨ Y become Z = Y. Finally, the equation defining X becomes X = p. For variables defined in non-singleton blocks, the junction transformations are restricted to cases where F(X) < I(Z). Also, instead of redefining X at its current index, a new neutral type block with index F(X) + 1 is created containing the single equation X = p. Every instance of X in Bk, k ≤ F(X), is then converted to a new variable name. The symbol table entries for indexes, formulas, and uses are updated as each variable is considered and processed. Perform Generic Solve does not update the Final Index field when a variable is redefined. Those values are recomputed by


Normalize Equation Block System. The old value is conservative and does not affect the

correctness of any other optimization routine.

The correctness of this optimization follows from the evaluation rules used for this

routine that guarantee that the final binding of each variable is either its actual solution or

zero. Only those variables solved to non-zero values have their expressions replaced in

the system and only at the point at which their value may be considered finalized. Since

no cyclic dependence is destroyed and no reordering takes place, the block ordering

assumption is also preserved. For any fixed alternation depth, d, each variable needs to

be updated at most 3d times. Hence, the running time is linear in the number of variables

for any bounded alternation depth.

Unify Atomic Propositions

In this optimization, an equivalence relation is constructed among the atomic-defined

variables in the equation block system using the notion of atomic proposition equiva-

lence directed by the optimization mode. For each non-trivial equivalence class in this

relation, the variable with the least index becomes the class representative. In the event

of a draw, a representative is chosen arbitrarily from among the set of least indexed vari-

ables. Each other variable in the equivalence class is then redefined in its current block

as the class representative. The symbol table entries are updated for each variable in the

equivalence class. When this pass terminates, each atomic proposition in the system is

unique with respect to the mode-determined notion of equivalence.

The correctness of this optimization follows from the choice of the least-indexed

member of each equivalence class as the class representative. Atomic-defined variables


cannot be part of a cyclic dependence. By the block ordering assumption they can be

considered finalized before they are ever used. The least indexed member of an equiv-

alence class can then be considered finalized before any of the new variable equations

are evaluated. Thus, the semantics of the system are preserved.

The purpose of this optimization is three-fold. First, it minimizes the number of dis-

tinct atomic propositions that occur in the equation block system. For a model checker,

this reduces the number of distinct sets of model states that must be manipulated. Sec-

ond, although it does not reduce the number of variables in the system, it does introduce

trivial equations that subsequent passes can eliminate, thus reducing the size of the equa-

tion block system. Finally, it eliminates the need for any further comparisons among the

atomic-defined variables. In the model-dependent case, these comparisons can be non-

trivial, and performing them all in a single pass allows subsequent passes to require only

the trivial notion of syntactic equality as the test in either optimization mode. The per-

formance of this optimization is bounded by the time to create the equivalence relation.

In the model-dependent case, hashing is used to create pre-equivalence classes that con-

tain atomic-defined variables that may be equivalent. Comparisons are then restricted to

these classes. In the worst case, where there are no equivalences (and only a single pre-

class in the model-dependent case), the number of required comparisons is quadratic in

the number of atomic-defined variables.

Because this is such a potentially costly optimization, the metaphor with programming language compilers is slightly deviated from here. After the first time this routine executes, a flag is set. Any subsequent re-execution of this optimization is then restricted

to finding equivalences among the atomic constants. Since only constants can be intro-


duced as new atomic propositions by subsequent executions of the other optimization

routines, the effectiveness of the optimization is not inhibited.

Normalize Equation Block System

Like reassociation for partial redundancy elimination [BC94] this routine does not per-

form any specific optimization. Rather, it transforms the equation block system into a

normal form that both facilitates other optimizations and makes model checking

more efficient. First, it repartitions the individual equations to make the equation blocks

as small as possible, while at the same time minimizing the number of times the se-

mantic algorithm of Figure 2.3 would need to reprocess any particular block. Second,

it identifies blocks whose type can be changed to neutral. This designation speeds up subsequent optimization and provides a possible hint to the model checking engine that the fixed point specified by the block can be computed directly without iteration.

This optimization again relies heavily on the depends relation. First, the existing equation blocks are dissolved in place to a list of equations, each tagged with their original block type and index. This list of equations is then partitioned into a topologically sorted list of strongly connected components via the depends relation. Within each component, the equations are partitioned again by their nesting depth. The nesting depth of an equation is defined as the maximal number of type alternations that occur on a simple dependence cycle consisting only of equations at or below the tagged index of the equation. The classes are ordered from least to greatest and each is then partitioned a final time into a topologically sorted list of strongly connected components using the same depends relation restricted to the equivalence class. These components, which must


consist of equations all of the same type, are the blocks. Blocks that can be assigned

neutral type are identified during this final partitioning. The blocks are then concate-

nated together, in order, and the symbol table is reconstructed. Computing the strongly

connected components requires O(n log n) time in the number of variables. Computing

the nesting depths requires quadratic time over the variables in each component.
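One standard way to obtain strongly connected components over a depends relation is Tarjan's algorithm; the Python sketch below is a generic illustration over a toy relation and omits the nesting-depth refinement and block typing. The actual implementation's method is not specified here.

    def strongly_connected_components(deps):
        """Tarjan's algorithm over deps: variable -> variables it depends on.
        Components are emitted in reverse topological order, so reversing the
        result gives the sorted list used to rebuild the blocks."""
        index_of, low, on_stack, stack, comps = {}, {}, set(), [], []
        counter = [0]

        def visit(v):
            index_of[v] = low[v] = counter[0]
            counter[0] += 1
            stack.append(v)
            on_stack.add(v)
            for w in deps.get(v, ()):
                if w not in index_of:
                    visit(w)
                    low[v] = min(low[v], low[w])
                elif w in on_stack:
                    low[v] = min(low[v], index_of[w])
            if low[v] == index_of[v]:           # v is the root of a component
                comp = []
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    comp.append(w)
                    if w == v:
                        break
                comps.append(comp)

        for v in deps:
            if v not in index_of:
                visit(v)
        return comps

    # The cycle {x2, x6} from the running example forms one component; the acyclic
    # variables form singleton components around it.
    deps = {"x3": [], "x2": ["x3", "x6"], "x6": ["x2"], "x0": ["x2"]}
    comps = strongly_connected_components(deps)
    assert sorted(map(sorted, comps)) == [["x0"], ["x2", "x6"], ["x3"]]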

Remove Trivial Equations

A trivial equation is a variable-defined equation that is unnecessary to the semantics

of the equation block system. Trivial equations occur as a natural by-product of the

translation to equation block form as well as from other simplifying optimizations. For

example, recall that Unify Atomic Propositions replaces redundant atomic propositions

with trivial equations. This optimization seeks to remove these equations by unifying

both the left and right hand sides into a single variable.

Processing occurs by examining each equation in each block, in order. Specifically, a trivial equation is defined to be an equation of the form X = Y in which X does not occur free in any Bk with k < I(X). In these cases, the equation is removed and every use of X is converted to a use of Y. In the event that an equation of the form X = X, X = X ∨ X, or X = X ∧ X is created, then this is replaced by the equation X = p where p = true if the block has type max and p = false otherwise. If X is a top variable and Y is not, then the syntactic variable X is substituted for every instance of Y throughout the equation block system (including the definition of Y). If both X and Y are top variables, then a new neutral block consisting of the single equation, X = Y, is appended to the end of the equation block system. This ensures that each top variable


    Equation     Test (Always?)    Reduction
    Z = X ∨ Y    X → Y             Z = Y
                 Y → X             Z = X
                 ¬X → Y            Z = true
    Z = X ∧ Y    X → Y             Z = X
                 Y → X             Z = Y
                 X → ¬Y            Z = false

Figure 2.11: Tests and Reductions for Simplify Redundant Junctions

is preserved in the system.

Correctness is ensured by the restrictions placed on the variable equations that can be

eliminated. Only those variables whose initial value is never required have their trivial

equation eliminated. Hence, the semantics of the system is preserved. It is straightfor-

ward to check that this optimization cannot violate the block ordering assumption. This

optimization requires linear time.
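A deliberately simplified sketch of the basic rewrite follows, ignoring block indexes and the special cases for top variables and for equations of the form X = X; it only illustrates how collapsing X = Y propagates through the x-subscripted half of the running example of Figure 2.9.

    def remove_trivial_equations(equations, top_vars):
        """Collapse equations of the form X = Y by rewriting every use of X to Y.

        equations -- dict: variable -> list of tokens on its right-hand side
        top_vars  -- variables that must survive; their equations are never removed
        Simplified: block indexes and the top-variable special cases are omitted.
        """
        changed = True
        while changed:
            changed = False
            for x, rhs in list(equations.items()):
                if len(rhs) == 1 and rhs[0] in equations and x not in top_vars:
                    y = rhs[0]
                    del equations[x]
                    for other, tokens in equations.items():
                        equations[other] = [y if t == x else t for t in tokens]
                    changed = True
                    break
        return equations

    eqs = {"x3": ["p"], "x1": ["x2"], "x2": ["x3", "∨", "x4"], "x4": ["x6"],
           "x6": ["◇", "x1"], "x0": ["◇", "x1"]}
    out = remove_trivial_equations(eqs, {"x0"})
    # x1 and x4 are trivially defined and disappear: x2 = x3 ∨ x6, x6 = ◇x2, x0 = ◇x2
    assert out == {"x3": ["p"], "x2": ["x3", "∨", "x6"],
                   "x6": ["◇", "x2"], "x0": ["◇", "x2"]}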

Simplify Redundant Junctions

Inspired by partial-redundancy elimination [MR81], this optimization examines each

junction equation to determine if there is an implication relationship between the two

operands that holds each time the equation is evaluated. If such an implication relation-

ship exists, it can be used to simplify the formula to either a single operand or an absolute

atomic constant. The tests and simplifications for each junction form are described in

Figure 2.11.

Each test involves attempting to prove by induction that the implication rule be-

tween the two variables holds universally. This requires testing both the base case (first

invocation) and using this as an inductive hypothesis for testing each subsequent invo-


cation initiated by the fixed point semantics. To accomplish this, both variables used in the junction formula are first expanded into a DNF expression in which atomic propositions, modal expressions, and higher-indexed variables make up the terminals. The proof then proceeds by trying to satisfy a sufficient (though not necessary) condition for the implication of one DNF form by another. Specifically, DNF expression E1 implies E2 if for every conjunctive clause Ci in E1 there exists Cj in E2 such that Ci → Cj. Likewise, Ci → Cj if for every atom al in Cj there exists ak in Ci such that ak → al. The proof proceeds by recursive descent. Atomic proposition p implies q if p ⊆ q. Modal expression [a]X implies [b]Y if the expressions are from the same block type, a ⊆ b, and X recursively implies Y. The base test substitutes the initial values for the terminals while the recursive test carries the previous test as an assumption. The process terminates when an implication is found or when every possible matching has been exhausted. Any test on the formula defining X that would result in the elimination of a use of variable Y fails trivially if I(Y) < I(X) ≤ F(Y). A quick inspection of the DNF forms for mismatched modalities or atomic propositions before testing precludes most failing tests.
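The sufficient condition on DNF forms can be written down directly; in the sketch below, atoms are modeled as frozensets of states so that "p implies q" is just containment, and the atom-level test is passed in as a parameter. All names are illustrative and the sketch omits the modal and recursive cases.

    def clause_implies(ci, cj, atom_implies):
        """Ci -> Cj if every atom of Cj is implied by some atom of Ci."""
        return all(any(atom_implies(ak, al) for ak in ci) for al in cj)

    def dnf_implies(e1, e2, atom_implies):
        """Sufficient (not necessary) test that DNF e1 implies DNF e2:
        every conjunctive clause of e1 must imply some clause of e2.
        Each DNF is a list of clauses; each clause is a set of atoms."""
        return all(any(clause_implies(ci, cj, atom_implies) for cj in e2) for ci in e1)

    # Toy atoms are frozensets of states, so "p implies q" is containment p ⊆ q.
    subset = lambda p, q: p <= q
    p, q = frozenset({1}), frozenset({1, 2})
    e1 = [{p}]                       # the single clause {p}
    e2 = [{q}, {frozenset({3})}]
    assert dnf_implies(e1, e2, subset)        # p -> q, so the implication is proved
    assert not dnf_implies(e2, e1, subset)    # the reverse test fails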

This test is a sufficient, though not necessary, condition to prove that the appropriate implication is a tautology. Since this optimization can remove uses of some variables, it can potentially result in dead variables that can be removed by subsequent passes of Dead Equation Elimination. The symbol table is updated as each equation is modified.
The trivial failure test precludes eliminating any dependence that would violate the block ordering assumption. The correctness of this optimization follows from the fact that the implication holds each time the junction formula is evaluated. As a result, the


simplifications of Figure 2.11 do not change the value derived by the formula for any

evaluation and hence do not change the semantics of the equation block system. The

performance of this optimization is bounded by the time to perform the tests. In the

worst case, the test used in this implementation is quadratic in the number of variables

on which the junction operands transitively depend.

Sever Isomorphic Expressions

This optimization builds equivalence classes by identifying pairs of isomorphic variables

in the system. Isomorphic variables are variables that are provably equivalent at the

termination of the semantic algorithm. Given these equivalence classes, the variable X with the least F(X) in the class becomes the representative. Each other singleton, Y, in the class is redefined, in place, by the variable equation Y = X. For other members of the class, Z, a new neutral block is created at index F(Z) + 1 with the equation Z = X, and every instance of Z in a block Bk with k ≤ F(Z) is renamed to a new name. The result is that equations defining an isomorphic copy of an expression become “severed” as there are no longer any uses. Dead Equation Elimination is then able to discard these equations. The symbol table is updated as each equation is modified or created.

Equivalence is determined using an implication proof technique similar to the one

used in Simplify Redundant Junctions. In this case X ≅ Y ↔ (X → Y) ∧ (Y → X).

The complexity of this optimization comes from the need to determine which variable

pairs to test. To achieve the optimal result, it is unnecessary to find every isomorphism

that exists; one isomorphism can result in the elimination of several variables with iso-

morphic partners. Testing for these isomorphic partners would be redundant. Further, it


is computationally infeasible to test for an isomorphism between every pair of variables.

To minimize the number of tests, a signature is computed for each variable consisting

of the atomic propositions and modal operators on which the variable transitively de-

pends. This signature is intended to be a compelling precondition for equivalence. The

signature can be computed quickly and only variables with the same signature need to

be compared. As each new variable is added to the equivalence relation within a particular signature, it is only tested against one representative for each class so far identified within that signature. Further, to prevent redundant testing, tests are restricted to the top

variables and variables used at an index strictly greater than their own index. These are

the variables most likely to represent closed sub-expressions from an original formula.

From these equivalence classes, equivalences between the remaining variables are re-

stricted to structural isomorphism. For instance, if X = [a]W, Z = [a]Y, and it was found that W ≅ Y, then X ≅ Z.

The correctness of the optimization follows from the restriction that a variable can

only be redefined to an equivalent variable that is immutable before the first evaluation of the replacement formula. In essence, this replaces the variable with its solution, which

does not change the semantics of the equation block system. The running time is depen-

dent on the number of tests that must be performed. In the worst case, this is quadratic

in the number of variables within each signature.
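A minimal sketch of the signature computation, assuming each variable's definition is summarized as a (kind, payload) pair and that the depends relation is given as a dictionary; both assumptions are for illustration only.

    def signature(var, defs, deps):
        """Signature of a variable: the atomic propositions and modal operators on
        which it transitively depends.  Variables sharing a signature are the only
        candidates tested for isomorphism.

        defs -- dict: variable -> (kind, payload), kind in {"atomic", "modal",
                "junction"}, payload naming the proposition or operator
        deps -- dict: variable -> variables used in its definition
        """
        seen, atoms, modals = set(), set(), set()
        stack = [var]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            kind, payload = defs[v]
            if kind == "atomic":
                atoms.add(payload)
            elif kind == "modal":
                modals.add(payload)
            stack.extend(deps.get(v, ()))
        return frozenset(atoms), frozenset(modals)

    # x6 and z8 from the running example transitively reach the same proposition p
    # and the same diamond operator, so they share a signature and get compared.
    defs = {"x3": ("atomic", "p"), "x2": ("junction", "∨"), "x6": ("modal", "◇"),
            "z2": ("junction", "∨"), "z8": ("modal", "◇")}
    deps = {"x2": ["x3", "x6"], "x6": ["x2"], "z2": ["x3", "z8"], "z8": ["z2"]}
    assert signature("x6", defs, deps) == signature("z8", defs, deps)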

Perform Peephole Substitutions

This is an optional optimization inspired by the notion of peephole optimizers for low-

level machine code [McK65]. The preceding routines break the problem of reducing


Source Pattern: s0 is semantically equivalent to CTL: A[P U Q]
    neutral s3 = P
    neutral s4 = Q
    max s1 = s3 ∧ s5  s5 = s4 ∨ s6  s6 = □s1
    min s2 = s4 ∨ s7  s7 = □s2
    neutral s0 = s1 ∧ s2

Target Pattern: t0 is semantically equivalent to s0
    neutral t1 = Q
    neutral t3 = P
    min t0 = t1 ∨ t2  t2 = t3 ∧ t4  t4 = □t0

Original Code: z0 is semantically equivalent to CTL: A[(AG W) U (AG X)]
    neutral z8 = W
    neutral z10 = X
    max z3 = z8 ∧ z9  z9 = □z3
    max z4 = z10 ∧ z11  z11 = □z4
    max z1 = z3 ∧ z5  z5 = z4 ∨ z6  z6 = □z1
    min z2 = z4 ∨ z7  z7 = □z2
    neutral z0 = z1 ∧ z2

Optimized Code: Pattern matches s0 ≅ z0 ≅ t0, s3 ≅ z3 ≅ t3, and s4 ≅ z4 ≅ t1
    neutral z8 = W
    neutral z10 = X
    max z3 = z8 ∧ z9  z9 = □z3
    max z4 = z10 ∧ z11  z11 = □z4
    min t0 = t1 ∨ t2  t1 = z4  t2 = t3 ∧ t4  t3 = z3  t4 = □t0
    max z1 = z3 ∧ z5  z5 = z4 ∨ z6  z6 = □z1
    min z2 = z4 ∨ z7  z7 = □z2
    neutral z0 = t0

Dead Code Eliminated: z1, z2, z5, z6, z7 eliminated
    neutral z8 = W
    neutral z10 = X
    max z3 = z8 ∧ z9  z9 = □z3
    max z4 = z10 ∧ z11  z11 = □z4
    min t0 = t1 ∨ t2  t1 = z4  t2 = t3 ∧ t4  t3 = z3  t4 = □t0
    neutral z0 = t0

Figure 2.12: Example of Perform Peephole Substitutions


the complexity of an equation block system into the execution of small, semantics-

preserving transformations. However, cases can arise in practice when an equation pat-

tern is present for which there exists a simpler equivalent pattern, but for which there

does not exist a simple sequence of transformations taking the more complex pattern to

the simpler one. This routine fills the need to perform this transformation by providing

a library of “large scale” translation rules.

The peephole analysis works by comparing variables to entries in a library of pattern

transformations in an attempt to find a syntactic equivalence. When such an equivalence

is found, the variable is redefined as the top variable in a corresponding target pattern.

The closed expression variables of the target pattern are then instantiated based on the

syntactic equivalence with the source pattern. The necessary variables for the target

pattern are added to the equation block system so as not to conflict with any existing

variable names. The intent is that a subsequent pass of Dead Equation Elimination will

sweep the variables of the original source pattern from the system. Figure 2.12 shows an

example of a peephole optimization that could not be performed by any combination of

the preceding passes. After the transformation is performed, the symbol table is updated

and entries are generated for the new variables.

Finally, while it is true that an entire optimization system could be based around the application of a library of reduction rules, such a system would be inefficient; this routine has the same running time characteristics as Sever Isomorphic Expressions. The intent

of this pass is not to perform simple optimizations but rather to provide a tool within the

framework for performing certain domain-specific, monolithic transformations.


Pass                      Possibly Reschedules
DEE, UAP, NEBS, RTE       (none)
PGS, SRJ                  any of the seven other passes
SIE, PPS                  three of the other passes

Figure 2.13: Interdependencies between Optimization Passes

2.2.3 Dynamic Scheduler

As in any optimizing compiler, the sequencing of the passes is crucial to the efficiency of the system. Rather than using a fixed optimization sequence, the logic compiler uses a

dynamic pass scheduler within the static optimization order introduced at the outset of

this section. Each optimization routine has an associated flag to indicate if it is sched-

uled to run. At each iteration, the first optimization in the fixed order that is flagged

is executed. When a pass performs a transformation that may open an opportunity for

another optimization to make progress, it simply sets the flag for the corresponding rou-

tine. A fixed order for the routines was chosen so that later routines could be accelerated

by making assumptions that earlier ones had run. For example, the comparison of sig-

natures in Sever Isomorphic Expressions is accelerated by the unification of the atomic propositions in Unify Atomic Propositions. If earlier optimizations are omitted then the

later optimizations are still correct; they are just somewhat less effective.

When the optimizer starts, the flag for each routine is set with the exception of the

flag for Dead Equation Elimination. Since this is the first optimization in the order,


and since the native translation never produces equations that are initially dead⁴, an initial pass of this optimization is unnecessary. Dead Equation Elimination, Unify Atomic Propositions, Normalize Equation Block System, and Remove Trivial Equations are es-

sentially cleanup routines and never schedule any additional optimization. The remain-

ing four optimizations set the flag of one or more of the remaining routines whenever

they make progress. Figure 2.13 shows the scheduling interdependencies of the eight

optimizations.
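The scheduling loop itself is small; the following Python sketch captures the flag mechanism with toy pass routines. The rescheduling sets shown are illustrative rather than the actual rows of Figure 2.13.

    def run_optimizer(passes, reschedules, initial_flags):
        """Dynamic pass scheduler: repeatedly run the first flagged pass in the
        fixed order; a pass that makes progress sets the flags of the passes it
        may have enabled.

        passes        -- ordered list of (name, routine); a routine returns True
                         when it changed the equation block system
        reschedules   -- dict: pass name -> names of passes it may reschedule
        initial_flags -- names of passes flagged before the first iteration
        """
        flags = set(initial_flags)
        trace = []
        while flags:
            name, routine = next((n, r) for n, r in passes if n in flags)
            flags.discard(name)
            trace.append(name)
            if routine():                              # progress was made
                flags.update(reschedules.get(name, ()))
        return trace

    # Toy run: PGS makes progress once and reschedules some cleanup passes, which
    # then run without making further progress, so the loop terminates.
    progress = {"PGS": iter([True, False])}
    mk = lambda n: (lambda: next(progress.get(n, iter([])), False))
    order = [(n, mk(n)) for n in ["DEE", "PGS", "UAP", "NEBS", "RTE", "SRJ", "SIE", "PPS"]]
    resched = {"PGS": {"DEE", "UAP", "NEBS", "RTE"}}      # illustrative row only
    print(run_optimizer(order, resched, {"PGS", "UAP", "NEBS", "RTE", "SRJ", "SIE", "PPS"}))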

2.2.4 A Comprehensive Example

A complete example of the execution of the optimizer is provided in Figure 2.9. In

the example, two related but independent FCTL formulas are translated into a single equation block system and optimized. The first query, translated into the x-subscripted variables, should be read: “There exists a successor such that there exists a path where in the future p holds”. The second query, translated into the z-subscripted variables, should likewise be read: “There exists a successor such that there exists a path where in the future there exists a path where in the future p holds”. Intuitively, the two queries are equivalent; if in the future it is the case that something will hold in the future, then it is the case that something will hold in the future. The second query uses a redundant EF operator. These queries were chosen to provide a small example that would give grist to each of the active optimization passes. For this example Perform Peephole Substitutions has been turned off; an example of its execution is provided separately in

⁴Removal of a cyclic unguarded dependence by the front end can produce dead equations. In this case, Dead Equation Elimination is retained as the initial pass.


Figure 2.12.

The queries in the example are translated into equation block form using the stan-

dard translations for FCTL and the modal mu-calculus. FCTL is an extension of CTL that incorporates fairness constraints. Since these formulas do not make use of a fairness constraint, the constraint is interpreted as true. This adds an additional atomic

proposition to the initial equation block form that can be optimized away. Here, both

queries use the same syntactic atomic proposition, so the compiler generates the same

output in both the model-dependent and model-independent modes. The variables x0 and z0 are the top variables in this case. The final neutral block in the output is required

to ensure that both of these variables remain in the system. Notice that the optimizer

not only finds the equivalence between the two queries, but that it significantly opti-

mizes the first query as well. In fact, for the first query there is no equivalent FCTL

expression that translates into a block form under the standard translation with only

one modal expression. The factored intermediate representation admits an optimization

that is impossible at the FCTL source level. Some other systems compensate for this

by introducing extended operators into the native logic that are then interpreted as this

optimized form [CDH+00]. It is self-evident that the optimized equation block system

would place appreciably less demand on a model checking engine than the unoptimized

version and that this reduction would be orthogonal to any reduction achieved by reduc-

ing the state space of the model over which the resulting equation block system is checked. Since both the optimized and unoptimized versions of the queries can be re-

solved in time linear in the size of the model, the reduction in model checking time as

a percentage is invariant in the size of the model. Experimentally, the optimized ver-


Name                Logic   Description                                                            Atomic Propositions

Instantiated over each Global Variable v
use-before-define   CTL     ∃ a path where v is used before it is defined?                         D(v), U(v)
never-defined       CTL     ∃ a path where v is declared but never defined?                        L(v), A(v)
never-used          CTL     ∃ a path where v is defined but never used?                            D(v), U(v)
kill-before-use     CTL     ∃ a path where a definition of v is killed before it is used?          D(v), U(v)

Instantiated over each Global Pointer Variable p
never-free          FCTL    ∃ a path where p is malloc'd and never free'd before termination?      M(p), F(p)
illegal-free        CTL     ∃ a path where p is free'd without having been malloc'd?               M(p), F(p)
double-free         CTL     ∃ a path where p is free'd twice without being malloc'd in between?    M(p), F(p)
double-malloc       CTL     ∃ a path where p is malloc'd twice without being free'd in between?    M(p), F(p)
null-dereference    Lµ      ∃ a path where p is dereferenced after being assigned zero?            Z(p), D(p), R(p)
false-dereference   Lµ      ∃ a path where p is dereferenced after being tested zero?              D(p), T(p), N(p), R(p)
unneeded-test       Lµ      ∃ an unneeded test of p being null?                                    D(p), T(p), N(p), R(p)

Instantiated once per Program
unseeded-random     CTL     Is a possibly unseeded random number generated?                        rand, srand

D = must define, A = may define, U = may use, L = declaration, M = must malloc, F = may free,
R = may dereference, Z = must define zero, T = test zero, N = test non-zero,
rand = call to rand(), srand = call to srand(unsigned)

Figure 2.14: Query Templates Used to Generate Test Batches

Model-Independent Optimization

                      |      Unoptimized        |   Serial Hand Optimized          |   Compiler Optimized
Code           Top    |  Vars   Atomic   Modal  |  Vars   Atomic   Modal   Red (%) |  Vars   Atomic   Modal   Red (%)
188.ammp       251    |  4382    1589     545   |  2065     658     545      2.5   |  1497     290     414     30.8
179.art        156    |  2573     928     321   |  1185     372     321      2.2   |   964     208     271     22.9
256.bzip2      299    |  4946    1784     617   |  2278     715     617      2.2   |  1852     403     519     23.2
129.compress   155    |  2506     902     313   |  1144     357     313      2.0   |   945     206     269     21.2
183.equake     260    |  4441    1607     553   |  2074     657     553      2.4   |  1641     350     450     26.2
099.go        1160    | 18637    6701    2329   |  8464    2631    2329      2.0   |  7168    1655    2024     20.2
164.gzip       449    |  7516    2714     937   |  3478    1095     937      2.3   |  2713     557     766     25.4
132.ijpeg      170    |  2831    1022     353   |  1309     412     353      2.2   |  1028     215     289     25.2
124.m88ksim   1186    | 20243    7323    2521   |  9440    2987    2521      2.4   |  7101    1428    1984     28.2
197.parser     367    |  6170    2229     769   |  2861     902     769      2.3   |  2212     461     620     26.3
300.twolf     2266    | 39019   14127    4857   | 18260    5791    4857      2.4   | 13383    2559    3762     29.3
175.vpr        304    |  5213    1887     649   |  2438     773     649      2.4   |  1805     352     506     28.9
Totals        7023    |118477   42813   14764   | 54996   17350   14764      2.2   | 43007    8684   11874     26.6

Figure 2.15: Performance of Optimizing Compiler in Model-Independent Mode

sion requires less than 30% of the running time of the unoptimized version over a fixed

model.

2.3 Experimental Results

Testing the efficacy of the logic compiler was done by applying it to batches of queries

automatically generated from several benchmark programs. Generation of a query batch

for each program was done by instantiating a collection of twelve query templates, one

at a time, over a set of appropriate program elements. Query templates are Carnauba’s

way of abstracting common query patterns away from the specific atomic propositions


Model-Independent Optimization

Code          Pass  Time (s)  DEE (#/%)  PGS (#/%)  UAP (#/%)  NEBS (#/%)  RTE (#/%)  SRJ (#/%)  SIE (#/%)
188.ammp       10     2.4     2 / 2.2    1 / 4.6    1 / 14.1   2 / 21.3    2 / 3.0    1 / 27.4   1 / 25.0
179.art        10     1.2     2 / 4.7    1 / 9.3    1 / 13.3   2 / 28.7    2 / 5.3    1 /  8.1   1 / 28.0
256.bzip2      10     2.5     2 / 2.8    1 / 6.4    1 / 15.6   2 / 26.4    2 / 4.2    1 / 10.0   1 / 32.0
129.compress   10     1.2     2 / 4.3    1 / 8.6    1 / 13.6   2 / 22.1    2 / 12.9   1 /  6.7   1 / 27.9
183.equake     10     2.5     2 / 3.0    1 / 11.0   1 / 23.0   2 / 23.3    2 / 4.7    1 /  8.3   1 / 24.0
099.go         10    28.5     2 / 0.8    1 / 2.9    1 / 22.1   2 /  7.3    2 / 3.5    1 / 11.6   1 / 48.2
164.gzip       10     4.1     2 / 1.9    1 / 6.7    1 / 17.1   2 / 17.0    2 / 4.5    1 / 14.5   1 / 34.8
132.ijpeg      10     2.2     2 / 2.0    1 / 9.4    1 /  9.7   2 / 13.7    2 / 2.9    1 / 36.6   1 / 22.9
124.m88ksim    10    27.6     2 / 0.4    1 / 4.0    1 / 32.7   2 /  8.0    2 / 2.3    1 / 11.0   1 / 39.6
197.parser     10     4.2     2 / 1.5    1 / 6.3    1 / 15.6   2 / 17.5    2 / 3.1    1 / 21.4   1 / 32.5
300.twolf      10    80.0     2 / 1.8    1 / 2.8    1 / 26.6   2 /  7.3    2 / 2.3    1 / 16.7   1 / 39.5
175.vpr        10     3.5     2 / 1.4    1 / 5.9    1 / 11.4   2 / 14.1    2 / 4.6    1 / 37.4   1 / 22.0
Totals        120   159.9    24 / 1.4   12 / 3.7   12 / 25.1  24 /  9.2   24 / 2.8   12 / 15.4  12 / 39.4

Figure 2.16: Running Time Analysis of Optimizations in Model-Independent Mode

Model-Dependent Optimization

                      |      Unoptimized        |   Serial Hand Optimized          |   Compiler Optimized
Code           Top    |  Vars   Atomic   Modal  |  Vars   Atomic   Modal   Red (%) |  Vars   Atomic   Modal   Red (%)
188.ammp       251    |  4382    1589     545   |   806     359     180     68.9   |   678      97     157     74.4
179.art        156    |  2573     928     321   |   864     290     234     29.9   |   678     148     196     44.4
256.bzip2      299    |  4946    1784     617   |  1346     483     358     45.4   |  1097     204     294     57.1
129.compress   155    |  2506     902     313   |   818     274     224     31.8   |   672     130     191     44.4
183.equake     260    |  4441    1607     553   |  1018     390     267     53.9   |   758      93     200     67.2
099.go        1160    | 18637    6701    2329   |  4977    1713    1395     45.3   |  4148     908    1112     57.0
164.gzip       449    |  7516    2714     937   |  2137     762     568     41.9   |  1737     456     477     54.2
132.ijpeg      170    |  2831    1022     353   |   759     274     202     45.9   |   604     106     162     58.8
124.m88ksim   1186    | 20243    7323    2521   |  4223    1722    1039     61.6   |  3452    1176     824     70.1
197.parser     367    |  6170    2229     769   |  1628     606     415     48.8   |  1354     394     353     58.7
300.twolf     2266    | 39019   14127    4857   | 10350    3907    2582     48.7   |  8104    2217    2149     60.1
175.vpr        304    |  5213    1887     649   |  1294     496     324     52.3   |  1049     167     273     62.3
Totals        7023    |118477   42813   14764   | 30220   11276    7788     50.1   | 24331    6096    6388     61.0

Figure 2.17: Performance of Optimizing Compiler in Model-Dependent Mode

Model-Dependent Optimization

Code          Pass  Time (s)  DEE (#/%)  PGS (#/%)  UAP (#/%)  NEBS (#/%)  RTE (#/%)  SRJ (#/%)  SIE (#/%)
188.ammp       10     1.4     2 / 4.8    1 / 11.3   1 / 19.0   2 / 21.4    2 / 4.8    1 / 14.9   1 / 20.8
179.art        17     1.4     4 / 6.9    2 /  7.5   2 /  7.5   3 / 45.0    3 / 6.9    2 /  9.4   1 / 21.3
256.bzip2      10     1.6     2 / 4.4    1 / 11.4   1 / 16.8   2 / 23.3    2 / 11.9   1 /  7.4   1 / 23.3
129.compress   10     0.9     2 / 5.5    1 / 10.9   1 /  5.5   2 / 36.4    2 / 7.3    1 /  5.5   1 / 25.5
183.equake     10     1.0     2 / 5.6    1 / 14.5   1 /  6.5   2 / 33.1    2 / 7.3    1 /  8.1   1 / 24.2
099.go         10    33.8     2 / 0.5    1 /  3.6   1 / 45.2   2 /  6.2    2 / 2.3    1 / 21.3   1 / 19.5
164.gzip       17     5.9     4 / 2.9    2 /  9.4   2 / 27.0   3 / 20.3    2 / 2.5    2 / 15.1   1 / 18.8
132.ijpeg      10     3.0     2 / 1.8    1 / 11.6   1 / 29.5   2 / 11.9    2 / 1.8    1 / 33.7   1 /  5.2
124.m88ksim    17    29.4     4 / 1.4    2 /  4.6   2 / 33.2   3 /  7.4    3 / 1.9    2 / 34.8   1 / 14.9
197.parser     17     6.1     4 / 2.0    2 /  6.8   2 / 25.8   3 / 14.3    3 / 4.1    2 / 31.1   1 / 11.4
300.twolf      17   109.3     4 / 0.6    2 /  3.3   2 / 50.1   3 /  7.0    3 / 2.3    2 / 23.0   1 /  9.9
175.vpr        10     2.6     2 / 2.3    1 / 11.2   1 / 21.0   2 / 14.9    2 / 2.9    1 / 24.7   1 / 21.6
Totals        155   196.4    34 / 1.0   17 /  4.3  17 / 43.4  29 /  8.5   29 / 2.5   17 / 24.2  12 / 13.1

Figure 2.18: Running Time Analysis of Optimizations in Model-Dependent Mode


Model-Dependent Optimization

                  Variables         |        Unoptimized              |  Compiler Optimized
Code           Top  Global  Pointer |   Vars   Atomic   Const    %    |  Vars   Atomic   Red (%)
188.ammp       251    24      22    |   4382    1589    1182   74.4   |   678      97     74.4
179.art        156    30       5    |   2573     928     590   63.6   |   678     148     44.4
256.bzip2      299    57      10    |   4946    1784    1245   69.8   |  1097     204     57.1
129.compress   155    35       2    |   2506     902     599   66.4   |   672     130     44.4
183.equake     260    35      17    |   4441    1607    1148   71.4   |   758      93     67.2
099.go        1160   281       5    |  18637    6701    4755   71.0   |  4148     908     57.0
164.gzip       449    77      20    |   7516    2714    1887   69.5   |  1737     456     54.2
132.ijpeg      170    30       7    |   2831    1022     723   70.7   |   604     106     58.8
124.m88ksim   1186   165      75    |  20243    7323    5468   74.7   |  3452    1176     70.1
197.parser     367    60      18    |   6170    2229    1611   72.2   |  1354     394     58.7
300.twolf     2266   281     163    |  39019   14127    9563   67.7   |  8104    2217     60.1
175.vpr        304    39      21    |   5213    1887    1301   68.9   |  1049     167     62.3
Totals        7023  1114     365    | 118477   42813   30072   70.2   | 24331    6096     61.0

Figure 2.19: Optimized Atomic Propositions in Model-Dependent Mode

to which they are applied and the temporal logic used to implement them. In the exper-

iment, templates were chosen to produce a uniform, domain non-specific, set of queries

of the same sort that would be relevant to program verification for any C program. The

twelve templates used, and the program elements over which they were instantiated, are

described in Figure 2.14.

The atomic propositions for each query were generated using Carnauba’s system for abstract interpretation to determine when a variable is defined, used, dereferenced,

tested, etc. This abstract interpretation is based upon a pointer analysis [SH97] and ac-

companying alias analysis that can track effects through intervening variables. While

the underlying pointer analysis used is context-insensitive, the analysis used to generate

the must define zero and test zero atomic propositions is context-sensitive. Restricting

the instantiation to the global variables allowed for the generation of reasonably-sized

batches of queries with a diverse collection of atomic propositions. It should be noted

that not every query template was applicable to all of the variables or programs to which

it was applied. No effort was made to prevent the inclusion of useless queries; it was left to the optimizer to detect and eliminate them. All of the codes are ANSI C programs taken from the SPEC95 and SPEC2000 benchmark suites [SPE]. The performance of


the compiler in its model-independent mode is presented in Figure 2.15, with a break-

down of the running times of the various optimization routines provided in Figure 2.16.

Figure 2.17 provides the corresponding performance data for the model-dependent mode

with the running time breakdown of the optimization routines supplied in Figure 2.18.

Finally, Figure 2.19 breaks down the atomic propositions used to instantiate the tem-

plates and gives information about the number of atomic constants that are introduced

for the model-dependent mode to propagate.

For the performance results tables, the first two columns give the name of the pro-

gram and the total number of queries that were generated for it, corresponding to the

number of top variables in the equation block system. A breakdown of the variables in the initial, unoptimized, system is given in the next three columns. Vars refers to the total number of variables in the system, Atomic to the number of atomic-defined variables in the system, and Modal to the number of modal-defined variables in the system. The columns under Serial Hand Optimized show the results assuming an optimal “by hand” optimization of each individual formula, performed serially using the rules for equivalence and implication dictated by the mode, without any inter-query optimization. Finally, the columns under Compiler Optimized show the result of the optimizing compiler, running in its respective mode, with both intra-query and inter-query optimizations enabled. The Red column of these two sections shows the resolution time reduction of the optimized systems as a percentage of the resolution time for the unoptimized system.

In both modes, the optional peephole optimization routine was disabled.

The second table for each mode provides the running time breakdown of the opti-

mization routines. The first three columns give the name of the code, the total number


of optimization passes executed, and the total CPU running time of the compiler in sec-

onds. The running time includes some minimal initialization and back-end overhead that

is not charged to any optimization routine. The two columns for each of the seven active

optimizations show the number of times the routine executed (#) and the percentage of

the total compilation time spent executing that routine (%).

In the model-independent mode, the only hand-optimizations that were possible in-

volved the combining of syntactically equivalent atomic propositions repeated within

a query, elimination of some unnecessary atomic constants produced by the transla-

tion, and the identification of neutral blocks containing modal-defined variables. This accounts for the rather modest reduction in total resolution time achieved by this approach. However, beyond these optimizations, the compiler was able to determine numerous inter-query isomorphisms that were used to reduce the complexity of the system. By taking advantage of these repeated sub-expressions, the total resolution time of the

system was reduced by an additional 24.4% beyond the hand-optimized output.

In this mode, the most time-intensive optimizations were Sever Isomorphic Expressions and Unify Atomic Propositions. This is to be expected, as these optimizations can require a quadratic number of comparisons. Combined with Simplify Redundant Junc-

tions, which, like Sever Isomorphic Expressions, requires testing one or more costly

implication rules, these three optimizations accounted for 79.9% of the running time of

the compiler in this mode.

In the model-dependent mode, the hand optimization routine was able to detect in-

stances where non-trivially defined atomic propositions were, in fact, provably equiva-

lent to atomic constants. As a result, hand optimization in this mode was able to elim-


inate most of the irrelevant sub-expressions as well as detect more sophisticated intra-

query implications. Despite this, the optimizing compiler was still able to provide a su-

perior result by again capitalizing on inter-query redundancies. The result was a further

10.9% reduction in the resolution time beyond what is possible with serial optimization

and processing. Because the overlapping characteristics of the atomic propositions vary

somewhat from program to program, the improvement achieved by the compiler is more

sensitive to the specific benchmark in this mode than in the model-independent mode.

Again, Unify Atomic Propositions, Simplify Redundant Junctions, and Sever Isomorphic Expressions were the three most time intensive optimization routines, this time accounting for 80.7% of the total compilation time. Unify Atomic Propositions alone

accounted for 43.4% of the total compilation time. This increase, in comparison to the

model-independent case, was due to two factors. First, since the comparisons potentially

involve a complete inspection of the sets underlying the atomic propositions, the individ-

ual tests are more expensive in this mode. Second, since this routine occurs early in the

fixed optimization order, it is not usually the case that the system has been substantially

reduced before it runs. This also accounts for the reason why, in some cases, the model-

independent compilation actually takes longer than the model-dependent compilation

— even though the individual routines are less aggressive, they must frequently operate

on substantially larger systems later in the compilation process. This provides a useful

lesson: when the model is available, model-dependent optimization is often no more ex-

pensive than model-independent optimization despite the fact that the model-dependent

techniques completely subsume the model-independent ones.

For the model-dependent mode, the results show that the optimizer was effective at


reducing the size of the equation block system. In the experiments, queries originating

from three distinct source logics⁵ were merged into the common intermediate repre-

sentation. In each of the dozen examples, the optimizer reduced both the total number

of modal-defined variables (which largely govern the running time) and the number

of distinct atomic propositions (which has implications on the space requirement). In all cases, the reduction in resolution time significantly exceeded the compilation overhead.

For comparison, the resolution times for the model-dependent optimized equation block

systems are presented in Section 3.2.

In addition to picking up common sub-expressions within the query batch, the opti-

mizer, in model-dependent mode, also eliminated irrelevant sub-expressions. For exam-

ple, if a particular pointer variable in a benchmark was never used to store the result of

a malloc, as is common, then the malloc and free atomic propositions for the variable would be equivalent to false. In these cases, Perform Generic Solve would reduce the expressions based on these atomic propositions to atomic constants. In this way, the optimizer automatically “cleans up” irrelevant query sub-expressions instead of burdening

the resolution system. This accounts for the larger reduction for the more pointer-based

codes, as illustrated in Figure 2.19 which includes the number of global (Global) and

pointer (Pointer) variables used to instantiate the templates as well as the initial num-

ber of atomic constants, Const, and the percentage (%) of these constants in relation to the total number of initial atomic propositions. This count of atomic constants also includes residual constants introduced by the front end translation that do not correspond to any instantiating atomic proposition. Propagating atomic propositions that are trivial when applied to a specific model is the most significant reason for the distinction in the

⁵CTL queries were translated into Lµ using the FCTL front end.


improvement achieved by the model-dependent and model-independent modes.

Finally, notice that in both modes roughly 3% of the compilation time was spent

outside of the various optimization routines. Most of this time was spent performing the

front end translations. Because each query template represents a closed expression form

to be instantiated with different choices of atomic propositions, the translation of each

template is performed once and then cached. This caching resulted in a 32.2% reduction

in the total running time of the front end over the entire suite of benchmarks.

As an alternative application, this compiler could be applied to batches of domain

specific queries. In cases where there existed a common library and a definite set of

rules for its proper usage, the rules could be encoded in a temporal logic and compiled

in model-independent mode. Then, given a specific program to be linked against the library, the pre-optimized query batch could be re-compiled in model-dependent mode against the source model. The re-compilation would then optimize to a degenerate form those queries that are necessary to the proper usage of the library but irrelevant to the specific program being considered. For libraries with rules involving the proper sequencing of function

calls and initializations, it can be anticipated that some rules could be trivially reduced

when applied to programs that do not make full use of the features of the library.

2.4 Conclusions and Future Work

This chapter has presented an optimizing compiler for batches of temporal logic formu-

las that is effective at significantly reducing their total complexity. This has been demon-

strated by applying the compiler to batches of queries generated from actual benchmark

programs that are of the sort used by program verification systems. The reduction in


the demand the reduced systems place on the model checking engine has been shown to

more than justify the compilation time⁶. This reduction is completely orthogonal to any

subsequent reduction achieved by reducing the size of the model. This approach is also

applicable to “on the fly” methods [BC96] that produce a product automaton between

the model and the query. The optimizations presented in thischapter are only a subset of

those that are possible within this framework. The compileris capable of either working

in conjunction with, or in place of, other optimization systems.

Beyond the empirical results, this chapter also demonstrates that a compiler-based

framework is a rational alternative way to organize a system for optimizing temporal

logic formulas. As with any optimizing compiler, this compiler is useful because it trans-

fers responsibility for optimization away from the programmer. This transfer makes it

feasible to automatically generate and check large query sets without regard to the spe-

cific applicability of each query to the given program. A core set of useful optimizations

was reduced to a small number of routines, many operating by simple rules with pro-

gramming language analogs. The use of a global symbol table allows knowledge of

query and component boundaries to be used to speed up the optimization process in

ways that are unavailable to optimization systems that only consider raw logical ex-

pressions. By using a well-defined intermediate language, commonality can be found

among even a heterogeneous collection of formulas. Further, by employing a represen-

tation that factors formulas in the same way as an assembly language factors imperative

programs, optimizations become permissible that were not possible in the source logic.

The optimizations provided in this system are intended to represent a middle ground

solution between routines that are complete but intractable and those that are fast but

⁶The running times for the model checking engine are presented in Section 3.2.3 as Figure 3.12.


ineffective. Many other approaches to reducing the complexity of temporal logic formu-

las exist. Some of the most popular involve the construction of Büchi automata [FW02, GO01, GPVW95, SB00]. These approaches concentrate on LTL optimization, which

encompasses only a subset of the full modal mu-calculus. Further, these are again single

query optimizations that do not take any special advantage of the redundancy present in

query batches over a common domain. Finally, since they produce a specialized type

of automata, they are tied to model checking engines that utilize that format. However,

in some cases, these approaches are capable of performing optimizations beyond those

that have been presented in this chapter. For those applications, the analogy with pro-

gramming language compilers could be extended further by considering equation block

systems and Büchi automata as high and low intermediate forms each allowing a dis-

joint set of optimizations. In this way, the framework presented in the chapter could be

seen to work with, rather than in place of, these alternate approaches.

The next step in the development of this compiler is to improve the locality of

rescheduled optimizations. As implemented, when an optimization routine is executed

it operates over the entire equation block system. This includes cases when the routine

has previously executed. In cases when an optimization is rescheduled as a consequence

of a single transformation, the result is often wasted processing time. For example, an

optimization may introduce a new trivial equation and reschedule Remove Trivial Equa-

tions. A subsequent pass of this routine would then search for trivial equations over the

entire equation block system. The result is wasted search time. Although restricting

passes to certain subsets of blocks breaks with the traditional programming language

compiler metaphor, it may result in significantly improved performance for this system.


Chapter 3

Resolving Queries

Given an equation block system over an unrestricted hierarchical state machine, the ob-

jective is to generate a solution environment that maps each state in the model to the set of variables from the equation block system that hold at that state. The process of generating this environment for an equation block system is referred to as resolving the set of queries encoded by the system over the model. This chapter presents one of the stan-

dard algorithms for resolving equation block systems over unrestricted hierarchical state

machines recast in terms of manipulations on context-sensitive analyses. This recasting

allows queries to be resolved in a manner that unifies the output of the model checking

algorithm with the representation of context-sensitive analyses presented in Section 1.2.

It also serves to integrate the basic algorithm with atomic propositions derived from pre-

existing context-sensitive analyses. The result is that the solution environment can then

be represented as an independent set of context-sensitive analyses. This representation

facilitates the novel post-query analyses that are the subject of Chapter 4.

This chapter begins by introducing a set of basic operations on context-sensitive


analyses that will be referred to throughout the remainder of this dissertation. Opera-

tions that were previewed in Section 1.2 are presented in full detail in Section 3.1. The

algorithm for generating the solution environment as a set of context-sensitive analyses

is outlined in Section 3.2. Cast in terms of operations on context-sensitive analyses,

the variables are encoded as properties and transformers map (stack-context, vertex) pairs

representing states to sets of variables that hold at those states. One novelty of this ap-

proach is that the reduction operation for context-sensitive analyses presented in this

chapter can be used to contain the complexity of the solution while it is being generated

by being run at arbitrary intervals, much like a garbage collector for memory reclama-

tion is run while a program is executing. Empirical data from applying this technique

is provided. Finally, Section 3.3 concludes with a discussion of techniques for reducing

the complexity of the model. A paradigm based on constructing equivalence classes is

described in the context of a motivating example.

3.1 Operating on Context-Sensitive Analyses

Section 1.2 introduced a fully abstract definition of a context-sensitive analysis that

is capable of representing a broad class of popular program analyses, including those

derived from second-order methods. Examples presented later in that section alluded

to operations that could be performed on these analyses to extract context-sensitive

atomic propositions and reduce the underlying number of analysis contexts. This sec-

tion presents a set of four operations on context-sensitive analyses that are necessary to

perform those manipulations and also provide the basis for the algorithm presented in

Section 3.2.


Four operations are presented in this section:

Unification: Given two analyses, return a single analysis spanning their combined property sets with a solution function equivalent to the union of the operand solution functions.

Projection: Given an analysis and a collection of functions each defining a predicate on the set of properties spanned by the analysis, return a new analysis that spans those predicates.

Live Context Analysis: Given an analysis over a model, return a mapping from each procedure in the model to the set of analysis contexts that correspond to valid stack-contexts for that procedure.

Context Reduction: Given an analysis, use the output of the live context analysis to return an equivalent analysis requiring fewer contexts.
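Before the formal definitions, it may help to fix a concrete, if hypothetical, representation. The sketch below (plain Python with illustrative names; it is not Carnauba's implementation) encodes a context-sensitive analysis (C, X, Γ, Φ, κ), its induced stack-context transformer, and its solution function as introduced in Section 1.2.

    # A hypothetical encoding of a context-sensitive analysis (C, X, Gamma, Phi, kappa).
    from dataclasses import dataclass
    from typing import Dict, FrozenSet, Hashable, List

    Context = Hashable
    Vertex = str
    CallEdge = str

    @dataclass
    class Analysis:
        contexts: FrozenSet[Context]                      # C
        properties: FrozenSet[str]                        # X
        gamma: Dict[CallEdge, Dict[Context, Context]]     # Gamma: context transformers
        phi: Dict[Vertex, Dict[Context, FrozenSet[str]]]  # Phi: property transformers
        kappa: Context                                    # initial context

    def gamma_star(a: Analysis, sigma: List[CallEdge]) -> Context:
        """Induced stack-context transformer: fold Gamma over the call-edges of sigma."""
        alpha = a.kappa
        for edge in sigma:
            alpha = a.gamma[edge][alpha]
        return alpha

    def rho(a: Analysis, sigma: List[CallEdge], v: Vertex) -> FrozenSet[str]:
        """Solution function: rho_A(sigma, v) = [Phi(v)](Gamma*(sigma))."""
        return a.phi[v][gamma_star(a, sigma)]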

3.1.1 Unifying Context-Sensitive Analyses

Before the solution process begins, any analyses necessary for representing context-sensitive atomic propositions must be brought together into a single analysis. As strongly connected components of equation blocks are solved, the solution for each of the free variables on which the component depends must also be unified. The operation that combines a pair of context-sensitive analyses, A1 and A2, over a common model into a new context-sensitive analysis is referred to as the unify operation and is denoted A1 ∪ A2. Like union for sets, this operation is both associative and commutative.


Definition 17. Given two context-sensitive analyses, A1 = (C1, X1, Γ1, Φ1, κ1) and A2 = (C2, X2, Γ2, Φ2, κ2), defined over a common UHSM with call-edge set Σ and vertex set V, the unification of A1 and A2, denoted A1 ∪ A2, is the context-sensitive analysis A = (C1 × C2, X1 ∪ X2, Γ, Φ, (κ1, κ2)), where for each σ ∈ Σ, [Γ(σ)]((α1, α2)) = ([Γ1(σ)](α1), [Γ2(σ)](α2)), and for each v ∈ V, [Φ(v)]((α1, α2)) = [Φ1(v)](α1) ∪ [Φ2(v)](α2).

The definition does not require that the property sets be disjoint. The unify operation A1 ∪ A2 produces a context-sensitive analysis, A, such that for any vertex, v, and valid stack-context, σ, for v in the model, the solution function for A, ρA, has the property

ρA(σ, v) = ρA1(σ, v) ∪ ρA2(σ, v).

Proof. Let σ = σ0 . . . σn. Then

ρA(σ, v) = [Φ(v)](Γ*(σ))
         = [Φ(v)]([Γ(σn)] . . . [Γ(σ0)]((κ1, κ2)))
         = [Φ(v)]([Γ(σn)] . . . [Γ(σ1)](([Γ1(σ0)](κ1), [Γ2(σ0)](κ2))))
         = . . .
         = [Φ(v)](([Γ1(σn)] . . . [Γ1(σ0)](κ1), [Γ2(σn)] . . . [Γ2(σ0)](κ2)))
         = [Φ(v)]((Γ1*(σ), Γ2*(σ)))
         = [Φ1(v)](Γ1*(σ)) ∪ [Φ2(v)](Γ2*(σ))
         = ρA1(σ, v) ∪ ρA2(σ, v).

As a corollary, notice that the induced stack-context transformer, Γ*, of the unification can be recovered directly from the stack-context transformers of the operands, Γ1* and Γ2*, by defining

Γ*(σ) = (Γ1*(σ), Γ2*(σ)).
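Under the same hypothetical dictionary encoding of an analysis used in the sketch earlier in this section (keys 'contexts', 'gamma', 'phi', 'kappa'), Definition 17 can be written out directly. This is only an illustrative sketch and assumes both operands share the same call-edge and vertex sets.

    # A hypothetical sketch of Definition 17 (unification) over dict-encoded analyses.
    def unify(a1: dict, a2: dict) -> dict:
        contexts = {(c1, c2) for c1 in a1['contexts'] for c2 in a2['contexts']}
        gamma = {
            edge: {(c1, c2): (a1['gamma'][edge][c1], a2['gamma'][edge][c2])
                   for (c1, c2) in contexts}
            for edge in a1['gamma']          # same call-edge set assumed for both operands
        }
        phi = {
            v: {(c1, c2): a1['phi'][v][c1] | a2['phi'][v][c2] for (c1, c2) in contexts}
            for v in a1['phi']               # same vertex set assumed for both operands
        }
        return {'contexts': contexts, 'gamma': gamma, 'phi': phi,
                'kappa': (a1['kappa'], a2['kappa'])}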


example1:

Global Integer G, H

v0:  Procedure P()
v1:    while(...cond_1...)
v2:    G := G + 1;
s3:    R()
v4:    H := H + 1;
v5:

v6:  Procedure Q()
v7:    H := 7;
s8:    R();
s9:    P();
v10:   G := 2;
v11:

v12: Procedure R()
v13:   H := G + 2;
v14:   print(G);
v15:

v16: Procedure main()
v17:   G := 0;
v18:   H := 1;
v19:   if(...cond_2...)
s20:     P();
       else
s21:     Q();
v22:   print(G);
v23:

Figure 3.1: Sample Program example1

A sample program with global integer variables G and H, introduced in Chapter 1, is reproduced here as Figure 3.1. Figure 3.2 provides two separate context-sensitive analyses, AG and AH, for computing the liveness of G and H, respectively, derived from solving for the least fixed point of the live variable data-flow equations of Figure 1.4 one variable at a time. Each analysis has a single property corresponding to the liveness of the identically named variable. The liveness of each variable is context-sensitive. Figure 3.3 shows the unified analysis, AGH = AG ∪ AH. In the unified analysis, the product contexts have been relabeled, (β0, β2) ↦ α0, (β1, β2) ↦ α1, (β0, β3) ↦ α2, (β1, β3) ↦ α3. This is the same analysis that was presented in Section 1.2 as Figure 1.7.

The unified analysis produces the same result as the union of the two separate analyses.


[Table: for each vertex v0–v23, the context transformers [Γ(v)](βi) and property transformers [Φ(v)](βi) of the two analyses; the s-designated vertices also identify the elements of Σ by call-edge source.]

Figure 3.2: Context-Sensitive Analyses, AG = ({β0, β1}, {G}, Γ, Φ, β0) and AH = ({β2, β3}, {H}, Γ, Φ, β2), over example1 for Computing the Liveness of G and H Separately


[Table: for each vertex v0–v23, the context transformers [Γ(v)](αi) and property transformers [Φ(v)](αi), i = 0..3; the s-designated vertices also identify the elements of Σ by call-edge source.]

Figure 3.3: Context-Sensitive Analysis, AGH = ({α0, α1, α2, α3}, {G, H}, Γ, Φ, α0), over example1 for Computing the Liveness of G and H Together


As a pair of examples, given vertex v14 with valid stack-contexts s21s8 and s21s9s3,

ρAGH(s21s8, v14) = {G} = {G} ∪ ∅ = ρAG(s21s8, v14) ∪ ρAH(s21s8, v14),

and

ρAGH(s21s9s3, v14) = {G, H} = {G} ∪ {H} = ρAG(s21s9s3, v14) ∪ ρAH(s21s9s3, v14).

If the negation operation for a context-sensitive analysis, A, denoted ¬A, is defined in the obvious way so as to invert the solution function, ρA, with respect to the set of properties spanned by the analysis, then the intersection of two analyses, A1 and A2, denoted A1 ∩ A2, can also be defined in the usual way as

A1 ∩ A2 = ¬(¬A1 ∪ ¬A2).

It is trivial to confirm that this operation has the corresponding intersection property,

ρA(σ, v) = ρA1(σ, v) ∩ ρA2(σ, v),

for each vertex, v, and valid stack-context, σ, for v in the model. An application of

intersecting analyses is discussed in Section 5.1.

3.1.2 Projecting Context-Sensitive Analyses

Recall that a property, p, is said to be decidable by a context-sensitive analysis spanning a property set X if there exists a total function θp : 2^X → {true, false} that decides p. These functions form the basis for generating context-sensitive atomic propositions, single-property analyses that encode some aggregate property of the property set spanned

by the analysis. Given a collection of such functions, a new context-sensitive analysis

can be derived that spans only the predicates decided by the functions.


Definition 18. Given a context-sensitive analysis A = (C, X, Γ, Φ, κ) over an UHSM with vertex set V and a set of functions, Θ, that each decide some predicate from the analysis, the projection of A with respect to the set Θ, denoted πΘ A, is the context-sensitive analysis AΘ = (C, Θ, Γ, ΦΘ, κ), where [ΦΘ(v)](α) = {θi ∈ Θ | θi([Φ(v)](α)) = true} for each v ∈ V and α ∈ C.

The projected context-sensitive analysis, AΘ, is then the context-sensitive analysis over the same set of contexts (and context transformers and initial context) as A with the property set Θ. For each θ ∈ Θ, v ∈ V, and σ a valid stack-context for v in the model,

θ ∈ ρAΘ(σ, v) ↔ θ(ρA(σ, v)) = true.

Proof. Assume θ ∈ ρAΘ(σ, v). Then θ ∈ [ΦΘ(v)](Γ*(σ)). So θ([Φ(v)](Γ*(σ))) = θ(ρA(σ, v)) = true.

Conversely, if θ(ρA(σ, v)) = θ([Φ(v)](Γ*(σ))) = true, then θ ∈ [ΦΘ(v)](Γ*(σ)). So θ ∈ ρAΘ(σ, v).
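Using the same hypothetical dictionary encoding as before, Definition 18 amounts to re-evaluating each property transformer output against the given predicates; the cardinality predicate used in the example that follows is included for concreteness. This is a sketch, not the system's API.

    # A hypothetical sketch of Definition 18 (projection) over a dict-encoded analysis.
    def project(a: dict, predicates: dict) -> dict:
        """predicates maps a new property name to a function from property sets to bool."""
        phi_theta = {
            v: {alpha: frozenset(name for name, theta in predicates.items() if theta(props))
                for alpha, props in by_context.items()}
            for v, by_context in a['phi'].items()
        }
        return {'contexts': a['contexts'], 'gamma': a['gamma'],
                'phi': phi_theta, 'kappa': a['kappa']}

    # The cardinality predicate used in the example below: "two or more variables are live".
    theta_D = lambda live_vars: len(live_vars) >= 2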

A function for projecting the live variable analysis of Figure 3.3 based on the cardinality of the live variable set was presented in Section 1.2:

θD(X) = true if |X| ≥ 2, and false otherwise.

Using D to denote the property decided by θD, the projected live variable analysis for G and H with respect to the set Θ = {θD}, as defined by Definition 18, is provided in Figure 3.4. Notice that this analysis requires four contexts to encode the same property that was encoded using only two contexts in Section 1.2. This suggests that there may be a process by which the number of contexts required by the analysis can be reduced.


[Table: for each vertex v0–v23, the context transformers [Γ(v)](αi) and property transformers [Φ(v)](αi), i = 0..3; the s-designated vertices also identify the elements of Σ by call-edge source.]

Figure 3.4: Context-Sensitive Analysis, AD = ({α0, α1, α2, α3}, {D}, Γ, Φ, α0), over example1 for Computing the Projected Property D

In fact, such a process exists and is essential to generating efficient context-sensitive atomic propositions from complex analyses.

3.1.3 Live Context Analysis

Each context-sensitive analysis, (C, X, Γ, Φ, κ), is defined in terms of a finite set of analysis contexts, C. The collection of context transformers, Γ, map contexts to contexts with each procedure transition in the model. The stack-context transformer, Γ*, induced by the analysis then maps stack-contexts, represented as finite sequences of elements (strings) from the set of call-edges, Σ, to elements of C. Given a state consisting of a vertex and a valid stack-context for that vertex, the stack-context transformer returns the element of C that is then applied to the property transformer associated with the vertex to produce a solution. Intuitively, given a vertex, not every element of C is in the image of Γ* applied to the set of valid stack-contexts for that vertex.


Live-Context Analysis Equations:

LC(P0) = {κ}

LC(Pi) = ⋃_{sj} [Γ(sj)] LC(Pk),  where i ≠ 0 and sj is a call-edge from Pk to Pi

Figure 3.5: Equations for Performing Live Context Analysis over a Context-Sensitive Analysis, (C, X, Γ, Φ, κ)

Here, an analysis is provided that returns a mapping, LC, associating with each procedure, P, in the UHSM over which the analysis is defined, the subset of C that is the image of Γ* applied to the set of valid stack-contexts for vertices in that procedure. Since every vertex in a procedure shares the same set of valid stack-contexts, this solves the general problem

of determining the set of live contexts for each vertex. The mapping produced by this

analysis is necessary for the reduction techniques presented in Section 3.1.4.

Definition 19. Given a context-sensitive analysis, (C, X, Γ, Φ, κ), with induced stack-context transformer, Γ*, over an UHSM, U, with procedure set P, the set of live contexts for P ∈ P, denoted LC(P), is the set {α | α = Γ*(σ) for some σ, a valid stack-context for P^entry}.

The set of live contexts for each procedure is computed as the least fixed point over the usual set inclusion lattice of the equations shown in Figure 3.5. Live contexts are extended to vertices as follows: given v ∈ V, a vertex in the model, LC(v) = LC(Pi), where Pi is the unique procedure containing v. The algorithm for computing the solution to these equations is the standard worklist algorithm over the call graph of the model, initialized with procedure P0 having live context κ. The proof that these equations produce the sets prescribed by the definition proceeds by a straightforward pair of


inductions.

Proof. Let LC(P) be the live context set for procedure P as described by Definition 19 and let LC′(P) be the live context set computed as the least fixed point of the equations in Figure 3.5.

Let α ∈ LC(P). Then there exists σ, a valid stack-context for P^entry, such that Γ*(σ) = α. If σ = ε then α = κ and P = P0, so α ∈ LC′(P). Assume that for all τ such that length(τ) = length(σ) − 1, if τ is a valid stack-context for Q^entry (hence Γ*(τ) ∈ LC(Q)), then Γ*(τ) ∈ LC′(Q). Let σ = σ0 . . . σn and let P′ be the procedure containing the source of σn. Then by the hypothesis, Γ*(σ0 . . . σn−1) = α′ ∈ LC′(P′), since σ0 . . . σn−1 is a valid stack-context for P′. By the equations, [Γ(σn)](α′) = Γ*(σ) = α ∈ LC′(P). Thus LC(P) ⊆ LC′(P).

Conversely, let α ∈ LC′(P) for some P. If P = P0, then α = κ. Since ε is a valid stack-context for P0^entry and Γ*(ε) = κ, α ∈ LC(P). If P ≠ P0, then let LC′i(P) be the i-th update of LC′(P) in the least fixed point computation of the equation with LC′0(P) = ∅. Clearly, LC′0(P) ⊆ LC(P) for each procedure, P. Assume that for all Q, LC′i−1(Q) ⊆ LC(Q). So, if α ∈ LC′i(P) then there exists Q with α′ ∈ LC′i−1(Q) such that [Γ(sj)](α′) = α and sj is a call-edge from Q to P. Then by the hypothesis, there exists τ, a valid stack-context for Q, such that Γ*(τ) = α′. So, τ sj is a valid stack-context for P and Γ*(τ sj) = [Γ(sj)](α′) = α ∈ LC(P). Thus LC′(P) ⊆ LC(P).

Therefore, for each procedure P, LC′(P) = LC(P).
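A hypothetical worklist realization of the equations in Figure 3.5 is sketched below; call edges are given as (caller, edge, callee) triples and gamma maps each edge identifier to a total function on contexts, represented as a dict. The names are illustrative, not Carnauba's.

    # A hypothetical worklist sketch of the live context equations of Figure 3.5.
    from collections import defaultdict, deque

    def live_contexts(call_edges, gamma, p0, kappa):
        lc = defaultdict(set)
        lc[p0] = {kappa}                              # LC(P0) = {kappa}
        outgoing = defaultdict(list)
        for caller, edge, callee in call_edges:
            outgoing[caller].append((edge, callee))
        work = deque([p0])
        while work:
            p = work.popleft()
            for edge, callee in outgoing[p]:
                image = {gamma[edge][alpha] for alpha in lc[p]}
                if not image <= lc[callee]:           # callee's set grew
                    lc[callee] |= image
                    work.append(callee)
        return lc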

Contexts that do not appear in the live context set of any procedure are dead. As a

corollary, dead contexts can be removed from the analysis and any context transform-

ers producing dead contexts from live contexts can be remapped to the initial context


without changing the solution function of the analysis. That is, if A = (C, X, Γ, Φ, κ) is a context-sensitive analysis with dead context set D ⊂ C, then let C′ = C − D. A is then equivalent to A′ = (C′, X, Γ′|C′, Φ|C′, κ) with respect to the full property set X, as equivalent is defined by Definition 13, where Γ′(α) = Γ(α) if Γ(α) ∈ C′ and Γ′(α) = κ otherwise. Note that the initial context, κ, is never dead for any analysis since κ ∈ LC(P0).

Recall the sample program of Figure 3.1 and the unified live variable analysis for variables G and H of Figure 3.3. Because this program does not contain any recursive procedure calls, the equations of Figure 3.5 for computing the live context analysis can be solved sequentially,

LC(main) = {α0}
LC(Q) = [Γ(s21)] LC(main) = {α1}
LC(P) = [Γ(s9)] LC(Q) ∪ [Γ(s20)] LC(main) = {α1} ∪ {α1} = {α1}
LC(R) = [Γ(s3)] LC(P) ∪ [Γ(s8)] LC(Q) = {α3} ∪ {α1} = {α1, α3}.

For this example, context α2 is dead since it is not in the image of LC for any procedure in the model. Live contexts provide the set of possible contexts in which an element of a program can occur. For example, the live context set associated with P, LC(P), is {α1}. The set of properties that can ever hold at a vertex, say P^entry, in P is then ⋃_{αi ∈ LC(P)} [Φ(v0)](αi) = {G}. The absence of H in this equality implies that H is never live at any occurrence of v0 = P^entry. Taking the confluence of the property sets over the live contexts for each vertex also provides one way of casting a less precise context-insensitive analysis from a context-sensitive analysis.


If a context is not part of the live context set associated with a procedure then this

implies two things. First, for each vertex in the procedure, the value of the property transformer associated with that vertex is irrelevant for that context and can be changed without changing the solution function. Second, the value of the context trans-

former associated with calls in that procedure is also irrelevant for that context and can

likewise be changed without changing the solution function. The operation for reducing

a context-sensitive analysis takes advantage of these facts.

For analyses generated as the unification of two context-sensitive analyses for which the live context sets, LC1 and LC2, have already been computed, the live context sets for the unification, LC, can be computed directly without having to resolve the fixed point equations of Figure 3.5. For each procedure, P,

LC(P) = LC1(P) × LC2(P).

For analyses generated as the projection of some other analysis, the live context sets for the projected analysis are the same as the live context sets for the source analysis. This

follows immediately from projected analyses having the same context transformers and

set of contexts as their source analysis.

3.1.4 Reducing Context-Sensitive Analyses

Unnecessary context information is sometimes a residual of the algorithm used to gen-

erate the analysis. Both the unification and projection operations on context-sensitive

analyses may produce analyses that make inefficient use of contexts. For analyses de-

rived by unification, the context set is the Cartesian product of the context sets of the

operand analyses. For projected analyses, the context set remains the same, but the num-


ber of properties is frequently reduced. This can reduce the number of distinct resulting

property sets for each vertex that need to be distinguished within the analysis.

Live context analysis provides a mechanism for determining the set of contexts for each procedure that are necessary for the solution function to be defined over every state in the model. In so doing, it also identifies the set of dead contexts that can be trivially eliminated without changing the solution function of the analysis. In many cases, the output of the live context analysis can be used beyond simple dead context elimination to further reduce the cardinality of the context set of the analysis by identifying opportunities for disparate procedures to "share" contexts. In some cases, the need for a non-trivial context set can be eliminated altogether. This is useful, as the complexity of the model

checking algorithm will be seen to be strongly dependent on the number of contexts

required to encode the underlying analyses.

Definition 20. A context-sensitive analysis, (C, X, Γ, Φ, κ), over an UHSM with vertex set V is context-insensitive with respect to the property set X′ ⊆ X if, for every v ∈ V, [Φ(v)](α) ∩ X′ is invariant over all α ∈ LC(v).

If A = (C, X, Γ, Φ, κ) is context-insensitive with respect to X′ ⊆ X, then A is equivalent to A′ = ({κ}, X′, id, Φ′, κ) with respect to X′, where for each v ∈ V,

[Φ′(v)](κ) = [Φ(v)](α) ∩ X′ for any choice of α ∈ LC(v).

Proof. Let X′ be a property set for which [Φ(v)](α) ∩ X′ is invariant over all α ∈ LC(v) for every vertex, v. This implies that for any vertex, v, ρA(σ, v) ∩ X′ is invariant over any choice of valid stack-context, σ, for v. Thus, for any valid stack-context, σ′, for v, ρA′(σ′, v) = [Φ′(v)](κ) = [Φ(v)](α) ∩ X′ for some α in LC(v). Let σ be the valid stack-context for v such that Γ*(σ) = α. Then [Φ(v)](α) ∩ X′ = ρA(σ, v) ∩ X′. But


ρA(σ, v) ∩ X′ = ρA(σ′, v) ∩ X′ by the invariance noted above, so ρA′(σ′, v) = ρA(σ′, v) ∩ X′.

The resulting analysis over a single context is referred to as a context-insensitive analysis. The converse also holds. If there exists a vertex for which the property transformer is not invariant over the set of live contexts for that vertex with respect to a

property set, then there does not exist an equivalent context-insensitive analysis with

respect to that property set.

Proof. Assume that v is a vertex and {α1, α2} ⊆ LC(v) such that [Φ(v)](α1) ∩ X′ ≠ [Φ(v)](α2) ∩ X′ for some property set X′. Then {α1, α2} ⊆ LC(v) implies that there exist valid stack-contexts for v, σ1 and σ2, with Γ*(σ1) = α1 and Γ*(σ2) = α2. So,

ρA(σ1, v) ∩ X′ = [Φ(v)](Γ*(σ1)) ∩ X′ = [Φ(v)](α1) ∩ X′ ≠ [Φ(v)](α2) ∩ X′ = [Φ(v)](Γ*(σ2)) ∩ X′ = ρA(σ2, v) ∩ X′.

But if A′ is an analysis with a single context, κ′, then ρA′(σ1, v) = [Φ′(v)](κ′) = ρA′(σ2, v). Hence A′ cannot be equivalent to A with respect to the property set X′. This is a contradiction.

The analysis of Figure 3.4, AD, spanning the property set {D} is not context-insensitive with respect to that property set. AD has the same live context relation as AGH, the analysis from which it was projected. For that analysis, LC(v14) = LC(R) = {α1, α3} and [Φ(v14)](α1) = ∅ ≠ {D} = [Φ(v14)](α3).

Even in cases where the analysis cannot be reduced to a context-insensitive

representation, the number of contexts can frequently be reduced by other methods.

Definition 21. Given a context, α, from a context-sensitive analysis over a model with vertex set V, the domain of the context, α, is the set D(α) = {v | α ∈ LC(v)}.


Definition 22. Given two contexts, α1 and α2, from a context-sensitive analysis spanning a property set X, with property transformers Φ, α1 and α2 agree with respect to X′ ⊆ X, denoted α1 ≅X′ α2, if for each v ∈ D(α1) ∪ D(α2), [Φ(v)](α1) ∩ X′ = [Φ(v)](α2) ∩ X′.

Agreement among contexts defines an equivalence relation over the set of contexts.

That is, the relation is reflexive, symmetric, and transitive. The notation α1 ≅ α2 is used to denote cases where the contexts agree over the full property set spanned by the analysis. The definition of agreement assumes that the property transformers are total functions. Note, therefore, that while dead contexts do not occur in the domain of any vertex, they do not necessarily agree with every other context. That one context agrees with another does not imply that one can be substituted for the other throughout the analysis. Specifically, it implies only that the two are equivalent terminal contexts for the stack-context transformer with respect to valid stack-contexts for vertices in the domain of one or both contexts. The two contexts could still have different images under Γ for some vertex, leading to different solutions in cases where the contexts are intermediate

contexts in the stack-context transformer computation. Capturing the broader notion of

general equivalence requires ensuring that this is not the case. The agreement relation

can be used to deconstruct the set of contexts into this more refined set of equivalence

classes by computing the greatest fixed point of a single monotone function,

F(X) = X − (W ∪ S(X))

over the lattice of pairs of contexts, where

W = {(α1, α2) | α1 ≇ α2}


and

S(X) = {(α1, α2) | ∃σ ∈ Σ. {α1, α2} ⊆ LC(σ) ∧ ([Γ(σ)](α1), [Γ(σ)](α2)) ∉ X}.

Here σ refers to a call-edge by its source, Σ is again the set of call-edges in the model over which the analysis is defined, and Γ is the collection of context transformers from the analysis. The result corresponds to the largest set of pairs that are not pairwise distinguishable by some valid stack-context suffix for some vertex with respect to X′. That is, given α1 = Γ*(σ1) and α2 = Γ*(σ2), the pair (α1, α2) is an element of the fixed point solution unless there exists σ such that σ1σ and σ2σ are valid stack-contexts for some vertex and Γ*(σ1σ) ≇X′ Γ*(σ2σ). This set can be computed using a worklist-driven fixed point solver initialized with the irreflexive pairs of W′, the complement of W. The process can be sped up by recognizing that, at each step, the set X is symmetric; therefore a single pair can represent both symmetric pairs, and reflexive pairs can be omitted.

The fixed point solution to this equation defines an equivalence relation over the set

of contexts. These classes can be formed explicitly by using the fixed point elements

of X as pairs to merge classes in a union-find [CLR01] data structure. The resulting

equivalence classes form the contexts of a new equivalent analysis using fewer contexts.

If contexts α1 and α2 are equivalent with respect to a property set X′, this is denoted α1 ≡X′ α2.
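The greatest fixed point computation and the subsequent union-find merge can be sketched as follows. For brevity the sketch ignores the live-context restriction on call-edges in S(X) and assumes every context transformer is total; agrees(α1, α2) stands for the agreement relation of Definition 22. It is an illustration, not the implementation.

    # A hypothetical sketch: discard distinguishable pairs until a fixed point is
    # reached, then merge the surviving pairs into equivalence classes.
    from itertools import combinations

    def reduce_contexts(contexts, call_edges, gamma, agrees):
        pairs = {frozenset(p) for p in combinations(contexts, 2) if agrees(*p)}
        changed = True
        while changed:                                   # greatest fixed point
            changed = False
            for pair in list(pairs):
                a1, a2 = tuple(pair)
                for edge in call_edges:
                    img = frozenset({gamma[edge][a1], gamma[edge][a2]})
                    if len(img) == 2 and img not in pairs:
                        pairs.discard(pair)              # distinguishable via this edge
                        changed = True
                        break
        parent = {c: c for c in contexts}                # union-find on the survivors
        def find(c):
            while parent[c] != c:
                parent[c] = parent[parent[c]]
                c = parent[c]
            return c
        for a1, a2 in (tuple(p) for p in pairs):
            parent[find(a1)] = find(a2)
        classes = {}
        for c in contexts:
            classes.setdefault(find(c), set()).add(c)
        return list(classes.values())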

Given a context-sensitive analysis, A = (C, X, Γ, Φ, κ), over an UHSM with vertex set V and call-edge set Σ, let X′ ⊆ X be a property set and let EX′ be the set of equivalence classes derived from computing the greatest fixed point of the preceding equation over C × C using the agreement relation ≅X′. Then the analysis A′ = (EX′, X′, Γ′, Φ′, Eκ)


is equivalent to A with respect to X′, where [Γ′(σ)](E) is the class containing [Γ(σ)](α) for any choice of α ∈ E, [Φ′(v)](E) = [Φ(v)](α) ∩ X′ for any choice of α ∈ E, and Eκ is the class containing κ.

Proof. Let Z be the greatest fixed point solution to the equation F(X) = X − (W ∪ S(X)) and let σ be a valid stack-context of length n for some vertex v. Let κ = α0, . . . , αn = Γ*(σ) be the sequence of contexts induced by applying Γ* to the prefixes of σ. Likewise, let Eκ = [α′0], . . . , [α′n] = Γ′*(σ) be the corresponding sequence for Γ′*, with each α′i ∈ C being the chosen representative of the respective equivalence class. Then (α0, α′0) ∈ Z, since κ ∈ [α′0] by definition. Assume that (αi, α′i) ∈ Z. Then (αi+1, α′i+1) ∈ Z since, if not, the pair is distinguishable by some valid stack-context suffix. This suffix appended to the (i + 1)-prefix of σ would have distinguished αi from α′i and the pair would not be in Z. Hence (αn, α′n) ∈ Z ⊆ W′. So, αn ≅X′ α′n and ρA′(σ, v) = [Φ′(v)]([α′n]) = [Φ(v)](αn) ∩ X′ = ρA(σ, v) ∩ X′, since v ∈ D(αn) ∪ D(α′n), as exhibited by the valid stack-context σ for v.

This technique for computing equivalence classes of contexts is identical to the tech-

nique for computing equivalence classes of states in a finite state machine [Huf54]. In

that construction, the agreement relation holds that two states agree if and only if both

states are accepting or both states are non-accepting. The fixed point computation then

seeks to find a suffix for each pair that distinguishes an accepting computation from a

non-accepting computation. For context-sensitive analyses, the context transformers act

as a finite state machine on the set of contexts. Rather than distinguish accepting states

from non-accepting states, the agreement relation distinguishes contexts that produce

distinct solutions for one or more vertices for some stack-context suffix. Unlike mini-


mization for finite state machines, the computation of pairwise distinguishable contexts

is restricted to suffixes that are part of a valid stack-context for some vertex. This re-

stricts the set of pairs that must be considered at each update and produces a set with

more equivalences.

Recall the analysis, AD, of Figure 3.4. Using procedures to stand for sets of vertices,

the domain of each context can be computed from the live context analysis,

D(α0) = {main}
D(α1) = {P, Q, R}
D(α2) = ∅
D(α3) = {R}.

The dead context, α2, can be eliminated. From the remaining contexts, the only non-trivial agreement is α0 ≅ α1. Then, letting I be the set of reflexive pairs,

F(⊤) = {(α0, α1), (α1, α0)} ∪ I
F²(⊤) = F({(α0, α1), (α1, α0)} ∪ I) = {(α0, α1), (α1, α0)} ∪ I.

This is the greatest fixed point of the equation. Thus the set of equivalence classes among the set of live contexts is

{{α0, α1}, {α3}}.

Figure 3.6 shows the reduced context-sensitive analysis, A′D, spanning {D}, derived from this equivalence relation with β0 ↦ {α0, α1} and β1 ↦ {α3}. This is the same reduced analysis that was presented in Section 1.2.


[Table: for each vertex v0–v23, the context transformers [Γ(v)](β0), [Γ(v)](β1) and property transformers [Φ(v)](β0), [Φ(v)](β1); the s-designated vertices also identify the elements of Σ by call-edge source.]

Figure 3.6: A Reduced Context-Sensitive Analysis, A′D = ({β0, β1}, {D}, Γ, Φ, β0), over example1 for Computing Property D


Finally, while the agreement relation for finite state machines produces an equiva-

lent machine that is provably minimal, this is not the case for this notion of agreement

on context-sensitive analyses. The problem lies in the fact that the agreement relation is too strong. The requirement that contexts have equivalent property transformer outputs over the union of their domains may force property transformer values for a context

on vertices that are outside its domain. This sacrifices a degree of freedom and may

preclude other, more advantageous, agreements. This relation can be weakened by tak-

ing advantage of the fact that the output of a property transformer for a context can be

lazily decided for vertices that are not part of its domain without changing the solution

function. Computing the optimal strategy for doing this is NP-complete.

Definition 23. Given two contexts, α1 and α2, from a context-sensitive analysis spanning a property set X, with property transformer Φ, α1 and α2 weakly agree with respect to X′ ⊆ X, denoted α1 ≃X′ α2, if for each v ∈ D(α1) ∩ D(α2), [Φ(v)](α1) ∩ X′ = [Φ(v)](α2) ∩ X′.

Weak agreement defines a relation that treats the output of a property-transformer

as a “don’t care” for vertices not in the domain of the argument. Because these “don’t

cares” are not necessarily consistent across weak agreement pairs, the weak agreement

relation is not transitive and thus not an equivalence relation. Solving the fixed point

equation, F(X) = X − (W ∪ S(X)), in the same way for this relation leads to a set of

context pairs that can be made equivalent, but the relation is not necessarily transitive.

The minimal number of contexts derivable from the notion of weak agreement can be

recovered by finding the minimal number of cliques (equivalence classes) in the graph

of the relation that cover the set of contexts. For an arbitrary analysis, determining if


there is a k-clique-cover for this relation is NP-complete.

Theorem 1. Given a context-sensitive analysis, A, let GA be the graph of the relation encoding the set of pairs of contexts from A that are not pairwise distinguishable under the weak agreement relation. Then the problem of deciding if GA has a k-clique-cover for k ≥ 3 is NP-complete.

Proof. Constructing the graph GA can be done in polynomial time. The problem of k-clique-cover for k ≥ 3 is known to be NP-complete [GJ79]. What remains to complete the reduction is to show that, given an arbitrary finite, reflexive, undirected graph, G, there exists an UHSM, U, and an analysis, A, over U such that GA is isomorphic to G.

Given G = (V, E), let U have procedure set {main} ∪ {Pij}, where each procedure of the form Pij = Pji corresponds to an unordered pair of vertices, vi and vj, from V, has no transitions, and consists of a single vertex that is both the entry- and exit-vertex of that procedure. Let the procedure main call each procedure Pij exactly twice in an arbitrary straight-line sequence of transitions via call-edges σ(i,j) and σ(j,i). Define A = (V, {p}, Γ, Φ, v0) to be a context-sensitive analysis spanning a single property, p, with an arbitrarily chosen initial context v0 ∈ V. For each call-edge, σ(i,j), from main to procedure Pij, let [Γ(σ(i,j))](v0) = vi. For each unordered pair of vertices, vi and vj, if (vi, vj) ∈ E then let [Φ(Pij^entry)](vi) = [Φ(Pij^entry)](vj) = {p}. Otherwise, let [Φ(Pij^entry)](vi) = ∅ ≠ {p} = [Φ(Pij^entry)](vj). For each vertex u in main and context v ∈ V, let [Φ(u)](v) = ∅. Then for each pair of contexts, vi and vj, D(vi) ∩ D(vj) = {Pij}. Further, vi ≃ vj ↔ (vi, vj) ∈ E. Since every function except main is terminal in the call graph, it is trivial to confirm that GA is isomorphic to G.

The reduction demonstrates that there is no hidden substructure in the graph GA


for an arbitrary analysis. It is conjectured that the weak agreement relation produces an

equivalent analysis with the minimal number of contexts. The optimal solution derivable

from this notion of agreement can be approximated by assigning definite, consistent

values to the “don’t care” cases to greedily maximize the original “strong” agreement

relation of Definition 22. This is the approach taken in the implementation.

3.2 Generating the Environment

The algorithm for resolving a system of equation blocks is essentially the standard algo-

rithm of Burkart, Steffen, and Knoop [BS99, Kno99] recast to incorporate and produce

context-sensitive analyses. The basic structure of the algorithm follows the semantic

algorithm for equation block systems presented as Figure 2.3 in Section 2.1. The back

end of the optimizing compiler produces a normalized equation block system that has

been partitioned into a topologically sorted list of strongly connected components ac-

cording to the blockwise dependence relation. The blocks are ordered within each com-

ponent and each block contains a list of equations encapsulating a single fixed point

computation. Over the entire system, there are no cyclic unguarded dependences (cyclic dependences among the variable- and junction-defined variables); these were eliminated during compilation.

The algorithm proceeds by resolving the strongly connected components sequen-

tially in their topological order. For each component, the algorithm produces a context-

sensitive analysis that spans precisely the set of variables defined in the component.

This resulting analysis can be reduced and otherwise operated on just like any other

context-sensitive analysis using the techniques described in Section 3.1. The collection


of analyses, each corresponding to a single strongly connected component, is referred

to as the environment. From this environment it is possible to recover the validity of any variable defined in the equation block system for any state in the model by applying to

that state the solution function of the unique analysis in the environment spanning that

variable. This section describes the sequence of operations for processing a strongly

connected component of equation blocks, including the specific second-order equations

necessary to compute the fixed point prescribed by an individual block. A complete ex-

ample, recalled from Chapter 1, is presented followed by a discussion of implementation

choices and empirical results.

3.2.1 Processing a Strongly Connected Component

Strongly connected components that consist of a single block with a single atomic-

defined variable¹ are solved trivially as the context-sensitive analysis encoding the atomic

proposition projected to the single property. This property is renamed to be the atomic-

defined variable. Otherwise, the processing of a strongly connected component follows

the outline of the semantic algorithm of Figure 2.3. An initial analysis is created that is

the unification of the solution analyses for all of the components on which the current

component directly depends coupled with the initial (context-insensitive) values of the

variables defined in the current component. The blocks are each added to a worklist.

At each iteration the least indexed block from the worklist is removed and solved. In

this case, rather than simply computing the fixed point of the equations in the block

¹Because the equation block system has been normalized, every atomic-defined variable is introduced as the single equation of a neutral block that is the only block of its component.


using the current values, computing the fixed point requires computing the fixed point

of a system of second-order equations that incorporate the context-sensitive solution of

each of the variables on which the block depends. As before, if a variable changes in

some live context, then its transitively dependent blocks in the component are reintro-

duced to the worklist and the variables defined in any block of lesser index are reset to their initial value. The iteration ceases when the worklist is empty. This indicates that

a fixed point solution has been reached that is consistent forall of the equations in the

component. Finally, the analysis is projected down to just the variables defined in the

component and reduced. The process then continues with the next component until all

of the components have been solved.

In describing the construction of the component solution analysis, let V be the set of vertices and let P be the set of procedures in the model. There are five elements of the component solving routine that require elaboration (a high-level sketch follows the list):

• Creating the Initial Analysis

• Solving a Block

• Checking for a Variable Change

• Resetting Variables

• Incorporating the Analysis into the Environment
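A high-level sketch of how these five elements fit together is given below. Each helper callable stands for a routine elaborated in the following subsections, and for brevity the sketch reintroduces only directly dependent blocks where the text prescribes transitively dependent ones. It is a hypothetical outline, not the system's code.

    # A hypothetical skeleton of the component-solving loop; the helper callables
    # (solve_block, variables_changed, reset_variables, project, reduce_analysis)
    # stand for the routines described in the subsections that follow.
    def process_component(blocks, initial_analysis, depends_on,
                          solve_block, variables_changed, reset_variables,
                          project, reduce_analysis, defined_vars):
        env = initial_analysis
        worklist = sorted(range(len(blocks)))            # block indices, least first
        while worklist:
            i = worklist.pop(0)                          # least indexed block
            new_env = solve_block(blocks[i], env)
            changed = variables_changed(env, new_env, blocks[i])
            env = new_env
            if changed:
                # reintroduce dependent blocks and reset blocks of lesser index
                for j in range(len(blocks)):
                    if j != i and i in depends_on[j] and j not in worklist:
                        worklist.append(j)
                env = reset_variables(env, [blocks[j] for j in range(i)])
                worklist.sort()
        return reduce_analysis(project(env, defined_vars))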

Creating the Initial Analysis

The first step in resolving a component is to construct an initial context-sensitive analysis

that unifies the final solution to all of the variables on which the current component

depends with the initial values of the variables defined in the current component as


specified by the component block types.

Let {E0, . . . , En} be the set of distinct context-sensitive analyses from the environment that together span the set of variables, D, that are free in the strongly connected component. This is the set of variables that are used by the component, but are not defined in the component². Let C be the set of variables defined in the component, with Cmax being the variables defined in max type blocks and Cmin being the remainder (including variables defined in neutral type blocks). Let EC be the context-insensitive analysis spanning C that associates to each vertex the set Cmax. The initial analysis is

then,

Einit = πC∪D(E0 ∪ . . . ∪ En ∪ EC),

the projection, with respect to C ∪ D, of the unified analysis that combines the solution of the variables in D with the initial values of the variables in C. The projection is necessary to remove variables on which the component does not depend, but which are

defined in the same component as a variable on which the current component depends.

This analysis contains the value of every variable necessary to resolve the strongly con-

nected component. This analysis may, of course, be reduced before proceeding with the

next step.

Solving a Block

The block solving routine takes the current context-sensitive analysis for the component,

E, and produces a new analysis, E′, spanning the same property set but encapsulating

the fixed point solution of the equations comprising the block. The fixed point that is

²Because the components are solved in topological order, a solution must exist for any free variable.


computed is dictated by the initialization, so the routine does not depend on the block

type. Blocks belong to one of two categories: blocks that contain modal-defined vari-

ables and blocks that do not. Blocks that do not contain any modal-defined variables

can be solved directly without changing the set of contexts or context transformers. If

E = (C, X, Γ, Φ, κ), then

E′ = (C, X, Γ, Φ′, κ)

where,

[Φ′(v)](α) = solve([Φ(v)](α), ∅).

The routine solve takes two arguments. The first is a set containing assumptions

about the free variables of the block (elements of the first argument defined in the current

block are discarded) and the second is a set of assumptions about the validity of the

modal-defined variables of the current block. Using these two sets, the routine returns

the set of variables that are valid, based on these assumptions, by combining the two sets

and solving for the variable- and junction-defined variables of the block in dependence

order³. Formally, if B is the set of variables defined in the current block and ⟨z1, . . . , zn⟩ are the variable- and junction-defined variables in B in dependence order, then Wn = solve(X, Y), where

W0 = (X − B) ∪ Y

³Cyclic unguarded dependences were removed during compilation, so a dependence order for the variable- and junction-defined variables can always be established.


Wi+1 = Wi ∪ {zi+1}   if zi+1 = x and x ∈ Wi, or
                        zi+1 = x ∨ y and {x, y} ∩ Wi ≠ ∅, or
                        zi+1 = x ∧ y and {x, y} ∪ Wi = Wi;
Wi+1 = Wi            otherwise.
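The solve routine just described can be sketched directly; the equations are assumed to be supplied in dependence order as (z, op, operands) triples with op one of 'copy', 'or', 'and'. The encoding is hypothetical and only illustrates the recursion above.

    # A hypothetical sketch of solve(X, Y): seed with assumptions, then evaluate
    # the variable- and junction-defined variables in dependence order.
    def solve(X, Y, equations, defined_in_block):
        valid = (set(X) - set(defined_in_block)) | set(Y)      # W0 = (X - B) ∪ Y
        for z, op, operands in equations:                      # dependence order
            if op == 'copy' and operands[0] in valid:
                valid.add(z)
            elif op == 'or' and any(x in valid for x in operands):
                valid.add(z)
            elif op == 'and' and all(x in valid for x in operands):
                valid.add(z)
        return valid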

For the remaining blocks, let S be the non-empty set of modal-defined variables in the block and let E = (C, X, Γ, Φ, κ), then

E′ = (C × 2^S, X, Γ′, Φ′, (κ, ∅)).

The context set for E′, C′ = C × 2^S, represents the combination of all the existing contexts on which the block depends with all of the possible sets of assumptions about the validity of the modal-defined variables of the block. Defining the property and context transformers for E′ requires iterating a set of second-order equations to their fixed point. Each equation defines a function that maps C′ → 2^S. The functions are initialized

as follows:

Fu(α, X) = [Φ(u)](α) ∩ S

The fixed point equations are then,

Fu(α, X) =

  X,   if u is P^exit for some P ≠ P0 ∈ P;

  T(u,s)([Γ(u)](α), Fs([Γ(u)](α), T(t,v)(α, Fv(α, X)))),
       if (u, s) is a call-edge to P′, (u, P′, v) ∈ P, and (t, v) is a return-edge to P, for some P, P′ ∈ P;

  ⊓_{v ∈ succs(u)} T(u,v)(α, Fv(α, X)),   otherwise,


where T(u,v) also maps C′ → 2^S,

z ∈ T(u,v)(α, X) ↔   z = ⟨a⟩z′ and (u, v) ∈ Ra and z′ ∈ solve([Φ(v)](α), Fv(α, X)), or
                     z = [a]z′ and (u, v) ∈ Ra and z′ ∈ solve([Φ(v)](α), Fv(α, X)), or
                     z = [a]z′ and (u, v) ∉ Ra,

and ⊓ combines a collection of subsets from 2^S into a single element of 2^S,

z ∈ ⊓{X0, . . . , Xn} ↔   z = ⟨a⟩z′ and z ∈ ⋃_{0≤i≤n} Xi, or
                          z = [a]z′ and z ∈ ⋂_{0≤i≤n} Xi.

From these equations, the property and context transformers for the solution analysis E′ = (C′, X, Γ′, Φ′, κ′) can be defined as

[Φ′(v)](α, X) = solve([Φ(v)](α), Fv(α, X))
[Γ′(v)](α, X) = ([Γ(v)](α), T(s,t)(α, X)),

where (s, t) is the return-edge corresponding to the call-edge with source at v.

These equations require some explanation. Since the modal-defined and free variables of the block form a basis from which the validity of every other variable in the block can be determined at any state, the set C × 2^S serves as the minimal representation of the set of consistent assumptions about the validity of the variables at the exit of each function. The initial context can be any choice of (κ, X) with X ⊆ S since P0^exit loops to itself, making the initial assumptions about the validity of the elements of S irrelevant. The initialization function extracts the current values of the modal-defined variables from the previous solution. The functions Fu compute the set of modal-defined variables that hold at a vertex in a given context. This is the "new information" that is being added to the solution. As the solve routine dictates, the solution of the other variables


defined in the block can be trivially recovered from the validity of these variables plus the validity of the free variables of the block. The function T(u,v) computes the effect of individual transitions. Here is where the modalities come into play to determine the validity of the modal-defined variables at the source state u. Finally, the ⊓-function can be taken as a "mixed confluence operator" that computes the union of the ◇-defined variables while simultaneously computing the intersection of the □-defined variables.

Finally, it should be noted that in cases where the optimizer has determined the block to have neutral type, the fixed point of the Fu functions can be found by solving them sequentially without any worklist iteration. Otherwise, when the function Fu is updated, the functions that depend upon it must be recomputed.

Checking for a Variable Change

Checking to see if a variable has changed is only applicable when the block is not a

singleton. For non-singleton blocks, only variables that would require a block to be

added to the worklist that is not already on the worklist need to be checked. Given a block with a (possibly empty) set of modal-defined variables S, let E = (C, X, Γ, Φ, κ) be the initial analysis and let E′ = (C × 2^S, X, Γ′, Φ′, (κ, ∅)) be the solution analysis. Then a variable, z, defined in the block has changed if and only if

∃v ∈ V, α ∈ C, X ∈ 2^S. (α, X) ∈ LCE′(v) ∧ ¬(z ∈ [Φ(v)](α) ↔ z ∈ [Φ′(v)]((α, X))).

This definition says that a variable has changed if and only if there is a live context for some vertex such that the resulting analysis distinguishes the validity of the variable from its previous validity in its corresponding original context. Changes can be detected and noted as the Fu functions are being computed. These can then be checked against


the live contexts after the block has been solved.

Resetting Variables

Again, resetting of variables only occurs in cases where the component consists of multiple blocks. If the variable z needs to be reset, then the property transformers of E′ = (C′, X′, Γ′, Φ′, κ′) are adjusted accordingly for each context,

[Φ′(v)](α′) := [Φ′(v)](α′) − {z}   if z is defined in a max block,
[Φ′(v)](α′) := [Φ′(v)](α′) ∪ {z}   otherwise,

for each v ∈ V and α′ ∈ C′.

Incorporating the Analysis into the Environment

Given the solution analysis, E, for a strongly connected component defining the set of variables, C, the final analysis is then

Efinal = (C, X, Γ, Φ, κ) = πC E.

This projection eliminates any variables that are not defined in the component but

are among the properties of the analysis. This projected analysis can be reduced. This

analysis is then entered into the environment as the solution to the variables in C. Given z ∈ C, Ez refers to this analysis and the notation Γz and Φz refer to its collections of context and property transformers, respectively. Likewise, Γz* refers to its induced stack-context transformer and ρz refers to its solution function.


Optimized Equation Block Form as Strongly Connected Components: top = z0

SCC 1: neutral   z3 = R′
SCC 2: neutral   z5 = D
SCC 3: min       z2 = z5 ∨ z6,  z6 = □z2
SCC 4: min       z0 = z1 ∨ z2,  z1 = z3 ∧ z4,  z4 = ◇z0

Figure 3.7: Optimized Equation Block Form Partitioned into Strongly Connected Components for the Modal Mu-Calculus Formula φ = µX.[(R′ ∧ ◇X) ∨ µY.(D ∨ □Y)]

3.2.2 Resolving an Example Query

Section 1.3.3 introduced an example model checking query equivalent to the CTL formula E[R′ U AF D] over the context-sensitive atomic proposition D, whose representation is supplied in Figure 3.6, and the context-insensitive atomic proposition R′ that holds everywhere except instances of vertex v12 = R^entry. The optimized equation block form

translation of this query, with the equation blocks partitioned into topologically sorted

strongly connected components, is provided in Figure 3.7. The block system is com-

prised of four strongly connected components each consisting of a single block.

The first two components are solved trivially. The context-sensitive analyses en-

coding their solution are provided in Figure 3.8. In both cases the property has been

renamed to reflect the defined variable. In the third component, z5 occurs as a free vari-

able, so the component depends on SCC 2 where z5 is defined. The component contains only a single block and the set of modal-defined variables in that block is {z6}. Thus, the context set for the solution analysis is

β0 = (α0, ∅), β1 = (α0, {z6}), β2 = (α1, ∅), β3 = (α1, {z6}).

The solution analysis for this component is provided in Figure 3.9. The dead contexts β1 and β3 have been removed and only the portion of each transformer that is in the domain of each context is represented.


[Table: per-vertex context and property transformers of the two analyses encoding the solutions to SCC 1 (property z3) and SCC 2 (property z5); the s-designated vertices also identify the elements of Σ by call-edge source.]

Figure 3.8: Context-Sensitive Analyses, ({α}, {z3}, Γ, Φ, α) and ({α0, α1}, {z5}, Γ, Φ, α0), Encoding the Solution to SCC 1 and SCC 2


[Table: for each vertex v0–v23, the context transformers [Γ(v)](β0), [Γ(v)](β2) and property transformers [Φ(v)](β0), [Φ(v)](β2); the s-designated vertices also identify the elements of Σ by call-edge source.]

Figure 3.9: Context-Sensitive Analysis, ({β0, β2}, {z2, z6}, Γ, Φ, β0), Encoding the Solution to SCC 3


[Table: for each vertex v0–v23, the context transformers [Γ(v)](αi) and property transformers [Φ(v)](αi), i = 0..2; the s-designated vertices also identify the elements of Σ by call-edge source.]

Figure 3.10: Context-Sensitive Analysis, ({α0, α1, α2}, {z0, z1, z4}, Γ, Φ, α0), Encoding the Solution to SCC 4


[Table: per-vertex context transformers [Γ(v)]([α0]), [Γ(v)]([α2]) and property transformers [Φ(v)]([α0]), [Φ(v)]([α2]) for the reduced SCC 4 solution analysis.]

The s-designated vertices also identify the elements of Σ by call-edge source.

Figure 3.11: Reduced Context-Sensitive Analysis, ({[α0], [α2]}, {z0, z1, z4}, Γ, Φ, [α0]), Encoding the Solution to SCC 4


of each context is represented. For this component, no further reduction is possible.

In the final component, z3 and z2 both occur free, so this component depends on both SCC 1 and SCC 3. Again, the component contains only a single block. The set of modal-defined variables is {z4}. Recalling that the context set for SCC 1 was {α} and the context set of SCC 3 was {β0, β2}, the context set for the solution analysis to SCC 4 is

α0 = (α, β0, ∅), α1 = (α, β0, {z4}), α2 = (α, β2, ∅), α3 = (α, β2, {z4}).

The initial solution analysis for this component is provided in Figure 3.10. The dead context α3 has been eliminated and the transformers are again restricted to the domain of each context. As the figure illustrates, α0 ≃ α1 and α0 ≃ α2 (but α1 ≄ α2, confirming that this is not a transitive relation). Assigning the unspecified elements of Φ(α0) and Φ(α1) so that they agree results in the discovery that they are not pairwise distinguishable and hence can be combined into a single context. Letting [α0] stand for the equivalence class {α0, α1} and letting [α2] stand for the singleton equivalence class {α2}, the reduced solution analysis for SCC 4 is provided in Figure 3.11.
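This pairwise indistinguishability test can be pictured with a small sketch (an illustration only, not Carnauba's interface; the PropertyTable type and the treatment of unspecified entries are assumptions): two contexts are candidates for merging exactly when their property transformers agree at every vertex where both are defined.

    // Sketch of the context-merging test described above: contexts a and b are
    // pairwise indistinguishable when, at every vertex where both are in the
    // domain of Phi, the property transformers assign the same variable set.
    // Unspecified entries are skipped, mirroring the freedom to assign them so
    // that they agree.
    #include <map>
    #include <set>
    #include <string>

    using Vertex  = int;
    using Context = std::string;
    using Props   = std::set<std::string>;
    // PropertyTable[v][a] = [Phi(v)](a); a missing key means "not in the domain".
    using PropertyTable = std::map<Vertex, std::map<Context, Props>>;

    bool indistinguishable(const PropertyTable& Phi,
                           const Context& a, const Context& b) {
        for (const auto& [v, byContext] : Phi) {
            auto ia = byContext.find(a);
            auto ib = byContext.find(b);
            if (ia == byContext.end() || ib == byContext.end())
                continue;                  // unspecified at v: may be made to agree
            if (ia->second != ib->second)
                return false;              // distinguishable at vertex v
        }
        return true;                       // a and b can share an equivalence class
    }

Under this sketch, α0 and α1 would pass the test while α1 and α2 would not, matching the classes [α0] and [α2] above.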

The environment for the equation block system is then the collection of these four solution analyses. Recalling that the top variable for the query was z0,

(σ, v) ⊢ φ ↔ z0 ∈ ρz0(σ, v).

Figure 1.10 illustrated the global solution to the example query. The query is valid at the start of the program since

z0 ∈ ρz0(ε, v16) = [Φ(v16)]([α0]) = {z0, z1, z4}.

The validity of any other variable can likewise be recovered for any other state, including the variables that correspond to the closed sub-expressions of the original query.


The solutions to the components defining these variables can be reused in deriving solutions to new queries. In this way, the environment can be considered a database of solutions that can be drawn upon to solve subsequent queries. Finally, notice that the entire environment, consisting of the set of context-sensitive analyses E = {E0, . . . , En}, can be interpreted as a single context-sensitive analysis equivalent to E0 ∪ . . . ∪ En. Under this interpretation, the environment maps each vertex, stack-context pair to the set of variables that hold at that vertex in that stack-context via the induced solution function. This interpretation of the environment will be necessary in Section 4.2 for posing constraint queries on the solution to a model checking problem.

3.2.3 Implementation Considerations

Although the fixed point equations for solving a block are straightforward, there do exist opportunities to optimize their computation. First, unless the number of distinct contexts is small4, it is generally advantageous to solve the equations by demand. That is, rather than compute the complete mapping for each Fu and T(u,v) function, only those input, output pairs that are required for the solution are computed. The solutions given for the non-trivial components in Section 3.2.2 are the product of a demand-driven solution; the property and context transformers are only defined for the elements necessary for the solution function to be defined for the states in the model. As always, the efficiency of the fixed point computation is sensitive to the vertex ordering used to draw elements from the worklist. Since this is a backward-flow algorithm, an ordering that emphasizes placing vertices before their dominators in each procedure is preferable.

4 Experimental evidence suggests that "small" should be interpreted as ≤ 8.
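The demand-driven strategy can be pictured with a small sketch (illustrative only; DemandSolver, solveEntry, and the container choices are assumptions rather than Carnauba's actual interfaces): a transformer entry [Φ(v)](α) is materialized the first time some consumer asks for that vertex, context pair and is served from a cache thereafter.

    // Sketch of demand-driven transformer evaluation: [Phi(v)](alpha) is
    // computed only when some consumer requests that (vertex, context) pair,
    // and the result is cached for later lookups.
    #include <map>
    #include <set>
    #include <string>
    #include <utility>

    using Vertex  = int;
    using Context = std::string;            // stands in for an analysis context
    using Props   = std::set<std::string>;  // set of equation-block variables

    class DemandSolver {
    public:
        // Demand entry point: look up [Phi(v)](alpha), computing it on a miss.
        const Props& phi(Vertex v, const Context& alpha) {
            auto key = std::make_pair(v, alpha);
            auto hit = cache_.find(key);
            if (hit != cache_.end())
                return hit->second;              // already materialized
            return cache_[key] = solveEntry(v, alpha);
        }

    private:
        Props solveEntry(Vertex v, const Context& alpha) {
            // Placeholder body: a real solver would run the worklist over the
            // F_u equations restricted to the pairs reachable from (v, alpha),
            // drawing vertices in an order that favors the backward flow.
            (void)v; (void)alpha;
            return Props{};
        }

        std::map<std::pair<Vertex, Context>, Props> cache_;
    };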


Also, rather than solve the fixed point equations independently and then use them to construct the context and property transformers afterward, these functions can be constructed and updated throughout the solution process. By doing this, the solve routine can be made automatic by implicitly representing the variable- and junction-defined variables in terms of their underlying free and modal-defined variables. In this way, an update to a modal-defined variable, as prescribed by an update to an Fu function, automatically ripples to the variable- and junction-defined variables. Further, since the output of the T(u,v) function depends only on its arguments and the modalities that (u, v) belongs to, it is advantageous to cache computed input, output pairs based on this information. Finally, in the case of multi-block components, the cost of the alternation can be reduced by tracking where the changes that cause a reset occur and saving the values before the reset. When recomputing the block containing the reset variable, the worklist need only be initialized with the locations that depend on the changes. Once the change propagates to a point where the updated solution matches the previous solution, the previous solution can be used for the variable at the remaining vertices. This is a consequence of monotonicity.
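The edge-transformer caching can be sketched as follows (again only an illustration under assumed names; the Modalities type and the compute routine are placeholders): results are keyed on the modality set of the edge together with the context and assumption set, so any edge with the same modalities reuses previously computed input, output pairs.

    // Sketch of memoizing T(u,v): because the output depends only on the
    // arguments and on the modalities the edge belongs to, a result computed
    // once can be reused for any edge with the same modality set.
    #include <map>
    #include <set>
    #include <string>
    #include <tuple>

    using Props      = std::set<std::string>;
    using Context    = std::string;
    using Modalities = std::set<std::string>;   // modalities the edge belongs to

    class EdgeTransformerCache {
    public:
        Props apply(const Modalities& mods, const Context& a, const Props& in) {
            auto key = std::make_tuple(mods, a, in);
            auto hit = memo_.find(key);
            if (hit != memo_.end())
                return hit->second;              // cached input, output pair
            Props out = compute(mods, a, in);    // stand-in for the real T(u,v)
            memo_.emplace(key, out);
            return out;
        }

    private:
        Props compute(const Modalities&, const Context&, const Props& in) {
            // Placeholder: a real implementation evaluates the modal operators
            // named in the modality set against the assumption set.
            return in;
        }

        std::map<std::tuple<Modalities, Context, Props>, Props> memo_;
    };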

More generally, the primary novelty of this construction is its formulation in terms of abstracted operations on context-sensitive analyses. A corollary of this is that, since the reduction algorithm produces an equivalent analysis, it can be invoked between any two phases of the algorithm as a form of context collection. Analogous to garbage collection [JL96] for dynamically allocated memory, this has a substantial impact on controlling the real running time of the resolution routine. In fact, the ability to reduce analyses is the motivation for projecting away the unnecessary variables as the final step of the solution


Benchmark                              Equation Block System Results
Code           |V|     |P|   |Σ|    Top    Vars    Atomic  Modal  Block  FO    %      Time (s)  Avg (s/φ)
188.ammp       11944   181   722    251    678     97      157    479    88    56.1   74.6      0.30
179.art        1097    24    46     156    678     148     196    433    132   67.3   16.7      0.10
256.bzip2      2699    67    208    299    1097    204     294    738    243   82.3   41.2      0.14
129.compress   618     22    38     155    672     130     191    440    160   83.8   12.3      0.08
183.equake     1359    26    31     260    758     93      200    520    200   100.0  17.1      0.66
099.go         21927   385   2063   1160   4148    908     1112   2902   548   49.3   1049.0    0.90
164.gzip       3230    77    235    449    1737    456     477    1142   415   87.0   71.3      0.16
132.ijpeg      15497   320   1421   170    604     106     162    406    93    57.4   170.1     1.00
124.m88ksim    15478   345   1226   1186   3452    1176    824    2428   383   46.5   584.8     0.49
197.parser     12616   307   1219   367    1354    394     353    909    201   57.2   212.7     0.58
300.twolf      18866   175   760    2266   8104    2217    2149   5339   1109  51.6   2250.2    0.99
175.vpr        14487   273   897    304    1049    167     273    702    114   41.8   168.9     0.56
Totals         —       —     —      7023   118477  42813   14764  16438  3686  61.6   4668.9    0.66

Figure 3.12: Performance of the Query Resolution System after Model-Dependent Optimizations

of each component. In many cases, the variables defined in a component have context-insensitive solutions even when the variables they depend upon do not. When this happens, the solution analysis becomes trivial, which then greatly reduces the complexity for any subsequent component that depends on those variables.

Figure 3.12 gives the performance of the Carnauba equation block resolution system on the batches of queries introduced in Section 2.3. These numbers reflect the performance after the query batch has been run through the optimizing compiler operating in its model-dependent mode. The column heading V refers to the number of vertices in the model, P refers to the number of procedures, and Σ refers to the number of call-edges. The middle section of columns is repeated from Figure 2.17, with the additional column Block providing the number of equation blocks in the system. The number of modal-defined variables that were found to have a first-order (context-insensitive) solution by the context collector is provided in the FO column, with the subsequent column providing the corresponding percentage of modal-defined variables with first-order solutions. Finally, the CPU running time, Time, is provided in seconds, along with the average resolution time per query, Avg, in seconds per query (s/φ).

The results confirm that, for many models, first-order solutions frequently exist.


These solutions reduce the total running time of the resolution system by simplifying the computation of blocks that depend on them. The slightly disproportionate running times for the larger model, query set combinations are attributable to the memory management mechanisms of the implementation platform. For large amounts of data, swapping occurs that tends to degrade the overall performance. One way to mitigate this effect is to reduce the overall complexity of the model. A discussion of techniques for reducing the model is the subject of Section 3.3. The performance results presented in this section assume that no external rules of the sort discussed in the next section have been provided to assist that process.

3.3 Reducing the Model

Within the realm of data-flow analysis, many techniques exist for speeding up the analysis by reducing the complexity of the flow graph. One approach is to use static single assignment form [CFR+89, BP03] or some other collection of dependence information to construct new edges that propagate information directly from where it is generated to where it is used [CCF91, JP93, SGL96]. These approaches can be broadly characterized as the construction of so-called "sparse graphs". Other approaches build reduced graphs by either eliminating irrelevant vertices or collapsing regions into single components [RFT99, RP86]. Given the size of the models over which the resolution algorithm is required to operate, it is reasonable to ask whether similar techniques can be applied to the model checking problem.

Unfortunately, for the general modal mu-calculus, the construction of such graphs is


Optimized Equation Block Form as Strongly Connected Components: top = z0

SCC 1 : neutral
    z1 = p
SCC 2 : min
    z0 = z1 ∨ z2
    z2 = ◇z3
    z3 = ◇z0

Figure 3.13: Optimized Equation Block Form Partitioned into Strongly Connected Components for the Modal Mu-Calculus Formula φ = µX.(p ∨ ◇◇X)

[Figure: the UHSM consists of the single procedure main, whose vertices v0 through v7 form one straight-line path; the atomic proposition p is attached to v6, and φ is marked as holding at the even-indexed vertices.]

Figure 3.14: UHSM for the Sample Model Reduction Query φ

Analysis Encoding the Solution to SCC 2

v     [Γ(v)](α)   [Φ(v)](α)
v0                z0, z2
v1                z3
v2                z0, z2
s3                z3
v4                z0, z2
v5                z3
v6                z0
v7                ∅

The s-designated vertices also identify the elements of Σ by call-edge source.

Figure 3.15: Context-Sensitive Analysis, ({α}, {z0, z1, z2, z3}, Γ, Φ, α), Encoding the Solution to SCC 2 of the Sample Model Reduction Query φ


considerably more difficult. Consider the sample query formula,

φ = µX.(p ∨ ◇◇X),

over the single atomic proposition p. This formula is equivalent to the PDL-∆ formula 〈(true; true)∗〉p, where true represents the universal modality. The optimized equation block form for this query is provided as Figure 3.13. This system is hierarchical, consisting entirely of singleton blocks, and thus belongs to the simplest non-trivial complexity class of mu-calculus formulas, Lµ1. Now consider the UHSM model of Figure 3.14. This model consists of a single procedure, main, and thus is equivalent to its own expansion. Further, the model contains only a single path from beginning to end, and the atomic proposition p holds at only a single state, (ε, v6). Except for models consisting of a single state with no transitions, this is the simplest model imaginable. Figure 3.14 also illustrates the solution to the query; the formula φ holds at the even-indexed vertices and does not hold at the odd-indexed vertices.

In plain language, the formula φ holds at a state if and only if there exists a path of even length to a state where p holds. Immediately, the problems with simplifying the model become self-evident. The removal of any state in the model would disrupt the length of paths between states and therefore change the semantics of the formula. Aggregating states into blocks is foiled by the realization that no two successive states have the same solution. Presumably, it might be possible to construct a sparse graph that propagates the validity of the atomic proposition p to the even-indexed vertices. However, it is not clear that such a graph would afford a significant improvement.
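To make the even-length behavior concrete, the least fixed point can be computed by the standard approximation sequence over the eight-vertex model (a worked sketch only; X^i denotes the i-th approximant of X):

X^0 = ∅
X^1 = {v : p holds at v} = {v6}
X^2 = {v : p ∨ ◇◇X^1 holds at v} = {v4, v6}
X^3 = {v2, v4, v6}
X^4 = {v0, v2, v4, v6} = X^5,

so the solution is exactly the set of even-indexed vertices.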

Some systems deal with these problems by restricting the class of formulas for which the model can be reduced [CDH+00, Sei98]. However, even in these cases, the lesson


learned is that, for explicit state models, the problem of determining how to reduce the model and then construct a new model is not significantly easier than simply resolving the formula directly. Further, constructing reduced models conflicts with Carnauba's design objective of producing global information. If the objective were simply to resolve the query at the origin of the program, this would not be an issue. However, as will be demonstrated in Chapter 4, the existence of global information creates opportunities for further post-query analysis and is a worthwhile goal.

At a minimum, this introduces the requirement that it be possible to reflect the solution for any reduced model back onto the full model. The solution adopted by Carnauba is to take a "less is more" approach. Rather than spending considerable processor cycles determining an optimally reduced model, the system utilizes a simple reduction subsystem that, consistent with Carnauba's design philosophy, can either act autonomously or with user assistance to isolate easily recognizable reductions.

The solution for the non-trivial strongly connected component of the sample query is provided in Figure 3.15. Notice that while the validity of the query oscillates along the single path of the model, the vertices of the model can be partitioned into equivalence classes,

{v0, v2, v4, v6} and {v1, v3, v5, v7},

where the solution for each variable in the system is invariant over the elements of each class. If these equivalence classes had been known prior to the equation block system being resolved, they could have been used to simplify the resolution process. Specifically, the Fu function would be identical for each element of the class, so an update to the function associated with one member could instantly be reflected to every


other member. Then, rather than return the dependents of the changed element to the worklist, a representative of every dependent equivalence class could be added. For example, in the sample query φ, when the function associated with v4 is updated, the update would propagate to v2 and v0 automatically. Then a single representative of the class {v1, v3, v5} would be added to the worklist. This is correct since the automatic updates still keep each function between its initial value and its fixed point solution value in the lattice. In this way, global information is generated without explicitly processing each vertex even once.
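This representative-based discipline can be sketched as follows (an illustrative sketch under assumed names; classOf, value, and dependents are placeholders, not Carnauba's data structures): the function value is stored once per class, so an update made through any member is shared by the whole class, and only one representative of each dependent class is re-enqueued.

    // Sketch of worklist propagation over vertex equivalence classes: the
    // function value is kept per class, so an update through one member is
    // instantly visible to every other member, and each dependent class is
    // re-enqueued through a single representative.
    #include <deque>
    #include <map>
    #include <set>
    #include <string>

    using Vertex  = int;
    using ClassId = int;
    using Props   = std::set<std::string>;

    struct ClassSolver {
        std::map<Vertex, ClassId>            classOf;     // vertex -> class
        std::map<ClassId, Props>             value;       // shared F value per class
        std::map<ClassId, std::set<ClassId>> dependents;  // class-level dependences
        std::deque<ClassId>                  worklist;

        void update(Vertex changed, const Props& newValue) {
            ClassId c = classOf[changed];
            if (value[c] == newValue)
                return;                                   // no change: nothing to do
            value[c] = newValue;                          // ripples to the whole class
            for (ClassId d : dependents[c])
                worklist.push_back(d);                    // one representative per class
        }
    };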

This idea extends naturally to models with context-sensitivity considerations. For the purposes of processing a component, two vertices, u and v, are equivalent if their property transformers are equivalent across all contexts after each block is processed. That is, if (C′, X′, Γ′, Φ′, κ) is the solution analysis after the processing of some block, then u and v can be treated as equivalent if

∀α′ ∈ C′. α′ ∈ LC(u) ∪ LC(v) → [Φ′(u)](α′) = [Φ′(v)](α′).

Since determining whether u and v are equivalent requires knowing that their solutions are equivalent, this would seem to be putting the cart before the horse. However, the Fu functions are calculated from the T(u,v) functions, which can be calculated independently for each (u, v) pair. These functions can be used to construct equivalence classes in many cases. For example, if (C, X, Γ, Φ, κ) is the initial solution (before expanding the contexts) for a block with set of modal-defined variables S, then u is equivalent to its successors when, for each successor, v,

∀α ∈ C. [T(u,v)(α, X) = X ∧ (α ∈ LC(u) ∪ LC(v) → [Φ(u)](α) = [Φ(v)](α))].


That is, the solutions to the previously solved variables match for each context and the T(u,v) function is the identity over the new assumptions. This equivalence can be used to collapse vertices into classes while merging their dependent sets. Recalling the example from Section 3.2, three examples of classes that could be constructed in this way would be

{v16, v17, v18, v19, s20}, {v14, v15}, and {s21, v6, v7, s8}.

Still, unless a single set of equivalence classes can be applied broadly, the performance increase is minimal. Also, this technique would not find the equivalence classes necessary to simplify the example query φ of Figure 3.13, since the T(u,v) functions are not the identity over the set of assumptions {z2, z3} for any vertex.

The solution taken by Carnauba is to allow the user two ways to manually specify the equivalence classes to be used. Classes can be specified either explicitly or as a function encoding a set of rules describing how to build the classes from the underlying atomic propositions. For example, a rule that would work for the example might say that, if there exists a straight-line sequence of three vertices and p does not hold at any of them in any context, then the first and third are equivalent (a sketch of such a rule appears below). In this way, constructing the classes can be done with a local inspection of each vertex in the model. Equivalence class generating functions associated with queries are stored in the symbol table and executed before the resolution process begins.
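Such a rule can be expressed as a small local predicate over the model (illustrative only; the two callbacks stand in for model accessors that the real system would supply):

    // Sketch of an equivalence-class generating rule: for a straight-line
    // sequence u -> w -> x in which the proposition holds at none of the three
    // vertices in any context, report that u and x may share a class.
    #include <functional>
    #include <optional>

    using Vertex = int;

    std::optional<Vertex> straightLineEquivalent(
        Vertex u,
        const std::function<std::optional<Vertex>(Vertex)>& soleSuccessor,
        const std::function<bool(Vertex)>& propHoldsInAnyContext) {
        if (propHoldsInAnyContext(u)) return std::nullopt;
        auto w = soleSuccessor(u);
        if (!w || propHoldsInAnyContext(*w)) return std::nullopt;
        auto x = soleSuccessor(*w);
        if (!x || propHoldsInAnyContext(*x)) return std::nullopt;
        return x;   // u and *x may be placed in the same equivalence class
    }

Applied to the model of Figure 3.14 with the proposition p, this rule pairs, for example, v0 with v2 and v1 with v3.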

In practice, rather than collapse individual transitions, the most effective rules are those that collapse entire procedures to a single class. A list of such procedures can be provided explicitly, or, as an example, a rule can be attached to a query template specifying that if the instantiating atomic propositions do not hold at any vertex in a


procedure or any procedure it may transitively call, and there is a path from the entry to the exit of the procedure5, then the procedure may be reduced to a single class representing the entire reachable portion of the call graph from that point. These rules can be applied quickly and, because the class boundaries are also the procedure boundaries, the new dependence relation is trivial to compute. Further, it is significantly easier for a query writer with knowledge of the intended semantics of the formula to write these rules than it is to derive them independently from an examination of the corresponding equation blocks. To a lesser extent, this approach also works passably well for reducing individual loops and other single-entry, single-exit regions, assuming that these regions have been identified beforehand. In this way, these rules allow for an approximation of the implicit model reduction techniques used by some other algorithms, such as the pre-computation of strongly connected components within the model and the generation of procedure summary edges. This approach is also compatible with more sophisticated techniques that could be implemented to infer rules from the actual equation blocks.

Model reduction is most effective when the cost can be amortized over multiple queries. At present, the rules for individual queries can be intersected, with two equivalence classes being merged only if it is lawful to merge them for both queries. It remains a subject for future work to devise a good heuristic for determining a partitioning of the queries so that each partition can share a common set of mutually advantageous equivalence classes.

5 This reachability calculation is (optionally) performed when the model is constructed and then drawn upon when needed.


Chapter 4

Post-Query Analysis

The output of the model checking engine represents a global solution to a set of temporal logic queries. That is, each individual query, and all of its sub-expressions, have been solved for each state in the model. This global solution environment can be interpreted in two distinct ways. First, it can be regarded as a complete fixed point solution to the mutually dependent equations that constitute the equation block system generated to encode the queries. This interpretation is significant since the equation block system can be seen as implicitly defining a system of interdependencies between the state, variable pairs defined by the product of the model states with the solved variables of the equation block system. The fixed point solution, then, provides a way of navigating inside this dependence relation. Second, the solution environment can be viewed as the combined solution function of a collection of context-sensitive analyses. In this view, the variables defined in the equation block system are the properties. The environment then implicitly maps each stack-context, vertex pair to the set of variables that hold at the vertex in that stack-context. Each interpretation affords a different set of mechanisms for distilling the


information contained within the global solution environment into more useful results.

In Section 4.1, a heuristic is presented for generating useful examples exhibiting the result of a query at a state. This technique uses the global solution environment, seen as a fixed point of a system of equations, to reduce the example generation problem to a reachability problem in an environment dependence graph. It is then demonstrated how certain classes of spurious examples generated by this technique can be precluded by semantic analysis. Next, in Section 4.2, a novel technique is introduced for solving constraint queries on the solution environment, in this case viewed as a single, unified context-sensitive analysis. Constraint queries are a broad generalization of the dual problem for context-sensitive analyses: given a vertex and a desirable set of solutions, what is the set of potential stack-contexts for that vertex such that the solution function of the analysis maps to one of the desirable solutions at that vertex in those stack-contexts? These queries are shown to have application to code comprehension and, in Chapter 5, to other emerging software engineering problems. Both types of post-query analysis require the presence of global solution information. Examples and empirical results are provided throughout both Section 4.1 and Section 4.2.

4.1 Example Generator

The model checking engine produces a solution environment that finitely represents the global solution to each model checking query. That is, given any state in the model and any variable from the equation block system, it is straightforward to recover the validity of that variable at that state from the environment. Despite the fact that the validity of each variable can be queried independently, this environment also represents


the complete fixed point solution to the mutually dependent system of equations that constitutes the equation block form of the queries.

In most cases, when a query is posed against a model, that query is made with a certain expectation. For queries such as those used to demonstrate the optimizing compiler of Chapter 2, the expectation is that the queries will be invalid over the program, that is, that some property essential to the proper execution of the program is not violated. When that expectation is not met, when there are bugs or other unexpected behavior in the program, it is useful to have a heuristic for generating concise, meaningful examples that exhibit the cause of the aberration.

This section demonstrates how the environment, interpreted as a fixed point solution to a mutually dependent system of equations, can be used to generate examples. First, the notion of an environment dependence graph is established. This graph can be thought of as the product of the expansion of the UHSM model with the solution environment. Examples are then seen as subclasses of walks in this graph, with the solution information in the environment serving as a "road map" to the locations where interesting behavior occurs. From this basic framework, a technique is presented to improve the precision of the examples by incorporating both user-defined and program-derived semantic information. Examples are provided throughout this section to illustrate the motivation for these ideas, and empirical data is presented. The section concludes with a discussion of other methods and applications of example generation.


4.1.1 Environment Dependence Graphs

Given a UHSM, U, with vertex set V, let S be the universal set of states, (σ, v), in the model, where v ∈ V and σ is a valid stack-context for v. Further, let BVar be an equation block system over the variable set Var. Let E(U,BVar) : S × Var → {true, false} be the environment resulting from resolving BVar over U, interpreted as a function mapping a state, variable pair to true if and only if the state is an element of the fixed point solution associated with the variable in the environment.

Definition 24. The Environment Dependence Graph, D(E(U,BVar)), derived from an environment, E(U,BVar), is a potentially infinite directed graph whose vertices are the set S × Var and for which there is an edge ((σ, u), X) → ((τ, v), Y) if and only if

E(U,BVar)((σ, u), X) = E(U,BVar)((τ, v), Y)

and, in BVar, one of the following holds:

X = Y and (σ, u) = (τ, v);

X = X1 ∧ X2 or X = X1 ∨ X2, with (X1 = Y or X2 = Y) and (σ, u) = (τ, v);

X = [a]Y or X = 〈a〉Y, with (u, v) ∈ Ra and ∃Pi ∈ U, w ∈ V such that
    σ = τ and (u, λ, v) ∈ Pi,
    or σu = τ, v = P_i^entry, and (u, Pi, w) ∈ Pi,
    or σ = τw, u = P_i^exit, and (w, Pi, v) ∈ Pi.

The vertices of this graph are the product of the full expansion of the UHSM with the set of variables defined in the equation block system. A transition exists from one vertex to another in this graph if there is a dependence between the state, variable pairs in the equation block system and both pairs are either valid or invalid with respect to the


environment. This graph defines a dependence space which can be navigated arbitrarily to trace the cause of the validity or invalidity of each state in the model. Intuitively, the portion of this graph that is reachable from a state, variable pair constitutes a slice of the environment dependence graph on which the original pair depends. Elements of the slice are the state, variable pairs whose solutions contribute to the solution of the slice source. Because the environment dependence graph can be infinite, it is infeasible to construct these slices explicitly. To rectify this, the notion of a condensate graph is introduced.

Definition 25. The condensate, DcX(E(U,BVar)), of an environment dependence graph, D(E(U,BVar)), with respect to a variable, X ∈ Var, is the finite directed graph formed by condensing the vertices of D(E(U,BVar)) into equivalence classes according to the relation ((σ, v), X) ≡ ((τ, v), X) ↔ Γ∗X(σ) = Γ∗X(τ). A transition, (R, T), exists in DcX(E(U,BVar)) if and only if there exist r ∈ R and t ∈ T such that (r, t) is a transition in D(E(U,BVar)) and r ≡ t.

Because the set of contexts is finite, the condensate is a finite representation of an environment dependence graph. What is lost in this representation is the notion of valid paths, paths that respect the call and return structure inherent in the original UHSM. However, for the purpose of slicing (which requires only a reachability computation), these can be recovered via essentially the same valid path reachability algorithm used in the computation of interprocedural control dependence [EBP01], extended to the multi-entry, multi-exit case1. These slices can then be projected down to a set of variables or vertices to give a concise description of the set of sub-expressions or program locations

1 The individual variables constitute the multiple entry and exit points for each procedure.


on which the validity of the source state, variable pair depends.

However, what is frequently of more interest than the general transitive dependence information is the illustration of a set of concrete examples in the model that explain the validity of a specific state, variable pair. Both examples and counter-examples can be generated as finite walks in this graph. Since no transitions exist from a valid state, variable pair to an invalid pair, or vice versa, these classes of walks are always disjoint. The remainder of this section focuses on methods for generating these walks and a heuristic for determining when a walk constitutes an interesting case.

Definition 26. An example (counter-example) of ((σ0, v0), X0) in an environment dependence graph, D(E(U,BVar)), is a finite path in the graph, 〈((σ0, v0), X0), . . . , ((σn, vn), Xn)〉, in which one of the following three rule conditions holds:

Rule 1  Xn = [a]Y (Xn = 〈a〉Y) and ((σn, vn), Xn) has no successor in D(E(U,BVar)).

Rule 2  Xn = p and (σn, vn) ⊢ Xk ((σn, vn) ⊬ Xk) under the assumption that every modal-defined variable is invalid (valid) at sn, where k ≤ n is the maximum of zero (0) and the greatest index such that Xk−1 is modal-defined in BVar.

Rule 3  There exists i < n such that vi = vn, Xi = Xn, σi is a prefix of σn, Γ∗Xn(σi) = Γ∗Xn(σn), and Xn is defined in a max (min) block of BVar.

This definition encompasses a heuristic for determining the point at which a walk in an environment dependence graph constitutes an interesting example that finitely explains the validity of a variable at a state. Intuitively, a walk follows the dependence relation of the equation block system, using the modal operators to transition from one


state in the model to the next. The restriction of the environment dependence graph to states all of the same validity prevents the walk from taking a "wrong turn" at the branch points. The conditions for terminating the walk correspond to conditions where a contributory cause of the result originates from the final node. In plain language, the termination conditions can be described as follows:

Rule 1  The final element of the path is a modal-defined variable whose validity is trivially determined by the absence of a successor under the appropriate modality.

Rule 2  The final element of the path is an atomic-defined variable and the set of atomic propositions at the final state is sufficient to determine the validity of the variable that is the target of the final state transition in the path.

Rule 3  The path forms a valid loop, either within a single procedure or through a sequence of recursive calls, in the condensate graph with respect to the current variable. This implies a cyclic dependence within a fixed point of the appropriate type; the loop does not need to contain any other reason for its validity to be as it is.

Note that it is not necessary to have an explicit rule that terminates a walk as an example at either the terminal state of the program, P_0^exit, or any of the embedded halts, Uhalt. The validity of any variable at these states is covered by the existing rules.
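The termination test implied by these rules can be sketched as a small classifier over the current walk (illustrative only; the WalkNode fields and the precomputed rule predicates are assumptions, and the equal induced-context requirement of Rule 3 is folded into the prefix check's caller):

    // Sketch of the walk-termination check: a walk ends as an example when one
    // of the three rules fires at its final node.  The boolean arguments are
    // assumed to have been computed by the checker for the final node.
    #include <string>
    #include <vector>

    struct WalkNode {
        int         vertex;
        std::string variable;       // X_i in the equation block system
        std::string stackContext;   // sigma, encoded as a sequence of call sites
    };

    enum class Verdict { NotYetAnExample, Rule1, Rule2, Rule3 };

    Verdict classify(const std::vector<WalkNode>& walk,
                     bool modalDefinedWithNoSuccessor,   // Rule 1 condition
                     bool atomicPropsDecideTarget,       // Rule 2 condition
                     bool inCorrectFixpointBlock) {      // max for examples,
                                                         // min for counter-examples
        const WalkNode& last = walk.back();
        if (modalDefinedWithNoSuccessor)
            return Verdict::Rule1;
        if (atomicPropsDecideTarget)
            return Verdict::Rule2;
        // Rule 3: an earlier node with the same vertex and variable whose
        // stack-context is a prefix of the current one (the matching induced
        // analysis context is assumed to have been verified separately).
        for (std::size_t i = 0; i + 1 < walk.size(); ++i) {
            const WalkNode& prev = walk[i];
            if (prev.vertex == last.vertex && prev.variable == last.variable &&
                last.stackContext.rfind(prev.stackContext, 0) == 0 &&
                inCorrectFixpointBlock)
                return Verdict::Rule3;
        }
        return Verdict::NotYetAnExample;
    }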

The formal description of the second rule is interpreted as follows. Since there are no cyclic unguarded dependences in BVar, the validity of every variable- and junction-defined variable at a state can be determined from the equation block system by knowing the validity of the atomic- and modal-defined variables at that same state2. The final

2 This fact was used in Section 3.2 to define solve(X, Y).


condition of Rule 2 states that the modal-defined variables should be considered contrary to the validity of the path being walked, and that the validity of the first variable reached as a consequence of the final state transition in the walk should be reevaluated using those assumptions in conjunction with the validity of the atomic propositions at that state. An example application of this rule is provided later in this section.

The prefix component of Rule 3 is intended to capture repetition through successive calls to a recursive function. Note that while this does not technically constitute a loop in the environment dependence graph, it does constitute a valid path loop in the condensate graph and provides a close approximation to a cyclic dependence between the property sets associated with a vertex for those stack-contexts. These cyclic dependences drive the fixed point semantics of modal mu-calculus formulas.

4.1.2 A Complete Example

Figure 4.1 introduces example2, a sample program with a single global integer array, G, of length two. Figure 4.2 provides the mu-calculus formula and corresponding equation block form of a single query, φ, to be posed over example2. The top variable for φ is x0. The query is defined in terms of a single atomic proposition, Def(G[1]), that holds at a state if and only if G[1] is defined at that state. This atomic proposition holds at precisely two states in the UHSM representation of the program, (ε, v8) and (s14, v4). Since the proposition does not hold at (s11, v4), this is a context-sensitive atomic proposition. The query φ holds at a state if and only if there exists a path on which, in the future, the atomic proposition Def(G[1]) holds. As a CTL query, this is represented as EF Def(G[1]).


example2:
Global Integer G[2];

v0:  Procedure C(Integer a)
v1:      G[0] = a * a;
v2:

v3:  Procedure F(Integer i)
v4:      G[i] = G[0] + G[i];
v5:

v6:  Procedure main()
s7:      C(1);
v8:      G[1] = 0;
v9:      while(...cond_1...)
v10:         if(...cond_2...)
s11:             F(0);
             else
s12:             C(G[0] + 1);
v13:     while(...cond_3...)
s14:         F(1);
v15:     print(G[1]);
v16:

Figure 4.1: Sample Program example2 for the Example Generator

Optimized Equation Block Form: top = x0

neutral
    x1 = Def(G[1])
min
    x0 = x1 ∨ x2
    x2 = ◇x0

Figure 4.2: Equation Block Form of Sample Query φ = µX.(Def(G[1]) ∨ ◇X)


v     [Γ(v)](α)  [Γ(v)](β)  [Φ(v)](α)      [Φ(v)](β)
v0                          x0, x2          x0, x2
v1                          x0, x2          x0, x2
v2                          x0, x2          x0, x2
v3                          x0, x2          x0, x2
v4                          x0, x2          x0, x1, x2
v5                          x0, x2          x0, x2
v6                          x0, x2          x0, x2
s7    α          β          x0, x2          x0, x2
v8                          x0, x1, x2      x0, x1, x2
v9                          x0, x2          x0, x2
v10                         x0, x2          x0, x2
s11   α          β          x0, x2          x0, x2
s12   α          β          x0, x2          x0, x2
v13                         x0, x2          x0, x2
s14   β          β          x0, x2          x0, x2
v15                         ∅               ∅
v16                         ∅               ∅

The s-designated vertices also identify the elements of Σ by call-edge source.

Figure 4.3: Environment for Sample Query φ over example2 Cast as a Context-Sensitive Analysis ({α, β}, {x0, x1, x2}, Γ, Φ, α)


Figure 4.3 shows the environment that results from resolving this query over the sample program, represented as a single context-sensitive analysis, ({α, β}, {x0, x1, x2}, Γ, Φ, α). The output of the context-sensitive analysis necessary to encode the atomic proposition, Def(G[1]), is incorporated in the solution as variable x1. Notice that (ε, v6) ⊢ x0 since x0 ∈ [Φ(v6)](α). That is, the query (represented by its top variable, x0) holds at the entry of the program. This reflects that there exists a path in the model from the state (ε, v6) = P_0^entry to a state where Def(G[1]) holds. As an example, the following finite path fragment exhibits the validity of the query at (ε, v6). It terminates at a state, reachable from P_0^entry = (ε, v6), where the atomic proposition Def(G[1]) holds.

(ε, v6), (ε, s7), (s7, v0), (s7, v1), (s7, v2), (ε, v8)

Figure 4.4 provides an illustration of the slice of the environment dependence graph that is reachable from ((ε, v6), x0). Recall that, from the definition of an environment dependence graph, if ((σk, vk), xk) is reachable from ((ε, v6), x0), then it must be the case that (σk, vk) ⊢ xk since (ε, v6) ⊢ x0.

The following walk in the environment dependence graph is an example under Rule 2 of Definition 26 and corresponds to the path fragment provided above that exhibits the validity of the formula, φ, at (ε, v6).

((ε, v6), x0)0, ((ε, v6), x2)1, ((ε, s7), x0)2, ((ε, s7), x2)3, ((s7, v0), x0)4, ((s7, v0), x2)5,

((s7, v1), x0)6, ((s7, v1), x2)7, ((s7, v2), x0)8, ((s7, v2), x2)9, ((ε, v8), x0)10, ((ε, v8), x1)11

The projected state transition sequence for this example is the same as the corresponding finite path fragment. To see that this is an example under Rule 2, consider that


[Graph figure: the slice of the environment dependence graph reachable from ((ε, v6), x0), with one node for each reachable state, variable pair of example2 and edges following the dependence relation.]

Figure 4.4: Fragment of the Environment Dependence Graph for Sample Query φ over example2 Reachable from ((ε, v6), x0)

x0 = x1 ∨ x2. Therefore, if x1 is true at a state, then this is a sufficient condition to explain the validity of x0 at that state, regardless of any assumption about the validity of x2 at that state. As illustrated in Figure 4.3, (ε, v8) ⊢ x1, so this is a terminal element of an example under the rule. For this query and model, every example must be covered by this rule since there are no terminal modal-defined variables and no variable is defined within a max block.

To see why the variable type condition of Rule 3 makes sense, consider the following walk containing a loop from ((ε, v9), x0) to itself in the environment dependence graph.

((ε, v6), x0)0, ((ε, v6), x2)1, ((ε, s7), x0)2, ((ε, s7), x2)3, ((s7, v0), x0)4, ((s7, v0), x2)5, ((s7, v1), x0)6, ((s7, v1), x2)7, ((s7, v2), x0)8, ((s7, v2), x2)9, ((ε, v8), x0)10, ((ε, v8), x2)11, ((ε, v9), x0)12, ((ε, v9), x2)13, ((ε, v10), x0)14, ((ε, v10), x2)15, ((ε, s11), x0)16,


Optimized Equation Block Form: top = z0

neutral
    z4 = Def(G[1])
neutral
    z3 = z4 ∧ z0
min
    z2 = z3 ∨ z1
    z1 = ◇z2
max
    z0 = z1

Figure 4.5: Equation Block Form of Sample Query ψ = νX.µY.◇[(Def(G[1]) ∧ X) ∨ Y]

((ε, s11), x2)17, ((s11, v3), x0)18, ((s11, v3), x2)19, ((s11, v4), x0)20, ((s11, v4), x2)21,

((s11, v5), x0)22, ((s11, v5), x2)23, ((ε, v9), x0)24

That this is not an example under Definition 26 makes sense. While the infinite projected state sequence,

(ε, v6), (ε, s7), (s7, v0), (s7, v1), (s7, v2), (ε, v8), [(ε, v9), (ε, v10), (s11, v3), (s11, v4), (s11, v5)]∞

contains a state, (ε, v8), where the atomic proposition Def(G[1]) holds, the variable x1 representing the atomic proposition does not occur anywhere in the walk. Further, Def(G[1]) does not hold anywhere in the loop, and the validity of the variables x0 and x2 throughout the loop is strictly a consequence of the fact that Def(G[1]) holds at (s14, v4), which occurs in the future from every point in the loop but is not part of the infinite projected state sequence. This walk contains no information that explains the validity of the vertices it contains and hence is rightly not considered an example under Definition 26.

4.1.3 A Second Complete Example

Figure 4.5 provides the mu-calculus formula and equation block form of a second query, ψ, over example2. This query, with top variable z0, asserts that there exists a path


on which it globally holds that in the future the atomic proposition Def(G[1]) holds; it is logically equivalent to the CTL∗ formula EGF Def(G[1]). There is no logically equivalent representation of this query in CTL or any other alternation-free fragment of the modal mu-calculus.

In this case, the validity of the formula at P_0^entry can be exhibited by an infinite path starting at P_0^entry = (ε, v6),

(ε, v6), (ε, s7), (s7, v0), (s7, v1), (s7, v2), (ε, v8), (ε, v9), [(ε, v13), (ε, s14), (s14, v3), (s14, v4), (s14, v5)]∞,

that cycles infinitely through a state, (s14, v4), where the atomic proposition Def(G[1]) holds.

Figure 4.6 shows the environment that results from resolving this query over the sample program, represented as a single context-sensitive analysis, ({α, β}, {z0, z1, z2, z3, z4}, Γ, Φ, α). Figure 4.7 provides an illustration of the slice of the environment dependence graph that is reachable from ((ε, v6), z0), a state, variable pair where the assertion holds.

The following walk in the environment dependence graph is the example, this time

under Rule 3 of Definition 26, corresponding to the path above.

((ε, v6), z0)0, ((ε, v6), z1)1, ((ε, s7), z2)2, ((ε, s7), z1)3, ((s7, v0), z2)4, ((s7, v0), z1)5,

((s7, v1), z2)6, ((s7, v1), z1)7, ((s7, v2), z2)8, ((s7, v2), z1)9, ((ε, v8), z2)10, ((ε, v8), z1)11,

((ε, v9), z2)12, ((ε, v9), z1)13, ((ε, v13), z2)14, ((ε, v13), z1)15, ((ε, s14), z2)16,

((ε, s14), z1)17, ((s14, v3), z2)18, ((s14, v3), z1)19, ((s14, v4), z2)20, ((s14, v4), z3)21,


v     [Γ(v)](α)  [Γ(v)](β)  [Φ(v)](α)            [Φ(v)](β)
v0                          z0, z1, z2            z0, z1, z2
v1                          z0, z1, z2            z0, z1, z2
v2                          z0, z1, z2            z0, z1, z2
v3                          z0, z1, z2            z0, z1, z2
v4                          z0, z1, z2            z0, z1, z2, z3, z4
v5                          z0, z1, z2            z0, z1, z2
v6                          z0, z1, z2            z0, z1, z2
s7    α          β          z0, z1, z2            z0, z1, z2
v8                          z0, z1, z2, z3, z4    z0, z1, z2, z3, z4
v9                          z0, z1, z2            z0, z1, z2
v10                         z0, z1, z2            z0, z1, z2
s11   α          β          z0, z1, z2            z0, z1, z2
s12   α          β          z0, z1, z2            z0, z1, z2
v13                         z0, z1, z2            z0, z1, z2
s14   β          β          z0, z1, z2            z0, z1, z2
v15                         ∅                     ∅
v16                         ∅                     ∅

The s-designated vertices also identify the elements of Σ by call-edge source.

Figure 4.6: Environment for Sample Query ψ over example2 Cast as a Context-Sensitive Analysis ({α, β}, {z0, z1, z2, z3, z4}, Γ, Φ, α)


[Graph figure: the slice of the environment dependence graph reachable from ((ε, v6), z0), with one node for each reachable state, variable pair of example2 and edges following the dependence relation.]

Figure 4.7: Fragment of the Environment Dependence Graph for Sample Query ψ over example2 Reachable from ((ε, v6), z0)


((s14, v4), z0)22, ((s14, v4), z1)23, ((s14, v5), z2)24, ((s14, v5), z1)25, ((ε, v13), z2)26,

((ε, v13), z1)27, ((ε, s14), z2)28, ((ε, s14), z1)29, ((s14, v3), z2)30, ((s14, v3), z1)31,

((s14, v4), z2)32, ((s14, v4), z3)33, ((s14, v4), z0)34

Under Rule 3 of the definition, i = 22, since s14 is a prefix of s14 and z0 was defined in a max block in Figure 4.5. Notice that, in this case, there can be no examples derived from Rule 1 since every state has a successor under the trivial modality. Further, there can be no examples derived from Rule 2 since z2 is the only variable used in a modal formula and z2 = (z4 ∧ z1) ∨ z1. Hence, under the assumption that z1 = false, it must be that z2 = false at any state regardless of the validity of z4 at that state. This makes sense since the projected state sequence of any walk in the environment dependence graph of Figure 4.7 terminating at either ((ε, v8), z4) or ((s14, v4), z4) clearly does not have the path property specified by the formula.

Projecting the example down to a sequence of state transitions gives the sequence

(ε, v6), (ε, s7), (s7, v0), (s7, v1), (s7, v2), (ε, v8), (ε, v9), (ε, v13), (ε, s14), (s14, v3), (s14, v4), (s14, v5), (ε, v13), (ε, s14), (s14, v3), (s14, v4).

Notice that this finite sequence reflects a path from (ε, v6) to a state, (s14, v4), where the atomic proposition Def(G[1]) holds, appended to a loop from that state back to itself. This provides a finite description of what can clearly be expanded into an infinite path that exhibits the validity of variable z0 at (ε, v6). In essence, the heuristic of Definition 26 has generated a best finite approximation of an infinite path that exhibits the validity of the formula.


One optimization that might be considered for generating examples is the inclusion of summary edges, analogous to the ones used for interprocedural slicing [HRB90], in the environment dependence graph. Such edges could be included to represent transitive reachability across procedures. However, the example generated for this program cautions against employing these edges. In the environment dependence graph of Figure 4.7, ((ε, v13), z1) →∗ ((ε, v13), z2). However, the inclusion of such a summary edge would skip the "interesting" state, variable pair ((s14, v4), z0) that drives the example. Traditional summary edges cannot be used in the environment dependence graph without jeopardizing the effectiveness of the heuristic.

In many cases, the validity of a formula cannot be fully exhibited by the existence of a single (finite or infinite) path. For example, in cases where the formula is expressed as a conjunction of two sub-expressions, each represented by distinct variables in the equation block system, any walk in the environment dependence graph must, by necessity, choose one element of the conjunction to explore. In these cases, examples will only exhibit a cause of a single operand of the conjunction. Similarly, when a [a]-defined variable is valid at a state with multiple a-successors, the example must choose one among the a-successors to continue exploring. This occurs in cases where the validity of a formula is dependent on a property that holds on all paths. Here, the path can only illustrate a single example. In most of these cases, a single arbitrary example adds little to a programmer's understanding of the output. Consistent with Carnauba's design philosophy, these cases are predominantly handled by allowing the example path to be redirected at the programmer's discretion.


4.1.4 Generating Examples in Practice

The environment dependence graph is never explicitly generated. Rather, examples are generated by performing what is essentially a directed search over the program. This is feasible since, unlike a blind traversal, the search is directed by the environment to proceed along only those state, variable pairs with the same validity as the origin. Intuitively, each state, variable pair is valid or invalid for one or more reasons. The goal of the search, then, is to uncover one of those reasons. Put another way, what makes a traditional search over an entire program expansion infeasible is backtracking: exploring a path and discovering that it does not contain an example. Here, the need for backtracking is minimized by the pre-existence of the solution. Because of this, searches do not proceed down lengthy paths that do not lead to an example. In essence, the solution environment steers the example generator toward interesting cases.

While the transitions that define the environment dependence graph can be determined directly from the model and the query, the definition of the graph requires that the target of each transition have the same validity as the source. Because the graph is never explicitly constructed, this condition must be checked before each step in the search. Unfortunately, looking up the validity of a state, variable pair in the environment requires applying the stack-context transformer of the variable to the stack-context of the state. This requires time linear in the length of the stack-context. However, by associating with each element of the search path a set of stacks, one for each distinct stack-context transformer on which the source of the example may depend, holding the outputs of that transformer on the prefixes of the stack-context, the validity check is reduced to a constant time lookup. This set of stacks then only needs


to be updated (each stack either pushed with the new context or popped) when the search path transitions from one procedure to another.
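The constant-time check can be sketched with a small data structure (an illustration only; the TransformerStep interface and the container choices are assumptions): for every stack-context transformer the search may consult, a stack of already-transformed contexts is maintained, pushed on call transitions and popped on returns, so the current analysis context is always available at the top.

    // Sketch of the per-transformer context stacks: rather than re-applying a
    // stack-context transformer to the entire stack-context (linear time), the
    // search keeps the transformer's output on every prefix of the current
    // stack-context; the top of each stack answers a validity lookup in
    // constant time.
    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    using Context  = std::string;   // analysis context produced by a transformer
    using CallSite = std::string;   // one element of the stack-context
    // One transformer step: context so far + call-site pushed -> new context.
    using TransformerStep = std::function<Context(const Context&, const CallSite&)>;

    class ContextStacks {
    public:
        void registerTransformer(const std::string& name, TransformerStep step,
                                 Context initial) {
            steps_[name]  = std::move(step);
            stacks_[name] = {std::move(initial)};
        }
        // Called when the search path enters a procedure through call-site s.
        void push(const CallSite& s) {
            for (auto& [name, stack] : stacks_)
                stack.push_back(steps_[name](stack.back(), s));
        }
        // Called when the search path returns from a procedure.
        void pop() {
            for (auto& [name, stack] : stacks_)
                stack.pop_back();
        }
        // Constant-time lookup of the current context under a transformer.
        const Context& current(const std::string& name) const {
            return stacks_.at(name).back();
        }

    private:
        std::map<std::string, TransformerStep>      steps_;
        std::map<std::string, std::vector<Context>> stacks_;
    };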

Rules 1 and 2 require only a local inspection of the final few nodes of the current path to determine whether it constitutes an example. Rule 3 is more complex and requires that some aggregate information be retained about the current search path. As each node is appended to the search path, information from that node is pushed onto stacks that record information from the entire path. Stacks are used so that the information can be popped if the node is removed from the search path because of backtracking. First, the stack-context of the node is pushed onto a context-stack associated with the vertex, variable, and analysis context of the node. Second, a global index reflecting the order in which the node was created is pushed onto an index-stack associated with the variable of the node. These tables of stacks accelerate the loop test required by Rule 3 of Definition 26. Testing Rule 3 then requires retrieving the stack associated with the vertex, variable, and context of the node from the context-stack table. If it contains a stack-context that is a prefix of the stack-context of the current node, then this node constitutes a loop in the condensate graph. If the variable of the current node is of the correct type for the traversal being conducted, then an example has been identified. These tests are precluded in cases where it is determined beforehand that no example can be generated under this rule, as in the case of sample query φ over example2.
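The bookkeeping just described can be sketched as follows (illustrative only; the Node fields and the string encoding of stack-contexts are assumptions): each appended node pushes its stack-context onto a context-stack keyed by (vertex, variable, analysis context) and its creation index onto an index-stack keyed by variable, and the Rule 3 test becomes a prefix check against the matching context-stack.

    // Sketch of the Rule 3 bookkeeping: context-stacks keyed by (vertex,
    // variable, analysis context) and index-stacks keyed by variable.  A node
    // closes a loop in the condensate when some recorded stack-context with
    // the same key is a prefix of the node's own stack-context.
    #include <map>
    #include <string>
    #include <tuple>
    #include <vector>

    struct Node {
        int         vertex;
        std::string variable;
        std::string analysisContext;  // output of the stack-context transformer
        std::string stackContext;     // sigma, encoded as a string of call sites
        long        index;            // global creation order
    };

    struct Rule3Tables {
        using Key = std::tuple<int, std::string, std::string>;
        std::map<Key, std::vector<std::string>> contextStacks;
        std::map<std::string, std::vector<long>> indexStacks;

        void push(const Node& n) {
            contextStacks[{n.vertex, n.variable, n.analysisContext}]
                .push_back(n.stackContext);
            indexStacks[n.variable].push_back(n.index);
        }
        void pop(const Node& n) {         // used when backtracking removes n
            contextStacks[{n.vertex, n.variable, n.analysisContext}].pop_back();
            indexStacks[n.variable].pop_back();
        }
        // True when an earlier node with the same key has a stack-context that
        // is a prefix of n's stack-context (a loop in the condensate graph).
        bool closesLoop(const Node& n) const {
            auto it = contextStacks.find({n.vertex, n.variable, n.analysisContext});
            if (it == contextStacks.end()) return false;
            for (const std::string& earlier : it->second)
                if (n.stackContext.rfind(earlier, 0) == 0)
                    return true;
            return false;
        }
    };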

Backtracking among state transitions can only occur when a loop is found but the associated variable does not have the correct type. In these cases, there are two possibilities. If the path contains a node with an index between the indices of the nodes that bound the loop whose associated variable is of the correct type to constitute an


example under Rule 3, then the search proceeds using the previous iteration of the loop to guide the completion of the example. Otherwise, the loop terminates the forward progress of the search and backtracking occurs. The test requires comparing the top value of the index-stack for each variable of the appropriate type that has been traversed with the indices that bound the loop. This test prevents examples from being missed that require repetition of vertex, variable pairs with one stack-context that is the prefix of another. Simple heuristics are used to make branch selections that minimize the amount of this type of backtracking.

As an example of where the additional test is required, consider the path in the environment dependence graph of Figure 4.7 returned as an example for the second sample query, ψ. In that case, the sequence of state, variable pairs [14, 26] constitutes a loop in the search. However, the variable element, z2, of ((ε, v13), z2)26 is defined in a min block in Figure 4.5. Thus, this loop does not constitute an example under Rule 3 of the definition. However, the variable element, z0, of ((s14, v4), z0)22 is defined in a max block and index 22 ∈ [14, 26], so the search does not immediately backtrack. The sequence [26, 34] is then a repetition of the transitions from the sequence [14, 22] and completes the example.

That the search process always terminates is a consequence of the fact that it can also be seen as a search over the finite condensate of the environment dependence graph. In that graph, loops either terminate the forward progress of the search or result in the creation of a path suffix that immediately completes an example. Since the process returns a complete sequence of nodes, the search can be trivially redirected by providing a prefix of that sequence and a desired choice of alternate direction. The context- and


Benchmark                            Runs   Total Search        Example Path       Performance
Code          |V|    |P|   |Σ|              Nodes     States     Nodes    States    Time (s)  Nodes/Sec
postfix       217    9     27     20        115       58         98       51        0.02      5750.0
treecode      2455   67    225    20        2479      1240       1992     907       0.32      7476.9
188.ammp      11944  181   722    20        25610     13538      2301     1151      2.35      10897.9
179.art       1097   24    46     20        324       108        189      99        0.03      10800.0
256.bzip2     2699   67    208    20        9145      4798       1293     647       0.82      11152.4
129.compress  618    22    38     20        698       352        553      277       0.05      13960.0
183.equake    1359   26    31     20        958       481        861      431       0.09      10644.4
164.gzip      3230   77    235    20        15272     3959       2421     1211      1.53      9981.7
124.m88ksim   15478  345   1226   20        56441     30470      2871     1436      5.60      10078.7
197.parser    12616  307   1219   20        2719      1368       2547     1274      0.19      14310.5
Totals        —      —     —      200       113761    56372      —        —         11.0      10341.9

Figure 4.8: Performance of the Example Generation Heuristic

index-stacks are then popped as if the uninteresting portion of the search were backtracked away, and the search then proceeds along the new path.

In Carnauba, queries are tagged with instructions that determine under what circumstances an example or counter-example should be generated. This information is stored in the symbol table and prevents examples from being generated that would have no probative value. For example, the CTL query of the second example, EF Def(G[1]), might be tagged with instructions to generate an example if the query is true at a particular state but not to generate a counter-example if the query is false, since no meaningful counter-example can be generated to show the non-existence of a path.

Figure 4.8 presents empirical data from the example generation routine. Examples or counter-examples were generated for a selection of twenty representative queries for each benchmark. The benchmarks include a selection from the SPEC benchmark suites plus two additional codes, postfix and treecode, that are described in more detail later in this chapter. The results were averaged over these queries. Each benchmark is provided along with the number of vertices, procedures, and call-edges its corresponding UHSM contains. The Total Search columns provide the average number of nodes and distinct states traversed, while the Example Path columns report the average number of


nodes that actually constitute the generated example path. The average running time is provided in seconds along with the average number of search nodes traversed per second. The data demonstrates that examples of non-trivial length can be efficiently generated.

4.1.5 Semantic Path Analysis

A model is an abstraction of an actual program and model checking takes place with respect to this abstraction, not the program itself. This is necessary since even some trivial queries are undecidable when applied to un-abstracted programs. For example, the CTL query AF halt encodes a variant of the halting problem and is undecidable. Relying on program abstractions is also essential for controlling the running time of the model checking process. For most types of query, acceptable abstractions are ones that over-approximate the set of paths in the program. That is, there are paths in the model that do not correspond to actual executions of the code. When an affirmative result is returned as a consequence of these semantically invalid paths, this is referred to as a false positive. As an example, consider the program of Figure 4.1 with cond_3 replaced with the Boolean expression G[1] < 10. This condition bounds the number of iterations of the while loop of vertex v13 and invalidates the infinite path generated as an example to the sample query of Figure 4.5. Carnauba includes a primitive module to detect certain instances of false positive examples.

Figure 4.9 introduces a C implementation of a postfix calculator adapted from an example in the standard reference for the ANSI C programming language [KR88]. The program, postfix, uses a stack to evaluate simple arithmetic expressions provided in reverse polish notation.


postfix.c:

    /* constants */
    #define MAXOP   25    /* max size of operand or operator */
    #define NUMBER  '0'   /* signal that a number was found */
    #define MAXVAL  100   /* maximum value of val stack */
    #define BUFSIZE 10    /* buffer for pushing characters back on input */

    /* function declarations */
    char getch(void);
    void ungetch(char);
    char getop(char []);
    void push(double);
    double pop(void);

    /* globals */
    v0:  int sp = 0;                 /* next free stack position */
         double val[MAXVAL];         /* value stack */
    v1:  int bufp = 0;               /* next free position in buf */
         char buf[BUFSIZE];          /* buffer for ungetch */

    /* getch: get a (possibly pushed back) character */
    char getch(void)
    v2:  {
    v3:      return (bufp > 0) ? buf[--bufp] : getchar();
    v4:  }

    /* ungetch: push character back on input */
    void ungetch(char c)
    v5:  {
    v6:      if (bufp >= BUFSIZE)
    v7:          printf("ungetch: too many characters\n");
             else
    v8:          buf[bufp++] = c;
    v9:  }

    /* getop: get next operator or numeric operand */
    char getop(char s[])
    v10: {
             char c;
             int i;
    s11:     while ((s[0] = c = getch()) == ' ' || c == '\t')
                 ;
    v12:     s[1] = '\0';
    v13:     if (!isdigit(c) && c != '.')
    v14:         return c;                     /* not a number */
    v15:     i = 0;
    v16:     if (isdigit(c))                   /* collect integer part */
    s17:         while (isdigit(s[++i] = c = getch()))
                     ;
    v18:     if (c == '.')                     /* collect fraction part */
    s19:         while (isdigit(s[i++] = c = getch()))
                     ;
    v20:     s[i] = '\0';
    v21:     if (c != '\n')
    s22:         ungetch(c);
    v23:     return NUMBER;
    v24: }

Figure 4.9: Sample C Program postfix.c for the Semantic Path Analyzer


Figure 4.9 (Continued)

    /* push: push f onto value stack */
    void push(double f)
    v25: {
    v26:     if (sp < MAXVAL)
    v27:         val[sp++] = f;
             else
    v28:         printf("error: stack full, can't push %g\n", f);
    v29: }

    /* pop: pop and return top value from stack */
    double pop(void)
    v30: {
    v31:     if (sp > 0)
    v32:         return val[--sp];
             else {
    v33:         printf("error: stack empty\n");
    v34:         return 0;
             }
    v35: }

    /* reverse polish calculator */
    void main(void)
    v36: {
             char type;
             double op2;
             char s[MAXOP];
    s37:     while ((type = getop(s)) != '\n') {
                 switch (type) {
    v38:         case NUMBER:
    s39:             push(atof(s));
                     break;
    v40:         case '+':
    s41:             push(pop() + pop());
                     break;
    v42:         case '*':
    s43:             push(pop() * pop());
                     break;
    v44:         case '-':
    v45:             op2 = pop();
    s46:             push(pop() - op2);
                     break;
    v47:         case '/':
    s48:             op2 = pop();
    v49:             if (op2 != 0)              /* check for zero divisor */
    s50:                 push(pop() / op2);
                     else
    v51:                 printf("error: zero divisor\n");
                     break;
    v52:         case '%':
    v53:             op2 = pop();
    v54:             if (((int)op2) != 0)       /* check for zero divisor */
    s55:                 push((double)(((int)pop()) % ((int)op2)));
                     else
    s56:                 printf("error: zero divisor\n");
                     break;
    v57:         case '\n':
    v58:             printf("\t%.8g\n", pop()); /* display the answer */
                     break;
    v59:         default:
    v60:             printf("error: unknown command %s\n", s);
                     break;
                 }
             }
    v61: }


Optimized Equation Block Form:    top = z0

    neutral   z3 = ¬SP+
    neutral   z2 = SP−
    min       z4 = ◇z0
              z1 = z3 ∧ z4
              z0 = z1 ∨ z2

Figure 4.10: Equation Block Form of the CTL Query E[¬SP+ U SP−]

The stack is implemented as a global array, val, with a global integer, sp, used to track the top index of the stack. This data structure is manipulated in the usual way by two C functions, push and pop. Execution of the program is controlled by an infinite loop that, on each iteration, reads a value or arithmetic operator from the input and then operates on the stack accordingly.

A property essential to the correct execution of this program is that the stack should not be popped before it is pushed. Specifically, the stack pointer, sp, should not be decremented before it is incremented. If SP+ is an atomic proposition corresponding to states where the stack pointer is incremented and SP− is an atomic proposition corresponding to states where the stack pointer is decremented, then a violation of this property corresponds to the validity of the CTL query E[¬SP+ U SP−] at the start of the program. Figure 4.10 provides the optimized equation block form of this query. For this example, both atomic propositions are context-insensitive: SP+ holds at every instance of v27 and SP− holds at every instance of v32.

The model checker returns that the query is valid from the start of the program. The

example generator returns an example path,

(ε, v0), . . . , (ε, v36), . . . , (ε, v40), (ε, s41), (s41, v30), (s41, v31), (s41, v32),

that reflects a decrement to the stack pointer before any increment. In the execution of


the program, this would seem to correspond to the input “+”, which attempts to add before any operands have been pushed to the stack. However, this path is semantically infeasible because of the branch choice at (s41, v31). The variable sp is initialized to zero. The branch condition implicitly tests whether sp has been incremented before allowing it to be decremented. In this example, the program would branch to the error message at (s41, v33) and then exit. This example is representative of a broad class of false positives and a simple semantic analysis is sufficient to detect that this example is infeasible.

Given a path, Carnauba's semantic path analyzer attempts to symbolically execute the statements on the path searching for a branch inconsistency. At each step in the path, the corresponding expressions are evaluated using the current value of the variables involved. When values are unknown, the expression is partially evaluated using the known values. If these values are insufficient to complete the evaluation, a result of unknown is returned. The analyzer records the set of variables updated by the statement³. Updated values are pushed onto a value-stack for each memory location. These stacks allow the recovery of intermediate values of variables at any point along the path by popping the updated values associated with each element of the unwanted suffix. Since example paths do not consider multiple executions of loops, the analyzer treats loops that terminate as atomic with post-conditions that are either independently derived or user-supplied. Unknown values are pushed for variables that may be modified in the loop but for which post-conditions cannot be established. For paths that assume the infinite execution of a loop, the semantic path analyzer uses a simple heuristic to

³When the left hand side location of an assignment is unknown, a “block” is pushed that makes the value unknown for every possible left hand side location until a new value is discovered.


determine if this possibility is trivially precluded. If the path is found to be infeasible, the analyzer returns with the greatest feasible prefix along with the table of value-stacks for that prefix. At the discretion of the user, example generation can then be restarted from that prefix along a new path. The example generator can then continue checking feasibility from the beginning of the new suffix. At any point in the process, values and post-conditions known by the programmer can be supplied to make the analysis more precise.
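To make the value-stack mechanism concrete, the sketch below shows one way the per-location stacks and the partial branch evaluation could be realized in C. It is only an illustration of the idea described above, not Carnauba's implementation; the type and function names (Value, Cell, push_value, value_at, test_greater) and the restriction to integer values are assumptions made for the sketch.

    /* Minimal sketch of the per-location value-stacks used by the semantic
     * path analyzer.  All names are illustrative placeholders. */
    #include <stdlib.h>

    typedef enum { KNOWN, UNKNOWN } Tag;

    typedef struct {                    /* a possibly-unknown integer value      */
        Tag  tag;
        long v;
    } Value;

    typedef struct Cell {               /* one entry on a location's value-stack */
        Value        val;
        int          path_index;        /* path position that pushed this value  */
        struct Cell *next;
    } Cell;

    typedef struct { Cell *top; } ValueStack;

    /* Record the value assigned to a location at a given path position. */
    static void push_value(ValueStack *s, Value v, int path_index)
    {
        Cell *c = malloc(sizeof *c);
        c->val = v;
        c->path_index = path_index;
        c->next = s->top;
        s->top = c;
    }

    /* Recover the value a location held at a path position by skipping the
     * entries pushed by the unwanted suffix of the path. */
    static Value value_at(const ValueStack *s, int path_index)
    {
        Value unknown = { UNKNOWN, 0 };
        for (const Cell *c = s->top; c != NULL; c = c->next)
            if (c->path_index <= path_index)
                return c->val;
        return unknown;                 /* nothing pushed yet: value unknown */
    }

    /* Evaluate a branch condition of the form  loc > constant.
     * Returns 1 (true), 0 (false), or -1 (unknown). */
    static int test_greater(const ValueStack *loc, int path_index, long constant)
    {
        Value v = value_at(loc, path_index);
        if (v.tag == UNKNOWN)
            return -1;
        return v.v > constant;
    }

In the postfix example that follows, push_value would record the value 0 for sp at path element (ε, v0), and test_greater applied at (s41, v31) would return 0, exposing the branch inconsistency.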

This approach is sufficient to determine the infeasibility of the example path for postfix. At the path element (ε, v0) the value “0” is pushed to the value-stack associated with the memory location sp. This value remains at the top of the sp value-stack until the condition at (s41, v31) is reached. At this point, sufficient information exists to determine that the branch condition sp > 0 is false. The path is then returned as infeasible from this point.

More sophisticated approaches to path analysis exist. For autonomous verification, semantic path analysis is used to either refine the model or generate supplemental queries. Bebop [BR00, BR01a, BR01b], part of the Microsoft Research SLAM toolkit, checks second-order properties over Boolean abstractions of programs. An independent module, Newton, checks the paths for semantic validity. In their approach, when a path is generated whose validity is indeterminate, new queries are instantiated to generate the required information. The process iterates until a valid path is found or every possible path has been determined to be infeasible. Blast [HJMM04] uses Craig interpolants to extract new abstractions from the formal proofs of infeasibility. These interpolants identify the “cut point” in the proof at which the values of variables are inconsistent.


These inconsistencies are then used to generate new predicates that refine the abstraction. Because Carnauba is intended as an interactive tool rather than as an autonomous verification system, the semantic path analyzer does not automatically spawn new invocations of the example generator.

4.2 Constraint Queries

For each original query, the model checking engine produces a context-sensitive analysis that globally solves the query over the choice of model. The variables from the encoding equation block system are the properties of this analysis. From this representation, the solution to the query can be recovered for any state in the model and meaningful examples can be generated.

However, for many applications of context-sensitive analyses, what is of interest is not the set of properties associated with a specific stack-context, vertex pair. Rather, what is necessary is the solution to the dual problem: given a vertex and a desirable set of solutions, what is the set of potential stack-contexts leading to that vertex that result in one of the desirable solutions? In this section a broad generalization of the dual problem is introduced in the form of the constraint query. Constraint queries translate formalized restrictions on the set of stack-contexts and the set of desirable solutions for a vertex into an automaton that precisely accepts valid stack-contexts for the vertex meeting those constraints. A practical technique based on manipulating regular languages is presented for resolving these queries that does not require the computation of any additional fixed points. Empirical data derived from using constraint queries to solve a set of program comprehension exercises for a selected benchmark is provided.


mutual:

    Global Integer G

    v0:  Procedure A() {
    v1:    if (...cond_1...)
    s2:      C();
    s3:    B();
    v4:  }

    v5:  Procedure B() {
    v6:    if (...cond_2...)
    s7:      A();
    s8:    C();
    v9:  }

    v10: Procedure C() {
    v11:   print(G);
    v12:   G := 1;
    v13: }

    v14: Procedure main() {
    v15:   G := 0;
    v16:   if (...cond_3...)
    s17:     A();
           else
    s18:     B();
    v19: }

Figure 4.11: Sample Program mutual Exhibiting Mutual Recursion

Additional applications of this query framework are explored in Chapter 5.

Figure 4.11 introduces the skeleton of an imperative program, mutual, exhibiting mutual recursion between two procedures, A and B. The labels correspond to elements of the vertex set, V, in the standard unrestricted hierarchical state machine representation of the program. The s-designated labels also refer to elements of the set of call-edges, Σ, by their unique source. For example, s2 is a call-vertex in procedure A that is the source of a call-edge, (s2, v10), to procedure C. As is always the case, main is the singular procedure whose entry- and exit-vertex represent the beginning and end of the program, respectively.

Given the program, mutual, of Figure 4.11, consider the following code comprehension exercise:

What paths in the program to the assignment at v12 may lead to the print statement at


v11 where the call stack at v12 begins with successive calls to procedures A and B?

Even for such a trivial program, this is a non-trivial code comprehension exercise due to the mutual recursion. Answering the question at v12 requires knowledge of both past and possible future events. In fact, the solution to this query will later be seen to contain an infinite number of paths. As an example, the path generated by stack-context s17s3s7s2 is one solution. Describing the complete set of paths is made difficult by the fact that there are only two call sites to procedure C and each leads to instances of v12 that correspond to paths that are solutions to the exercise (e.g. s17s3s7s2 and s17s3s7s3s8) and paths that are not solutions to the exercise (e.g. s17s2 and s18s8).

A context-sensitive analysis, ({α, β}, {p}, Γ, Φ, α), applicable to the exercise is provided in Figure 4.12. Intuitively, the property p denotes that the print statement at v11 in procedure C may occur in the future. This is a sanitized version of the output from the CTL query, EF v11. Contexts α and β distinguish between assumptions about whether p holds at the exit of a procedure, with α being the assumption that it does not hold. This is the analysis over which a constraint query will be posed to solve the exercise.

Formally, constraint queries are defined as follows.

Definition 27. Given a context-sensitive analysis, (C, X, Γ, Φ, κ), over a model with vertex set V and call-edge alphabet Σ, a constraint query on the analysis takes an ordered triple, (v, c, ∆), where v is a vertex in the model, c is a regular expression over Σ, and ∆ is a decidable subset of 2^X, and returns an automaton, M, that accepts σ ∈ Σ∗ if and only if

1. σ is a valid stack-context for v, and

2. σ is in the language of c, and


          Context Transformers       Property Transformers
 v        [Γ(v)](α)   [Γ(v)](β)      [Φ(v)](α)   [Φ(v)](β)
 v0                                   p           p
 v1                                   p           p
 s2       β           β               p           p
 s3       α           β               p           p
 v4                                   ∅           p
 v5                                   p           p
 v6                                   p           p
 s7       β           β               p           p
 s8       α           β               p           p
 v9                                   ∅           p
 v10                                  p           p
 v11                                  p           p
 v12                                  ∅           p
 v13                                  ∅           p
 v14                                  p           p
 v15                                  p           p
 v16                                  p           p
 s17      α           β               p           p
 s18      α           β               p           p
 v19                                  ∅           p

The s-designated vertices also identify the elements of Σ by call-edge source.

Figure 4.12: A Context-Sensitive Analysis, ({α, β}, {p}, Γ, Φ, α), over mutual where p = EF v11


3. ρ(σ, v) ∈ ∆

where ρ : Σ∗ × V → 2^X is the solution function induced by the analysis.

Given this formulation, the set of stack-contexts that solve the exercise can be seen as the solution to the constraint query

    (v12, (s7 + s17)(s3 + s18)(s2 + s3 + s7 + s8 + s17 + s18)∗, {{p}})

on the analysis solving p = EF v11 of Figure 4.12.

The objective of a constraint query is to find the valid stack-context pre-image of the solution function ρ given a vertex, v, and a solution constraint, ∆. The stack-context constraint, c, further restricts that pre-image to some subset of particular interest. In this case, the solution constraint, {{p}}, requires that the print statement at v11 may occur in the future, and the stack-context constraint, (s7 + s17)(s3 + s18)(s2 + s3 + s7 + s8 + s17 + s18)∗, requires that the stack-context begin with successive calls to A and B followed by an arbitrary sequence of calls. Resolving and applying constraint queries revolves around intersecting regular languages that encode each of the constraints.

4.2.1 Valid Stack-Contexts

The set of valid stack-contexts for a vertex, v, can be described as a regular language over the set of call-edges, Σ. This is demonstrated by constructing a finite automaton that precisely accepts the set of valid stack-contexts for v.

Theorem 2. Given v, a vertex in an UHSM, U, the set of valid stack-contexts for v is precisely accepted by a finite automaton, Mv = (P ∪ {ε}, Σ, main, δ, Pv), where P ∪ {ε} is the set of procedures of U, P, plus a dead state, ε, Σ is the set of call-edges in


U, the procedure main in U is the initial state, δ(qs, σ) = qt where qt is the procedure in U containing the target of σ if and only if qs ≠ ε and qs is the procedure containing the source of σ (otherwise qt = ε), and Pv is the singleton set of accepting states consisting only of the procedure containing v in U.

Proof. If σ = (σ0 . . . σn) is a valid stack-context for v then the source of σ0 is in main, the initial state of Mv. For each σi in σ, δ(qs, σi) = qt in Mv, where qs is the procedure containing the source of σi and qt is the procedure containing the target of σi, since σi is an element of a valid stack-context for v. Finally, the target of σn must reside in the procedure containing v and this is an accepting state of Mv. Thus, Mv accepts σ.

Conversely, if Mv accepts σ, then there is a sequence of states, q0 . . . qn, that accepts σ. Since no accepting sequence contains ε, each state in the sequence is a procedure in U. By definition of Mv, q0 is the procedure main in U. For each subsequence qiqi+1 in the accepting sequence, δ(qi, σi) = qi+1; thus, σi is a call-edge whose source is a call-vertex in the procedure qi and whose target is the entry-vertex of the procedure qi+1. Finally, since qn is the single accepting state, it must be the procedure containing v. Hence, σ is a valid stack-context for v.

Weihl [Wei80] has shown that constructing the call graph (which is required to build Mv in Theorem 2) is PSPACE-hard for recursive programs with function pointers. However, a model that conservatively over-estimates the set of possible calls can be used. Such a model is usually constructed by treating each call through a function pointer as a multi-way branch, with each successor calling one of the possible pointer targets. This makes the calling relation explicit and Mv then accepts stack-contexts that are valid with respect to the induced call graph.
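To make the construction concrete, the following sketch tabulates δ for the mutual example of Figure 4.11 and runs it over a candidate stack-context. The numbering of procedures and call-edges and the array names src and tgt are choices made only for this illustration.

    /* Sketch: the valid stack-context automaton Mv of Theorem 2 for mutual.
     * Procedures: main = 0, A = 1, B = 2, C = 3; dead state = NP.
     * Call-edges: s2 = 0, s3 = 1, s7 = 2, s8 = 3, s17 = 4, s18 = 5. */
    #include <stddef.h>

    enum { NP = 4, NS = 6 };
    #define DEAD NP

    static const int src[NS] = { 1, 1, 2, 2, 0, 0 };  /* procedure holding the source  */
    static const int tgt[NS] = { 3, 2, 1, 3, 1, 2 };  /* procedure entered by the edge */

    /* delta(q, e): follow call-edge e from state (procedure) q. */
    static int delta(int q, int e)
    {
        if (q == DEAD || src[e] != q)
            return DEAD;              /* edge cannot be taken from q */
        return tgt[e];
    }

    /* Accept sigma[0..n-1] as a valid stack-context for a vertex whose
     * enclosing procedure is accept_proc. */
    static int accepts(const int *sigma, size_t n, int accept_proc)
    {
        int q = 0;                    /* main is the initial state */
        for (size_t i = 0; i < n; i++)
            q = delta(q, sigma[i]);
        return q == accept_proc;
    }

With accept_proc set to C, the encodings of both s17 s3 s7 s2 and s17 s2 are accepted, matching the valid stack-contexts for v12 discussed above.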


[Automaton diagram. States: main (initial), A, B, C (accepting), and a dead state ε. Transitions: main to A on s17 and main to B on s18; A to C on s2 and A to B on s3; B to A on s7 and B to C on s8; every other call-edge leads to ε, which loops on all call-edges s2, s3, s7, s8, s17, s18.]

Figure 4.13: Automaton Accepting Lv, the Language of Valid Stack-Contexts for v12

Figure 4.13 illustrates the finite automaton that accepts Lv, the language of valid stack-contexts for v12.

4.2.2 Stack-Context Constraints

A stack-context constraint is a regular expression over the set of call-edges, Σ, in an unrestricted hierarchical state machine, U. The following macro notation is introduced to simplify the expression of stack-context constraints.

Notation 1. Given P, a procedure in U, ΠP refers to the regular expression τ0 + . . . + τn where the set T = {τ0, . . . , τn} is the subset of Σ such that σ ∈ T if and only if the target of call-edge σ is the entry-vertex of P in U.

This captures the notion of any call to a specified procedure.

Notation 2. The symbol Ω refers to the regular expression (σ0 + . . . + σn)∗ = Σ∗, where Σ = {σ0, . . . , σn}.


For stack-context constraints, Ω is useful as a wildcard literal standing for an arbitrary sequence of call-edges. Using this notation, the stack-context constraint of the constraint query for solving the exercise is Lc = ΠAΠBΩ.

Regular expressions are powerful enough to express most of the constraints that are of interest for this type of query. They can encode fixed sequences of calls as well as any criteria based on a finite decision tree of calls. The Kleene-star operator allows constraints to include the notion of zero or more calls to a recursive function as well as notions of every n-th invocation of a recursive function. Finally, the closure properties of regular expressions allow them to express the conjunction, disjunction, and negation of constraints.
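As a brief worked illustration over mutual (Figure 4.11): under Notation 1, ΠA = s7 + s17, ΠB = s3 + s18, and ΠC = s2 + s8, so Lc = ΠAΠBΩ expands to exactly the regular expression (s7 + s17)(s3 + s18)(s2 + s3 + s7 + s8 + s17 + s18)∗ used in the constraint query above, while a constraint such as ΩΠC would instead restrict attention to stack-contexts whose innermost call enters C.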

4.2.3 Solution Constraints

A solution constraint for a constraint query is a decidable subset of the power set of the properties of a context-sensitive analysis. Given such a constraint, there exists a regular language, again over the alphabet of call-edges, Σ, that precisely includes the set of stack-contexts generating an element of the constraint as a solution for a vertex v. This is again demonstrated by constructing a finite automaton that precisely accepts such a language.

Theorem 3. Given v, a vertex in an UHSM, U, a context-sensitive analysis, (C, X, Γ, Φ, κ) over U, and a solution constraint ∆, a decidable subset of 2^X, then the set of stack-contexts, σ, for v such that ρ(σ, v) ∈ ∆, is precisely accepted by a finite automaton, M∆ = (C, Σ, κ, δ, A), where C is the set of contexts in the analysis, Σ is the set of call-edges in U, κ is the initial context of the analysis, δ(αi, σ) = [Γ(σ)](αi), and


[Automaton diagram. States: α (initial) and β (accepting). Transitions: α to β on s2 and s7; α loops on s3, s8, s17, s18; β loops on every call-edge s2, s3, s7, s8, s17, s18.]

Figure 4.14: Automaton Accepting L∆, the Language of Stack-Contexts, σ, such that ρ(σ, v12) ∈ ∆

A is the accepting set of contexts, α, such that [Φ(v)](α) ∈ ∆.

Proof. Given a stack-context, σ = σ0 . . . σn, by definition of M∆, δ(δ(. . . δ(κ, σ0) . . . , σn−1), σn) = [Γ(σn) ∘ · · · ∘ Γ(σ0)](κ) = Γ∗(σ). Thus, if ρ(σ, v) = [Φ(v)](Γ∗(σ)) ∈ ∆ then M∆ terminates in state c = Γ∗(σ), which is an accepting state. Thus, M∆ accepts σ.

Conversely, if M∆ accepts σ then M∆ terminates in a context, α, such that [Φ(v)](α) = [Φ(v)](Γ∗(σ)) = ρ(σ, v) ∈ ∆. Therefore, σ satisfies the solution constraint.

It is not necessary to explicitly generate the solution constraint set ∆. It is adequate to define a constraint by a function that decides it. Alternatively, the solution constraint set can be reduced to a single element by using the function that decides ∆ to project the analysis down to the single property of being included in ∆. The technique for performing this projection was presented in Section 3.1.2. Further, any choice of ∆ can be made decidable (finite) by intersecting it with the finite set of property sets that are mapped-to by the finite number of context, vertex pairs covered by the analysis.
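Operationally, Theorem 3 says that running M∆ on a stack-context amounts to composing the context transformers of its call-edges and then testing the property transformer of the query vertex against ∆. The sketch below hard-codes the analysis of Figure 4.12 for v12, encoding contexts α and β as 0 and 1 and numbering the call-edges s2, s3, s7, s8, s17, s18 in that order; these encodings and names are assumptions of the sketch.

    /* Sketch: simulating the solution-constraint automaton of Theorem 3
     * over the analysis of Figure 4.12 with Δ = {{p}}. */
    enum { ALPHA = 0, BETA = 1 };

    /* gamma_tbl[e][c] = [Γ(e)](c), the context transformer of call-edge e. */
    static const int gamma_tbl[6][2] = {
        { BETA,  BETA },   /* s2  */
        { ALPHA, BETA },   /* s3  */
        { BETA,  BETA },   /* s7  */
        { ALPHA, BETA },   /* s8  */
        { ALPHA, BETA },   /* s17 */
        { ALPHA, BETA },   /* s18 */
    };

    /* [Φ(v12)](c): does property p = EF v11 hold at v12 under context c? */
    static int phi_v12_has_p(int c) { return c == BETA; }

    /* Accept sigma iff ρ(σ, v12) satisfies the solution constraint. */
    static int m_delta_accepts(const int *sigma, int n)
    {
        int c = ALPHA;                      /* κ, the initial context     */
        for (int i = 0; i < n; i++)
            c = gamma_tbl[sigma[i]][c];     /* δ(c, σi) = [Γ(σi)](c)       */
        return phi_v12_has_p(c);            /* accept iff [Φ(v12)](c) ∈ Δ  */
    }

Running this on the encoding of s17 s3 s7 s2 ends in β and therefore accepts; the stack-context and validity requirements are enforced separately by Lc and Lv.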

Notation 3. The construct, λX.[X ∈ ∆]?, is used to denote a function that takes a set of properties and decides ∆.

Figure 4.14 illustrates the finite automaton that accepts L∆, the language of stack-contexts, σ, such that ρ(σ, v12) = {p}.

4.2.4 Resolving a Constraint Query

Given a context-sensitive analysis over an UHSM with call-edge alphabet Σ, let Lv be the language corresponding to the valid stack-context requirement, let Lc be the language corresponding to the stack-context constraint, and let L∆ be the language corresponding to the solution constraint. Each is a regular language contained in Σ∗. The solution to the constraint query is then the automaton accepting

    L = Lv ∩ Lc ∩ L∆.

This automaton, which can be minimized using the standard algorithm [Huf54], encapsulates the set of solutions to the constraint query. It precisely accepts the set of valid stack-contexts for v12 that meet both the stack-context constraint as well as the solution constraint and thus satisfy all three criteria of the definition.

Figure 4.15 illustrates the solution to the example constraint query. This automaton exactly captures the infinite set of stack-contexts, and thus implicitly the set of paths in the program, that are solutions to the exercise posed in the introduction to this section. For example, notice that this automaton accepts stack-contexts s17s3s7s2 and s17s3s7s3s8, but rejects valid stack-contexts s17s2 and s18s8.

In essence, the construction of the automaton has extracted the minimal information from the analysis that is necessary to solve the problem and restricted it according to the constraints. Once the automaton has been constructed, the source analysis can be discarded.
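The intersection itself is ordinary automata arithmetic. The generic product construction sketched below (the Dfa structure and table layout are invented for the illustration) can be applied twice, first to the automata for Lv and Lc and then to the automaton for L∆, with the result minimized to obtain the automaton of Figure 4.15.

    /* Sketch: product construction for intersecting two DFAs over the
     * call-edge alphabet.  Transition tables are row-major; the initial
     * state of the product is the pair of the two initial states. */
    #include <stdlib.h>

    typedef struct {
        int  nstates, nsyms;
        int *delta;      /* delta[q * nsyms + a]                 */
        int *accept;     /* accept[q] != 0  =>  q is accepting   */
    } Dfa;

    static Dfa intersect(const Dfa *a, const Dfa *b)
    {
        Dfa p;
        p.nstates = a->nstates * b->nstates;
        p.nsyms   = a->nsyms;                 /* both DFAs share Σ */
        p.delta   = malloc(sizeof(int) * p.nstates * p.nsyms);
        p.accept  = malloc(sizeof(int) * p.nstates);

        for (int qa = 0; qa < a->nstates; qa++)
            for (int qb = 0; qb < b->nstates; qb++) {
                int q = qa * b->nstates + qb;          /* pair state (qa, qb) */
                p.accept[q] = a->accept[qa] && b->accept[qb];
                for (int s = 0; s < p.nsyms; s++) {
                    int ra = a->delta[qa * a->nsyms + s];
                    int rb = b->delta[qb * b->nsyms + s];
                    p.delta[q * p.nsyms + s] = ra * b->nstates + rb;
                }
            }
        return p;
    }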


[Automaton diagram. States are pairs of a procedure and an analysis context (for example (main, β), (A, α), (B, α), (C, β)) together with a dead state ε; transitions are labeled by the call-edges s2, s3, s7, s8, s17, s18. The automaton accepts exactly the valid stack-contexts for v12 that satisfy both the stack-context constraint and the solution constraint.]

Figure 4.15: Minimal Automaton Accepting L = Lv ∩ Lc ∩ L∆, the Language of Solutions to the Constraint Query

4.2.5 Optimizing Constraint Queries

The number of states in the intersection automaton, M, is bounded above by the product of the number of states in the automata accepting Lv, Lc, and L∆. Minimizing this product automaton requires what is essentially a greatest-fixed-point computation over pairs of states in the model. While algorithms to perform these intersection and minimization operations have been refined, the time required for the construction can be reduced by restricting the scope of the automata encoding the valid stack-context and solution constraints. In fact, these optimizations are essential to the practicality of this approach and are assumed in the presentation of empirical data for the remainder of this section and in the discussion of additional applications in Chapter 5.

For the valid stack-context constraint, the set of states comprising Mv can be restricted to P′ ∪ {ε}, where P′ is the set of procedures that occur on some path from


main to the procedure containing the vertex in the calls relation of U. Transitions from states in this set to states not in this set are redirected to ε. Transitions from states not in this set to states in this set are removed. It is trivial to see that this machine still accepts precisely the set of valid stack-contexts for the vertex.

Likewise, for the solution constraint, the set of states comprising M∆ can be restricted to C′ ∪ {ε}, where C′ is the set of contexts that are live for procedures in P′ and ε is a new dead state. Transitions from states in C′ to states not in C′ are redirected to ε. Transitions from states not in C′ to states in C′ are removed. The new state, ε, transitions to itself on all inputs. This machine accepts L′∆, where Lv ∩ L∆ ⊆ L′∆ ⊆ L∆. This does not change the automaton solution to the constraint query since

    L = Lv ∩ Lc ∩ L∆ ⊆ Lv ∩ Lc ∩ L′∆ ⊆ Lv ∩ Lc ∩ L∆ = L.

4.2.6 Application to Code Comprehension

Code comprehension refers to the problem of programmers asking questions and getting answers about the behavior of a program. Comprehension is an essential element in both extending legacy source bases as well as ensuring the quality of software. Constraint queries provide an additional mechanism for exploiting context-sensitive analyses for this purpose.

The program treecode [Bar90] is a C implementation of a popular algorithm for performing n-body simulation based on octtrees. A context-sensitive live variable analysis, described in detail in Section 5.1, is supplied for this code. This analysis, given a statement and a stack-context, returns the set of program variables that may be used from that point before they are redefined or the program terminates. Properties in this


                          Query Constraints                            Automaton States        CPU
Location        Context              Solution                         Mc   Mv   M∆    M    Time (ms)
maketree:45     Ω                    λS.[root ∉ S]?                    1    6    1    6         80
makecell:109    Ω                    λS.[root ∈ S]?                    1    8    5    8        120
savestate:328   ΠoutputΠsavestate    λS.[Saved ⊂ S]?                   5    6    1    1         90
cputime:40      ΩΠstepsystemΩ        λS.[|S| > 35]?                    2    9    9    8        281
gravsum:212     ΩΠstepsystemΩ        λS.[size(S) > 640 bytes]?         2    9    4    9        251
allocate:27     ΩΠmakecellΩ          λS.[root ∉ S ∨ bodytab ∉ S]?      2   15    1    9        441

treecode: Lines = 2711, |P| = 66, |Σ| = 224, |C| = 315, and |X| = 953

Figure 4.16: Examples of Constraint Queries for Program Comprehension

analysis correspond to the liveness of the individual program variables, and the contexts correspond to sets of variables that are possibly live at the end of some procedure.

Figure 4.16 provides performance information on a selection of constraint queries made over this analysis. The queries are meant to answer meaningful, non-trivial questions about the live variable sets associated with different points in the code that could be useful for understanding or modifying the behavior of the program. Locations are provided as a procedure name followed by a source code line number. Variables root and bodytab are the entry points for the octtree and body position-velocity vector data structures, respectively. The code contains a routine, savestate, for recording its state to a file, and Saved refers to the set of variables that the routine records. Automaton States refers to the number of states in the minimized automaton accepting each of the constraint languages. Time gives the number of milliseconds required to resolve the constraint query given the output of the analysis.

Notice that the solution constraints take advantage of aggregate properties of the analysis. That is, the queries are not limited to the liveness of particular combinations of variables, but also include other decidable properties of the live variable set such as its cardinality and memory footprint. A query such as the one associated with the location in savestate could be used to check if there are cases where elements of the live


variable set are being omitted by the state saving routine. As the performance data in Figure 4.16 illustrates, this is a practical technique for resolving actual queries over a context-sensitive analysis for a real program.

The practicality of this technique relies critically on the size of the constraint and solution automata being small. In practice, the pre-reduction techniques outlined in Section 4.2.5 trivially restrict the automata to a reasonable number of states before they are passed to the more time-intensive intersection and minimization routines. For the valid stack-context constraint, the number of states is limited to one plus the number of procedures that occur on some path in the call graph from main to the procedure containing the query vertex. For programs with acyclic call graphs, this number also bounds the number of states in the solution constraint automaton. Potential blow-ups are limited to recursive cases where the live context set associated with a recursive function is large and cases where function pointers force an extremely conservative approximation of the call graph.

4.2.7 Conclusions and Future Work

Previous work [RHS95] demonstrated how demand based queries on an interprocedural analysis could be recast as reachability problems over an exploded supergraph representation of the analysis. In this framework, it is possible to determine if a specific property holds at a specific vertex for some stack-context by doing a reachability computation over the entire interprocedural control-flow graph representation of the program. Like this technique, these queries could be solved without the computation of any additional fixed points. Unlike this technique, the reachability computation produces a complete


path in the program. However, this technique is not robust with respect to stack-context constraints and cannot be used for demand queries on aggregate property sets.

This technique is an improvement on the bounded k-CFA analysis techniques surveyed by Grove [GDDC97]. In this class of analyses, decisions about stack-contexts are made to a k-bounded depth from the object of the query. The primary application of these techniques is to distinguish cases for procedure cloning. Chapter 5 presents applications that show how the output of constraint queries can be used as an alternative to these analyses.

Recent work [RSJ03] has demonstrated how interprocedural program analyses could be recast as weighted pushdown automata. As a consequence of this reduction they demonstrate how it is possible to extract from an analysis a regular expression that captures the set of stack-contexts for a program element such that the element has a particular property in those contexts. They alluded to the possibility of using this functionality to solve problems similar to those discussed in this section and in Chapter 5. However, since this technique again relies on an implicit representation of the data-flow, it cannot be used to construct automata for the same types of aggregate property sets that were shown to be useful for program comprehension queries.

Wallach, Appel, and Felten have pointed out [WAF00] that Java's stack inspection method for security [WF98], aimed at restricting access to critical system functions, can be implemented more efficiently by constructing a finite state machine whose transitions correspond to procedure calls. One interpretation of constraint queries is that they generalize this technique to arbitrary analyses and more robust constraints with the caveat that constraint queries produce specialized automata for specific program locations


rather than a single monolithic state machine.

Section 4.1 explained how slicing of the condensate of an environment dependence graph could be reduced to valid path reachability. A slice consists of a set of vertices, ((α, v), z), corresponding to the equivalence class of states, ((σ, v), z), where Γ∗z(σ) = α. An automaton accepting the necessary equivalence class of stack-contexts can be recovered as the result of a modified constraint query for v, with stack-context constraint Ω and L∆ defined as the language accepted by the automaton of Theorem 3 with set of accepting states {α}. The modification is necessary since the relevant property transformer for v, mapping contexts to sets of properties, is not necessarily injective; hence the solution constraint set ∆ cannot be specified in the usual way as a decidable property of the image of that property transformer. The reachability of the equivalence class does not imply that every element of the class is reachable in the original environment dependence graph. It is a subject of future work to determine how to modify the reachability algorithm to produce a corresponding stack-context constraint that would restrict the output of the modified constraint query to only those stack-contexts, σ, such that (σ, v) is part of the slice over the full environment dependence graph.


Chapter 5

Additional Applications

Constraint queries solve the dual problem for context-sensitive analyses: given a vertex and a desirable set of solutions, what is the set of potential stack-contexts leading to that vertex that result in one of those desirable solutions? While the ability to solve queries of this sort on the output of model checking problems has been shown to be useful for program comprehension, the automata produced by such queries can also be incorporated into a variety of code transformations for a range of applications.

In a context-insensitive analysis, the set of properties associated with a program element is immutable over all occurrences of that element in the program. As such, the corresponding code can be modified directly, based on the output of the analysis. However, these modifications must be conservative, taking into account all possible occurrences of an element. In a context-sensitive analysis, the associated set of properties is determined by the occurring stack-context and hence can vary from one invocation to another. The automaton produced by a constraint query can be seen to partition the set of calling contexts according to the provided constraints. By introducing the output of a


constraint query as a guard to a program element, the automaton can be tested with the current stack-context each time the corresponding element is reached (or alternatively, the first time it is reached since the context has changed). The result of applying the automaton can then be used to branch the behavior of the program in a way that depends upon the property set that actually holds when the statement executes. In all cases, the analysis can be discarded after the automata necessary for the code transformations have been performed. In this way, the full power of a context-sensitive analysis can be exploited to modify the behavior of a program in ways that are not possible given only the output of a context-insensitive analysis.

This chapter sketches two applications of this idea. In the first, constraint queries are applied to a live variable analysis with the intention of reducing the size of the state that must be saved in an application-level checkpointing system. The idea is that for variables whose liveness is dependent upon the calling context of the checkpoint, an automaton can decide the liveness dynamically. Variables that are not live at a specific invocation of a checkpoint do not need to be saved at that checkpoint. The result is a potentially significant savings of both time and space. The second application involves using constraint queries to enforce a safety policy decidable by the analysis. Here, automata are injected before statements whose potential for malice is dependent upon their calling context. In cases where the guarding automaton accepts, the program is halted before the offending statement can execute. In all other cases, execution proceeds normally. This application is demonstrated on an analysis intended to detect format string vulnerabilities. Both applications take advantage of the added precision provided by context-sensitivity without requiring any of the code replication associated


with procedure cloning or inlining [Hal91]. Preliminary empirical data is presented for both applications and considerations for future development are discussed.

5.1 Application-Level Checkpointing

The running time of many applications now exceeds the mean time between failure of the underlying hardware. For example, protein-folding using ab initio methods on the IBM Blue Gene [IBM, The02] is expected to take a year for a single protein, but the machine is expected to experience an average of one processor failure per day. Consequently, there is a need for software to be tolerant of hardware faults.

Checkpointing is the most commonly used technique for fault tolerance. The state of a running application is saved at prescribed intervals to stable storage. On failure, the computation is restarted from the most recent checkpoint. Checkpointing that is initiated from within a program and is sensitive to the specific semantics of that application is referred to as application-level checkpointing [Nat]. This is to be distinguished from the more common system-level checkpointing [LTBL97] in which the entire state of the machine is saved to stable storage at the operating system level without respect to the individual applications that might be running at checkpoint time.

The advantage of application-level checkpointing is that for many long running applications there are a few key data structures from which the entire state of the computation can be recovered. For example, the protein-folding program on the IBM Blue Gene saves only the position and velocity of each of the bases in the protein since this information is sufficient to recover the entire computational state. Instead of saving terabytes of intermediate data, it saves only a few megabytes. However, this savings comes at the


cost of the developer having to write custom checkpointing routines that capitalize on this domain-specific knowledge.

The goal then is to augment a compiler to determine the set of data that must be saved at a checkpoint and then inject code to save only that data. Previous work [BPK94, LF90] has shown that checkpoint sizes can be reduced by performing variations on static live variable analysis over the set of variables in the program to determine, for each checkpoint, the set of variables whose value may be required after the checkpoint. Variables whose value is definitely not required can safely be excluded from the checkpoint.

For checkpoints occurring deep within the call and return structure of a program, the set of variables that are live at the checkpoint may depend upon the context in which the checkpoint is reached. To address this, constraint queries can be posed on a context-sensitive live variable analysis and the result injected into the code to make a liveness decision at runtime. This allows the data saved at a checkpoint to be restricted to exactly the information necessary to continue the program from the precise context in which the checkpoint occurred. This is an improvement over previous approaches in which a static, compile-time decision had to be made about each variable at each checkpoint location. Further, this result comes without having to perform any type of function inlining and without having to modify the code to pass any additional function parameters. This idea is implemented as part of a three-tier approach to deciding liveness at checkpoint time.

In this section, the context-sensitive live variable analysis is presented along with a description of how constraint queries on the analysis can be used to drive checkpointing optimizations. This is followed by some empirical data derived from applying this


technique to a selection of codes as well as some preliminary data derived from incorporating these ideas into the Cornell Checkpointing Compiler, C3 [ISS]. A discussion of directions for future work concludes this section.

5.1.1 Analysis

A live variable analysis requires that, for each program statement, the set of locations that may be used and the set of locations that must be defined are identified. The definition of these sets for C requires slightly more sophistication than the intuitive notions used for the pseudo-code example in Chapter 1 owing to the presence of pointers.

Use(v) is the set of locations whose value may be used in the evaluation of the statement associated with vertex v. For pointer expressions, this may include both the pointer location as well as the location pointed-to by the pointer.

Def(v) is the set of locations that must be defined in the evaluation of the statement associated with vertex v. By convention, the set for the statement at the beginning of any lexical block includes all of the locations that are entering scope. For assignments through pointers, this set includes the target of the pointer only if the pointer target can be unambiguously determined.

These sets can be conservatively determined by a local syntactic analysis [Muc97] that makes use of an underlying pointer analysis [SH97]. For this live variable analysis, structure fields are treated independently but arrays are treated monolithically with overwriting array assignment loops treated as definitions.
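As a small illustration of these conventions, consider a hypothetical statement *p = x + y at some vertex v. Use(v) contains p, x, and y (the pointer itself and the operands of the right-hand side), while Def(v) contains the location pointed to by p only if the underlying pointer analysis resolves p to a single unambiguous target; if the target is ambiguous, the assignment contributes nothing to Def(v).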


Data-Flow Equations:

    φv = Fv                                      if v is P_exit for some procedure P
    φv = Fv ∘ φ_P_entry ∘ φ_v_ret                if v is a call to procedure P
    φv = Fv ∘ (⊔_{v′ ∈ succs(v)} φ_v′)           otherwise

    where Fv = λX. Use(v) ∪ (X − Def(v))

Operations on Data-Transforming Functions:

    Initial function:      ⊥ = (∅, U), where U is the universal set of locations
    Identity function:     id = (∅, ∅)
    Data-flow functions:   Fv = (Use(v), Def(v))
    Application:           (G, K)(X) = G ∪ (X − K)
    Union confluence:      (G1, K1) ⊔ (G2, K2) = ((G1 ∪ G2), (K1 ∩ K2))
    Composition:           (G1, K1) ∘ (G2, K2) = (G1 ∪ (G2 − K1), K1 ∪ K2)
    Canonical form:        ⟨(G, K)⟩ = (G, K − G)

Figure 5.1: Equations for Generating a Context-Sensitive Live Variable Analysis

Given these sets, the live variable analysis is performed by computing the least fixed point of the second-order equations shown in the top part of Figure 5.1. These are the same equations that were introduced in Chapter 1. This fixed point is computed over the interprocedural control-flow graph of the program using the usual lattice of functions induced by the subset relation. The analysis is context-sensitive modulo the context- and flow-insensitive pointer analysis used to construct the Def and Use sets.

This analysis is efficient since each live variable function can be represented by a pair of variable sets, (G, K). Each operation required to compute the fixed point is then reduced to a constant number of set operations. A canonical form reduces the necessary equality test to syntactic equality of the two sets representing each second-order data-flow function. The complete list of operations is shown in the lower part of Figure 5.1.
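When variable sets are represented as bit-vectors, the operations in the lower part of Figure 5.1 become a handful of word operations. The sketch below uses a single 64-bit word per set purely for brevity; a real implementation would use arbitrary-length bitsets, and the names are illustrative only.

    #include <stdint.h>

    /* A live-variable transfer function (G, K) with F(X) = G ∪ (X − K),
     * each set stored as a bit-vector in one 64-bit word. */
    typedef struct { uint64_t G, K; } Fn;

    static const Fn BOTTOM = { 0, ~(uint64_t)0 };    /* ⊥  = (∅, U) */
    static const Fn ID     = { 0, 0 };               /* id = (∅, ∅) */

    /* Application: (G, K)(X) = G ∪ (X − K) */
    static uint64_t apply(Fn f, uint64_t X) { return f.G | (X & ~f.K); }

    /* Union confluence: (G1, K1) ⊔ (G2, K2) = (G1 ∪ G2, K1 ∩ K2) */
    static Fn confluence(Fn f1, Fn f2) {
        Fn r = { f1.G | f2.G, f1.K & f2.K };
        return r;
    }

    /* Composition (apply f2 first, then f1):
     * (G1, K1) ∘ (G2, K2) = (G1 ∪ (G2 − K1), K1 ∪ K2) */
    static Fn compose(Fn f1, Fn f2) {
        Fn r = { f1.G | (f2.G & ~f1.K), f1.K | f2.K };
        return r;
    }

    /* Canonical form ⟨(G, K)⟩ = (G, K − G), so equality of functions reduces
     * to equality of the two words. */
    static Fn canon(Fn f) {
        Fn r = { f.G, f.K & ~f.G };
        return r;
    }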

Given a vertex, v, and a stack-context for v, σ = σ0 . . . σn, where each σi is a call-edge,


the set of live variables associated with v in context σ is

    L(σ, v) = φv ∘ φ_σret_n ∘ · · · ∘ φ_σret_0 (∅),

where σret_i refers to the return-vertex successor corresponding to the call-vertex source of call-edge σi.

As illustrated in Section 1.2, the solution function of this analysis can be represented as a context-sensitive analysis as defined in that section.

5.1.2 Optimizations and Code Transformations

The output of the live variable analysis can be used to partition the set of variables in the program into three tiers with respect to the set of checkpoints.

Tier 1  Variables that are never live at any occurrence of any checkpoint.

Tier 2  Variables that are live at some occurrence of some checkpoint, but whose liveness at each individual checkpoint does not depend on the context in which the checkpoint occurs.

Tier 3  Variables whose liveness at one or more checkpoints depends on the context in which that checkpoint occurs.

When the program contains only a single checkpoint, the tiers can be concisely described, in order, as the set of variables that are never live, always live, and sometimes live at the checkpoint.

The partitioning of the variables into tiers requires examining the output of the live context analysis, as described in Section 3.1.3, over the live variable context-sensitive


analysis. Let C = {c0, . . . , cn} be the set of checkpoints in the program, LC(ci) be the set of contexts from the analysis that are live for the procedure containing the checkpoint, and Φ(ci) be the property transformer from the live variable analysis associated with the checkpoint statement, ci. The partitioning of the variables is then as follows:

Tier 1  {x | ∀c ∈ C. ∀α ∈ LC(c). x ∉ [Φ(c)](α)}

Tier 2  {x | x ∉ Tier 1 ∧ ∀c ∈ C. ∀α, β ∈ LC(c). x ∈ [Φ(c)](α) ↔ x ∈ [Φ(c)](β)}

Tier 3  {x | x ∉ Tier 1 ∧ x ∉ Tier 2}

For checkpointing, variables in Tier 1 can be ignored as they do not need to be saved at any occurrence of any checkpoint. Variables in Tier 2 and Tier 3 are monitored by a variable descriptor stack (VDS) [BMPS03]. The VDS is pushed and popped with the name of each variable as it enters and leaves scope, respectively. At checkpoint time the contents of this stack is examined to determine which variables may need to be saved. Given a checkpoint, any variable in Tier 2 or Tier 3 that is never live at that checkpoint is put on a separate, statically constructed, exclusion list specific to that checkpoint. This list contains the variables that do not need to be saved at the checkpoint despite being on the VDS. Finally, for variables in Tier 3 whose liveness at a checkpoint depends on the context in which the checkpoint occurs, a constraint query is posed.

Given a checkpoint, c, and a variable, x, from Tier 3 whose liveness depends on the context in which the checkpoint occurs, let M^c_x be the automaton generated by the constraint query (c, Ω, λS.[x ∉ S]?). Intuitively, this automaton accepts precisely the set of stack-contexts, valid¹ with respect to the checkpoint, for which the variable does not need to be saved.

¹Stack-contexts that occur at runtime are by definition valid, so it is not strictly necessary to include the valid stack-context constraint. In some cases, excluding this constraint simplifies the solution automaton.


ckpnt-example:

    Global Integer G, H

    v0:  Procedure F() {
    v1:    G := H + 1;
    v2:    take_checkpoint;
    v3:    print(G);
    v4:  }

    v5:  Procedure main() {
    v6:    G := 0;
    v7:    H := 0;
    s8:    F();
    v9:    H := 1;
    s10:   F();
    v11:   print(H);
    v12: }

Figure 5.2: Sample Program ckpnt-example with Context-Sensitive Checkpoint

For each such variable, code to execute the automaton is injected as part of the checkpointing routine. At checkpoint time, the current stack-context is applied to this automaton. If it accepts, the variable is added to the exclusion list associated with the checkpoint for the duration of that instance of the checkpoint. Otherwise, the variable is saved normally.

By choosing a checkpointing strategy based on the liveness characteristics of each variable, the runtime overhead of the state-saving mechanism is reduced while, at the same time, the full power derived from the information contained in the context-sensitive analysis is retained. This capability is made possible by the ability to resolve constraint queries over the live variable analysis. Once the necessary automata have been constructed, the output of the analysis can be discarded.
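The injected guard itself is small: a table-driven automaton is run over the current stack-context just before the state-saving code executes, and each Tier 3 variable whose automaton accepts is added to the checkpoint's exclusion list. The sketch below is schematic; the Dfa layout, the stack-context encoding, and the helper names stand in for the code that the pre-compiler actually emits (compare Figure 5.5).

    /* Sketch: checkpoint-time liveness guard for one Tier 3 variable. */
    #define DFA_DEAD (-1)

    typedef struct {
        int        nsyms;      /* |Σ|, the number of call-edges            */
        const int *delta;      /* delta[q * nsyms + edge]; DFA_DEAD = dead */
        const int *accept;     /* accept[q] != 0  =>  variable not live    */
    } Dfa;

    /* Run the automaton over the current stack-context, bottom to top. */
    static int dfa_accepts(const Dfa *m, const int *stack, int depth)
    {
        int q = 0;                              /* initial state */
        for (int i = 0; i < depth && q != DFA_DEAD; i++)
            q = m->delta[q * m->nsyms + stack[i]];
        return q != DFA_DEAD && m->accept[q];
    }

    /* Emitted just before checkpoint c, once per Tier 3 variable x:
     *
     *     if (dfa_accepts(&M_c_x, call_stack, call_depth))
     *         exclude_add(c, x);        // x provably dead here; skip it
     *
     * Variables left off the exclusion list are saved from the VDS normally. */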

5.1.3 An Example

Figure 5.2 introduces an imperative program with a single checkpoint statement. The output of the live variable analysis, cast as a reduced context-sensitive analysis, is presented in Figure 5.3.


          Context Transformers       Property Transformers
 v        [Γ(v)](α)   [Γ(v)](β)      [Φ(v)](α)   [Φ(v)](β)
 v0                                   H           H
 v1                                   H           H
 v2                                   G           G, H
 v3                                   G           G, H
 v4                                   ∅           H
 v5                                   ∅           ∅
 v6                                   ∅           ∅
 v7                                   ∅           ∅
 s8       α           β               ∅           ∅
 v9                                   ∅           ∅
 s10      β           β               H           H
 v11                                  H           H
 v12                                  ∅           ∅

The s-designated vertices also identify the elements of Σ by call-edge source.

Figure 5.3: Context-Sensitive Live Variable Analysis, ({α, β}, {G, H}, Γ, Φ, α), over ckpnt-example

Context α corresponds to the empty set of assumptions and context β corresponds to the assumption set {H}. For the procedure F, both contexts α and β are live. Since G ∈ [Φ(v2)](α) and G ∈ [Φ(v2)](β), the variable G is in Tier 2, is pushed to and popped from the VDS, and does not require an automaton to decide its liveness at the checkpoint. However, H ∈ [Φ(v2)](β) but H ∉ [Φ(v2)](α). Thus, H is in Tier 3, so it is pushed to and popped from the VDS but also requires an automaton to decide its liveness at checkpoint time.

The automaton, MH, deciding the liveness of H is the solution to the constraint query (v2, Ω, λS.[H ∉ S]?). The automaton is illustrated in Figure 5.4 and accepts only those valid stack-contexts for the checkpoint for which the variable can be excluded. For this example, the automaton accepts only the stack-context σ = s8. It is clear that H does not need to be saved in this context since the assignment at v9 kills the checkpoint-


[Automaton diagram. From the initial state, s8 leads to the single accepting state and s10 to a dead state; every further call-edge (s8 or s10) out of the accepting state leads to the dead state.]

Figure 5.4: Minimal Automaton Accepting Contexts where H can be Excluded

time value of H before that value is ever used again. Hence, this value is unnecessary to recover the computational state of the program from the checkpoint, taken in this accepting context.

Figure 5.5 shows the transformations to the code that are necessary for the checkpointing routine to function. The automaton MH adds H to the exclusion list for the checkpoint only when the checkpoint is reached from the call at s8. In this case, the checkpoint only saves G. When reached from the call at s10 the checkpoint saves both G and H, as both are necessary to the continuation of the program. The actual time and space savings derived from this optimization is dependent on the size and representation of the variable H.

5.1.4 Experimental Results

The live variable analysis and subsequent live context analysis were performed on interprocedural control-flow graph representations of a set of C benchmark programs. A single logical checkpoint location was selected for each application that would exhibit Tier 3 variables. Figure 5.6 shows the running time of these analyses along with statistics on the lines of code in the benchmark as well as the total size of the static variables


ckpnt-example-trans:

    Global Integer G, H

    v0:  Procedure F() {
    v1:    G := H + 1;
           exclusion_list = <empty>
           if (accept_stack(M_H))
               exclude_add(H);
    v2:    take_checkpoint;
    v3:    print(G);
    v4:  }

    v5:  Procedure main() {
           VDS_push(G);
           VDS_push(H);
    v6:    G := 0;
    v7:    H := 0;
    s8:    F();
    v9:    H := 1;
    s10:   F();
    v11:   print(H);
           VDS_pop();    /* pop G */
           VDS_pop();    /* pop H */
    v12: }

Figure 5.5: Transformed Sample Program ckpnt-example-trans Modified to Include the Result of the Constraint Query

Benchmark                                           Analysis Times (ms)
Code           Lines   Variables    Size (B)     Live Variable   Live Context
postfix          134         102        1298                30             20
179.art         1270         318        1711               120             20
129.compress    1421         235    44111738                50             20
183.equake      1514         407        4399               100             21
treecode        2711         953        5446               291             40
256.bzip        4652         715     4296560               391             40

Figure 5.6: Running Times for Application-Level Checkpointing Analyses

Benchmark          Tier 1             Tier 2              Tier 3             Queries
Code            #   Size (B)      #     Size (B)      #    Size (B)      Avg   Time (ms)
postfix        19        134      2         804       8         71         4        561
179.art        42        165     23         176      11         72         4        761
129.compress   43     196865     28    43914104      11         44      6.73       1212
183.equake     44        416     55        2040       3         40         4        211
treecode       33        256     23          96       6         24         8       1361
256.bzip       64      12276     70       79020       8         32         9       1963

Figure 5.7: Variable Partitioning and Tier 3 Variable Constraint Query Averages Derived from Checkpointing Analysis


used in the program. Note that the large static memory sizes for 129.compress and 256.bzip are a consequence of the substantial statically allocated character frequency lookup tables employed by those applications. The smallest possible size was assumed for variables whose size is dependent on the input. The result of the variable partitioning is provided in Figure 5.7. The statistics for each tier refer only to variables that are in scope for the choice of checkpoint. A constraint query was generated at the checkpoint for each variable in Tier 3. The two rightmost columns reflect the average number of states in the minimal automaton generated by resolving each query and the total time to generate all of the constraint automata and to generate and minimize the solution automaton.

For most codes, the number of variables whose liveness is dependent on the checkpoint context is small in contrast to the total number of variables that are in scope. The actual number depends, of course, on the choice of checkpoint location. Checkpoints placed in functions with only a single possible valid calling context cannot, by definition, produce any Tier 3 variables. This is frequently the case, especially for functions called directly from main. In many cases, these are the functions most likely to contain the sort of “outer loops” that make obvious choices for checkpoint placement. Tier 3 variables are most likely to occur for checkpoints placed inside multi-use routines called from numerous, disparate call sites.

5.1.5 Experience with C3

Under the supervision of the Intelligent Software Systems research group at Cornell, a

pair of students, David Crandall and David Vitek, extended the Cornell Checkpointing


Benchmark         No Analysis     Tiers 1 and 2 Only          Tiers 1, 2, and 3
#   Code          Saved (KB)      Saved (KB)    Red (%)       Saved (KB)    Red (%)
1   CG             62498.001       26657.757       57.3        26657.751       57.3
2   EP              1049.018        1049.018          0         1049.018          0
3   MG                 0.589           0.521       11.5            0.498       15.4
4   SP             83495.529       83493.664     < 0.01        83493.664     < 0.01
5   xmppi           2293.329        2286.307        0.3         2032.169       11.4

Figure 5.8: Static Memory State Size Reduction Resulting from Variations on LiveVariable Analysis

Compiler to integrate automata derived as solutions to constraint queries into compiled

code [CVME04]. The automata are injected into the target application by the C3 pre-
compiler and then compiled together with the application by gcc.

Their objective was to measure both the potential savings from applying these au-

tomata to checkpointing as well as to measure the additional runtime overhead in-
curred by executing them. They conducted experiments using a sample of five C bench-
mark programs, four from the NAS [NAS] benchmark suite, and a fifth, xmppi, a nu-
merical code [PTVF92] that computes π to 20,000 decimal places and makes use of

large, statically allocated arrays. Their experiments were performed on a 2.6 GHz Pen-

tium 4 PC with 512 MB of RAM running Linux 2.4. The NAS codes were run in

sequential mode. Full optimizations (using the -O2 command line option) were enabled

during compilation. Checkpoint state information was saved to a local hard disk.

Their experiments confirmed that live variable analysis measurably improves the

performance of application-level checkpointing in both execution (checkpointing) time

and size of state saved. Figure 5.8 shows a comparison of the amount of static mem-

ory saved at a single checkpoint location without the benefit of the live variable anal-

ysis, with the benefit of the analysis but with the context-sensitivity disabled, and with


[Figure: bar chart of normalized execution time for the five test programs, comparing Original Program, No Liveness Analysis, Context-Insensitive Only, and Full Liveness Analysis.]

Figure 5.9: Program Execution Time by Type of Analysis


Variable   Type      Size (B)   Scope    Tier   Automaton States
pi         char []     100000   Global      3                  2
q          char []     100000   Global      3                  2
r          char []     100000   Global      3                  2
s          char []     100000   Global      3                  2
sx         char []     100000   Global      3                  2
sxi        char []     100000   Global      3                  2
t          char []     100000   Global      2                  —
x          char []     100000   Global      2                  —
y          char []     100000   Global      3                  2

xmppi: Lines = 1148, |P| = 23, |Σ| = 25, |C| = 128, and |X| = 105

Figure 5.10: Liveness Result for Non-trivial Static Allocations of xmppi

the full analysis enabled. Figure 5.9 shows the normalized execution time of the pro-

gram against each of the three variations on the live variable analysis (none, context-

insensitive, context-sensitive). While checkpointing incurs some temporal overhead,

this overhead was never appreciably increased and in some cases was non-trivially re-

duced by the incorporation of the live variable analysis.

The four NAS benchmarks had the drawback of already having checkpoint locations

selected to significantly reduce the amount of state that would be in scope at checkpoint

time. Further, in two cases, EP and SP, the pre-selected checkpoint was within the
function main. Since main only executes in one context, this rendered the context-
sensitive extensions to the analysis a priori useless for those checkpoint locations.

The performance improvement achieved by the context-sensitive analysis for xmppi

is attributable to the structure of the program. The checkpoint was placed at entry of a

pervasive and expensive numerical routine, drealft. The computation of π is done

in base 256 so that a character array can be used to hold the result. Eight additional

global arrays are used to store intermediate data. The size of these arrays is fixed at


compile time by a #define statement to be 100000. Figure 5.10 provides the liveness

characteristics for those arrays. These arrays dominate the checkpointing overhead of

the program. The remaining live variables consist of one, two, and four byte base type

variables used to hold individual values and array indices and do not significantly affect

the checkpointing performance.

Independent of the benchmark programs, the runtime incurred by integrating the

automata was also measured. The objective was to determine a “tipping point” for the

size of a variable that would determine when the time required to save the variable would

exceed the time required to execute an automaton to determine if the variable could be

excluded from the checkpoint. Since the execution time is linear in the size of the current

stack-context, an experiment was conducted at three distinct, fixed, stack depths, 5, 100,

and 1000, that compared the time required to save a single variable against the time to

execute the automaton that would exclude it. Those results are presented in Figure 5.11.

The results suggest that for a stack of depth 5, the choice of optimization strategy has

little impact on the total performance until the size of the variable reaches the checkpoint

transfer block size of 1 KB. Beyond the one kilobyte barrier, excluding the variable by

executing the automaton is more efficient than saving the unnecessary variable. For the

depth 1000 case, the context-sensitive code is roughly 10% slower than the context-

insensitive code for variable sizes up to about 3 KB, but becomes significantly faster

when the size exceeds 5 KB.
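The linear dependence on stack depth reflects how a constraint query solution automaton is executed at runtime: the minimized deterministic automaton is driven once over the sequence of call-vertices that make up the current stack-context. The sketch below illustrates one plausible realization of this check; the table-driven layout, the function name, and the encoding of the stack-context as an integer array are assumptions made for illustration and are not the actual C3 runtime representation.

    /* Sketch: executing a constraint query solution automaton over the
       current stack-context.  Each call-vertex on the stack is encoded as a
       small integer used to index the transition table.  (Illustrative only;
       not the C3 implementation.) */
    typedef struct {
        int         start;        /* start state                          */
        const int  *accepting;    /* accepting[s] != 0 iff s accepts      */
        const int **delta;        /* delta[s][call_vertex] = next state   */
    } automaton;

    /* Runs in time linear in the stack depth, independent of variable size. */
    static int accept_stack_context(const automaton *m,
                                    const int *stack_context, int depth)
    {
        int s = m->start;
        for (int i = 0; i < depth; i++)     /* one transition per call-vertex */
            s = m->delta[s][stack_context[i]];
        return m->accepting[s];
    }

A guard such as the one in Figure 5.5 then reduces to a single such traversal per candidate variable, which is why the break-even point against simply saving the variable depends only on the variable's size and the stack depth.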

Additional experiments on the automata also concluded that they did not appreciably
add to the size of the resulting executable program and that, except in cases where the
checkpoint stack-depth is excessive, caching the automata output to handle cases where


[Figure: three log-scale plots (stack depths 5, 100, and 1000) of average time per checkpoint (milliseconds) versus variable size (bytes), comparing Original C3 Code against Context-Sensitive Code.]

Figure 5.11: Average Checkpoint Execution Time as a Function of Call-Stack Depth


the same stack-context occurs repeatedly had no detectable
impact on the performance of the system as a whole.

5.1.6 Conclusions and Future Work

The analysis presented in this section considers only statically allocated (lexical) vari-

ables. For many scientific applications, the data that dominates the size of application

checkpoints is heap-allocated. Although this analysis can be directly extended to the

heap by requiring that heap-allocated memory accessible from the set of live static vari-

ables at a checkpoint also be saved, it is unclear that this is a satisfactory solution. Alter-

natively, the live variable analysis could be extended to cover regions [CR03, RC03] of

heap-allocated memory or utilize a tractable context-sensitive pointer analysis [LA03]

to improve precision. Preliminary experiments with C3 have suggested that the key

to controlling the overhead of saving the live heap storage lies in effectively combining

static analysis with runtime techniques that monitor the unreclaimed portion of the heap.

As a companion to the live variable analysis, the checkpointing system might also

benefit from the incorporation of an incremental analysis [BPK94, LM00]. This analy-

sis, of similar complexity to live variable analysis, determines the set of program vari-

ables that may have changed since the last checkpoint. The idea is that only those vari-

ables whose value may have changed since the previous checkpoint need to be saved.

The remaining values can be pulled, at recovery time, from previous checkpoint sets. If

the checkpoint locations are known at compile time, then this is a traditional data-flow

analysis and all of the techniques applicable to the live variable analysis can be applied

to this analysis. The equations for constructing a context-sensitive incremental analysis


Data-Flow Equations:

    φ_v = id                                     if v is P_entry for some procedure P
    φ_v = ⊔_{v′ ∈ preds(v)} (F_{v′} ∘ ψ_{v′})    otherwise

    ψ_v = φ_{M_exit} ∘ F_{P_exit}                if v is a call to procedure P
    ψ_v = φ_v                                    otherwise

    where F_v = λX.∅                             if v is a checkpoint
          F_v = λX. X ∪ Mod(v)                   otherwise

Operations on Data-Transforming Functions:

    Initial function:     ⊥ = (∅, U), where U is the universal set of locations
    Identity function:    id = (∅, ∅)
    Data-flow functions:  F_v = ⊥                if v is a checkpoint
                          F_v = (Mod(v), ∅)      otherwise
    Application:          (G, K)(X) = G ∪ (X − K)
    Union confluence:     (G1, K1) ⊔ (G2, K2) = (G1 ∪ G2, K1 ∩ K2)
    Composition:          (G1, K1) ∘ (G2, K2) = (G1 ∪ (G2 − K1), K1 ∪ K2)
    Canonical form:       ⟨(G, K)⟩ = (G, K − G)

Figure 5.12: Equations for Generating a Context-Sensitive Incremental Analysis


are provided in Figure 5.12. This analysis uses the same function representation as was

used for the live variable analysis. The set Mod(v) refers to the set of locations that may
be modified by the execution of the statement at v.
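Because every per-vertex function has the form λX. G ∪ (X − K), the second-order equations of Figure 5.12 can be evaluated entirely over (G, K) pairs. The sketch below transcribes those operations for a universe of at most 64 locations encoded as a bit vector; the encoding and the function names are illustrative assumptions rather than the representation used inside Carnauba.

    /* Sketch of the two-set (gen, kill) representation of Figure 5.12.
       A function (G, K) denotes λX. G ∪ (X − K); locations are bit positions. */
    #include <stdint.h>

    typedef struct { uint64_t gen, kill; } xfer;

    static const uint64_t UNIV = ~(uint64_t)0;

    static xfer bottom(void)   { xfer f = { 0, UNIV }; return f; }  /* λX.∅ */
    static xfer identity(void) { xfer f = { 0, 0 };    return f; }  /* λX.X */

    /* Application: (G,K)(X) = G ∪ (X − K) */
    static uint64_t apply(xfer f, uint64_t x) { return f.gen | (x & ~f.kill); }

    /* Union confluence: (G1,K1) ⊔ (G2,K2) = (G1 ∪ G2, K1 ∩ K2) */
    static xfer join(xfer a, xfer b)
    { xfer f = { a.gen | b.gen, a.kill & b.kill }; return f; }

    /* Composition a ∘ b (apply b first, then a):
       (G1,K1) ∘ (G2,K2) = (G1 ∪ (G2 − K1), K1 ∪ K2) */
    static xfer compose(xfer a, xfer b)
    { xfer f = { a.gen | (b.gen & ~a.kill), a.kill | b.kill }; return f; }

    /* Canonical form: ⟨(G,K)⟩ = (G, K − G) */
    static xfer canonical(xfer a)
    { xfer f = { a.gen, a.kill & ~a.gen }; return f; }

    /* Per-vertex functions for the incremental analysis:
       ⊥ at a checkpoint (everything was just saved), λX. X ∪ Mod(v) elsewhere. */
    static xfer incremental_fn(int v_is_checkpoint, uint64_t mod_v)
    {
        if (v_is_checkpoint) return bottom();
        xfer f = { mod_v, 0 };
        return f;
    }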

Incremental checkpointing reduces the size of each individual checkpoint but man-

dates that previous checkpoints with values that would be required in the event of failure

be retained. The incremental and live variable analyses are orthogonal — checkpoints
that are restricted to the set of variables that have changed since the last checkpoint can
also be further restricted to the set of variables that are live at the checkpoint. That is,

automata can be generated from both analyses and combined for each checkpoint loca-

tion. The result is that only variables that are in the intersection of the incremental
and live variable sets at a checkpoint are not excluded.
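Using the runtime interface suggested by Figure 5.5, combining the two analyses at a checkpoint amounts to a disjunction of the two automaton checks for each candidate variable; the automaton names below are hypothetical and shown only to illustrate the combination.

    /* Hypothetical guard for one variable H at a checkpoint, in the style of
       Figure 5.5.  H may be omitted from the saved state if either automaton
       accepts the current stack-context: M_live_H accepts contexts in which H
       is dead, and M_inc_H accepts contexts in which H is unmodified since the
       previous checkpoint. */
    if (accept_stack(M_live_H) || accept_stack(M_inc_H))
        exclude_add(H);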

In C3 the incremental checkpointing problem is complicated by the nuance that

checkpoints are not taken at every static occurrence of a checkpoint statement. Ex-

ternal counters are used to prevent checkpoints from being taken too frequently. This
confounds the use of the provided incremental analysis since it is incorrect to conclude that

a checkpoint was taken simply because it dominates another checkpoint statement on

some path. The solution to this problem will likely involve some sort of integral analy-

sis that computes incremental sets with respect to each set of assumptions about which

checkpoints have executed and then uses runtime information to select the correct set

and corresponding constraint query solution automaton.

Finally, the current incarnation of C3, as well as most other application-level check-
pointing systems, requires that checkpoints be placed in the code by the programmer.
Given that the analyses of this section reduce the overhead of the checkpointing sys-


tem, it is logical to ask whether it could be used to give insight into possibly advanta-

geous checkpoint locations. Profiling for checkpoint placement would most likely entail

making batches of temporal queries over the live variable analysis on properties (such

as size) derived from the analysis. Because of its ability to optimize batches of queries
and to handle aggregate constraints, this is an application to which the system described

in this dissertation would be ideally suited.

5.2 Enforcing Policies

A policy [Sch00] is an instrument for specifying behavior that is unacceptable in the

execution of a program. Such a policy may place limits on such things as file or re-

source access, information flow, or simply general program behavior. Safety and se-

curity policies have been widely studied and can be enforced by a wide range of both
static and runtime techniques. Typically, enforcement of a policy involves modifying
code to either monitor for, or preclude, certain behaviors. This section demonstrates
how constraint queries on a context-sensitive analysis can be used to enforce a policy

that is decidable by the analysis. While broadly applicable, the technique is here applied

to preventing exploitation of format string vulnerabilities in C. Here, the outputs from a

set of constraint queries become part of a set of code transformations that precisely en-

force a safety policy that prevents this vulnerability from being exploited. This approach

takes maximal advantage of the information contained in a context-sensitive analysis to

avoid precluding actions that do not violate the policy.


5.2.1 Format String Vulnerabilities

Format string vulnerabilities are a class of security exploit in C that have arisen from

design features in the C standard library coupled with a problematic implementation of

variable argument functions. In C, certain library functions, such as printf, operate

on format strings, strings that include format commands using the familiar “%” syntax.

Recent work [STFW01] has chronicled that when these format string commands are

passed malicious arguments in which the number of format directives exceeds the

number of actual arguments, a security vulnerability can result that is significant enough

in some cases to cause arbitrary code to be executed [New00].
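The core of the problem is that a user-controlled string reaches a printf-family function as its format argument, so any conversion specifications it contains are interpreted against values that were never passed. The fragment below is a generic illustration of the unsafe and safe idioms; it is not drawn from the benchmark programs of this chapter.

    #include <stdio.h>

    /* A user-controlled string must never be used as the format argument. */
    void log_message(const char *user_input)
    {
        printf(user_input);        /* unsafe: "%x %x %s %n" inside user_input is
                                      interpreted, leaking or corrupting memory  */
        printf("%s", user_input);  /* safe: user_input is treated purely as data */
    }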

One solution to this problem is to define a type system that qualifies each C type as

either tainted or untainted. The notion of tainted data corresponds to data in the program

that cannot be trusted not to exploit this vulnerability. Tainted data is usually introduced

either through command line arguments or specific read operations. Mechanisms are

included to explicitly cast data as tainted or untainted at the programmer’s discretion.

This type system is essentially equivalent to a flow-insensitive program analysis with

some allowances for polymorphism; untainted data can pass through a function that

is also called with tainted data without being tainted. It has been demonstrated that

this subtyping relation is sufficient to detect and report potential instances of this

vulnerability with a modest rate of false positives.

5.2.2 Policy Enforcement

An alternate approach for tracking tainted data is to use a static analysis.

Definition 28. Given a policy and a context-sensitive analysis, A, spanning property set
X, over an UHSM with set of states S, then A decides the policy if and only if there
exists a function, δ : S × 2^X → {true, false}, such that the statement associated with
state (σ, v) in the model violates the policy precisely when δ((σ, v), ρ_A(σ, v)) = true.

Data-Flow Equations:

    φ_v = id                                     if v is P_entry for some procedure P
    φ_v = ⊔_{v′ ∈ preds(v)} (F_{v′} ∘ ψ_{v′})    otherwise

    ψ_v = φ_{M_exit} ∘ F_{P_exit}                if v is a call to procedure P
    ψ_v = φ_v                                    otherwise

    where F_v = λX. (eval(X, v) ∪ taint(v)) − untaint(v)

Figure 5.13: Equations for Generating a Context-Sensitive Format String Vulnerability Analysis

For this example, the formal safety policy is that format string statements should not

execute if one or more of their format string operands is tainted. This is intended as a

decidable, conservative over-approximation of the desired safety policy — that format

string commands should not be exploited to violate the integrity of the program.

Figure 5.13 provides an extensible system of equations for tracking tainted data. The
least fixed point of these second-order equations generates a context-sensitive analysis
that maps each state to the set of program locations that are tainted at that state. Given a
vertex, v, and a stack-context for v, σ = σ_0 . . . σ_n, where each σ_i is a call-vertex, the set
of tainted locations associated with v in context σ is

    L(σ, v) = φ_v ∘ φ_{σ_n} ∘ · · · ∘ φ_{σ_0}(∅).

This analysis is extensible through the choice of components to the data-flow func-

tions, F_v, associated with each vertex. The sets taint(v) and untaint(v) allow for the


explicit tainting and untainting of locations in the analysis. Their functionality mirrors

the explicit casts in the type-theoretic framework. The function eval, which must be

distributive, transfers tainted data that holds before a statement to tainted data that holds

after it. These functions propagate the tainted data in accordance with the semantics

of C and mirror the typing rules for individual statements. A typical choice for this

function would be one that taints data on the left side of an assignment statement if and

only if some element of the expression on the right side is tainted. Rules of this sort

demonstrate that, unlike the live variable analysis, this analysis is not separable over the

data set. That each function must be distributive implies that each tainted output must

be traceable to some specific tainted input or else be tainted for all inputs.
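As a concrete illustration of the assignment rule sketched above, the fragment below taints the left-hand side exactly when some location read on the right-hand side is tainted, and untaints it otherwise; the statement representation and the bit-vector encoding of location sets are assumptions made for illustration only.

    /* Sketch of eval for a simple assignment vertex.  Locations are bit
       positions in a 64-bit set; a real implementation would cover the full
       C expression and l-value syntax. */
    #include <stdint.h>

    typedef uint64_t locset;               /* bit i set <=> location i tainted */

    typedef struct {
        int        lhs;                    /* location written at this vertex  */
        const int *rhs;                    /* locations read on the right side */
        int        num_rhs;
    } assign_stmt;

    static locset eval_assign(locset tainted_before, const assign_stmt *s)
    {
        int rhs_tainted = 0;
        for (int i = 0; i < s->num_rhs; i++)
            if (tainted_before & ((locset)1 << s->rhs[i]))
                rhs_tainted = 1;

        if (rhs_tainted)                                   /* taint the target */
            return tainted_before | ((locset)1 << s->lhs);
        return tainted_before & ~((locset)1 << s->lhs);    /* overwritten with
                                                              untainted data   */
    }

Note that this rule is distributive: every tainted output is traceable to a specific tainted input, as required of eval.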

Given the output of the data-flow equations framed as a context-sensitive analysis,

the function deciding the safety policy is easily defined.

    δ((σ, v), X) = [X ∩ P_v ≠ ∅]?    if v is a format string command (FSC) with format string parameters P_v
    δ((σ, v), X) = false             otherwise

To implement this function, a constraint query is posed for each instance of a format

string command. Given a format string command associated with a vertex, v, let P_v
be the set of data that, if tainted, could cause a vulnerability. For example, given a
call to printf, the set P_v would be the singleton set consisting of the first argument.
The constraint query is then (v, Ω, λX.[X ∩ P_v ≠ ∅]?). The remaining vertices do not

require constraint queries since the policy deciding function dictates that their associated

statements never violate the policy.

Each query produces a minimal automaton that accepts only the set of valid stack-

contexts that correspond to instances of the command that violate the policy. If no vul-


fsv-example.c:

int count;

v0:  void display(char* buf) {
v1:    printf(buf);
v2:    printf(": at position %d\n", count++);
v3:  }

v4:  int main(int argc, char* argv[]) {
v5:    count = 0;
v6:    while (argc > 1)
s7:      display(argv[--argc]);
s8:    display("done");
v9:    printf("Args: %d\n", count);
v10: }

Figure 5.14: Sample C Program fsv-example.c Exhibiting a Format String Vulnerability

nerability exists, then the automaton degenerates to the single state machine that trivially

rejects all inputs. In these cases, the automaton can be discarded. If a vulnerability exists

that is independent of the calling context, then L_v → L. This is tested by checking the
emptiness of L_v ∩ L′_∆. In these cases, the automaton can be discarded and the offending
command replaced with halt. In all other cases, code to apply the current stack-context

is injected to guard the command the first time it executes since the last time the context

has changed. If the automaton accepts the stack-context then the program halts. In this

way, the safety policy is enforced through a purely static code-transforming analysis

without any additional programmer intervention.


            Context Transformers         Property Transformers
v        [Γ(v)](α)    [Γ(v)](β)       [Φ(v)](α)      [Φ(v)](β)
v0                                    argv           argv, buf
v1                                    argv           argv, buf
v2                                    argv           argv, buf
v3                                    argv           argv, buf
v4                                    ∅              ∅
v5                                    argv           argv
v6                                    argv           argv
s7       β            β              argv           argv
s8       α            β              argv           argv
v9                                    argv           argv
v10                                   argv           argv

The s-designated vertices also identify the elements of Σ by call-edge source.

Figure 5.15: Context-Sensitive Format String Vulnerability Analysis, ({α, β}, {count, argc, argv, buf}, Γ, Φ, α), over fsv-example.c

5.2.3 An Example

Figure 5.14 introduces a C program that exhibits a format string vulnerability. When

executed with the command line

fsv-example "violate %d %d %d %d stack"

the display of the string argument will cause illegal stack values to be read. The solution

to the format string vulnerability data-flow equations, cast as a context-sensitive analy-

sis, is presented in Figure 5.15. The analysis assumes that argv, the array of command
line arguments, is always tainted once it enters scope. There is no other explicit casting of
taint information. Given this initially tainted data, the analysis distinguishes two distinct
contexts for the function display. The formal parameter buf is tainted in context β
but is not tainted in context α. The former context, β, corresponds to calls to display
from s7 while the latter context, α, corresponds to the single call to display from s8.


[Figure: minimal automaton diagram; transitions labeled s7, s8, and s7, s8.]

Figure 5.16: Minimal Automaton Accepting Contexts where Vulnerability Exists at v1 in fsv-example.c

For the format string command printf inside display at v1, its susceptibility to
this vulnerability is dependent on whether the argument buf is tainted. The automaton
deciding this is the solution to the constraint query M = (v1, Ω, λX.[buf ∈ X]?).
The automaton is illustrated in Figure 5.16 and accepts only those stack-contexts that

correspond to potentially dangerous executions of the command.

Figure 5.17 shows the transformations to the code that are necessary to enforce the
safety policy. The automaton M guards the printf statement at v1 and halts the
program when the command is about to execute in an offending context. This trans-

formed program would not read invalid values from the stack but would still run nor-

mally if the program were executed without any command line arguments. Thus, the

policy is enforced without having to trivially reject the program.

When constructing M, it is assumed that the stack-context constraint is Ω. In prac-

tice, the presence of this constraint provides another degree of freedom in tailoring the

enforcement of the policy. In cases where a programmer knows that a particular invoca-

tion of the command is safe, the constraint query can be reposed with this information

as the stack-context constraint. For example, if the invocation of display from s7 was
always known to be safe, then the query could be reposed as (v1, ¬(Ω s7), λX.[buf ∈ X]?).


fsv-example-trans.c:

int count;

v0:  void display(char* buf) {
       if (accept_stack(M))
         exit(0);
       else
v1:      printf(buf);
v2:    printf(": at position %d\n", count++);
v3:  }

v4:  int main(int argc, char* argv[]) {
v5:    count = 0;
v6:    while (argc > 1)
s7:      display(argv[--argc]);
s8:    display("done");
v9:    printf("Args: %d\n", count);
v10: }

Figure 5.17: Transformed Sample C Program fsv-example-trans.c Modified to Include the Result of the Constraint Query


                       Benchmark                         Analysis Output       CPU
Program        Lines    |P|    |Σ|    |C|    |X|      FSC    Req    Avg     Time (s)
postfix          134      9     27     35    102        7      1      4         0.1
129.compress    1421     23     39     24    235       15      2      4         0.2
179.art         1270     24     46     52    318       29      2    4.5         0.4
treecode        2711     67    225    492    953       46      2      6         2.2
256.bzip        4652     67    208    109    715       71      3     13         2.6
099.go         29246    385   2063    957   4530       11      2     11         7.9

Figure 5.18: Format String Vulnerability Safety Policy Benchmarks

This query would produce the same automaton but restricted to reject that partic-
ular invocation. In this case, that automaton would be equivalent to the trivially rejecting
automaton and could be discarded.

5.2.4 Experimental Results

Figure 5.18 provides empirical data from applying this analysis and code transformation

to six example codes that have been seeded with tainted data to introduce simulated

vulnerabilities. None of the benchmarks contained any naturally occurring context-

sensitive format string vulnerability. Each row represents a unique benchmark. Column

FSC gives the number of format string commands in the benchmark and Req gives the

number of queries for which the resulting automaton is non-trivial and thus required

in the code to enforce the policy. Avg gives the average number of states in the non-
trivial automata. Time gives the CPU time required to generate, resolve, and test for

triviality all of the constraint queries necessary to enforce the policy given the tainted

data analysis.

Unlike the equations for the live variable analysis, the format string vulnerability

analysis is not separable and the second-order data-flow functions cannot be represented


explicitly using the same two-set technique utilized by the live variable analysis of Sec-
tion 5.1. Solving these equations requires solving individual contexts for each procedure
on demand. Employing this technique implies that the running time of the analysis is

extremely sensitive to the order in which worklist elements are processed in the fixed
point computation. Specifically, new contexts should not be processed until all of the

paths leading to a call have been processed as completely as possible. Failure to ensure

this condition can result in the equations being solved for unnecessary contexts. The

running times in Figure 5.18 are derived from a demand-driven implementation of the

analysis with a reasonable worklist selection heuristic.
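A minimal sketch of that scheduling rule is given below; the two-queue split and the work-item structure are illustrative assumptions rather than the scheduler actually used, but they capture the idea that propagation within already-instantiated contexts should drain before call sites are allowed to instantiate new ones.

    /* Sketch of a worklist discipline for the demand-driven solver.
       (Illustrative only; not the Carnauba scheduler.) */
    #include <stddef.h>

    typedef struct work_item { struct work_item *next; /* vertex, context, ... */ } work_item;
    typedef struct { work_item *head, *tail; } queue;

    static void push(queue *q, work_item *w) {
        w->next = NULL;
        if (q->tail) q->tail->next = w; else q->head = w;
        q->tail = w;
    }
    static work_item *pop(queue *q) {
        work_item *w = q->head;
        if (w) { q->head = w->next; if (!q->head) q->tail = NULL; }
        return w;
    }

    /* intra:  propagation within contexts that already exist
       spawn:  call sites whose processing would create a new context      */
    static work_item *next_item(queue *intra, queue *spawn) {
        work_item *w = pop(intra);      /* drain intraprocedural work first */
        return w ? w : pop(spawn);      /* only then instantiate new contexts */
    }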

As with the live variable analysis, for most codes, the number of format string com-

mands whose susceptibility to this vulnerability depends on the occurring stack-context

is small. This is to be expected and is, in fact, essential to this being a practical ap-

proach. Also, most of the constraint query solution automata required to be injected

into the code are small. In most cases where an automaton is required, it requires only

a single “marker” element of the call-stack to distinguish between dangerous and in-

nocuous executions. In these cases, the size of the automaton is usually bounded by the

number of distinct call-stack elements on a path to an offending execution. Except for

recursive cases, this depth rarely exceeds five intermediate procedures. As was demon-

strated in Section 5.1, automata for this stack depth are not prohibitive to execute.

In general, this technique is a useful refinement to the process of using a static anal-

ysis to enforce a policy when the potential violations of the policy can be localized to

the execution of a tractable number of program points that might require an automaton.

Format string vulnerabilities are an ideal example of such a problem; violations of the


policy can be localized to vertices associated with format string commands. While the

facts that determine if a violation occurs must be specific to the state, it is useful to
recall that analyses generated as the solution to one or more model checking queries
can encapsulate extensive temporal data. While this data is not as precise as data that is
checked dynamically during the execution of the program, it can, in many cases, give a

reasonable approximation.

5.2.5 Related and Future Work

The idea of using a static analysis to enforce a policy is not a new one. Numerous other

special-purpose analyses exist to ensure the proper behavior of some aspect of a pro-

gram, a popular example being analyses to ensure memory safety [DKAL03, KDA02].

What is novel in this section is the demonstration of a technique for taking an abstract
context-sensitive analysis, with a function that decides where the property set associated
with a state disallows execution of the corresponding statement, and producing a more
precise set of code transformations to enforce that policy. These transformations fully

exploit the context-sensitive information contained in the analysis without requiring any

of the code replication that occurs as a consequence of function inlining or procedure

specialization by cloning. These techniques are applicable to any analysis that can be

cast into the abstraction for context-sensitive analyses first introduced in Section 1.2.

Recent work [HMS03] has demonstrated that security policies enforceable by pro-

gram rewriting comprise a hierarchy, with policies enforceable by static program anal-

yses at the base of that structure. While the technique presented in this section still fits

neatly within that class, it does provide a more precise enforcement mechanism within


that class than previously existed. Further, for practical problems, there is the poten-

tial to apply it in conjunction with other techniques to provide an overall more precise

enforcement mechanism.

Prior work [Sch00] demonstrated how security policies could be enforced through

security automata or, more broadly, execution monitors. In that model, program events,

such as accessing a file, trigger a transition in a statically constructed state machine.
When an event causes a transition to a terminal state, the execution of the program is
halted, thus preventing a violation of the policy. This differs from the approach pre-
sented here in that security automata execute concurrently with the program. Since they
transition based on dynamic rather than static information, they are capable of enforcing

a strictly larger class of policies. In the approach presented in this section, the automata

are injected into the code and then used to decide stack-contexts on demand at runtime.

Thus, each automaton can be minimized and tuned to fit the precise instance where it is

to be injected. As a way of combining these techniques, automata derived as solutions

to constraint queries could be injected into code and used as events to trigger transitions

in security automata. This would combine the precision of constraint queries with the

generality of security automata.


Chapter 6

Conclusions

Carnauba was an ambitious project to address what was demonstrated at the outset

of this dissertation to be an inherently hard problem. As software systems become

more sophisticated, the need for tools to grapple with their complexity is growing and
is ultimately inescapable. Carnauba represents an attempt to leverage the value in
three decades' worth of static program analysis development by incorporating it into
a thoroughly modern approach to software verification and comprehension. The result is
an end-to-end system that can generate new results from specifically tailored analyses in
a manner that is both standardized and theoretically sound. This is a tool with powerful

interactive features as well as applications to diverse software engineering challenges.

This dissertation makes five broad contributions. First, it presents a novel approach,

based on the framework of a programming language compiler, for efficiently reducing

the complexity of large batches of temporal logic formulas. Second, it introduces a
broad abstraction of context-sensitive analyses, derived from second-order methods, that

permits well-defined transformations and reductions. Third, it provides an extension to


the global approach to explicit state model checking to incorporate predicates projected

from these abstracted context-sensitive analyses and to recast the model checking result

as an abstracted context-sensitive analysis. Fourth, it demonstrates a new type of query

on context-sensitive analyses, the constraint query, which allows the set of stack-contexts
for a program element that simultaneously meet solution and admissibility constraints to

be concisely encapsulated as a regular language. Finally, it provides experimental data

and experience from a unified model checking system that addresses logic compilation,

model reduction, example generation, and query refinement, and has been applied to

emerging software engineering challenges such as program comprehension, saved state

reduction for application-level checkpointing, and policy enforcement.

The remainder of this chapter includes reflections on the choice of the modal mu-

calculus as the query language and the effect of choosing a model format capable of

including context-sensitive labeling functions. It concludes with a discussion of future

work that touches on code obfuscation and the need for predicate determination.

6.1 Using the Modal Mu-Calculus

While many temporal logics and automata formats exist to express model checking
queries, the modal mu-calculus was chosen for three principal reasons. First, the modal

mu-calculus is maximally expressive, subsuming all other temporal logics used in prac-

tice. Second, a good semantic algorithm exists for globally resolving queries over hier-

archical models. As presented in Chapter 3, this algorithm is based on familiar second-

order methods that closely mirror the widely used functional approach to interproce-

dural data-flow analysis. In fact, it was this close relationship that first motivated the


idea to merge these techniques. Third, in equation block form, a rigorously factored

representation exists that it was clear from the beginning would be ideally amenable

to inter-query optimization. Indeed, one of the most significant successes of this work

has been the realization that a batch optimization system for model checking queries

could be organized as a compiler and that such a system can provide performance that

is superior to serial optimization methods.

However, the modal mu-calculus can also be unwieldy both to use and to resolve.

Carnauba mitigates its unintuitive syntax by providing translations from other logics that

are more user-friendly for query expression. For resolution, faster algorithms exist for

less expressive logics. For logics such as LTL that do not translate into the alternation-
free subclass of the modal mu-calculus, Carnauba has been only moderately successful
in replicating that performance for the equivalent formulas. The difficulty can be traced to

the tight coupling that exists between formulas and the algorithm used to resolve them.

With the CTL∗ family of temporal logics, the syntax is abstract. Queries are repre-

sented in terms of path quantifiers and temporal operators that have an interpretation

that is separate from the algorithms to resolve them. This affords a degree of freedom

for optimization not generally available for the mu-calculus. Mu-calculus formulas are

expressed in terms of fixed points that all but force that an iterative algorithm be used to

resolve them. This also demonstrates why local model checking is difficult in the modal

mu-calculus: fixed point computations require global solutions to sub-expressions.

The value of some of this additional expressiveness is unclear. Alternating formulas

are almost universally used to express the notion of infinite state repetition, as demon-

strated by the second example query of Section 4.1.3. Attempts have been made to


optimize algorithms for certain subclasses of the modal mu-calculus that capture this

concept [SN98]. However, what was discovered in Carnauba was that much of the com-

plexity that is usually a part of the query is captured by the analyses used to generate

the predicates. Because of this transference, sophisticated ideas can be expressed in

compact forms. For many applications, this offsets the general complexity of resolving

these more complicated formulas.
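For reference, the canonical alternating pattern (not one of the dissertation's example queries) is the formula asserting that some path visits a predicate p infinitely often:

    νX. μY. ⟨−⟩((p ∧ X) ∨ Y)

The outer greatest fixed point re-arms the obligation each time p is observed, while the inner least fixed point insists that the next occurrence of p be reached in finitely many steps; it is exactly this mutual ν/μ dependence that places such formulas outside the alternation-free fragment.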

6.2 Models with Context-Sensitive Labeling

The examples presented in this dissertation assume that the unrestricted hierarchical

state machine abstraction of a program is the generalization of the program’s interpro-

cedural control-flow graph. While this is not strictly required, this correspondence is

natural for a system which draws predicates from context-sensitive analyses.

As a corollary of the definition of a context-sensitive analysis, it can be shown that

using a labeling function derived from a context-sensitive analysis does not increase

the expressiveness of the model. In fact, the definition of a context-sensitive analysis

implicitly provides a translation for producing an equivalent machine with a standard

(context-insensitive) labeling. The set of live contexts for each procedure defines clones

of that procedure. The new calling relation is then the product graph of the call graph

with the graph induced by the stack-context transformer. In practice, this model would

often be intractable, but it provides an insight on how to incorporate context-sensitive

predicates into other model checking frameworks.
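Applied to the example of Section 5.2.3, the construction is easy to visualize: display has two live contexts, α and β, so it is cloned once per context and each call site is retargeted according to the context transformer Γ. The sketch below is purely illustrative (the clone names are hypothetical, and only main is shown); it also suggests why the expansion can be intractable when procedures have many live contexts.

    /* Hypothetical context-insensitive expansion of fsv-example.c (sketch). */
    #include <stdio.h>

    int count;

    void display_alpha(char* buf);   /* context α: buf known untainted */
    void display_beta(char* buf);    /* context β: buf known tainted   */

    int main(int argc, char* argv[])
    {
        count = 0;
        while (argc > 1)
            display_beta(argv[--argc]);   /* call from s7 maps to context β */
        display_alpha("done");            /* call from s8 maps to context α */
        printf("Args: %d\n", count);
        return 0;
    }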

As described in Section 1.5, MOPS [CW02] used a PDA (equivalent in expressive-

ness to an UHSM) to model a program. The translation then affords a way to extend


the capability provided by the system to context-sensitive predicates. Since that system

is restricted to local model checking and includes independent techniques for reducing

the PDA, this may be a tractable extension. However, the tractability would likely rely

critically on the ability to project down and reduce the analysis before constructing the

expanded model representation. These operations were the subject of Section 3.1.

6.3 Directions for Future Work

Directions for future work specific to each component have been discussed through-

out this dissertation. One of the most innovative features of Carnauba is the constraint
query, which returns a set of stack-contexts encapsulated as a finite state machine. Chap-

ter 5 presented two applications of these queries that involve injecting the output of

these queries into code to make decisions at runtime that affect checkpointing perfor-

mance and policy enforcement. This approach is an excellent surrogate for procedure
cloning; a finite state machine can decide on-the-fly which copy of a procedure should

be called based on the invoking stack-context. Another application that applies cloning,

albeit somewhat tangentially, is code obfuscation.

6.3.1 Code Obfuscation

Code obfuscation [CTL97] is the process of translating a program to a semantically

equivalent program that is more difficult to reason about. It is the antithesis of code com-

prehension. The rationale for obfuscating code is to frustrate reverse engineering and

provide a measure of security against malicious users with access to the program source.


Along with layout (variable and procedure name) obfuscation that scrambles identi-

fiers and data obfuscation that generates equivalent, less intuitive expression forms, sys-

tems such as Shroud [Jae90] use control-flow obfuscation to confuse decompilers for

C. Control-flow obfuscation attempts to mask the true call-graph of the program. Fi-

nite state machines generated from constraint queries are an excellent candidate for this

as they could be used to induce branch decisions based not on current variable values,

but rather on the occurring stack-context. The result could easily be a calling relation

that would be indecipherable, as future procedure calls would depend on some prop-

erty of the sequence of previous calls. Further, as was demonstrated in Section 5.1, the

overhead of these automata is sufficiently small that they should not cause a significant

degradation in program performance.
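A minimal sketch of the idea, with hypothetical names: the injected automaton inspects only the current stack-context, and the branch it drives selects between otherwise equivalent clones, so the realized calling relation cannot be recovered without reasoning about the history of calls.

    /* Hypothetical control-flow obfuscation driven by a constraint query
       solution automaton: which clone runs depends on the sequence of calls
       that led here, not on any program data. */
    void worker_a(void);    /* semantically equivalent clones */
    void worker_b(void);

    void dispatch(void)
    {
        if (accept_stack(M_obf))   /* M_obf: automaton over the call-stack */
            worker_a();
        else
            worker_b();
    }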

6.3.2 Predicate Determination

As a final comment, while model checking software is an inherently hard problem, the

choice of source language has a significant impact on its practical complexity. Because

of numerous language features such as arbitrary pointer casting and lack of bounds

checking, ANSI C is among the more difficult languages to verify. In many cases, this

complexity comes without any arguably beneficial trade-off. Attempts have been made

to sanitize the language to ensure safety [JMG+02], but not specifically for ease of

program analysis.

For aggressive model checking to be practical it is essential that answers can be

derived in a tractable way to questions concerning the basic functionality of the pro-

gram. Addressing this concern should most likely include a combination of techniques


encompassing both language design with ease of analysis and verification in mind as

well as better mechanisms for programmers to express their semantic intent. One ap-

proach to this latter concept is the use of specification languages [BCC+03, Eif]. These

systems provide a framework for programmers to specify known properties of the code

they write. When programs are compiled as collections of modules, these specifications

are aggregated and checked for consistency. While not their intended application, these

specifications could provide valuable predicates to model checking and path analysis

systems that would be extremely costly to derive autonomously. It is the subject of fu-

ture work to determine how these specification languages can be integrated or extended

to work with model checking techniques.


BIBLIOGRAPHY

[ASU86] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman.Compilers, Principles,Techniques, and Tools. Addison-Wesley Publishing Company, 1986.

[Bar90] Joshua E. Barnes. A modified tree code: Don’t laugh, it runs. Journal ofComputational Physics 87, 161, 1990.

[BC94] Preston Briggs and Keith D. Cooper. Effective partial redundancy elimi-nation. InSIGPLAN Conference on Programming Language Design andImplementation, pages 159–170, 1994.

[BC96] Girish Bhat and Rance Cleaveland. Efficient model checking via the equa-tional mu-calculus. InEleventh Annual Symposium on Logic in ComputerScience, pages 304–312. IEEE Computer Society Press, July 1996.

[BCC+03] Lilian Burdy, Yoonsik Cheon, David Cok, Michael Ernst, Gary T. LeavensJoe Kiniry, K. Rustan M. Leino, and Erik Poll. An overview of JML toolsand applications. InEighth International Workshop on Formal Methods forIndustrial Critical Systems (FMICS ’03), pages 73–89, June 2003.

[BGR01] Michael Benedikt, Patrice Godefroid, and Thomas Reps. Model check-ing of unrestricted hierarchical state machines. InProc. of ICALP 2001,Twenty-Eighth Int. Colloq. on Automata, Languages, and Programming (toappear), Crete, Greece, July 2001.

[BMPS03] Greg Bronevetsky, Daniel Marques, Keshav Pingali, and Paul Stodghill.Automated application-level checkpointing of MPI programs. In Princi-ples and Practice of Parallel Programming, June 2003.

[BP03] Gianfranco Bilardi and Keshav Pingali. Algorithms for computing thestatic single assignment form.Journal of the ACM, 50(3):375–425, May2003.


[BPK94] Micah Beck, James S. Plank, and Gerry Kingsley. Compiler-assistedcheckpointing. Technical Report UT-CS-94-269, Dept. of Computer Sci-ence, University of Tennessee, 1994.

[BR00] T. Ball and S. Rajamani. Bebop: A symbolic model checker for booleanprograms. InProceedings of the 7th International SPIN Workshop (LectureNotes in Computer Science No. 1885), pages 113–130, Zurich, Switzerland,September 2000. Springer-Verlag.

[BR01a] Thomas Ball and Sriram K. Rajamani. Automatically validating temporalsafety properties of interfaces. Technical report, Microsoft Corportation,2001.

[BR01b] Thomas Ball and Sriram K. Rajamani. Bebop: A path-sensitive interpro-cedural dataflow engine. InProceeding of the ACM Workshop on ProgramAnalysis for Software Tools and Engineering, pages 97–103, June 2001.

[BS92] Olaf Burkart and Bernhard Steffen. Model checking for context-free pro-cesses. InInternational Conference on Concurrency Theory, pages 123–137, 1992.

[BS99] Olaf Burkart and Bernhard Steffen. Model checking the full modal mu-calculus for infinite sequential processes.Theoretical Computer Science,221(1–2):251–270, 1999.

[CC92] Patrick Cousot and Radhia Cousot. Abstract interpretation frameworks.Journal of Logic and Computation, 2(4):511–547, August 1992.

[CCF91] Jong-Deok Choi, Ron Cytron, and Jeanne Ferrante. Automatic construc-tion of sparse data flow evaluation graphs. InProceedings of the Sympo-sium on Principles of Programming Languages, pages 55–66, 1991.

[CDH+00] James Corbett, Matthe Dwyer, John Hatcliff, Corina Pasareanu, ShawnLaubach, and Hongjun Zheng. Bandera : Extracting finite-state modelsfrom Java source code. InProceedings of the 22nd International Confer-ence on Software Engineering, pages 439–448, June 2000.

[CE81] E.M. Clarke and E.A. Emerson. Design and synthesis ofsynchronizationskeletons using branching time temporal logic. InProceedings of the Work-shop on Logics of Programs, volume 131 ofLecture Notes in ComputerScience, pages 52–71, Yorktown Heights, New York, May 1981. Springer-Verlag.


[CES83] E. M. Clarke, E. A. Emerson, and A. P. Sistla. Automatic verification offinite-state concurrent systems using temporal logic specifications: A prac-tical approach. InConference Record of the 10th ACM Symposium on Prin-ciples of Programming Languages (POPL), pages 117–126, 1983.

[CFR+89] Ron Cytron, Jeanne Ferrante, Barry Rosen, Mark Wegman, and KennethZadeck. An efficient method for computing static single assignment form.In Sixteenth Annual ACM Symposium on Principles of Programming Lan-guages, pages 25–35, January 1989.

[CKS92] Rance Cleaveland, Marion Klein, and Bernhard Steffen. Faster modelchecking for the modal mu-calculus. InComputer Aided Verification, vol-ume 663 ofLecture Notes in Computer Science, pages 410–422, 1992.

[CLR01] Thomas Cormen, Charles Leiserson, and Ronald Rivest. Introduction toAlgorithms, 2nd Edition. The MIT Press, 2001.

[CR03] Stephen Chong and Radu Rugina. Static analysis of accessed regions inrecursive data structures. InStatic Analysis Symposium, pages 463–482,June 2003.

[CS91] Rance Cleaveland and Bernhard Steffen. Computing behavioral relations,logically. ICALP ’91, LNCS 510, 1991.

[CS92] Rance Cleaveland and Bernhard Steffen. A linear-time model checkingalgorithm for the alternation-free modal mu-calculus.CAV ’91, LNCS 575,pages 48–58, 1992.

[CT04] Keith Cooper and Linda Torczon.Engineering a Compiler. Morgan Kauf-mann Publishers, 2004.

[CTL97] Christian Collberg, Clark Thomborson, and DouglasLow. A taxonomy ofobfuscating transformations. Technical Report 148, University of Auck-land Department of Computer Science, July 1997.

[CVME04] David Crandall, David Vitek, Daniel Marques, and James Ezick. Integrat-ing automata derived from context-sensitive analyses intocompiled code.Cornell University Department of Computer Science Course Project Re-port for CS612: Software Design for High Performance Architectures, May2004.

[CW02] Hao Chen and David Wagner. MOPS: An infrastructure for examining se-curity properties of software. InACM Conference on Computer and Com-munications Security, pages 235–244, November 2002.


[Dam94] Mads Dam. CTL* and ECTL* as fragments of the modal mu-calculus.Theoretical Computer Science, 126(1):77–96, April 1994.

[DKAL03] Dinakar Dhurjati, Sumant Kowshik, Vikram Adve, and Chris Lattner.Memory safety without runtime checks or garbage collectionfor embeddedsystems. InProceedings of the 2003 Conference on Languages, Compilers,and Tools for Embedded Systems (LCTES’03), pages 69–80, June 2003.

[DN90] Lionel E. Deimel and J. Fernando Nevada.Reading Computer Programs:Instructor’s Guide and Exercises. Software Engineering Institute, CarnegieMellon University, August 1990. Educational Materials CMU/SEI-90-EM-3.

[EBP01] James Ezick, Gianfranco Bilardi, and Keshav Pingali. Efficient compu-tation of interprocedural control dependence. Technical Report TR2001-1850, Cornell University, September 2001.

[ECCH00] D. Engler, B. Chelf, A. Chou, and S. Hallem. Checking system rules usingsystem specific, programmer-written compiler extensions,October 2000.

[ECH+01] Dawson Engler, David Yu Chen, Seth Hallem, Andy Chou, andBenjaminChelf. Bugs as deviant behavior: A general approach to inferring errors insystems code. InSOSP (to appear), 2001.

[EH86] E.A. Emerson and J.Y. Halpern. ’sometimes’ and ’not never’ revisited:On branching versus linear tune temporal logic.Journal of the ACM,33(1):151–178, 1986.

[EHRS00] Javier Esparza, David Hansel, Peter Rossmanith, and Stefan Schwoon. Ef-ficient algorithms for model checking pushdown systems. InComputerAided Verification, pages 232–247, 2000.

[Eif] Eiffel Software. The home of Eiffel Studio and Eiffel ENViSioN. http://www.eiffel.com.

[EJ99] E.A. Emerson and C.S. Jutla. The complexity of tree automata and logicsof programs.SIAM Journal on Computation, 29(1):132–158, 1999.

[EL85] E. Allen Emerson and Chin-Laung Lei. Modalities for model checking:Branching time strikes back. InConference Record of the Twelfth AnnualACM Symposium on Principles of Programming Languages, pages 84–96,New Orleans, Louisiana, 1985.


[EL86] E. Allen Emerson and Chin-Laung Lei. Efficient model checking in frag-ments of the propositional mu-calculus. InProceedings, Symposium onLogic in Computer Science, pages 267–278, Cambridge, Massachusetts,June 1986. IEEE Computer Society.

[Eme97] E. Emerson. Model checking and the mu-calculus, 1997.

[Fla04] Cormac Flanagan. Automatic software model checking via constraintlogic. Science of Computer Programming, 50(1-3):253–270, March 2004.

[FW02] Carsten Fritz and Thomas Wilke. State space reductions for alternatingBuchi automata: Quotienting by simulation equivalences.In Foundationsof Software Technology and Theoretical Computer Science: 22th Confer-ence, volume 2556 ofLecture Notes in Computer Science, pages 157–169,Kanpur, India, 2002.

[Gal99] Erick Gallesio.STk Reference Manual, Version 4.0. Universite de Nice -Antipolis, Laboratorie I3S - CNRS URA 1376 - ESSI, Route des Colles,B.P. 145 06903 Sophia-Antipolis Cedex - FRANCE, September 1999.

[GDDC97] David Grove, Greg DeFouw, Jeffrey Dean, and Craig Chambers. Callgraph construction in object-oriented languages. InConference on Object-Oriented Programming, Systems, Languages, and Applications, pages108–124, 1997.

[GJ79] Michael Garey and David S. Johnson.Computers and Intractability, AGuide to the Theory of NP-Completeness. W. H. Freeman and Company,New York, 1979.

[GO01] Paul Gastin and Denis Oddoux. Fast LTL to Buchi automata translation. InLecture Notes in Computer Science, volume 2102, pages 53–65. Springer,2001.

[God97] Patrice Godefroid. Model checking for programminglanguages usingVerisoft. In Symposium on Principles of Programming Languages, pages174–186, 1997.

[GPVW95] Rob Gerth, Doron Peled, Moshe Y. Vardi, and Pierre Wolper. Simple on-the-fly automatic verification of linear temporal logic. InProtocol Specifi-cation Testing and Verification, pages 3–18, Warsaw, Poland, 1995. Chap-man & Hall.


[Gra] GrammaTech, Inc. CodeSurfer product site. http://www.grammatech.com/products/codesurfer/.

[Hal91] Mary Hall. Managing Interprocedural Optimization. Ph.D. dissertation,Rice University, 1991.

[HJMM04] Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar, and Kenneth L.McMillan. Abstractions from proofs. InThirty-first ACM Symposium onPriniples of Programming Languages, pages 232–244, January 2004.

[HKT00] David Harel, Dexter Kozen, and Jerzy Tiuryn.Dynamic Logic. The MITPress, 2000.

[HMS03] Kevin W. Hamlen, Greg Morrisett, and Fred B. Schneider. Computabil-ity classes for enforcement mechanisms. Technical Report TR2003-1908,Cornell University, August 2003.

[HP96] Gerald J. Holzmann and Doron Pelad. The state of spin.In 8th Inter-national Conference on Computer Aided Verification, LNCS 1102, pages385–389. Springer, 1996.

[HRB90] Susan Horwitz, Thomas Reps, and David Binkley. Interprocedural slicingusing dependence graphs.Transactions on Programming Languages andSystems, 12(1):22–60, January 1990.

[Huf54] D.A. Huffman. The synthesis of sequential switching circuits. Journal ofthe Franklin Institute, 257:161–190, 275–303, 1954.

[IBM] IBM Research. Blue Gene project overview. http://www.research.ibm.com/bluegene/.

[IOC] IOCCC. The international obfuscated C code contest.http://www.ioccc.org/.

[ISS] ISS Group. Intelligent software systems group website. http://iss.cs.cornell.edu/.

[Jae90] Rex Jaeschke. Encrypting C source for distribution. Journal of C Lan-guage Translation, 2(1), 1990.

[JL96] Richard Jones and Rafael Lins.Garbage Collection: Algorithms for Auto-matic Dynamic Memory Management. John Wiley & Sons, 1996.


[JMG+02] Trevor Jim, Greg Morrisett, Dan Grossman, Michael Hicks, James Cheney,and Yanling Wang. Cyclone: A safe dialect of C. USENIX AnnualTech-nical Conference, June 2002.

[JP93] Richard Johnson and Keshav Pingali. Dependence-based program analy-sis. InACM SIGPLAN Conference on Programming Language Design andImplementation (PLDI), pages 78–89, June 1993.

[KCE98] Richard Kelsey, William Clinger, and Jonathan Rees(Editors). Revised5

report on the algorithmic language Scheme.ACM SIGPLAN Notices,33(9):26–76, 1998.

[KDA02] Sumant Kowshik, Dinakar Dhurjati, and Vikram Adve.Ensuring codesafety without runtime checks for real time control systems. In Proceed-ings of the International Conference on Compilers, Architectures and Syn-thesis for Embedded Systems, CASES 2002, pages 288–297, October 2002.

[Ken81] Ken Kennedy. A survey of data flow analysis techniques. In Steven Much-nick and Neil D. Jones, editors,Program Flow Analysis: Theory and Ap-plications, chapter 1, pages 5–54. Prentice-Hall, 1981.

[Kil73] Gary Kildall. A unified approach to global program optimization. InFirstACM Symposium on Principles of Programming Languages, pages 194–206, Boston, Massachusetts, January 1973.

[Kno99] Jens Knoop. Demand-driven model checking for context-free processes.In ASIAN ’99, LNCS 1742, pages 201–213, 1999.

[Koz83] Dexter Kozen. Results on the propositionalµ-calculus.Theoretical Com-puter Science (TCS), 27:333–354, 1983.

[KR88] Brian W. Kernighan and Dennis W. Ritchie.The C Programming Lan-guage, 2nd Edition. Prentice-Hall, 1988.

[LA03] Chris Lattner and Vikram Adve. Data structure analysis: A fast and scal-able context-sensitive heap analysis. Technical Report UIUCDCS-R-2003-2340, University of Illinois at Urbana-Champaign, April 2003.

[LF90] Chung-Chi Jim Li and W. Kent Fuchs. CATCH: Compiler-assisted tech-niques for checkpointing. In20th International Symposium of Fault Toler-ant Computing, pages 74–81, June 1990.


[LM00] J. L. Lawall and G. Muller. Efficient incremental checkpointing of Javaprograms. InProceedings of the International Conference on DependableSystems and Networks, pages 61–70, New York, NY, USA, 2000. IEEE.

[LTBL97] Michael Litzkow, Todd Tannenbaum, Jim Basney, andMiron Livny.Checkpoint and migration of UNIX processes in the Condor distributedprocessing system. Technical Report 1346, University of Wisconsin-Madison, 1997.

[Mad97] Angelika Mader.Verification of Modal Properties Using Boolean EquationSystems. Dieter Bertz Verlag, Berlin, 8th edition, 1997.

[McK65] W.M. McKeeman. Peephole optimization.Communications of the ACM,8(7):443–444, July 1965.

[MR81] Etienne Morel and Claude Renvoise. Interproceduralelimination of partialredundancies. In Steven Muchnick and Neil Jones, editors,Program FlowAnalysis: Theory and Applications, chapter 7, pages 160–188. Prentice-Hall, 1981.

[MSS99] Markus Muller-Olm, David Schmidt, and Bernhard Steffen. Model check-ing: A tutorial introduction. InProceedings of the Static Analysis Sympo-sium. Springer, September 1999.

[Muc97] Steven Muchnick.Advanced Compiler Design and Implementation. Mor-gan Kaufmann Publishers, 1997.

[NAS] NASA. Advanced supercomputing website.http://www.nas.nasa.gov/.

[Nat] National Nuclear Security Administration. ASCI home. http://www.nnsa.doe.gov/asc/.

[New00] Tim Newsham. Format string attacks.http://www.guardent.com/docs/FormatString.pdf, September 2000. Guardent, Inc.

[Pod00] Andreas Podelski. Model checking as constraint solving. InSAS’00, pages22–37, 2000.

[PTVF92] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P.Flannery.Numerical Recipies in C. Cambridge University Press, 2nd edi-tion, 1992.


[RC03] Radu Rugina and Sigmund Cherem. Region analysis for imperative lan-guages. Technical Report TR2003-1914, Cornell University, October2003.

[RFT99] G. Ramalingam, John Field, and Frank Tip. Aggregatestructure identifica-tion and its application to program analysis. InSymposium on Principlesof Programming Languages, pages 119–132, 1999.

[RHS95] Thomas Reps, Susan Horwitz, and Mooly Sagiv. Precise interprocedural dataflow analysis via graph reachability. In Conference Record of POPL ’95: 22nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 49–61, San Francisco, California, January 1995.

[RP86] Barbara Ryder and Marvin Paull. Elimination algorithms for data flow analysis. ACM Computing Surveys, 18(3):277–316, September 1986.

[RSJ03] Thomas W. Reps, Stefan Schwoon, and Somesh Jha. Weighted pushdown systems and their application to interprocedural dataflow analysis. In 10th International Symposium on Static Analysis, pages 189–213, June 2003.

[SB00] Fabio Somenzi and Roderick Bloem. Efficient Büchi automata from LTL formulae. In Lecture Notes in Computer Science, volume 1633, pages 247–263. Springer, 2000.

[Sch00] Fred B. Schneider. Enforceable security policies. Information and System Security, 3(1):30–50, 2000.

[Sei98] Helmut Seidl. Model-checking for L2. Technical report, FB IV Informatik, Universität Trier, January 1998.

[SGL96] Vugranam C. Sreedhar, Guang R. Gao, and Yong-Fong Lee. A new framework for exhaustive and incremental data flow analysis using DJ graphs. In SIGPLAN Conference on Programming Language Design and Implementation, pages 278–290, 1996.

[SH97] Marc Shapiro and Susan Horwitz. Fast and accurate flow-insensitive points-to analysis. In Symposium on Principles of Programming Languages, pages 1–14, 1997.

[SN98] Helmut Seidl and Damian Niwinski. On distributive fixed-point expressions. Technical report, FB IV Informatik, Universität Trier, November 1998.

[SP81] Micha Sharir and Amir Pnueli. Two approaches to interprocedural data flow analysis. In Steven Muchnick and Neil Jones, editors, Program Flow Analysis: Theory and Applications, chapter 7, pages 189–233. Prentice-Hall, 1981.

[SPE] SPEC. Standard performance evaluation corporation website. http://www.spec.org/.

[SS98] David Schmidt and Bernhard Steffen. Program analysis as model checking of abstract interpretations. In SAS’98, pages 351–380, 1998.

[Ste90] Guy L. Steele Jr. Common Lisp: the Language, 2nd Edition. Digital Press, 12 Crosby Drive, Bedford, MA 01730 USA, 1990.

[Ste91] Bernhard Steffen. Data flow analysis as model checking. In TACS’91, pages 346–364, 1991.

[STFW01] Umesh Shankar, Kunal Talwar, Jeffrey S. Foster, and David Wagner. Detecting format string vulnerabilities with type qualifiers. In 10th USENIX Security Symposium, pages 201–220, 2001.

[Str81] Robert S. Streett. Propositional dynamic logic of looping and converse. In Conference Proceedings of the Thirteenth Annual ACM Symposium on Theory of Computing, pages 375–383, Milwaukee, Wisconsin, May 1981.

[SW91] Colin Stirling and David Walker. Local model checking in the modal mu-calculus. In 2nd International Joint Conference on Theory and Practice of Software Development, pages 161–177, 1991.

[SY00] Nikolay Shilov and Kwangkeun Yi. A new proof of exponential decidability for propositional mu-calculus with program converse. In III International Conference on Theoretical Aspects of Computer Science, Novi Sad, Yugoslavia, September 2000.

[Tar55] A. Tarski. A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of Mathematics, 5:285–309, 1955.

[The02] The Blue Gene Team. An overview of the Blue Gene supercomputer. In SC 2002 High Performance Networking and Computing, 2002.

[Tho90] Wolfgang Thomas. Automata on infinite objects. In Jan van Leeuwen, editor, Handbook of Theoretical Computer Science, volume B, pages 133–191. The MIT Press, 1990.

[Tur36] Alan M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 2(42):230–256, 1936. A correction, ibid., 43, pp. 544–546.

[Var98] M. Y. Vardi. Reasoning about the past with two-way automata. In Lecture Notes in Computer Science, volume 1443, pages 628–641. Springer, 1998.

[VW94] Moshe Y. Vardi and Pierre Wolper. Reasoning about infinite computations. Information and Computation, 115(1):1–37, November 1994.

[WAF00] Dan S. Wallach, Andrew Appel, and Edward W. Felten. SAFKASI: A security mechanism for language-based systems. ACM Transactions on Software Engineering and Methodology, 9(4):341–378, October 2000.

[Wei80] William E. Weihl. Interprocedural data flow analysis in the presence of pointers, procedure variables, and label variables. In Conference Record of POPL ’80: 7th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 83–94, Las Vegas, Nevada, January 1980.

[WF98] Dan S. Wallach and Edward W. Felten. Understanding Java stack inspection. In IEEE Symposium on Security and Privacy, pages 52–65, May 1998.

[WL95] Robert P. Wilson and Monica S. Lam. Efficient context-sensitive pointer analysis for C programs. In SIGPLAN Conference on Programming Language Design and Implementation, pages 1–, 1995.

[WZ91] Mark N. Wegman and Kenneth Zadeck. Constant propagation with conditional branches. Transactions on Programming Languages and Systems, 13(2):181–210, April 1991.

[YA98] M. Yannakakis and R. Alur. Model checking of hierarchical state machines. In Proc. 6th ACM Symp. on Foundations of Software Engineering, 1998.