Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned. Lionel Briand, IEEE Fellow, FNR PEARL Chair, Interdisciplinary Centre for ICT Security, Reliability, and Trust (SnT), University of Luxembourg, Luxembourg. SAM, Valencia, 2014


Keynote address SAM 2014

Transcript of Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Page 1: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Lionel Briand, IEEE Fellow, FNR PEARL Chair, Interdisciplinary Centre for ICT Security, Reliability, and Trust (SnT), University of Luxembourg, Luxembourg. SAM, Valencia, 2014

Page 2: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

SnT Software Verification and Validation Lab

•  SnT centre, established 2009: interdisciplinary research on ICT security, reliability, and trust

•  230 scientists and Ph.D. candidates, 20 industry partners

•  SVV Lab: Established January 2012, www.svv.lu

•  25 scientists (Research scientists, associates, and PhD candidates)

•  Industry-relevant research on system dependability: security, safety, reliability

•  Six partners: Cetrel, CTIE, Delphi, SES, IEE, Hitec …

2

Page 3: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

An Effective, Collaborative Model of Research and Innovation

[Figure: three interlinked activities: Basic Research, Applied Research, Innovation & Development]

•  Basic and applied research take place in a rich context

•  Basic Research is also driven by problems raised by applied research, which is itself fed by innovation and development

•  Publishable research results and focused practical solutions that serve an existing market

Schneiderman, 2013

Page 4: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Collaboration in Practice

•  Well-defined problems in context

•  Realistic evaluation

•  Long-term industrial collaborations

4

[Figure: collaboration cycle between Industry Partners and Research Groups, with eight numbered steps including Problem Identification, Problem Formulation, State of the Art Review, Candidate Solution(s), Initial Validation, Training, Realistic Validation, and Solution Release]

Page 5: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Motivations

•  The term “verification” is used in its wider sense: Defect detection and removal

•  One important application of models is to drive and automate verification

•  Despite significant advances in model-based testing, it is not yet commonly part of practice

•  Decades of research have not yet significantly and widely impacted practice

5

Page 6: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

6

Applicability? Scalability?

Page 7: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Definitions

•  Applicable: Can a technology be efficiently and effectively applied by engineers in realistic conditions?
   –  realistic ≠ universal
   –  includes usability

•  Scalable: Can a technology be applied on large artifacts (e.g., models, data sets, input spaces) and still provide useful support within reasonable effort, CPU and memory resources?

7

Page 8: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Outline

•  Project examples, with industry collaborations

•  Lessons learned regarding developing applicable and scalable solutions (our research paradigm)

•  Meant to be an interactive talk – I am also here to learn

8

Page 9: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Some Past Projects (< 5 years)

9

Company | Domain | Objective | Notation | Automation
Cisco | Video conference | Robustness testing | UML profile | Search, model transformation
Kongsberg Maritime | Oil & gas | CPU usage | UML + MARTE | Constraint solving
WesternGeco | Marine seismic acquisition | Functional testing | UML profile + MARTE | Search, constraint solving
SES | Satellite | Functional and robustness testing, requirements QA | UML profile | Search, model mutation, NLP
Delphi | Automotive systems | Testing safety + performance | Matlab/Simulink | Search, machine learning, statistics
CTIE | Legal & financial | Legal requirements testing | UML profile | Model transformation, constraint checking
HITEC | Crisis support systems | Security, access control | UML profile | Constraint verification, machine learning, search
CTIE | eGovernment | Conformance testing | UML profile, BPMN, OCL extension | Domain-specific language, constraint checking
IEE | Automotive, sensor systems | Functional and robustness testing, traceability and certification | UML profile, use case modeling extension, Matlab/Simulink | NLP, constraint solving

Page 10: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Testing Closed-Loop Controllers

References:

10

•  R. Matinnejad et al., “MiL Testing of Highly Configurable Continuous Controllers: Scalable Search Using Surrogate Models”, IEEE/ACM ASE 2014

•  R. Matinnejad et al., “Search-Based Automated Testing of Continuous Controllers: Framework, Tool Support, and Case Studies”, Information and Software Technology (2014)

Page 11: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Dynamic continuous controllers are present in many embedded systems

11

Page 12: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Development Process (Delphi)

12

Hardware-in-the-Loop Stage

Model-in-the-Loop Stage

Simulink Modeling

Generic Functional

Model

MiL Testing

Software-in-the-Loop Stage

Code Generationand Integration

Software Running on ECU

SiL Testing

SoftwareRelease

HiL Testing

Page 13: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Controllers at MIL

13

[Figure: closed-loop configuration at MiL: a PID controller connected to a plant model. The error $e(t) = desired(t) - actual(t)$ feeds the proportional (P), integral (I), and derivative (D) terms, which are summed into the controller output fed to the plant:]

$output(t) = K_P\, e(t) + K_I \int e(t)\, dt + K_D\, \frac{d e(t)}{dt}$

Inputs: time-dependent variables (e.g., desired(t)); configuration parameters (e.g., the gains $K_P$, $K_I$, $K_D$)
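For illustration, here is a minimal Python sketch of this control law driving a toy first-order plant; the plant dynamics, gains, and step counts are illustrative assumptions, not the actual Delphi models.

```python
# Minimal sketch of the PID control law above, driving a toy first-order plant.
# All gains and the plant model are illustrative assumptions.

def simulate_pid(desired, kp, ki, kd, steps=500, dt=0.01):
    actual = 0.0            # actual(t), the plant output
    integral = 0.0          # running integral of e(t)
    prev_error = desired - actual
    trace = []
    for _ in range(steps):
        error = desired - actual                  # e(t) = desired(t) - actual(t)
        integral += error * dt                    # I term: integral of e(t) dt
        derivative = (error - prev_error) / dt    # D term: de(t)/dt
        output = kp * error + ki * integral + kd * derivative
        # Toy first-order plant: the actual value drifts toward the controller output.
        actual += (output - actual) * dt
        prev_error = error
        trace.append(actual)
    return trace

if __name__ == "__main__":
    trace = simulate_pid(desired=1.0, kp=2.0, ki=1.0, kd=0.1)
    print(f"final actual value: {trace[-1]:.3f}")
```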

Page 14: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Inputs, Outputs, Test Objectives

14

[Figure: step input from an Initial Desired (ID) value to a Final Desired (FD) value at time T/2, over a simulation of length T; the Desired Value (input) is plotted against the Actual Value (output)]

Test objectives:

•  Smoothness

•  Responsiveness

•  Stability

Page 15: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Process and Technology

15

[Figure: two-step process. Step 1, Exploration: using the controller-plant model and objective functions based on the requirements, produce a HeatMap diagram over the Initial Desired (ID) / Final Desired (FD) space and a list of critical regions, reviewed by a domain expert. Step 2, Single-State Search: search within the critical regions for worst-case scenarios. Tool: https://sites.google.com/site/cocotesttool/]

Page 16: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Process and Technology (2)

16

[Figure: Step 1, Exploration: the controller-plant model and objective functions based on the requirements produce HeatMap diagrams, e.g., (a) liveness and (b) smoothness, and a list of critical regions for the domain expert]

Page 17: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Process and Technology (3)

17

[Figure: Step 2, Single-State Search: starting from the list of critical regions reviewed by the domain expert, the search produces worst-case scenarios; the plot shows the desired value (a step from the initial desired to the final desired value) and the actual value over time for one such scenario]

Page 18: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Challenges, Solutions

•  Achieving scalability with configuration parameters:
   –  Simulink simulations are expensive
   –  Sensitivity analysis to eliminate irrelevant parameters
   –  Machine learning (regression trees) to partition the space automatically and identify high-risk areas
   –  Surrogate modeling (statistical and machine learning prediction) to predict properties and avoid simulation, when possible (see the sketch below)
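Below is a minimal Python sketch of the surrogate-modeling idea, assuming scikit-learn's DecisionTreeRegressor as the learner and a stand-in function in place of an expensive Simulink run; configuration names, values, and the sampling strategy are illustrative, not the actual tool.

```python
# Sketch of surrogate modeling over the configuration space: fit a regression
# tree on a sample of (configuration -> objective) pairs, then use its cheap
# predictions to decide which configurations deserve an expensive simulation.
import random
from sklearn.tree import DecisionTreeRegressor

def expensive_simulation(config):
    # Placeholder for a full MiL simulation returning an objective value,
    # e.g., the maximum deviation of actual(t) from desired(t).
    k_p, k_i = config
    return abs(k_p - 0.7) + abs(k_i - 0.2)

# 1. Run the real (expensive) simulations on a modest sample of configurations.
sample = [(random.random(), random.random()) for _ in range(200)]
objectives = [expensive_simulation(c) for c in sample]

# 2. Train the surrogate; a regression tree also yields interpretable regions.
surrogate = DecisionTreeRegressor(max_depth=5).fit(sample, objectives)

# 3. Rank many candidates with the surrogate and only simulate the riskiest one,
#    i.e., the one with the highest predicted deviation.
candidates = [(random.random(), random.random()) for _ in range(10_000)]
scores = surrogate.predict(candidates)
riskiest = max(zip(scores, candidates))[1]
print("configuration worth simulating:", riskiest,
      "-> actual objective:", expensive_simulation(riskiest))
```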

18

Page 19: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Results

•  Automotive controllers running on Electronic Control Units (ECUs)

•  Our approach enabled our partner to identify, fully automatically, worst-case scenarios that were much worse than previously known and expected scenarios

19

Page 20: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Fault Localisation in Simulink Models

Reference:

20

•  Bing Liu et al., “Kanvoo: Fault Localization in Simulink Models”, submitted

Page 21: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Context and Problem

21

•  Simulink models
   –  are complex
      •  hundreds of blocks and lines
      •  many hierarchy levels
      •  continuous functions
   –  might be faulty
      •  output signals do not match the expected ones
      •  wrong connections of lines
      •  wrong operators in blocks

•  Debugging Simulink models is
   –  difficult
   –  time-consuming
   –  yet crucial

•  Automated techniques to support debugging?


Page 23: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Solution Overview

23

[Figure: fault localization pipeline. Inputs: a Simulink model, a test suite, and a specification used as test oracle. Steps: Test Case Generation (any test strategy) → Test Case Execution, producing PASS/FAIL results and coverage reports (provided by the Matlab tool) → Slicing (one model slice for each test case and output) → Ranking, yielding ranked blocks with suspiciousness scores, e.g., 0.95, 0.71, 0.62, 0.43 (for each test case and output, or overall)]
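To make the ranking step concrete, here is a minimal Python sketch that scores blocks from per-test slices and pass/fail verdicts; it uses the well-known Tarantula formula as a stand-in, and the block names are made up, so the actual ranking heuristic used in the approach may differ.

```python
# Illustrative sketch of the ranking step: given, for each test case, the set of
# Simulink blocks in its slice and the pass/fail verdict, compute a
# suspiciousness score per block (Tarantula formula used as a stand-in).

def rank_blocks(slices, verdicts):
    """slices: list of sets of block ids (one per test case);
       verdicts: parallel list of 'pass'/'fail'."""
    total_fail = sum(1 for v in verdicts if v == "fail")
    total_pass = len(verdicts) - total_fail
    scores = {}
    for block in set().union(*slices):
        in_fail = sum(1 for s, v in zip(slices, verdicts) if block in s and v == "fail")
        in_pass = sum(1 for s, v in zip(slices, verdicts) if block in s and v == "pass")
        fail_ratio = in_fail / total_fail if total_fail else 0.0
        pass_ratio = in_pass / total_pass if total_pass else 0.0
        denom = fail_ratio + pass_ratio
        scores[block] = fail_ratio / denom if denom else 0.0
    # Highest score first: blocks most correlated with failing test cases.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = rank_blocks(
    slices=[{"Sum1", "Gain2"}, {"Gain2", "Switch3"}, {"Sum1", "Delay4"}],
    verdicts=["fail", "fail", "pass"],
)
print(ranked)  # e.g. [('Gain2', 1.0), ...]
```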

Page 24: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Evaluation and Challenges

•  Good accuracy overall: 5–6% of blocks must be inspected on average to detect faults

•  But less accurate predictions for certain faults: low observability

•  Possible solution: augment the test oracle (observability)
   –  Use subsystem outputs
   –  Iterate at deeper levels of the hierarchy
   –  Trade-off: cost of test oracle vs. debugging effort
   –  2.3% of blocks on average

•  5–6%: still too many blocks for certain models

•  Information requirements to help further filter blocks?

24

Page 25: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Modeling and Verifying Legal Requirements

Reference:

25

•  G. Soltana et al., “UML for Modeling Procedural Legal Rules”, IEEE/ACM MODELS 2014

•  M. Adedjouma et al., “Automated Detection and Resolution of Legal Cross References”, RE 2014

Page 26: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Context and Problem

26

•  CTIE: Government computer centre in Luxembourg

•  Large government (information) systems

•  Implement legal requirements, must comply with the law

•  The law usually leaves room for interpretation, changes on a regular basis, and contains many cross-references

•  Involves many stakeholders, IT specialists but also legal experts, etc.

Page 27: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Article Example

Art. 105bis […] The commuting expenses deduction (FD) is defined as a function over the distance between the principal town of the municipality on whose territory the taxpayer's home is located and the taxpayer's place of work. The distance is measured in units of distance expressing the kilometric distance between [principal] towns. A ministerial regulation provides these distances. The amount of the deduction is calculated as follows: If the distance exceeds 4 units but is less than 30 units, the deduction is € 99 per unit of distance. The first 4 units do not trigger any deduction, and the deduction for a distance exceeding 30 units is limited to € 2,574.
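As an illustration, one possible executable reading of this provision is sketched below in Python; it is only one interpretation of the rule, which is exactly the kind of interpretation the models aim to make explicit.

```python
# One possible executable interpretation of the Art. 105bis rule quoted above
# (illustrative only; the interpretation modeled in the project may differ).

def commuting_deduction(distance_units: int) -> float:
    """Commuting expenses deduction (FD) in EUR, given the distance in units."""
    if distance_units <= 4:
        return 0.0                          # first 4 units: no deduction
    if distance_units <= 30:
        return 99.0 * (distance_units - 4)  # EUR 99 per unit beyond the first 4
    return 2574.0                           # capped for distances above 30 units

assert commuting_deduction(4) == 0.0
assert commuting_deduction(30) == 99.0 * 26   # == 2574, consistent with the cap
assert commuting_deduction(45) == 2574.0
```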

Page 28: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Project Objectives

28

Objective | Benefits
Specification of legal requirements, including rationale and traceability to the text of law | Make interpretation of the law explicit; improve communication; prerequisite for automation
Checking consistency of legal requirements | Prevent errors in the interpretation of the law from propagating
Automated test strategies for checking system compliance to legal requirements; run-time verification mechanisms to check compliance with legal requirements | Provide effective and scalable ways to verify compliance
Analyzing the impact of changes in the law | Decrease costs and risks associated with change; make change more predictable

Page 29: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Solution Overview

29

[Figure: an analyzable interpretation of the law generates test cases; traceability links connect the interpretation, the test cases, and the actual software system; the results of running the test cases on the system are compared with the expected results (“Results match?”), and the interpretation is also used to simulate the impact of legal changes]

Page 30: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Research Steps

30

1. Conduct grounded theory study
   •  What information content should we expect?
   •  What are the complexity factors?

2. Build UML profile
   •  Explicit means for capturing information requirements
   •  Basis for modeling methodology
   •  Target: legal experts and IT specialists

3. Model transformation to enable V&V
   •  Target existing automation techniques
   •  Solvers for testing
   •  MATLAB for simulation

Page 31: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Example

31

Art. 105bis […]The commuting expenses deduction (FD) is defined as a function over the distance between the principal town of the municipality on whose territory the taxpayer's home is located and the place of taxpayer’s work. The distance is measured in units of distance expressing the kilometric distance between [principal] towns. A ministerial regulation provides these distances.

Interpretation + Traces

Page 32: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Example

32

The amount of the deduction is calculated as follows: If the distance exceeds 4 units but is less than 30 units, the deduction is € 99 per unit of distance. The first 4 units does not trigger any deduction and the deduction for a distance exceeding 30 units is limited to € 2,574.

Interpretation + Traces

Page 33: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Challenges and Results

•  The profile must lead to models that are:
   –  understandable by both IT specialists and legal experts
   –  precise enough to enable model transformation and support our objectives

•  Tutorials and many modeling sessions with legal experts

•  Although in theory such legal requirements could be captured by OCL constraints alone, this is not applicable in practice

•  That is why we resorted to customized activity modeling, carefully combined with a simple subset of OCL

•  Many traces to law articles and dependencies among articles: automated detection (NLP) of cross-references (see the sketch below)
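As a rough illustration of cross-reference detection, the sketch below uses a single regular expression; the actual NLP-based approach (Adedjouma et al., RE 2014) handles far more reference forms and resolves them to their target provisions. The pattern and sample text are illustrative assumptions.

```python
# Minimal sketch of cross-reference detection in legal text; a simple regular
# expression stands in for the NLP pipeline of the actual approach.
import re

CROSS_REF = re.compile(r"\b(?:Art(?:icle)?\.?\s*)(\d+[a-z]*)", re.IGNORECASE)

text = ("Art. 105bis [...] The commuting expenses deduction (FD) is defined "
        "as stated in article 105 and detailed by a ministerial regulation.")

for match in CROSS_REF.finditer(text):
    print("reference to provision:", match.group(1))
# -> reference to provision: 105bis
# -> reference to provision: 105
```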

Page 34: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Run-Time Verification of Business Processes

References:

34

•  W. Dou et al., “OCLR: a More Expressive, Pattern-based Temporal Extension of OCL”, ECMFA 2014

•  W. Dou et al., “Revisiting Model-Driven Engineering for Run-Time Verification of Business Processes”, IEEE/ACM SAM 2014

•  W. Dou et al., “A Model-Driven Approach to Offline Trace Checking of Temporal Properties with OCL”, submitted

Page 35: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Context and Problem

•  CTIE: Government Computing Centre of Luxembourg

•  E-government systems mostly implemented as business processes

•  CTIE models these business processes

•  Business process models have temporal properties that must be checked
   –  Temporal logics not applicable
   –  Limited tool support (scalability)

•  Goal: efficient, scalable, and practical offline and run-time verification

Page 36: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Solution Overview

36

Page 37: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Solution Overview

37

•  We identified patterns based on analyzing many properties of real business process models

•  Properties must be defined based on business process models (BPMN), according to the modeling methodology at CTIE (applicability)

•  The goal was to achieve usability

•  Early adoption by our partner

Page 38: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Solution Overview

38

•  We want to transform the checking of temporal constraints into the checking of regular constraints over a trace conceptual model

•  OCL engines (Eclipse) are our target, to rely on mature technology (scalability)

•  We defined an extension of OCL to facilitate the translation

•  Target: IT specialists, BPM analysts

Page 39: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Scalability Analysis

•  Analyzed 47 properties in Identity Card Management System •  “Once a card request is approved, the applicant is notified within

three days; this notification has to occur before the production of the card is started.”

•  Scalability: Check time as a function of trace size …
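For illustration, a minimal Python sketch of checking this example property offline on an event trace is shown below; in the actual approach the property is expressed in the OCL-based temporal extension and checked by a standard OCL engine over a trace model, so the event names and plain-Python logic here are assumptions.

```python
# Sketch of offline checking of the quoted property on an event trace:
# "once a card request is approved, the applicant is notified within three days;
#  this notification has to occur before the production of the card is started."
from datetime import datetime, timedelta

def check_property(trace):
    """trace: list of (timestamp, event_name) tuples, ordered by time."""
    for i, (t_approved, event) in enumerate(trace):
        if event != "card_request_approved":
            continue
        # First notification and first production start after the approval.
        notified = next((t for t, e in trace[i+1:] if e == "applicant_notified"), None)
        produced = next((t for t, e in trace[i+1:] if e == "card_production_started"), None)
        if notified is None or notified - t_approved > timedelta(days=3):
            return False                      # not notified, or notified too late
        if produced is not None and produced <= notified:
            return False                      # production started before notification
    return True

trace = [
    (datetime(2014, 9, 1), "card_request_approved"),
    (datetime(2014, 9, 2), "applicant_notified"),
    (datetime(2014, 9, 5), "card_production_started"),
]
print(check_property(trace))  # True
```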

39

[Figure: Fig. 3 from the referenced trace-checking paper: average check time (s) versus trace size (k) for properties with the globally scope (panels (a) P7–P9 and (b) P10–P12), and versus boundary position (k) for properties with the before scope (panel (c) P17–P20)]


Page 40: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Schedulability Analysis and Stress Testing

References:

40

•  S. Nejati, S. Di Alesio, M. Sabetzadeh, and L. Briand, “Modeling and Analysis of CPU Usage in Safety-Critical Embedded Systems to Support Stress Testing”, IEEE/ACM MODELS 2012

•  S. Di Alesio, S. Nejati, L. Briand, A. Gotlieb, “Stress Testing of Task Deadlines: A Constraint Programming Approach”, ISSRE 2013, San Jose, USA

•  S. Di Alesio, S. Nejati, L. Briand, A. Gotlieb, “Worst-Case Scheduling of Software Tasks – A Constraint Optimization Model to Support Performance Testing”, Constraint Programming (CP), 2014

Page 41: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Problem

•  Real-time, concurrent systems (RTCS) have concurrent interdependent tasks which have to finish before their deadlines

•  Some task properties depend on the environment, some are design choices

•  Tasks can trigger other tasks, and can share computational resources with other tasks

•  Schedulability analysis encompasses techniques that try to predict whether all (critical) tasks are schedulable, i.e., meet their deadlines

•  Stress testing runs carefully selected test cases that have a high probability of leading to deadline misses

•  Testing in RTCS is typically expensive, e.g., hardware in the loop

41

Page 42: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Arrival Times Determine Deadline Misses

42

[Figure: two timelines from 0 to 9 showing tasks j0, j1, j2 with arrival times at0, at1, at2 and deadlines dl0, dl1, dl2]

•  j0, j1, j2 arrive at at0, at1, at2 and must finish before dl0, dl1, dl2

•  j1 can miss its deadline dl1 depending on when at2 occurs!

Page 43: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Context

43

•  Monitor gas leaks and fire in oil extraction platforms

[Figure: system architecture: control modules and drivers (software-hardware interface) connected to alarm devices (hardware), running on a real-time operating system over a multicore architecture]

Page 44: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Challenges and Solutions

•  Ranges for arrival times form a very large input space

•  Task interdependencies and properties constrain what parts of the space are feasible

•  We re-expressed the problem as a constraint optimisation problem

•  Constraint programming

44

Page 45: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Constraint Optimization

45

Constraint Optimization Problem (a toy sketch follows below):

•  Static properties of tasks → constants

•  Dynamic properties of tasks → variables

•  Performance requirement → objective function

•  OS scheduler behaviour → constraints
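As a toy illustration of this formulation, the Python sketch below searches over task arrival times (the variables) for a combination that maximizes lateness (the objective) under a simplified non-preemptive fixed-priority scheduler standing in for the constrained scheduler behaviour; the task set, scheduler, and random search are illustrative stand-ins for the actual constraint programming encoding.

```python
# Toy illustration: find task arrival times that maximize lateness w.r.t. deadlines.
import random

# Static task properties (constants): duration, relative deadline, priority
# (lower number = higher priority). Values are made up for illustration.
TASKS = {"j0": (2, 5, 0), "j1": (3, 6, 1), "j2": (4, 9, 2)}

def max_lateness(arrivals):
    """Simulate non-preemptive fixed-priority scheduling on one core and
    return the maximum lateness (completion time minus absolute deadline)."""
    time, lateness, pending = 0, float("-inf"), dict(arrivals)
    while pending:
        ready = [t for t, a in pending.items() if a <= time]
        if not ready:
            time = min(pending.values())      # idle until the next arrival
            continue
        task = min(ready, key=lambda t: TASKS[t][2])   # highest-priority ready task
        duration, deadline, _ = TASKS[task]
        time += duration
        lateness = max(lateness, time - (pending[task] + deadline))
        del pending[task]
    return lateness

# Dynamic properties (variables): arrival times, each within an allowed range.
best = max(
    ({t: random.randint(0, 5) for t in TASKS} for _ in range(20_000)),
    key=max_lateness,
)
print("arrival times most likely to cause a deadline miss:", best,
      "max lateness:", max_lateness(best))
```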

Page 46: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Process and Technologies

46

[Figure: INPUT: the system design and platform are captured in a design model (time and concurrency information) through UML modeling (e.g., MARTE); constraint optimization formulates the optimization problem (find arrival times that maximize the chance of deadline misses), solved with constraint programming (CP); OUTPUT: solutions (task arrival times likely to lead to deadline misses), used for deadline miss analysis and as stress test cases]

Page 47: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Challenges and Solutions (2)

•  Scalability problem: constraint programming (e.g., IBM CPLEX) cannot handle such large input spaces (CPU, memory)

•  Solution: combine metaheuristic search and constraint programming
   –  metaheuristic search identifies high-risk regions in the input space
   –  constraint programming finds provably worst-case schedules within these (limited) regions

47

Page 48: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Process and Technologies

48

[Figure: same process as before, with the optimization now solved by genetic algorithms (GA) combined with constraint programming (CP): INPUT: design model (time and concurrency information) from UML modeling (e.g., MARTE) of the system design and platform; the optimization problem (find arrival times that maximize the chance of deadline misses) is solved by GA + CP; OUTPUT: solutions (task arrival times likely to lead to deadline misses), deadline miss analysis, stress test cases]

Page 49: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Applicable? Scalable?

49

Page 50: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Scalability examples

•  This is the most common challenge in practice

•  Testing closed-loop controllers
   –  Large input and configuration space
   –  Smart search optimization heuristics (machine learning)

•  Fault localization
   –  Large number of blocks and lines in Simulink models
   –  Even a small percentage of blocks to inspect can be impractical
   –  Additional information to support decision making? Incremental fault localisation?

•  Schedulability analysis and stress testing
   –  Constraint programming cannot scale by itself
   –  Must be carefully combined with genetic algorithms

50

Page 51: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Scalability examples (2)

•  Verifying legal requirements
   –  Traceability to the law is complex
   –  Many provisions and articles
   –  Many dependencies within the law
   –  Natural language processing: cross-references, support for identifying missing modeling concepts

•  Run-time verification of business processes
   –  Traces can be large and properties complex to verify
   –  Transformation of temporal properties into regular OCL properties, defined on a trace conceptual model
   –  Incremental verification at regular time intervals
   –  Heuristics to identify subtraces to verify

51

Page 52: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Scalability: Lessons Learned

•  Scalability must be part of the problem definition and solution from the start, not a refinement or an afterthought

•  It often involves heuristics, e.g., metaheuristic search, NLP, machine learning, statistics

•  Scalability often leads to solutions that offer “best answers” within time constraints, not guarantees

•  Solutions to scalability are multi-disciplinary

•  Scalability analysis should be a component of every research project; otherwise it is unlikely to be adopted in practice

•  How many papers in MODELS or SAM include even a minimal form of scalability analysis?

52

Page 53: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Applicability

•  Definition?

•  Usability: Can the target user population efficiently apply it?

•  Assumptions: Are working assumptions realistic, e.g., realistic information requirements?

•  Integration into the development process, e.g., are required inputs available in the right form and level of precision?

53

Page 54: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Applicability examples

•  Testing closed-loop controllers
   –  Working assumption: availability of sufficiently precise plant (environment) models
   –  Means to visualize relevant properties in the search space (inputs, configuration), to get an overview and focus the search on high-risk areas

•  Schedulability analysis and stress testing
   –  Availability of task architecture models
   –  Precise WCET analysis
   –  Applicability requires assessing risk based on near-deadline misses

54

Page 55: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Applicability examples (2)

•  Fault localization
   –  Trade-off between the number of model outputs considered and the cost of test oracles
   –  Better understanding of the mental process and information requirements for fault localization

•  Run-time verification of business process models
   –  Temporal logic not usable by analysts
   –  Language closer to natural language, directly tied to the business process model
   –  Easy transition to an industry-strength constraint checker

•  Verifying legal requirements
   –  Modeling notation must be shared by IT specialists and legal experts
   –  One common representation for many applications, with traces to the law to handle changes
   –  Multiple model transformation targets

55

Page 56: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Applicability: Lessons Learned

•  Make working assumptions explicit: Determine the context of applicability

•  Make sure those working assumptions are at least realistic in some industrial domain and context

•  Assumptions don’t need to be universally true – they rarely are anyway

•  Run usability studies – do it for real!

56

Page 57: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Conclusions

•  In most research endeavors, applicability and scalability are an afterthought, a secondary consideration, if they are considered at all

•  Implicit assumptions are often made that are unrealistic in any context

•  Problem definition in a vacuum

•  Not suited to research in an engineering discipline

•  Leads to limited impact

•  Research in model-based V&V is necessarily multi-disciplinary

•  User studies are required and far too rare

•  In engineering research, there is no substitute for reality

57

Page 58: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Acknowledgements

Ph.D. Students:
•  Marwa Shousha
•  Reza Matinnejad
•  Stefano Di Alesio
•  Wei Dou
•  Ghanem Soltana
•  Bing Liu

Scientists:
•  Shiva Nejati
•  Mehrdad Sabetzadeh
•  Domenico Bianculli
•  Arnaud Gotlieb
•  Yvan Labiche

58

Page 59: Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Making Model-Driven Verification Practical and Scalable: Experiences and Lessons Learned

Lionel Briand, IEEE Fellow, FNR PEARL Chair, Interdisciplinary Centre for ICT Security, Reliability, and Trust (SnT), University of Luxembourg, Luxembourg. SAM, Valencia, 2014. SVV lab: svv.lu. SnT: www.securityandtrust.lu