
Transcript of Cross-Cutting Seminar: Verification in the 10^N thread regime, by Ganesh Gopalakrishnan

1

Cross-Cutting Seminar

Verification in the 10^N thread regime

Ganesh Gopalakrishnan

http://www.cs.utah.edu/formal_verification

2

Correctness Concerns Will Loom Everywhere…

Debug Concurrent Systems, providing rigorous guarantees

3

How is system correctness established?

• Certify a system to be correct without attempting to create all real-world conditions
  – Airplanes
    • Can't simulate all stress, turbulence
    • Hence mathematically model, and analyze
    • Over-engineer
  – Software / Hardware: the old way
    • Attempt (in vain) what was prescribed
    • Time-out and "ship it" based on ad-hoc / monetary criteria
  – The new way
    • Do real engineering, i.e., really attempt to mathematically analyze and predict (coverage metrics, formal methods)
    • Over-engineering is also an option
      – Redundant cores, Fault Tolerance Algorithms, …

4

Verification of Large-scale Concurrent Systems

• Must conquer the exponentials in the state space of a concurrent program / system
  – Data space is exponential
  – Symmetry space is exponential
  – Interleaving space is exponential

5

Conquering the exponentials (practice)

• Data space
  – Find data that does not affect control
    • Often possible to guess data that does not influence control
    • Static Analysis can help
• Symmetry space
  – Find symmetry reduction arguments
    • Often possible to guess the instance to model
      – Example: Three Dining Philosophers
    • Symbolic Analysis can help, e.g., symbolic MPI ranks
• Interleaving space
  – Employ action independence
    • Perhaps THE most non-intuitive of spaces
      – Designers find it difficult to guess which interleavings to ignore
    • Hence need automation in this space
      – Partial Order Reduction methods (a toy sketch follows this list)
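To make action independence concrete, here is a toy sketch of the idea behind partial order reduction (our own illustration, not ISP or Inspect): actions of different processes that touch different variables commute, so all interleavings that differ only in the order of independent actions reach the same state and collapse into one equivalence class, of which a verifier need explore just one representative.

```python
from itertools import permutations

# Each action is (process_id, variable_touched). Program order within a
# process is fixed; actions of *different* processes touching *different*
# variables are independent and commute.
actions = [("P0", "x"), ("P0", "y"), ("P1", "y"), ("P1", "z")]

def respects_program_order(schedule):
    """Keep only schedules that preserve each process's own action order."""
    for p in ("P0", "P1"):
        if [a for a in schedule if a[0] == p] != [a for a in actions if a[0] == p]:
            return False
    return True

def canonical(schedule):
    """Bubble adjacent independent actions into a fixed order, so all
    trace-equivalent interleavings map to the same representative."""
    s = list(schedule)
    changed = True
    while changed:
        changed = False
        for i in range(len(s) - 1):
            a, b = s[i], s[i + 1]
            if a[0] != b[0] and a[1] != b[1] and a > b:  # independent, out of order
                s[i], s[i + 1] = b, a
                changed = True
    return tuple(s)

schedules = [s for s in permutations(actions) if respects_program_order(s)]
classes = {canonical(s) for s in schedules}
print(len(schedules), "interleavings,", len(classes), "worth exploring")
```

Here only the two accesses to y are dependent, so the six legal interleavings collapse to two classes; real partial order reduction exploits exactly this collapse, but on-the-fly during the search.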

6

Challenge: exponential interleavings

[Figure: five processes P0–P4, each executing a sequence of actions. Only two actions, A and B, are dependent, so only those two orderings are relevant, yet the total number of interleavings exceeds 10 billion. The arithmetic behind such counts is sketched after the figure.]
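The count behind the figure is simple combinatorics: k processes executing n sequential actions each admit (k·n)! / (n!)^k interleavings. The per-process action counts below are illustrative guesses, since the slide does not state them:

```python
from math import factorial

def interleavings(k, n):
    """Ways to interleave k processes of n sequential actions each."""
    return factorial(k * n) // factorial(n) ** k

# Illustrative values for the five processes P0..P4 of the figure:
for n in (3, 4, 5):
    print(f"5 processes x {n} actions each: {interleavings(5, n):,}")
```

At just four actions per process the count already exceeds 300 billion, comfortably past the slide's 10 billion.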

7

Focal Areas for Correctness Research

8

Focal Areas

• Hardware
  – Pre-Silicon
  – Post-Silicon
• Firmware
  – Microcode on chip
  – Microcode in subsystems (bus bridges, I/O, …)
• Software
  – User apps
  – APIs and libraries
  – Compilation
  – Runtime support (OS, work-stealing algorithms, …)

9

Where Post-Si Verification fits in the Hardware Verification Flow

[Figure: the hardware verification flow. Pre-manufacture: Spec → Specification Validation → Design Verification. Post-manufacture: Testing for Fabrication Faults → Post-Silicon Verification → product. The flow asks: does functionality match designed behavior?]

10

Focal Areas: Hardware

• Hardware
  – Pre-Silicon
    • Logical bugs must be caught through
      – Systematic testing
      – Formal analysis (the i7 core was FV-ed in lieu of testing)
  – Post-Silicon
    • Fresh logical bugs are triggered by
      – high-frequency, and
      – integrated operation
    • Must detect them through
      – Systematic testing
      – Limited-Observability Testing
      – Built-in support for post-silicon debugging
        » e.g., "backspace queues"
        » staggered clock-phase cores

11

Focal Areas: Hardware

• Hardware
  – Pre-Silicon
    – When it works, Formal Verification rocks!
    – Gold-star result:
      » The i7 core execution engine was FV-ed in lieu of testing!

Forthcoming CAV 2009 paper:

Roope Kaivola, Rajnish Ghughal, Naren Narasimhan, Amber Telfer, Jesse Whittemore, Sudhindra Pandav, Anna Slobodova, Christopher Taylor, Vladimir Frolov, Erik Reeber and Armaghan Naik, "Replacing testing with formal verification in Intel Core i7 processor execution engine validation"

(Utah alums are shown in red on the original slide)

12

Post-Silicon Challenge : Limited Observability!

[Figure: internal events a, b, x, y, c, d inside the chip; off-chip, only a limited trace such as "a x c d y b …" can be observed.]

13

Focal Areas: Firmware

• Firmware
  – Microcode on chip
    • Huge amounts of late-binding microcode on chip
      – Path analysis of microcode is a HUGE problem
      – Industry is heavily investing in formal methods
    • Must encrypt
    • Must have sufficient "planned wiggle-room"
  – Microcode in subsystems (bus bridges, I/O, …)
    • Crucial for overall system integrity

14

Focal Areas: Software

• Software
  – User apps
  – APIs and libraries
  – Compilation
  – Runtime support (OS, work-stealing algorithms, …)
  – Dynamic Verification offers considerable hope
    • MODIST (dynamic verification of distributed systems)
    • Backtrackable VMs
    • Testing using FPGA hardware (Simics)
    • CHESS project of Microsoft Research (for threads)
    • Local projects
      – ISP (for MPI)
      – Inspect (for threading)

15

Workflow of a dynamic verifier

[Figure: the program under test (an executable, an FPGA emulation, or a VM) runs processes Proc1, Proc2, …, Procn on top of its runtime. An interposition layer hijacks the runtime's scheduler, so the verifier's own scheduler can play out the relevant interleavings. A minimal sketch of the idea follows.]
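A minimal sketch of the hijacked-scheduler idea, assuming each process has been reduced to the sequence of calls that cross the interposition layer (real tools such as ISP and Inspect interpose on actual MPI or pthread calls and re-run the executable for each interleaving):

```python
# Toy dynamic verifier: the verifier's scheduler owns every choice of which
# process runs next, and depth-first search plays out every interleaving.
def explore(pending, trace=()):
    """pending maps a process id to its remaining interposed calls."""
    if all(not calls for calls in pending.values()):
        print(" -> ".join(trace))            # one complete interleaving
        return
    for pid, calls in pending.items():
        if calls:                            # scheduler picks pid to run next
            explore({**pending, pid: calls[1:]},
                    trace + (f"{pid}:{calls[0]}",))

explore({"P0": ("send(to=1)", "barrier"),
         "P1": ("recv(from=0)", "barrier")})
```

A production verifier does not play out all interleavings; it prunes this search with partial order reduction as sketched earlier, which is what makes "relevant interleavings" tractable.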

16

Focal Areas: Software

• Software
  – Compilation
    • So many focal areas just here
      – Correct compilation respecting API / library semantics
      – Correctness with respect to weak memory orderings (see the litmus-test sketch below)
      – Interaction of compiled code with intelligent runtimes
      – Failure handling
        » What must be done when a core says "bye-bye"?
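The weak-memory bullet can be made concrete with the classic store-buffering litmus test (our illustration, not from the talk). Under sequential consistency the outcome r1 = r2 = 0 is impossible, yet store buffers on real processors (e.g., x86-TSO) can produce it; that gap is exactly what compiled code must be verified against. The sketch enumerates every sequentially consistent interleaving to confirm the SC half of the claim:

```python
from itertools import permutations

# Thread 0: x = 1; r1 = y      Thread 1: y = 1; r2 = x     (x = y = 0 initially)
ops = ("wx", "ry", "wy", "rx")

outcomes = set()
for sched in permutations(ops):
    # keep only schedules that respect each thread's program order
    if sched.index("wx") < sched.index("ry") and sched.index("wy") < sched.index("rx"):
        mem, regs = {"x": 0, "y": 0}, {}
        for op in sched:
            if op == "wx": mem["x"] = 1
            elif op == "wy": mem["y"] = 1
            elif op == "ry": regs["r1"] = mem["y"]
            elif op == "rx": regs["r2"] = mem["x"]
        outcomes.add((regs["r1"], regs["r2"]))

print(sorted(outcomes))   # (0, 0) never appears under sequential consistency
```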

17

Correctness Myths in the Multi-core Era

• Myth: Simpler cores -> easier to verify
  – Reality: Circuits will be highly energy-optimized
    • Circuit-level verification
    • Verification of power-down / power-up protocols
• Myth: Streaming models have no "rat's nest" control
  – Reality: Proper use of streaming models will re-introduce reactive / control complexity
• Myth: Single programming paradigms simplify things
  – Reality: Performance will force multi-paradigm programming
    • Roadrunner uses MPI and the IBM Cell
    • Mixed programming paradigms make verification tricky
• Myth: More cores will allow parallel verification
  – Reality: Superior abstraction methods often outperform any brute force

18

How about the performance space?

19

Performance / reliability bugs

• Not meeting energy budgets

• "Feature interactions" that occur only at scale

• Inability to gauge the efficacy of parallelization
  – For each context, decide whether parallelization pays off (a sketch of such a payoff check follows)
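One illustrative way to frame that per-context decision (a model of our own, with made-up numbers): combine Amdahl's law with a fixed spawn/merge overhead and check whether going parallel actually wins:

```python
def pays_off(serial_ms, parallel_frac, cores, overhead_ms):
    """Amdahl-style time for the parallel version vs. staying serial."""
    parallel_ms = (serial_ms * (1 - parallel_frac)
                   + serial_ms * parallel_frac / cores
                   + overhead_ms)
    return parallel_ms < serial_ms

# A 2 ms region that is 90% parallelizable, with 1 ms of overhead:
for cores in (2, 4, 8):
    print(f"{cores} cores: pays off = {pays_off(2.0, 0.9, cores, 1.0)}")
```

The same region that loses on two cores wins on four, which is why the decision must be made per context rather than once and for all.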

20

Concluding Remarks (1)

• Correctness is an enabler of performance!
  – Safety through timidity never worked
    • People will seek performance eventually
  – Provide them strong safety-nets
  – Incrementally re-verify performance optimizations

• Plan for correctness
  – Correctness cannot be left to chance
  – Not an afterthought
  – Formal correctness WILL sell and pay for itself

21

Concluding Remarks (2)

• Standardization is very important!
• How to standardize? Two options
  – One minimal API with a few high-level functions
    • Pros:
      – Easier to understand and verify
    • Cons:
      – Ensuring portable performance could be difficult
        » So people will learn to bypass and hack around them
  – One very broad API that exposes a LOT of low-level detail
    • Cons:
      – Steeper learning curve
    • Pros:
      – Has a much better chance to succeed; example: MPI
      – Formal Methods can help mitigate the burden of learning / use
        » Example: our ISP push-button verifier of MPI programs (a sketch of the nondeterminism it must cover follows)
        » Recently handled many large apps
        » Recently worked out most examples from Pacheco's book
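As one instance of the nondeterminism ISP must cover: an MPI wildcard receive can match either sender, and an ordinary test run exercises only one match order. The sketch below uses mpi4py for brevity (an assumption of this illustration; ISP itself targets C MPI programs):

```python
# Run with: mpiexec -n 3 python wildcard.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank in (1, 2):
    comm.send(rank, dest=0, tag=0)
elif rank == 0:
    # ANY_SOURCE may match rank 1 or rank 2 first; a dynamic verifier such
    # as ISP forces BOTH match orders, catching order-dependent bugs that
    # a single test run would miss.
    first = comm.recv(source=MPI.ANY_SOURCE, tag=0)
    second = comm.recv(source=MPI.ANY_SOURCE, tag=0)
    print("received", first, "then", second)
```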