
Hardware Functional Verification Class

Verification Advisory Team, October 2000

Non Confidential Version

Contents
  Introduction
  Verification "Theory"
  Secret of Verification
  Verification Environment
  Verification Methodology
  Tools
  Future Outlook

Introduction

What is Verification

What is functional verification? The act of ensuring the correctness of the logic design. Also called: simulation, logic verification.

[Diagram: the design flow from Architecture through High Level Design and Implementation in VHDL to Tape-Out (Fabrication), with the verification disciplines alongside: Functional Verification, Performance Verification (CPI), Timing Verification (cycle time), and Logic Equivalence checking.]

Verification Challenge
  How do we know that a design is correct?
  How do we know that the design behaves as expected?
  How do we know we have checked everything?
  How do we deal with designs that grow faster than tool performance?
  How do we get correct hardware on the first RIT?

Answer: Functional Verification

Also called: simulation, logic verification.

Verification is based on:
  Testpattern generation
  Reference model development
  Result checking

[Diagram: testpatterns drive both the Design under Test and a Reference Model; Results Checking compares the two.]

Why do functional verification?

  Product time-to-market: hardware turn-around time, volume of "bugs"
  Development costs
  "Early User Hardware" (EUH)

Some lingo
Facilities: A general term for named wires (or signals) and latches. Facilities feed gates (and/or/nand/nor/invert, etc.), which feed other facilities.
EDA: Electronic Design Automation; tool vendors. IBM has an internal EDA organization that supplies tools. We also procure tools from external companies.

More lingo
Behavioral: Code written to perform the function of logic on the interface of the design-under-test.
Macro: 1. A behavioral. 2. A piece of logic.
Driver: Code written to manipulate the inputs of the design-under-test. The driver understands the interface protocols.
Checker: Code written to verify the outputs of the design-under-test. A checker may have some knowledge of what the driver has done. A checker must also verify interface protocol compliance.

Still more lingo
Snoop/Monitor: Code that watches interfaces or internal signals to help the checkers perform correctly. Also used to help drivers be more devious.
Architecture: Design criteria as seen by the customer. The design's architecture is specified in documents (e.g. POPS, Book 4, Infiniband, etc.), and the design must be compliant with this specification.
Microarchitecture: The design's implementation. Microarchitecture refers to the constructs that are used in the design, such as pipelines, caches, etc.

Verification "Theory"

Verification Cycle

[Diagram: a loop of Create Testplan -> Develop Environment -> Debug Hardware -> Regression -> Fabrication -> Hardware Debug -> Escape Analysis -> back to Create Testplan.]

Verification Testplan
Team leaders work with design leaders to create a verification testplan. The testplan includes:
  Schedule
  Specific tests and methods by simulation level
  Required tools
  Input criteria
  Completion criteria
  What is expected to be found with each test/level
  What's not covered by each test/level

Hierarchical Design

System -> Chip -> Unit -> Macro -> ...

Allows the design team to break the system down into logical and comprehensible components. Also allows for repeatable components.

Hierarchical design
  Only the lowest-level macros contain latches and combinational logic (gates); the work gets done at these levels.
  All upper layers contain wiring connections only; off-chip connections are C4 pins.

Current Practices for Verifying a System
  Designer Level sim: verification of a macro (or a few small macros)
  Unit Level sim: verification of a group of macros
  Element Level sim: verification of an entire logical function such as a processor, storage controller, or I/O control. Currently synonymous with a chip.
  System Level sim: multiple-chip verification. Often utilizes a mini operating system.

The Black Box

[Diagram: a black box of logic written in VHDL, with inputs entering and outputs leaving.]

The black box has inputs, outputs, and performs some function. The function may be well documented...or not. To verify a black box, you need to understand the function and be able to predict the outputs based on the inputs. The black box can be a full system, a chip, a unit of a chip, or a single macro.

White box/Grey box
White box verification means that the internal facilities are visible and utilized by the testcase driver. Examples: 0-in (vendor) methods.
Grey box verification means that a limited number of internal facilities are utilized in a mostly black-box environment. Example: most environments! Prediction of correct results on the interface is occasionally impossible without viewing an internal signal.

Perfect Verification
To fully verify a black box, you must show that the logic works correctly for all combinations of inputs. This entails:
  Driving all permutations on the input lines
  Checking for proper results in all cases
Full verification is not practical on large pieces of designs...but the principles are valid across all verification. (A tiny worked example follows.)
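To make the principle concrete, here is a minimal C++ sketch of perfect verification on a hypothetical 4-bit adder: every input permutation is driven and every output is checked against a reference model. The function standing in for the design under test is an invention for illustration; in a real environment it would be the simulated HDL model.

    #include <cstdio>

    // Hypothetical stand-in for the design under test: a 4-bit adder with
    // carry-in; in a real environment this would be the simulated model.
    static unsigned dut_adder(unsigned a, unsigned b, unsigned cin) {
        return (a + b + cin) & 0x1F;              // 5-bit result (sum + carry-out)
    }

    int main() {
        int fails = 0;
        // Drive all permutations of the input lines: 4-bit a, 4-bit b, carry-in.
        for (unsigned a = 0; a < 16; ++a)
            for (unsigned b = 0; b < 16; ++b)
                for (unsigned cin = 0; cin < 2; ++cin) {
                    unsigned expected = (a + b + cin) & 0x1F;   // reference model
                    unsigned actual   = dut_adder(a, b, cin);
                    if (actual != expected) {     // check proper results in all cases
                        std::printf("FAIL a=%u b=%u cin=%u got=%u exp=%u\n",
                                    a, b, cin, actual, expected);
                        ++fails;
                    }
                }
        std::printf("512 combinations driven, %d fails\n", fails);
        return fails != 0;
    }

Even here the space is 512 combinations; the same exhaustive approach on a 64-bit datapath is already hopeless, which is the point of the slide.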

In an Ideal World....
  Every macro would have perfect verification performed
  All permutations would be verified based on legal inputs
  All outputs would be checked on the small chunks of the design
  Unit, chip, and system level would then only need to verify interconnections
  Ensure that designers used correct input/output assumptions and protocols

Reality Check
Macro verification across an entire system is not feasible for the business.
  There may be over 400 macros on a chip, which would require about 200 verification engineers!
  That number of skilled verification engineers does not exist.
  The business can't support the development expense.
Verification leaders must make reasonable trade-offs:
  Concentrate on unit level
  Designer level on the riskiest macros

Typical Bug rates per level

[Chart: bug rate over time for each simulation level.]

Tape-Out Criteria
Checklist of items that must be completed before RIT: verification items, along with physical/circuit design criteria, etc.
Verification criteria are based on:
– Function tested
– Bug rates
– Coverage data
– Clean regression

Escape Analysis
Escape analysis is a critical part of the verification process.
Important data: fully understand the bug! Reproduce it in sim if possible.
– Lack of a repro means the fix cannot be verified
– Could misunderstand the bug
Why did the bug escape simulation? Update the process to avoid similar escapes in the future (plug the hole!)

Escape Analysis: Classification
We currently classify all escapes under two views.
Verification view
– What complexities in the area allowed the escape?
– Cache set-up, cycle dependency, configuration dependency, sequence complexity, and expected results
Design view
– What was wrong with the logic?
– Logic hole, data/logic out of sync, bad control reset, wrong spec, bad logic

Cost of Bugs Over Time
The longer a bug goes undetected, the more expensive the fix.
  A bug found early (designer sim) has little cost.
  Finding a bug at chip or system sim has moderate cost:
  – Requires more debug time and problem isolation
  – Could require a new algorithm, which could affect the schedule and cause rework of the physical design
  Finding a bug in System Test (testfloor) requires a new hardware RIT.
  Finding a bug in the customer's environment can cost hundreds of millions in hardware and brand image.

[Chart: cost ($) of a bug versus the time at which it is found.]

Secret of Verification (Verification Mindset)

The Art of Verification

Two simple questions

Am I driving all possible input scenarios?

How will I know when it fails?

Three Simulation Commandments

Thou shalt stress thine logic harder than it will ever be stressed again.
Thou shalt place checking upon all things.
Thou shalt not move onto a higher platform until the bug rate has dropped off.

Need for Independent Verification
The verification engineer should not be an individual who participated in the logic design of the DUT.
Blinders: if a designer didn't think of a failing scenario when creating the logic, how will he/she create a test for that case?
However, a designer should do some verification on his/her design before exposing it to the verification team.
The independent verification engineer needs to understand the intended function and the interface protocols, but not necessarily the implementation.

Verification Do's and Don'ts
DO:
  Talk to designers about the function and understand the design first, but then
  Try to think of situations the designer might have missed
  Focus on exotic scenarios and situations
  – e.g., try to fill all queues even though the design tries to avoid any buffer-full conditions
  Focus on multiple events at the same time

Verification Do's and Don'ts (continued)
  Try everything that is not explicitly forbidden
  Spend time thinking about all the pieces that you need to verify
  Talk to "other" designers about the signals that interface to your design-under-test
DON'T:
  Rely on the designer's word for input/output specification
  Allow RIT criteria to bend for the sake of schedule

Typical Verification diagram

[Diagram: a bridge-chip DUT on a bus. Stimulus side: gen packet / drive packet / post packet, randomized over device types, FSMs, latency conditions, address transactions, sequences, and transitions, at the protocol, packet, sequence, conversation, and error levels. Checking side: a scoreboard with translate/predict logic, header/payload structure checking, and a checking framework. Coverage data is collected from the environment.]

The Line Delete Escape
Escape: A problem that is found on the test floor and has therefore escaped the verification process.
The Line Delete escape was a problem on the H2 machine (S/390 bipolar, 1991).
The escape shows an example of how a verification engineer needs to think.

The Line Delete Escape (pg 2)
Line Delete is a method of circumventing bad cells of a large memory array or cache array.
An array mapping allows defective cells to be removed from the usable space.

The Line Delete Escape (pg 3)
If a line in an array has multiple bad bits (a single bad bit usually goes unnoticed due to ECC, error correction codes), the line can be taken "out of service".

[Diagram: a cache array; in the array pictured, row 05 has a bad congruence class entry.]

The Line Delete Escape (pg 4)
Data enters ECC creation logic prior to storage into the array. When read out, the ECC logic corrects single-bit errors, tags uncorrectable errors (UEs), and increments a counter corresponding to the row and congruence class.

[Diagram: Data in -> ECC logic -> array (row 05) -> ECC logic -> Data out, with UE counters per row and congruence class.]

The Line Delete Escape (pg 5)
When a preset threshold of UEs is detected from an array cell, the service controller is informed that a line delete operation is needed.

[Diagram: the UE counters feed a threshold detector, which notifies the Service Controller.]

The Line Delete Escape (pg 6)
The service controller can update the configuration registers, ordering a line delete to occur. When the configuration registers are written, the line delete controls are engaged and writes to row 5, congruence class 'C', cease.
However, because three other cells remain good in this congruence class, the sole repercussion of the line delete is a slight decline in performance.

[Diagram: the Service Controller writes the Storage Controller configuration registers, which drive the line delete control into the array.]

The Line Delete Escape (pg 7)

[Diagram: the complete path: ECC logic and UE counters, threshold, Service Controller, Storage Controller configuration registers, and line delete control.]

How would we test this logic? What must occur in the testcase? What checking must we implement? (One possible sketch follows.)
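As one possible answer, sketched in C++: the testcase must inject UEs until the threshold, then drive the configuration write and check each step along the way. The `LineDeleteLogic` class below is an invented, self-contained stand-in for the real logic (the threshold value and the interfaces are assumptions), so that the checking obligations can be stated as executable asserts.

    #include <cassert>
    #include <cstdio>

    // Invented stand-in for the line delete mechanism, reconstructed from
    // the slides (not the real H2 logic).
    struct LineDeleteLogic {
        static const int kThreshold = 3;     // assumed UE threshold
        int  ue_count     = 0;               // per row/congruence-class counter
        bool svc_informed = false;           // service controller notified
        bool deleted      = false;           // line delete control engaged

        void report_ue() {                   // ECC logic saw an uncorrectable error
            if (++ue_count >= kThreshold) svc_informed = true;
        }
        void write_config() { deleted = true; }      // service controller action
        bool write_allowed() const { return !deleted; }
    };

    int main() {
        LineDeleteLogic line;                // row 05, congruence class 'C'
        // Drive UEs up to (but not past) the threshold; no notification yet.
        for (int i = 0; i < LineDeleteLogic::kThreshold - 1; ++i) {
            line.report_ue();
            assert(!line.svc_informed);
        }
        line.report_ue();                    // crossing the threshold...
        assert(line.svc_informed);           // ...must inform the service controller
        assert(line.write_allowed());        // but the line is not deleted yet
        line.write_config();                 // service controller orders the delete
        assert(!line.write_allowed());       // writes to the line must cease
        std::puts("line delete testcase passed");
        return 0;
    }

A real testcase would also check that the three remaining cells of the congruence class stay usable and that data integrity is preserved across the delete.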

Verification Environment

General Simulation Environment

[Diagram: Design source (VHDL, Verilog) goes through a model build compiler (event simulation compiler, cycle simulation compiler, ..., emulator compiler) to produce the model. Testcases, written in C/C++, HDL testbenches, Specman e, or Synopsys' VERA, pass through a compiler (not always required) to become the driver. Environment data supplies initialization and run-time requirements. The simulator (event simulator, cycle simulator, or emulator) runs the model against the driver and produces the testcase results.]

[Diagram: the simulation work flow as roles and tasks. Roles: logic designer, environment developer, model builder, project manager, verification engineer. Tasks: transfer testcase, configure environment, run foreground and background simulation, release environment, regress fails, create/redirect/answer defects, verify defect fixes, specify and monitor batch simulation, debug environment, debug fails, view traces, release model, define project goals, and project status reporting.]

Types of Simulators
Event simulators
  Model Technology's (MTI) VSIM is most common
  Capable of simulating analog logic and delays
Cycle simulators
  For clocked, digital designs only
  The model is compiled and signals are "ordered". Infinite loops are flagged during compile as "signal ordering deadlocks". Each signal is evaluated once per cycle, and latches are set for the next cycle based on the final signal value.

Types of Simulators (con't)
Simulation farm
  Multiple computers are used in parallel for simulation
Acceleration engines/emulators
  Quickturn, IKOS, AXIS.....
  Custom designed for simulation speed (parallelized)
  Acceleration vs. emulation:
  – True emulation connects to some real, in-line hardware
  – Real software eliminates the need for a special testcase

Speed compare
Influencing factors:
  Hardware platform – frequency, memory, ...
  Model content – size, activity, ...
  Interaction with the environment
  Model load time
  Testpattern
  Network utilization

Relative speed of different simulators:
  Event simulator                    1
  Cycle simulator                   20
  Event-driven cycle simulator      50
  Acceleration                    1000
  Emulation                     100000

Speed - What is fast?
  Cycle sim for one processor chip: 1 sec realtime = 6 months
  Sim farm with a few hundred computers: 1 sec realtime = ~1 day
  Accelerator/emulator: 1 sec realtime = ~1 hour

Basic Testcase/Model Interface: Clocking
Clocking cycles
  A simulator has the concept of time.
  – Event sim uses the smallest increment of time in the target technology
  – All other sim environments use a single cycle
  A testcase controls the clocking of cycles (movement of time)
  – All APIs include a clock statement
  – Example: "Clock(n)", where n is the increment to clock (usually '1')

Cycle 0  Cycle 1  Cycle 2  ...  Cycle n

Basic Testcase/Model Interface: Setfac/Putfac
Setting facilities
  A simulator API allows you to alter the value of facilities
  Used most often for driving inputs
  Can be used to alter internal latches or signals
  Can set a single-bit or multi-bit facility
  – Values can be 0, 1, or possibly X, high impedance, etc.
  Example syntax: "Setfac facility_name value"

Cycle 0  Cycle 1  Cycle 2  ...  Cycle n
Setfac address_bus(0:31) "0F3D7249"x

Basic Testcase/Model Interface: Getfac
Reading facility values
  A simulator API allows you to read the value of a facility
  Used most often for checking outputs
  Can be used to read internal latches or signals
  Example syntax: "Getfac facility_name varname"

Cycle 0  Cycle 1  Cycle 2  ...  Cycle n
Getfac adder_sum checksum

Basic Testcase/Model Interface: Putting it together
Clocking, setfacs, and getfacs occur at set times during a cycle.
  Setting of facilities must be done at the beginning of the cycle.
  Getfacs must occur at the end of a cycle.
  In between, control goes to the simulation engine, where the logic under test is "run" (evaluated).

Cycle 0  Cycle 1  Cycle 2  ...  Cycle n
Setfac address_bus(0:31) "0F3D7249"x ... Getfac adder_sum checksum

(A sketch of this pattern in driver code follows.)
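A minimal C++ sketch of this set-inputs / clock / read-outputs rhythm. The `Sim` class is a toy stand-in for a real simulator API (real Setfac/Getfac/Clock calls are simulator-specific), with the "model evaluation" reduced to an adder for illustration.

    #include <cstdio>
    #include <map>
    #include <string>

    // Toy stand-in for a cycle simulator API; a real simulator evaluates
    // the compiled HDL model inside clock().
    class Sim {
        std::map<std::string, unsigned> fac;              // facilities by name
    public:
        void setfac(const std::string& name, unsigned v) { fac[name] = v; }
        unsigned getfac(const std::string& name) { return fac[name]; }
        void clock(int n = 1) {
            while (n-- > 0)                               // "run" the logic:
                fac["adder_sum"] = fac["op_a"] + fac["op_b"];  // toy model evaluation
        }
    };

    int main() {
        Sim sim;
        sim.setfac("op_a", 0x1234);     // beginning of cycle: drive the inputs
        sim.setfac("op_b", 0x0042);
        sim.clock(1);                   // hand control to the simulation engine
        unsigned checksum = sim.getfac("adder_sum");  // end of cycle: check outputs
        std::printf("adder_sum = 0x%X (%s)\n",
                    checksum, checksum == 0x1276 ? "PASS" : "FAIL");
        return checksum != 0x1276;
    }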

Running Simulation
Basic steps:
  1. Create a testcase
  2. Build a model
  – Different model build programs for different simulation engines
  3. Run the simulation engine
  4. Check results. If the testcase fails:
  – Do preliminary debug (create AET, view scans)
  – Get a fix from the designer and repeat from step 2

Calculator Design
Calculator has 4 functions:
  Add
  Subtract
  Shift left
  Shift right
Calculator can handle 4 requests in parallel
  All 4 requestors use separate input signals
  All requestors have equal priority

Calculator design: Input/Output description

[Diagram: calc_top with ports:
  c_clk
  reset<0:7>
  req1_cmd_in<0:3>  req1_data_in<0:31>  ->  out_resp1<0:1>  out_data1<0:31>
  req2_cmd_in<0:3>  req2_data_in<0:31>  ->  out_resp2<0:1>  out_data2<0:31>
  req3_cmd_in<0:3>  req3_data_in<0:31>  ->  out_resp3<0:1>  out_data3<0:31>
  req4_cmd_in<0:3>  req4_data_in<0:31>  ->  out_resp4<0:1>  out_data4<0:31>]

Calculator Design I/O Description
Input commands:
– 0 - No-op
– 1 - Add operand1 and operand2
– 2 - Subtract operand2 from operand1
– 5 - Shift left operand1 by operand2 places
– 6 - Shift right operand1 by operand2 places
Input data:
– Operand1 data arrives with the command
– Operand2 data arrives on the following cycle

Calculator Design Outputs
Response line definition:
– 0 - No response
– 1 - Successful operation completion
– 2 - Invalid command or overflow/underflow error
– 3 - Internal error
Data:
– Valid result data on the output lines accompanies the response (same cycle)

Calculator Design: Other information
Clocking:
– When using a cycle simulator, the clock should be held high (c_clk in the calculator model)
– The clock should be toggled when using an event simulator
Calculator priority logic:
– Priority logic works on a first-come, first-served algorithm
– Priority logic allows for one add or subtract at a time and one shift operation at a time
Input/Output timing:

[Timing diagram: req1_cmd_in is asserted with operand1 on req1_data_in; operand2 follows on the next cycle; some cycles later the response appears on out_resp1 with the result on out_data1 in the same cycle.]

Calculator Exercise part 1
Build the model:
– mkdir calc_test
– cd calc_test
– ../calc_build
Run the model:
– calc_run
Check the AET:
– scope tool
– use calc4.wave for input/output facility names

Calculator Exercise Part 2
There are 5+ bugs in the design! How many can you find by altering the simple testcase? (A reference-model sketch that can help with result prediction follows.)
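A reference model makes it easy to predict each expected response and result while hunting the bugs. A minimal C++ sketch, using the command and response encodings from the slides; the overflow/underflow rules and the shift-amount masking are assumptions, since the slides do not pin them down.

    #include <cstdio>
    #include <cstdint>

    struct Result { unsigned resp; uint32_t data; };

    // Reference model for one calculator operation (responses per the slides:
    // 1 = success, 2 = invalid command or overflow/underflow).
    static Result calc_ref(unsigned cmd, uint32_t op1, uint32_t op2) {
        switch (cmd) {
        case 1: return (op1 + op2 < op1) ? Result{2, 0}          // add overflowed
                                         : Result{1, op1 + op2};
        case 2: return (op2 > op1)       ? Result{2, 0}          // subtract underflowed
                                         : Result{1, op1 - op2};
        case 5: return Result{1, op1 << (op2 & 31)};  // shift left (mask is an assumption)
        case 6: return Result{1, op1 >> (op2 & 31)};  // shift right
        default: return Result{2, 0};                 // invalid command
        }
    }

    int main() {
        // Expected values to compare against Getfac results from the model.
        Result r1 = calc_ref(1, 0xFFFFFFFFu, 1);      // add that overflows
        Result r2 = calc_ref(5, 1, 4);                // 1 << 4
        std::printf("add:   resp=%u data=%08X\n", r1.resp, r1.data);
        std::printf("shift: resp=%u data=%08X\n", r2.resp, r2.data);
        return 0;
    }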

Verification Methodology

Verification Methodology Evolution

[Diagram: evolution over time, with more stress per cycle at each step:
  Test patterns - hand generated, hand checked, hardcoded
  Testcases - hand generated, self checking, hardcoded; AVPs, IVPs
  Testcase generators - tool generated, self checking, hardcoded; AVPGEN, GENIE/GENESYS, SAK
  Testcase drivers - interactive on-the-fly generation, on-the-fly checking, random; SMP, C/C++
  Supported by coverage tools and formal verification.]

Reference Model
An abstraction of the design implementation. Could be:
  a complete behavioral description of the design using a standard programming language
  a formal specification using mathematical languages
  a complete state transition graph
  a detailed testplan in English for handwritten testpatterns
  part of a random driver or checker
  ....

Behavioral Design
One of the most difficult concepts for new verification engineers is that your behavioral can "cheat". The behavioral only needs to make the design-under-test think that the real logic is hanging off its interface.
The behavioral can:
– predetermine answers
– return random data
– look ahead in time

Behavioral Design: Cheating examples
Return random data in memory modeling
– A memory controller does not know what data was stored into the memory cards (behavioral). Therefore, upon fetching the data back, the memory behavioral can return random data.
Branch prediction
– A behavioral can look ahead in the instruction stream and know which way a branch will be resolved. This can halve the required work of a behavioral!
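A sketch of the memory-modeling cheat in C++. One refinement beyond the slide (an assumption): remember what was invented per address, so repeated fetches of the same line stay consistent for the checkers.

    #include <cstdio>
    #include <cstdint>
    #include <map>
    #include <random>

    // "Cheating" memory behavioral: it never stores the data the controller
    // wrote; on the first fetch of an address it invents random data, then
    // stays consistent on later fetches.
    class MemoryBehavioral {
        std::map<uint64_t, uint32_t> invented;   // data made up so far
        std::mt19937 rng{12345};                 // seeded for reproducibility
    public:
        uint32_t fetch(uint64_t addr) {
            auto it = invented.find(addr);
            if (it == invented.end())
                it = invented.emplace(addr, rng()).first;  // invent on first use
            return it->second;
        }
    };

    int main() {
        MemoryBehavioral mem;
        uint32_t a = mem.fetch(0x1000);
        uint32_t b = mem.fetch(0x1000);          // same address: must match
        std::printf("0x1000 -> %08X %08X (%s)\n", a, b,
                    a == b ? "consistent" : "BUG");
        return 0;
    }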

Hardcoded Testcases and IVPs
IVP (Implementation Verification Program)
  A testcase that is written to verify a specific scenario
  Appropriate usage:
  – during initial verification
  – as specified by the designer/verification engineer to ensure that important or hard-to-reach scenarios are verified
Other hardcoded testcases are done for simple designs
"Hardcoded" indicates a single scenario

Testbenches
Testbench is a generic term that is used differently across locations, teams, and the industry. It always refers to a testcase. Most commonly (and appropriately), a testbench refers to code written in the design language (e.g. VHDL) at the top level of the hierarchy. The testbench is often simple, but may have some elements of randomness.

Testcase Generators
Software that creates multiple testcases. Parameters control the generator in order to focus the testcases on specific architectural/microarchitectural components.
  Ex: If branch-intensive testcases are desired, the parameters would be set to increase the probability of creating branch instructions. (See the sketch below.)
Can create "tons" of testcases which have the desired level of randomness.
  The broad-brush approach complements the IVP plan
  Randomness can be in data or control

Random Environments
"Random" is used to describe many environments. Some teams call testcase generators "random" (they have randomness in the generation process).
The two major differentiators are:
– Pre-determined vs. on-the-fly generation
– Post-processing vs. on-the-fly checking

Random Drivers/Checkers
The most robust random environments use on-the-fly drivers and on-the-fly checking.
On-the-fly drivers give more flexibility and more control, along with the capability to stress the logic to the microarchitecture's limit.
On-the-fly checkers flag interim errors. The testcase is stopped upon hitting an error.
However, the overall quality is determined by how good the verification engineer is! If scenarios aren't driven or checks are missing, the environment is incomplete!

Random Drivers/Checkers
Costs of an optimal random environment:
  Code intensive
  Needs an experienced verification engineer to oversee the effort and ensure quality
Benefits of an optimal random environment:
  More stress on the logic than any other environment, including the real hardware
  It will find nearly all of the most devious bugs and all of the easy ones.

Random Drivers/Checkers
Sometimes too much randomness will prevent drivers from uncovering design flaws. "Un-randomizing the random drivers" needs to be built into the environment depending upon the design:
– Hangs due to looping
– Low-activity scenarios
"Micro-modes" can be built into the drivers, allowing the user to drive very specific scenarios.

Random Example: Cache model
Cache coherency is a problem for multiprocessor designs. The cache must keep track of ownership and data on a predetermined boundary (quad-word, line, double-line, etc.).

Cache Coherency example
A high-stress environment requires limiting the size of the data space used in the testcase.

[Diagram: a cache array. A limited number of congruence classes are chosen at the start of the testcase to ensure stress. Only these addresses will be used by the drivers to generate requests.]

Cache Coherency example

[Diagram: a multiprocessor environment. Processor macros proc1 ... procN and an I/O macro drive the Storage Controller and its cache; a checking program observes the whole system.]

Cache coherency example: Driver algorithm

[Flowchart: Start -> for each cycle: does the protocol allow a command to be sent? If no, wait for the next cycle. If yes: request an address from the address space, choose a command using the parm table, create random data as required, and send the command.]

(A sketch of this loop in driver code follows.)
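A C++ sketch of this driver loop. The protocol rule, command table, and address pool are invented stand-ins; the point is the shape: gate on the protocol, pick from a deliberately small address pool, choose a weighted command, randomize the data, send.

    #include <cstdio>
    #include <random>
    #include <vector>

    // Toy protocol rule standing in for the real interface handshake.
    static bool protocol_allows(int cycle) { return cycle % 2 == 0; }

    int main() {
        std::mt19937 rng(7);                        // driver seed
        // Limited address pool: a few congruence classes chosen at testcase
        // start so the drivers collide and stress the coherency logic.
        const std::vector<unsigned> addr_pool = {0x1000, 0x1040, 0x2000, 0x2040};
        const char* cmds[] = {"read", "read_excl", "store", "castout"};
        std::discrete_distribution<int> pick_cmd({40, 30, 20, 10});  // parm table

        for (int cycle = 0; cycle < 8; ++cycle) {
            if (!protocol_allows(cycle)) continue;  // wait for the next cycle
            unsigned addr = addr_pool[rng() % addr_pool.size()];  // address from pool
            int c = pick_cmd(rng);                  // choose command using parms
            unsigned data = rng();                  // random data as required
            std::printf("cycle %d: %s addr=%04X data=%08X\n",
                        cycle, cmds[c], addr, data);              // send command
        }
        return 0;
    }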

Cache Coherency example
This environment drives more stress than real processors would in a system environment: microarchitectural-level stimulus on the interfaces vs. an architectural instruction stream. A real processor and I/O will add delays based on their own microarchitectures.

Random Seeds
The testcase seed is randomly chosen at the start of simulation.
The initial seed is used to seed the decision-making driver logic, and should be logged so a failing run can be reproduced.
Watch out for seed synchronization across drivers: identically seeded drivers make correlated decisions.
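A small C++ sketch of seed handling: one logged testcase seed, with independent per-driver seeds derived from it so the drivers do not march in lock-step. std::seed_seq is one convenient way to do the derivation.

    #include <cstdint>
    #include <cstdio>
    #include <random>
    #include <string>
    #include <vector>

    int main(int argc, char** argv) {
        // One testcase seed: random by default, or passed in to reproduce a fail.
        unsigned seed = (argc > 1) ? static_cast<unsigned>(std::stoul(argv[1]))
                                   : std::random_device{}();
        std::printf("testcase seed = %u (log this to reproduce)\n", seed);

        std::seed_seq seq{seed};                    // derive per-driver seeds
        std::vector<std::uint32_t> driver_seed(4);  // one per driver
        seq.generate(driver_seed.begin(), driver_seed.end());
        for (unsigned i = 0; i < 4; ++i)
            std::printf("driver %u seed = %u\n", i, driver_seed[i]);
        return 0;
    }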

Formal Verification
Formal verification employs mathematical algorithms to prove correctness or compliance.
Formal applications fall under the following:
  Model checking (used for logic verification)
  Equivalence checking (ex: VHDL vs. synthesis output)
  Theorem proving
  Symbolic trajectory evaluation (STE)

Simulation vs. Model Checking
If the overall state space of a design is the universe, then model checking is like a light bulb and simulation is like a laser beam.

Formal Verification - Model Checking
IBM's "RuleBase" is used for model checking. It checks properties against the logic.
– Uses EDL and Sugar to express the environment and the properties
Limit of about 300 latches after reduction.
– State space size explosion is the biggest challenge in FV
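For contrast with the light-bulb/laser-beam picture: the kind of property a model checker proves over the entire state space (e.g. "every request is acknowledged within 4 cycles") can only be sampled cycle by cycle in simulation, by a monitor. A toy C++ monitor for that property follows; the signal names, the bound, and the trace are invented for illustration, and this is not RuleBase/Sugar syntax.

    #include <cstdio>

    // Toy monitor for the property "ack must follow req within 4 cycles".
    // A model checker proves this over the whole state space; a simulation
    // monitor only checks the cycles a testcase happens to drive.
    struct ReqAckMonitor {
        int pending = -1;                        // cycles since unacknowledged req
        bool check(bool req, bool ack) {         // called once per cycle
            if (pending >= 0) {                  // a request is outstanding
                if (ack) { pending = -1; return true; }
                if (++pending > 4) return false; // no ack within 4 cycles
            } else if (req) {
                pending = 0;                     // start tracking a new request
            }
            return true;
        }
    };

    int main() {
        ReqAckMonitor mon;
        bool req[8] = {true,  false, false, false, false, false, false, false};
        bool ack[8] = {false, false, false, false, false, false, true,  false};
        for (int c = 0; c < 8; ++c)
            if (!mon.check(req[c], ack[c]))
                std::printf("property violated at cycle %d\n", c);
        return 0;
    }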

Formal Verification - Model Checking

[Screenshot: the RuleBase tool.]

Coverage
Coverage techniques give feedback on how much the testcase or driver is exercising the logic. Coverage makes no claim on proper checking.
All coverage techniques monitor the design during simulation and collect information about desired facilities or relationships between facilities.

Coverage Goals
  Measure the "quality" of a set of tests
  Supplement test specifications by pointing to untested areas
  Help create regression suites
  Provide a stopping criterion for unit testing
  Better understanding of the design

Coverage Techniques
People use coverage for multiple reasons:
  The designer wants to know how much of his/her macro is exercised
  The unit/chip leader wants to know if relationships between state machines/microarchitectural components have been exercised
  The sim team wants to know if areas of past escapes are being tested
  The program manager wants feedback on the overall quality of the verification effort
  The sim team can use coverage to tune regression buckets

Coverage Techniques
Coverage methods include:
  Line-by-line coverage
  – Has each line of VHDL been exercised? (if/then/else, cases, states, etc.)
  Microarchitectural cross products
  – Allow for multiple-cycle relationships
  – Coverage models can be large or small

Functional Coverage
Coverage is based on the functionality of the design. Coverage models are specific to a given design. Models cover:
  The inputs and the outputs
  Internal states
  Scenarios
  Parallel properties
  Bug models

Interdependency - Architectural Level
The model: we want to test all dependency types of a resource (register) relating to all instructions.
The attributes:
  I - Instruction: add, add., sub, sub., ...
  R - Register (resource): G1, G2, ...
  DT - Dependency Type: WW, WR, RW, RR, and None
The coverage task semantics:
  A coverage instance is a quadruplet <Ij, Ik, Rl, DT>, where instruction Ik follows instruction Ij, and both share resource Rl with dependency type DT.

Interdependency - Architectural Level (2)
Additional semantics:
  The distance between the instructions is no more than 5
  The first instruction is at least the 6th
Restrictions:
  Not all combinations are valid
  Fixed-point instructions cannot share FP registers

Interdependency - Architectural Level (3)
Size and grouping:
  Original size: ~400 x 400 x 100 x 5 = 8*10^7
  Let the instructions be divided into disjoint groups I1 ... In
  Let the resources be divided into disjoint groups R1 ... Rk
  After grouping: ~60 x 60 x 10 x 5 = 180000
(A bookkeeping sketch for such a model follows.)
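A minimal C++ sketch of cross-product coverage bookkeeping for a grouped model of this shape: enumerate the tasks <Ij, Ik, Rl, DT>, mark the ones seen in traces, and report holes. The group counts and the "trace" tuples are invented for illustration.

    #include <cstdio>
    #include <set>
    #include <tuple>

    // Grouped coverage model: instruction group x instruction group x
    // resource group x dependency type.  Group counts are tiny here
    // (the slides use ~60 x 60 x 10 x 5).
    enum Dep { WW, WR, RW, RR, NONE, NDEP };
    const int NI = 3, NR = 2;

    using Task = std::tuple<int, int, int, int>;   // <Ij, Ik, Rl, DT>

    int main() {
        std::set<Task> covered;
        // Pretend these tuples were extracted from simulation traces.
        covered.insert({0, 1, 0, WR});
        covered.insert({2, 2, 1, RR});

        int holes = 0;
        for (int ij = 0; ij < NI; ++ij)
            for (int ik = 0; ik < NI; ++ik)
                for (int rl = 0; rl < NR; ++rl)
                    for (int dt = 0; dt < NDEP; ++dt)
                        if (!covered.count({ij, ik, rl, dt}))
                            ++holes;               // an untested coverage task
        std::printf("%d of %d coverage tasks are holes\n",
                    holes, NI * NI * NR * NDEP);
        return 0;
    }

A real tool would also mask off the illegal tasks (the restrictions) before reporting holes.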

The Coverage Process
Defining the domains of coverage:
  Where do we want to measure coverage
  What attributes (variables) to put in the trace
Defining models:
  Defining tuples and the semantics on the tuples
  Restrictions on legal tasks
Collecting data:
  Inserting traces into the database
  Processing the traces to measure coverage
Coverage analysis and feedback:
  Monitoring progress and detecting holes
  Refining the coverage models
  Generating regression suites

Coverage Model Hints
  Look for the most complex, error-prone part of the application.
  Create the coverage models at high-level design:
  – Improves the understanding of the design
  – Automates some of the test plan
  Create the coverage model hierarchically:
  – Start with small, simple models
  – Combine the models to create larger models
  Before you measure coverage, check that your rules are correct on some sample tests.
  Use the database to "fish" for hard-to-create conditions.
  Try to generalize as much as possible from the data:
  – "X was never 3" is much more useful than "the task (3,5,1,2,2,2,4,5) was never covered".

Future Coverage Usage
One area of research is automated coverage-directed feedback. If testcases/drivers can be automatically tuned to go after more diverse scenarios based on knowledge of what has been covered, then bugs can be encountered much sooner in the design cycle.
The difficulty lies in the expert system knowing how to alter the inputs to raise the level of coverage.

How do I pick a methodology?
Components to help guide you are in the design. The amount of work required to verify is often proportional to the complexity of the design-under-test.
– A simple macro may need only IVPs
– Is the design dataflow or control?
  FV works well on control macros
  Random works well on dataflow-intensive macros

How do I pick a methodology?
Experience! Each design-under-test has a best-fit methodology. It is human nature to use the techniques with which you're familiar. Gaining experience with multiple techniques will increase your ability to properly choose a methodology.

How would you test a Branch History Table?
  The BHT looks ahead in the instruction stream in order to prefetch branch target addresses; large performance benefit.
  The BHT array keeps track of previous branch target addresses.
  The BHT uses the current instruction address to look forward for known branch addresses.
  The BHT uses "taken" or "not-taken" branch execution results to update the array.
(A reference-model sketch, as one possible starting point, follows.)

Tools

Tools are targeted for specific levels. Most testcase drivers/checkers are targeted for a specific level; there may be some usage by related levels.

[Diagram: levels Designer, Unit, Element, System, Bringup; a tool targets one level, with potential usage at the neighboring levels.]

Examples of tools targeted for specific levels
Formal verification
  Designer sim level
  Cannot handle large pieces of the design
Architectural testcase generators
  AVPGEN, GENIE/GENESYS-PRO, SAK, TnK
  Intended for microprocessor or system levels
  Some usage at neighboring levels
There are no drivers/checkers that are used at all levels.

Mainline vs. Pervasive: definitions
Mainline function refers to testing of the logic under normal running conditions. For example, the processor is running instruction streams, the storage controller is accessing memory, and the I/O is processing transactions.
Pervasive function refers to testing of logic that is used for non-mainline functions, such as power-on-reset (POR), hardware debug, error injection/recovery, scanning, BIST, or instrumentation.
Pervasive functions are more difficult to test!

Mainline testing examples
  Architectural testcase generators (processor)
  Random drivers
  – Storage control verification
  – Data-moving devices
  System level testcase generators

Some Pervasive Testing targets
  Trace arrays
  Scan rings
  Power-on-reset
  Recovery and bad-machine paths
  BIST (Built-In Self Test)
  Instrumentation

And at the end ...

In the end, the verification engineer understands the design better than anybody else!

Future Outlook

Reasons for Evolution
  Increasing complexity
  Increasing model size
  Exploding state spaces
  Increasing number of functions
... but ...
  Reduced timeframe
  Reduced development budget

Evolution of Problem Debug
  Analysis of simulation results (no tool support)
  Interactive observation of model facilities
  Tracing of certain model facilities
  Trace postprocessing to reduce the amount of data
  On-the-fly checking by writing programs
  Intelligent agents, knowledge-based systems

Evolution of Functional Verification

[Build-up diagram, one addition per step:
  1. Testpatterns, derived from the architecture and the chip micro-architecture, drive simulation of the RTL-level model. Manual, labor intensive, too expensive for increasing complexity.
  2. A testcase generator produces the testpatterns; simulation now runs at unit and chip levels. Covers only a small subset of the total state space; often finds one bug in a problem area but not all related ones.
  3. Formal rules and formal verification are added alongside simulation. Manual definition of rules, limited to small design pieces.
  4. Coverage models and statistics analysis feed back on simulation. High effort for environment setup, and design complexity increases further.
  5. A high-level model is added above the RTL-level model. Manual effort is still needed to reflect coverage analysis in testcase generation.
  6. Data mining over the statistics and coverage data closes the loop.]

New ways / New development
  Combination of formal methods and simulation
  – First tools available today
  New algorithms in formal methods to solve size problems
  Verification of the specification, and formal proof that the implementation is logically correct
  – Requires a formal specification language
  Coverage-directed testcase generation
  HW/SW co-verification