Faults - analysis

Transcript of Faults - analysis

Page 1: Faults - analysis

• Z proof and System Validation Tests were most cost-effective.

• Traditional “module testing” was arduous and found few faults, except in fixed-point numerical code.

Page 2: Proof metrics

• Probably the largest program proof effort attempted…

– c. 9000 VCs: 3100 for functional and safety properties, 5900 from the run-time check (RTC) generator.

– 6800 discharged by simplifier (hint: buy a bigger workstation!)

– 2200 discharged by SPARK proof checker or “rigorous argument.”

Page 3: Proof metrics - comments

• Simplification of VCs is computationally intensive, so buy the most powerful server available.

• (1998 comment) A big computer is far cheaper than the time of the engineers using it!

• (Feb. 2001 comment) Times have changed - significant proofs can now be attempted on a £1000 PC!

• Proof of exception-freedom is extremely useful, and gives real confidence in the code (see the sketch below).

• Proof is still far less effort than module testing.
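
To make the exception-freedom claim concrete, here is a minimal sketch in SPARK 83-style notation; the subprogram and its precondition are illustrative, not taken from SHOLIS. The Examiner's run-time check generator emits a VC that the array index is in range, and the Simplifier can discharge it directly from the precondition:

   subtype Index is Integer range 1 .. 100;
   type Table is array (Index) of Integer;

   procedure Clear_Entry (T : in out Table; I : in Integer);
   --# derives T from T, I;
   --# pre I >= Index'First and I <= Index'Last;

   procedure Clear_Entry (T : in out Table; I : in Integer)
   is
   begin
      -- The RTC generator produces a VC that I lies within Index here;
      -- it follows from the precondition, so no Constraint_Error can occur.
      T (I) := 0;
   end Clear_Entry;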

Page 4: Difficult bits...

• User-interface.

• Tool support.

• Introduced state.

Page 5: User Interface

• Sequential code & serial interface to displays.

• Driving an essentially parallel user-interface is difficult.

• e.g. Updating background pages, run-indicator, button tell-backs etc.

• Some of the non-SIL4 displays were complex, output-intensive and under-specified in the SRS.

Page 6: Tool support

• SPARK tools are now much better than they were five years ago! Over 50 improvements identified as a result of SHOLIS.

• SPARK 95 would have helped.

• Compiler has been reliable, and generates good code.

• Weak support in SPARK proof system for fixed and floating point.

• Many in-house static analysis tools were developed: the worst-case execution time (WCET) analysis, stack analysis, and requirements traceability tools were all new and successful.

Page 7: Introduced state

• Some faults were due to introduced state:

– Optimisation of graphics output.

– Device driver complexity.

– Co-routine mechanisms.

Page 8: SHOLIS - Successes

• One of the largest Z/SPARK developments ever.

– Z proof work proved very effective.

• One of the largest program proof efforts ever attempted.

• Successful proof of exception-freedom on whole system.

• Proof of system-level safety-properties at both Z and code level.

Page 9: SHOLIS - Successes (2)

• Strong static analysis removes many common faults before they even get a chance to arise.

– Software Integration was trivial.

• Successful use of static analysis of WCET and stack use.

• Successful mixing of SIL4 and non-SIL4 code in one program using SPARK static analysis (see the sketch below).

• The first large-scale project to meet Def Stan 00-55 at SIL4. SHOLIS influenced the revision of 00-55 between 1991 and 1997.
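
To illustrate how SPARK static analysis can keep SIL4 and non-SIL4 code apart in one program, here is a hypothetical sketch (the package and all names are invented, not from SHOLIS). The Examiner enforces the information-flow contract, so a code path that let non-SIL4 data reach the SIL4 state would be rejected at analysis time:

   package Alarm  -- the SIL4 part of the program
   --# own State;
   is
      procedure Update (Reading : in Integer);
      --# global in out State;
      --# derives State from State, Reading;
      -- The derives clause is the contract: State may depend only on its
      -- old value and on Reading. If the body read any non-SIL4 variable,
      -- the Examiner would report a flow error.
   end Alarm;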

Page 10: 00-55/56 Resources

• http://www.dstan.mod.uk/

• “Is Proof More Cost-Effective Than Testing?” King, Chapman, Hammond, and Pryor. IEEE Transactions on Software Engineering, Volume 26, Number 8. August 2000.

Page 11: Programme

• Introduction
• What is High Integrity Software?
• Reliable Programming in Standard Languages

– Coffee

• Standards Overview
• DO178B and the Lockheed C130J

– Lunch

• Def Stan 00-55 and SHOLIS
• ITSEC, Common Criteria and Mondex

– Tea

• Compiler and Run-time Issues
• Conclusions

Page 12: Outline

• UK ITSEC and Common Criteria schemes

– What are they?
– Who’s using them?

• Main Principles
• Main Requirements
• Practical Consequences
• Example Project - the MULTOS CA

Page 13: The UK ITSEC Scheme

• The “I.T. Security Evaluation Criteria”

• A set of guidelines for the development of secure IT systems.

• Formed from an effort to merge the applicable standards from Germany, the UK, France, and the US (the “Orange Book”).

Page 14: ITSEC - Basic Concepts

• The “Target of Evaluation” (TOE) is an IT System (possibly many components).

• The TOE provides security (e.g. confidentiality, integrity, availability)

Page 15: ITSEC - Basic Concepts (2)

• The TOE has:

– Security Objectives (why security is wanted).
– Security Enforcing Functions (SEFs) (what functionality is actually provided).
– Security Mechanisms (how that functionality is provided).

• The TOE has a Security Target:

– Specifies the SEFs against which the TOE will be evaluated.
– Describes the TOE in relation to its environment.

Page 16: ITSEC - Basic Concepts (3)

• The Security Target contains:

– Either a System Security Policy or a Product Rationale.
– A specification of the required SEFs.
– A definition of the required security mechanisms.
– A claimed rating of the minimum strength of the mechanisms (“Basic”, “Medium”, or “High”, based on threat analysis).
– The target evaluation level.

Page 17: ITSEC Evaluation Levels

• ITSEC defines 7 levels of evaluation criteria, called E0 through E6, with E6 being the most rigorous.

• E0 is “inadequate assurance.”

• E6 is the toughest! Largely comparable with the most stringent standards in the safety-critical industries.

Page 18: ITSEC Evaluation Levels and Required Information for Vulnerability Analysis

Page 19: Evaluation

• To claim compliance with a particular ITSEC level, a system or product must be evaluated against that level by a Commercial Licensed Evaluation Facility (CLEF).

• The evaluation report answers the question: “Does the TOE satisfy its security target at the level of confidence indicated by the stated evaluation level?”

• A list of evaluated products and systems is maintained.

Page 20: ITSEC Correctness Criteria for each Level

• Requirements for each level are organized under the following headings:

• Construction - The Development Process

– Requirements, Architectural Design, Detailed Design, Implementation

• Construction - Development Environment

– Configuration Control, Programming Languages and Compilers, Developer Security

• Operation - Documentation

– User documentation, Administrative documentation

• Operation - Environment

– Delivery and Configuration, Start-up and Operation

Page 21: ITSEC Correctness Criteria - Examples

• Development Environment - Programming Languages and Compilers:

– E1 - No requirement.

– E3 - A well-defined language, e.g. an ISO standard. Implementation-dependent options shall be documented. The definition of the programming language shall define unambiguously the meaning of all statements used in the source code.

– E6 - As E3, plus documentation of compiler options, plus source code of any runtime libraries.

Page 22: The Common Criteria

• The US “Orange Book” and ITSEC are now being replaced by the “Common Criteria for IT Security Evaluation.”

• Aims to set a “level playing field” for developers in all participating states.

– UK, USA, France, Spain, Netherlands, Germany, Korea, Japan, Australia, Canada, Israel...

• Aims for international mutual recognition of evaluated products.

Page 23: CC - Key Concepts

• Defines two types of IT Security Requirement:

• Functional Requirements

– Define the behaviour of a system or product.
– What a product or system does.

• Assurance Requirements

– For establishing confidence in the implemented security functions.
– Is the product built well? Does it meet its requirements?

Page 24: CC - Key Concepts (2)

• A Protection Profile (PP) - a set of security objectives and requirements for a particular class of system or product.

– e.g. a Firewall PP, an Electronic Cash PP etc.

• A Security Target (ST) - a set of security requirements and specifications for a particular product (the TOE), against which its evaluation will be carried out.

– e.g. the ST for the DodgyTech6000 Router.

Page 25: CC Requirements Hierarchy

• Functional and assurance requirements are categorized into a hierarchy of:

• Classes

– e.g. FDP - User Data Protection

• Families

– e.g. FDP_ACC - Access Control Policy

• Components

– e.g. FDP_ACC.1 - Subset access control
– These are named in PPs and STs.

Page 26: Evaluation Assurance Levels (EALs)

• The CC defines 7 EALs - EAL1 through EAL7.

• An EAL defines a set of functional and assurance components which must be met.

• For example, EAL4 requires ALC_TAT.1, while EAL6 and EAL7 require ALC_TAT.3.

• EAL7 “roughly” corresponds with ITSEC E6 and Orange Book A1.

Page 27: The MULTOS CA

• MULTOS is a multi-application operating system for smart cards.

• Applications can be loaded and deleted dynamically once a card is “in the field.”

• To prevent forgery, applications and card-enablement data are signed by the MULTOS Certification Authority (CA).

• At the heart of the CA is a high-security computer system that issues these certificates.

Page 28: The MULTOS CA (2)

• The CA has some unusual requirements:

– Availability - aimed for c. 6 months between reboots, with warm-standby fault-tolerance.

– Throughput - the system is distributed and has custom cryptographic hardware.

– Lifetime - decades, and the system must be supported for that long.

– Security - most of the system is tamper-proof, and it is subject to the most stringent physical and procedural security.

– It was designed to meet the requirements of UK ITSEC E6.

• All requirements, design, implementation, and (on-going) support by Praxis Critical Systems.

Page 29: The MULTOS CA - Development Approach

• Overall process conformed to E6.

• Conformed in detail where retro-fitting would have been impossible:

– development environment security
– language and specification standards
– CM and audit information

• Reliance on COTS for E6 was minimized or eliminated.

– Assumed arbitrary but non-Byzantine behaviour.

Page 30: Development approach limitations

• COTS not certified (Windows NT, Backup tool, SQL Server…)

• We were not responsible for the operational documentation and environment.

• No formal proof.

• No systematic effectiveness analysis.

Page 31: System Lifecycle

• User requirements definition with REVEAL™.

• User interface prototype.

• Formalisation of the security policy and top-level specification in Z.

• System architecture definition.

• Detailed design, including formal process structure.

• Implementation in SPARK, Ada95 and VC++.

• Top-down testing with coverage measurement.

Page 32: Some difficulties...

• Security Target - what exactly is an SEF?

– No one seems to have a common understanding…

• “Formal description of the architecture of the TOE…”

– What does this mean?

• Source code or hardware drawings for all security-relevant components…

– Not for COTS hardware or software.

Page 33: The CA Test System

Page 34: Use of languages in the CA

• Mixed language development - the right tools for the right job!

– SPARK (30%): “security kernel” of the tamper-proof software.
– Ada95 (30%): infrastructure (concurrency, inter-task and inter-process communications, database interfaces etc.), bindings to ODBC and Win32.
– C++ (30%): GUI (Microsoft Foundation Classes).
– C (5%): device drivers, cryptographic algorithms.
– SQL (5%): database stored procedures.

Page 35: Use of SPARK in the MULTOS CA

• SPARK is almost certainly the only industrial-strength language that meets the requirements of ITSEC E6.

• Complete implementation in SPARK was simply impractical.

• Use of Ada95 is “Ravenscar-like” - simple, static allocation of memory and tasks.

• Dangerous or new language features were avoided, such as controlled types, requeue, and user-defined storage pools.

Page 36: Conclusions - Process Successes

• Use of Z for the formal security policy and system specification helped produce an indisputable specification of functionality.

• Use of Z, CSP and SPARK “extended” formality into design and implementation

• Top-down, incremental approach to integration and test was effective and economic

Page 37: Conclusions - E6 Benefits and Issues

• E6’s support of formality is in tune with our “Correctness by Construction” approach:

– it encourages sound requirements and specification
– we are more rigorous in later phases

• High security using COTS is both possible and necessary.

– cf. the safety world.

• The E6 approach is sound, but clarifications would be useful

– and could gain even higher levels of assurance...

• CAVEAT:

– We have not actually attempted evaluation,
– but we gained the benefits of developing to this standard.

Page 38: ITSEC and CC Resources

• ITSEC

– www.cesg.gov.uk

• Training, ITSEC documents, UK Infosec policy, “KeyMat”, “Non Secret Encryption”

– www.itsec.gov.uk

• Documents, certified products list, background information

• Common Criteria

– csrc.nist.gov/cc
– www.commoncriteria.org

• Mondex

– Ives, Blake and Earl, Michael: Mondex International: Reengineering Money. London Business School Case Study 97/2. See http://isds.bus.lsu.edu/cases/mondex/mondex.html

Page 39: Programme

• Introduction
• What is High Integrity Software?
• Reliable Programming in Standard Languages

– Coffee

• Standards Overview
• DO178B and the Lockheed C130J

– Lunch

• Def Stan 00-55 and SHOLIS
• ITSEC, Common Criteria and Mondex

– Tea

• Compiler and Run-time Issues
• Conclusions

Page 41: Outline

• Choosing a compiler

• Desirable properties of High-Integrity Compilers

• The “No Surprises” Rule

Page 42: Choosing a compiler

• In a high-integrity system, the choice of compiler should be documented and justified.

• In a perfect world, we would have time and money to:

– search for all candidate compilers,
– conduct an extensive practical evaluation of each,
– choose one, based on fitness for purpose, technical features and so on...

Page 43: Choosing a compiler (2)

• But in the real-world…

• The candidate set of compilers may have only one member!

• Your client’s favourite compiler is already bought and paid for…

• Bias and/or familiarity with a particular product may override technical issues.

Page 44: Desirable Properties of an HI compiler

• Much more than just “Validation”

• Annex H support
• Qualification
• Optimization and other “switches”
• Competence and availability of support
• Runtime support for HI systems
• Support for Object-Code Verification

Page 45: What does the HRG Report have to say?

• Recommends validation of appropriate annexes - almost certainly A, B, C, D, and H. Annex G (Numerics) may also be applicable for some systems.

• Does not recommend use of a subset compiler, although it recognizes that a compiler may have a mode in which a particular subset is enforced.

– The main compiler algorithms should be unchanged in such a mode.

Page 46: HRG Report (2)

• Evidence required from the compiler vendor:

– Quality Management System (e.g. ISO 9001)

– Fault tracking and reporting system

– History of faults reported, found, fixed etc.

– Availability of test evidence

– Access to known faults database

• A full audit of a compiler vendor may be called for.

Page 47: Annex H Support

• Pragma Normalize_Scalars

– Useful! Compilers should support this, but remember that many scalar types do not have an invalid representation (see the sketch below).

• Documentation of Implementation Decisions

– Yes. Demand this from the compiler vendor. If they can’t or won’t supply such information, then find out why not!
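
A minimal sketch of both points, with invented types for illustration:

   pragma Normalize_Scalars;  -- configuration pragma: scalars without an
                              -- explicit initial value are set to an invalid
                              -- representation, where the type has one

   type Command is range 1 .. 10;
   for Command'Size use 8;    -- bit patterns 0 and 11 .. 255 are invalid, so
                              -- use of an uninitialized Command can be caught

   type Byte is range 0 .. 255;
   for Byte'Size use 8;       -- every bit pattern is a valid value, so there
                              -- is no invalid representation and the pragma
                              -- cannot flag an uninitialized Byte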

Page 48: Annex H Support (2)

• Pragma Reviewable

– Useful in theory. Does anyone implement this other than to “turn on debugging”?

• Pragma Inspection_Point

– Yes please. It is particularly useful in combination with hardware-level debugging tools such as in-circuit emulators, processor probes, and logic analysers.
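
An invented minimal example: at an inspection point, the named objects must be externally viewable in the object code, whatever the optimizer has done around them:

   procedure Compute is
      X : Float := 2.0;
      Y : Float;
   begin
      Y := X * X;
      pragma Inspection_Point (X, Y);  -- X and Y are guaranteed inspectable
                                       -- here, e.g. from an in-circuit
                                       -- emulator breakpoint at this address
   end Compute;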

Page 49: Annex H Support (3)

• Pragma Restrictions

– Useful.

– Some runtime options (e.g. Ravenscar) imply a predefined set of restrictions defined by the compiler vendor.

– Better to use a coherent predefined set than to “roll your own”.

– Understand the effect of each restriction on code generation and runtime strategies.

– Even in SPARK, some restrictions are still useful - e.g. No_Implicit_Heap_Allocations, No_Floating_Point, No_Fixed_Point etc. (see the sketch below).
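
A sketch of such a coherent set for a sequential, SPARK-style program. The identifiers are standard Ada 95 restrictions from Annexes D and H; the particular selection is illustrative:

   pragma Restrictions (No_Implicit_Heap_Allocations);  -- Annex D
   pragma Restrictions (No_Allocators);                 -- Annex H
   pragma Restrictions (No_Exceptions);
   pragma Restrictions (No_Recursion);
   pragma Restrictions (No_Fixed_Point);
   -- Each restriction is enforced by the compiler, and each one narrows the
   -- code-generation and runtime strategies that have to be verified.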

Page 50: Compiler Qualification

• “Qualification” (whatever that means) of a full compiler is beyond reach.

• Pragmatic approaches:

– Avoidance of “difficult to compile” language features in the HI subset.
– In-service history.
– Choice of the “most commonly used” options.
– Access to faults history and database.
– Verification and Validation.
– Object code verification (a last resort!).

Page 51: Optimization and other “switches”

• Was the compiler validated using the set of “switches” that will be used in practice?

• Is the compiler fully supported using those switches?

• Your choice must be (as with everything else…) documented and justified.

Page 52: Optimization and High-Integrity

• Optimization remains a difficult issue

• In the past (i.e. 5-10 years ago…):

– Optimizers were seen as “buggy” or unreliable.

– The performance improvement was not that great.

– “High Integrity implies No Optimization” became accepted practice.

– This position is no longer appropriate in many cases!

Page 53: Optimization and High-Integrity (2)

• Today…

– Optimizers are far more reliable.

– Some optimization is very important in obtaining acceptable performance from modern architectures.

– Appropriate optimizations: local improvements such as common subexpression elimination (CSE), register tracking, elimination of redundant loads and stores etc.

– Inappropriate: global restructuring, such as loop unrolling, elimination of partial redundancies etc.

• Seek evidence from the compiler vendor as to the most appropriate, widely used settings.

Page 54: Support...

• Support is a crucial issue.

• High Integrity projects tend to “stretch” compilers, so finding and resolving problems is common.

• Try to foster a close relationship with compiler support staff. Become friends with the technical staff as well.

• You get what you pay for!

Page 55: Runtime Systems and High Integrity

• In delivering a high-integrity system, a run-time library (RTL) must be verified to the same level of assurance as the rest of the application.

• Most Ada compiler vendors have responded to this need with various products:

– Ada83 - SMART, C-SMART (Alsys, TSP); VADSsc (Verdix, Rational)

– Ada95 - ObjectAda Real-Time/Raven (Aonix); APEX MARK (Rational); GMART, GSTART (Green Hills); GNAT-No-Runtime (ACT)

Page 56: Certifiable runtime systems

• Three main approaches:

– “Small, Certifiable” runtime systems

– Ravenscar

– No Runtime

Page 57: “Small, Certifiable” runtime systems

• Marketing goal: largely aimed at meeting the requirements of DO-178B up to and including level A systems.

• Technical goal: elimination of language features with an unacceptably large runtime impact, e.g. tasking, heap allocation, exceptions, predefined I/O etc.

• Examples: Aonix C-SMART and Rational VADSsc for Ada83.

Page 58: Ravenscar Profile

• A “profile” of tasking and other features in Ada95 that is appropriate for high-integrity and hard real-time systems.

• Technical goals:

– A particularly efficient, simple runtime system implementation on a single-processor target.

– Amenable to static timing and schedulability analysis.

• Example: Aonix ObjectAda Real-Time/Raven (see the sketch below).
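
For a flavour of the profile, a minimal Ravenscar-style sketch (invented for illustration): one library-level task signalled through a protected object with a single entry and a simple Boolean barrier, all statically allocated:

   package Raven_Demo is
      protected Event is
         procedure Signal;
         entry Wait;                   -- at most one entry per protected object
      private
         Occurred : Boolean := False;  -- barrier must be a simple Boolean
      end Event;
   end Raven_Demo;

   package body Raven_Demo is
      protected body Event is
         procedure Signal is
         begin
            Occurred := True;
         end Signal;

         entry Wait when Occurred is   -- simple barrier: no function calls
         begin
            Occurred := False;
         end Wait;
      end Event;

      task Worker;                     -- static, library-level task

      task body Worker is
      begin
         loop
            Event.Wait;                -- suspend until signalled
            null;                      -- sporadic work goes here
         end loop;
      end Worker;
   end Raven_Demo;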

Page 59: No Run Time

• A radical idea - eliminate all language features which require any runtime library support.

• Advantages: no runtime implies no certification of any COTS component.

• Example: ACT GNAT-No-Runtime (GNORT).

Page 60: Runtime options and development

• In all these runtime options, the Board Support Package (BSP) remains.

• The BSP provides support for:

– Download and startup
– “Cold boot” (i.e. from ROM)
– Hardware-specific initialisation
– Debugging (i.e. breakpoints, single-stepping)
– Coverage analysis
– Some predefined, “simple” I/O

Page 61: Runtime options and development

• Another problem: traditional testing techniques such as unit test and coverage analysis may not work on the BSP.

• In some projects, we have fielded two BSPs:

• A “Debug” BSP

– Supports all of the above functionality, for any application.

• A “Strip” BSP

– Supports only the functionality that is required for the delivery of a specific application (e.g. no debug, no I/O etc.).

• Advantages of the Strip BSP: easier (manual) coverage analysis and testing, and no “dead” code.

Page 62: Runtime Systems and High Integrity Subsets

• Design goals and choices in developing an HI runtime system largely intersect with those of an HI language.

• SPARK was designed to require little or no runtime library.

• It is no surprise, then, that SPARK is compatible with all the products mentioned so far.

• Some expansion of SPARK (towards Ravenscar, for example) is now possible.

Page 63: Object Code Verification (OCV)

• In the absence of sufficient trust in a compiler, manual verification of object code may be required.

• Avoid this if at all possible!

• Requires detailed knowledge of the language, of the compiler, and of the target processor.

• Very hard work!

Page 64: OCV (2)

• A central problem is traceability of source to object code:

– source code statements to object code instructions,

– source code declarations to memory layout.

– See LRM Annex H.3.1.

• Existing traditional support for debugging offers a partial solution.

– Support could be better, but there is little call for improvement from customers.

• Complex interaction with language subset and optimization issues.

Page 65: OCV Techniques - 1 step

• Compare source code with disassembled object code directly.

• Disadvantages:

– The “semantic gap” between the two is very wide (especially for a rich language like Ada).

– Requires detailed knowledge of Ada compilation, code generation, language issues, target architecture and so on.

Page 66: OCV Techniques - 2 step

• Step 1 - review the source code against the compiler-generated intermediate language (IL).

• Step 2 - review the IL against the disassembled object code.

• Advantages:

– The semantic gaps are narrower.

– It splits the skills required.

• Disadvantage:

– Not many compilers allow users access to the IL.

Page 67: OCV Example

• One OCV exercise found a single nonsensical code sequence in a program.

• Analysis by three different people could not work out what was going on - was it a compiler bug?

• No! It turned out to be a bug in the disassembler (listing the wrong instruction for a certain op-code).

• No other problems found.

Page 68: The “No Surprises” rule

• My own personal rule-of-thumb for HI programming.

• When programming, try to predict the generated code, including timing and memory usage.

– This means you have to “know” the compiler and target pretty well!

• The compiler should never surprise you.

Page 69: An example surprise...

• In the SHOLIS application software, there are several large constant arrays:

   My_Constant : constant Big_Type := Big_Type'( ... );

• The compiler is Alsys AdaWorld Ada83, targeting the 68040. What code is generated?

Page 70: An example surprise (2)

• The aggregate is not static in Ada83, so it is evaluated at run-time into a temporary variable.

• This temporary variable is larger than 1024 bytes, so the compiler puts it on the heap!

   declare
      type Temp_Ptr is access Big_Type;
      Temp : Temp_Ptr;
   begin
      Temp := RTS.Allocate_Memory (Big_Type'Size);
      Temp.all := Big_Type'( ... );
      My_Constant := Temp.all;
      RTS.Free (Temp);
   end;

Page 71: An example surprise (3)

• Unfortunately, the SMART runtime does not implement heap allocation… oh dear… a big surprise!

• N.B. Things are much better in Ada95.

• Another rule of thumb: always have someone on a project team who is capable of reading and reviewing the object code.

Page 72: Compiler and Runtime Resources

• See the compiler vendors!

• “Re-engineering a Safety-Critical Application Using SPARK95 and GNORT” R. Chapman and R. Dewar, in Reliable Software Technologies - Ada-Europe 1999, Springer LNCS Vol. 1622.

Page 73: Programme

• Introduction
• What is High Integrity Software?
• Reliable Programming in Standard Languages

– Coffee

• Standards Overview
• DO178B and the Lockheed C130J

– Lunch

• Def Stan 00-55 and SHOLIS
• ITSEC, Common Criteria and Mondex

– Tea

• Compiler and Run-time Issues
• Conclusions

Page 74: Some properties of critical systems

• Building safe systems is not the same as certification:

– the goal is to build a demonstrably safe system

– a demonstrably safe system can be certified to any standard

• Showing a system is safe is invariably harder than building a safe system

• Quality and safety cannot be retro-fitted

Page 75: A cost-effective approach:

• Requires a “correctness by construction” approach:

– build in safety from the start

– build in quality from the start

– bug prevention, not bug detection

• The approach should be “verification driven”:

– evidence of suitability should be produced as a side-effect of the development process

Page 76: No magic

• There is no easy way, no magic wands and no silver bullets

• Therefore we must deploy a range of techniques that attack all sources of risk and error:

– hazard analysis

– requirements capture

– specification

– programming languages

– code

– analysis

– test

Page 77: The Role of Ada

• Superficially, Ada is not a good match to DO-178B:

– source code to object code mapping can be harder (e.g. run-time checks)

– run-time library overheads increase certification cost

• However, in practice:

– the savings from better abstraction, better front-end checking etc. greatly outweigh this extra cost

– the disadvantages can be eliminated by subsetting

– only Ada allows the construction of rigorous subsets

Page 78: A Model Process

• Really understand the requirements.

• Really understand the hazards.

• Design a system to mitigate the hazards.

• Design software to preserve the mitigations:

– specify it formally

– prove the specification has the required properties

• Code in an unambiguous language:

– analyse the code before compilation

– ideally, don’t give coders access to the compiler!

• Test, with the emphasis on requirements-based testing:

– obtain test cases from the requirements, hazard analysis and formal spec

– don’t waste time on blanket unit test

Page 79: A Model Team (An Opinion!)

• Small and not too hierarchical.

• Integrated systems and software expertise.

• Engineering expertise is more important than tool- or language-specific knowledge.

• But the team should include:

– a domain specialist (someone who really understands what is being built)

– a language/compiler guru

– a mathematician

– a quality/configuration management and documentation pedant

Page 80: A final word...

Better really can be cheaper!