ESA Software Engineering Standards


ESA PSS-05-0 Issue 2
February 1991

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

ESA software engineering standards, Issue 2

Prepared by: ESA Board for Software Standardisation and Control (BSSC)

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: ESA PSS-05-0 Software Engineering Standards

2. ISSUE   3. REVISION   4. DATE   5. REASON FOR CHANGE

0          0             1984      First issue of BSSC(84)1, Issue no 1
1          0             1987      First issue of ESA PSS-05-0, Issue 1
2          0             1991      See Foreword
2          1             1994      Editorial corrections and minor changes

Issue 2 Revision 1 approved, October 1994
Board for Software Standardisation and Control
C. Mazza, Chairman

Issue 2 approved, 21st February 1991
Telematics Supervisory Board

Issue 2 approved by: The Inspector General, ESA

Published by ESA Publications Division, ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E2
ISSN 0379-4059

Copyright © 1994 by European Space Agency

TABLE OF CONTENTS

INTRODUCTION
  1 PURPOSE OF THE STANDARDS
  2 STRUCTURE OF THE STANDARDS
  3 CLASSES OF STANDARD PRACTICES
    3.1 Mandatory practices
    3.2 Recommended practices
    3.3 Guideline practices
  4 APPLYING THE STANDARDS
    4.1 Factors in applying the Standards
    4.2 Applying the Standards in a system development
    4.3 Methods and tools

Part 1 Product Standards

CHAPTER 1 THE SOFTWARE LIFE CYCLE
  1.1 INTRODUCTION
  1.2 PHASES, ACTIVITIES AND MILESTONES
    1.2.1 UR phase: user requirements definition
    1.2.2 SR phase: software requirements definition
    1.2.3 AD phase: architectural design
    1.2.4 DD phase: detailed design and production
    1.2.5 TR phase: transfer
    1.2.6 OM phase: operations and maintenance
  1.3 LIFE CYCLE APPROACHES
    1.3.1 The waterfall approach
    1.3.2 The incremental delivery approach
    1.3.3 The evolutionary development approach
  1.4 PROTOTYPING
  1.5 HANDLING REQUIREMENTS CHANGE

CHAPTER 2 THE USER REQUIREMENTS DEFINITION PHASE
  2.1 INTRODUCTION
  2.2 INPUTS TO THE PHASE
  2.3 ACTIVITIES
    2.3.1 Capture of user requirements
    2.3.2 Determination of operational environment
    2.3.3 Specification of user requirements
      2.3.3.1 Classification of user requirements
      2.3.3.2 Attributes of user requirements
    2.3.4 Reviews
  2.4 OUTPUTS FROM THE PHASE
    2.4.1 User Requirements Document
    2.4.2 Acceptance test plans
    2.4.3 Project management plan for the SR phase
    2.4.4 Configuration management plan for the SR phase
    2.4.5 Verification and validation plan for the SR phase
    2.4.6 Quality assurance plan for the SR phase

CHAPTER 3 THE SOFTWARE REQUIREMENTS DEFINITION PHASE
  3.1 INTRODUCTION
  3.2 INPUTS TO THE PHASE
  3.3 ACTIVITIES
    3.3.1 Construction of the logical model
    3.3.2 Specification of software requirements
      3.3.2.1 Classification of software requirements
      3.3.2.2 Attributes of software requirements
      3.3.2.3 Completeness of software requirements
      3.3.2.4 Consistency of software requirements
      3.3.2.5 Duplication of software requirements
    3.3.3 Reviews
  3.4 OUTPUTS FROM THE PHASE
    3.4.1 Software Requirements Document
    3.4.2 System test plans
    3.4.3 Project management plan for the AD phase
    3.4.4 Configuration management plan for the AD phase
    3.4.5 Verification and validation plan for the AD phase
    3.4.6 Quality assurance plan for the AD phase

CHAPTER 4 THE ARCHITECTURAL DESIGN PHASE
  4.1 INTRODUCTION
  4.2 INPUTS TO THE PHASE
  4.3 ACTIVITIES
    4.3.1 Construction of the physical model
      4.3.1.1 Decomposition of the software into components
      4.3.1.2 Implementation of non-functional requirements
      4.3.1.3 Design quality criteria
      4.3.1.4 Trade-off between alternative designs
    4.3.2 Specification of the architectural design
      4.3.2.1 Functional definition of the components
      4.3.2.2 Definition of the data structures
      4.3.2.3 Definition of the control flow
      4.3.2.4 Definition of the computer resource utilisation
    4.3.3 Selection of programming languages
    4.3.4 Reviews
  4.4 OUTPUTS FROM THE PHASE
    4.4.1 Architectural Design Document
    4.4.2 Integration test plans
    4.4.3 Project management plan for the DD phase
    4.4.4 Configuration management plan for the DD phase
    4.4.5 Verification and validation plan for the DD phase
    4.4.6 Quality assurance plan for the DD phase

CHAPTER 5 THE DETAILED DESIGN AND PRODUCTION PHASE
  5.1 INTRODUCTION
  5.2 INPUTS TO THE PHASE
  5.3 ACTIVITIES
    5.3.1 Detailed design
    5.3.2 Production
      5.3.2.1 Coding
      5.3.2.2 Integration
      5.3.2.3 Testing
        5.3.2.3.1 Unit testing
        5.3.2.3.2 Integration testing
        5.3.2.3.3 System testing
    5.3.3 Reviews
  5.4 OUTPUTS FROM THE PHASE
    5.4.1 Code
    5.4.2 Detailed Design Document
    5.4.3 Software User Manual
    5.4.4 Project management plan for the TR phase
    5.4.5 Configuration management plan for the TR phase
    5.4.6 Acceptance test specification
    5.4.7 Quality assurance plan for the TR phase

CHAPTER 6 THE TRANSFER PHASE
  6.1 INTRODUCTION
  6.2 INPUTS TO THE PHASE
  6.3 ACTIVITIES
    6.3.1 Installation
    6.3.2 Acceptance tests
    6.3.3 Provisional acceptance
  6.4 OUTPUTS FROM THE PHASE
    6.4.1 Statement of provisional acceptance
    6.4.2 Provisionally accepted software system
    6.4.3 Software Transfer Document

CHAPTER 7 THE OPERATIONS AND MAINTENANCE PHASE
  7.1 INTRODUCTION
  7.2 INPUTS TO THE PHASE
  7.3 ACTIVITIES
    7.3.1 Final acceptance
    7.3.2 Maintenance
  7.4 OUTPUTS FROM THE PHASE
    7.4.1 Statement of final acceptance
    7.4.2 Project History Document
    7.4.3 Finally accepted software system

Part 3 Appendices

APPENDIX A GLOSSARY
APPENDIX B SOFTWARE PROJECT DOCUMENTS
APPENDIX C DOCUMENT TEMPLATES
APPENDIX D SUMMARY OF MANDATORY PRACTICES
APPENDIX E FORM TEMPLATES
APPENDIX F INDEX


FOREWORD

Software engineering is an evolving discipline, and many changes have occurred in the field since the last issue of the ESA PSS-05-0 Software Engineering Standards in January 1987. The BSSC started, therefore, a programme to update the Standards in 1989. The opinions of users of the Standards were invited, software engineering methods were reviewed, and standards recently issued by other organisations were examined.

In 1989, the BSSC called for users of the Standards to submit change proposals. Nearly 200 separate proposals were received, and many of the points they raised have been incorporated in this new issue of the Standards.

The BSSC has started the development of lower-level documents, called ‘Guides’, which will describe in detail how to implement the practices described in the Standards. The Guides will also discuss software engineering methods. The development work on the Guides has required careful scrutiny of the Standards, and some changes to them have been found necessary.

The last issue of the Standards took into account the software engineering standards published by the Institute of Electrical and Electronics Engineers (IEEE). Most of these standards are recognised by the American National Standards Institute (ANSI). This issue takes into account several new standards which have been published by the IEEE since the last issue.

The following BSSC members have contributed to the implementation of this issue: Carlo Mazza (chairman), Bryan Melton, Daniel De Pablo, Adriaan Scheffer and Richard Stevens.

The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and the Guides. The BSSC expresses its gratitude to all those software engineers in ESA and in Industry who have contributed proposals for the improvement of the Standards.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat         or   BSSC/ESTEC Secretariat
Attention of Mr C Mazza            Attention of Mr A Scheffer
ESOC                               ESTEC
Robert Bosch Strasse 5             Postbus 299
D-6100 Darmstadt                   NL-2200 AG Noordwijk
Germany                            The Netherlands


INTRODUCTION

1 PURPOSE OF THE STANDARDS

This document describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA), either in house or by industry.

Software is defined in these Standards as the programs, procedures, rules and all associated documentation pertaining to the operation of a computerised system. These Standards are concerned with all software aspects of a system, including its interfaces with the computer hardware and with other components of the system. Software may be either a subsystem of a more complex system or it may be an independent system.

Where ESA PSS-01-series documents are applicable, and as a consequence ESA PSS-01-21, ‘Software Product Assurance Requirements for ESA Space Systems’, is also applicable, Part 2, Chapter 5 of these Standards, ‘Software Quality Assurance’, ceases to apply.

2 STRUCTURE OF THE STANDARDS

The ESA Software Engineering Standards are divided into three parts:

Part 1, Product Standards, contains standards, recommendations and guidelines concerning the product, i.e. the software to be defined, implemented, operated and maintained.

Part 2, Procedure Standards, describes the procedures which are used to manage a software project.

Part 3, ‘Appendices’, contains summaries, tables, forms and checklists of mandatory practices.


3 CLASSES OF STANDARD PRACTICES

Three categories of standard practices are used in the ESA Software Engineering Standards: mandatory practices, recommended practices and guidelines.

3.1 Mandatory practices

Sentences containing the word ‘shall’ are mandatory practices. These practices must be followed, without exception, in all software projects. The word ‘must’ is used in statements that repeat a mandatory practice.

3.2 Recommended practices

Sentences containing the word ‘should’ are strongly recommended practices. A justification to the appropriate level in the Agency’s hierarchy is needed if they are not followed.

3.3 Guideline practices

Sentences containing the word ‘may’ are guidelines. No justification is required if they are not followed.

4 APPLYING THE STANDARDS

Software projects vary widely in purpose, size, complexity and availability of resources. Software project management should define, in the planning documents (see Part 2), how the Standards are to be applied. Deciding how standards are to be applied in specific projects is often called ‘tailoring’.

4.1 Factors in applying the Standards

A number of factors can influence how the Standards are applied, for example the:
• project cost, both in development and operation;
• number of people required to develop, operate and maintain the software;
• number of potential users of the software;
• amount of software that has to be produced;


• criticality of the software, as measured by the consequences of its failure;
• complexity of the software, as measured by the number of interfaces or a similar metric;
• completeness and stability of the user requirements;
• risk values included with the user requirements.

Note that a project of two man-years or less is considered small, while one of twenty man-years or more is considered large.

Software project management should define a life cycle approach and documentation plan that reflects these factors.

Projects which use any commercial software must procure software against stated requirements. Projects are not expected to reproduce design and implementation documentation about commercial software. The maintenance of such software is a responsibility of the supplier.

The procedures for software production may be embedded in the infrastructure of the working environment; once these procedures have been established, repetitive documentation of them is unnecessary. Groups running many projects should ensure that their working practices conform to the ESA Software Engineering Standards. Project documentation should reference these practices.

Large, complex software projects may need to extend these Standards. Such projects may need to give more detailed consideration to the procedural and life cycle aspects when there are changing requirements or multiple contractors working in parallel.

4.2 Applying the Standards in a system development

Software developed for ESA is frequently part of a larger system, satellite systems being an obvious example. In this situation a number of activities will already have been performed at ‘system level’ (as part of the system development life cycle) before the life cycle for any software part of the system can commence.

It is a ‘systems engineering’ function to define the overall requirements for the system to be built, often expressed in a System Requirement Document (often referred to as an SRD, but not to be confused with a Software Requirements Document). From this system requirements document a decomposition into subsystems is often performed, with resulting subsystem specifications. Trade-offs are done to partition the system/subsystems into hardware and software, applying criteria specific to the system to be built (e.g. commonality, reliability, criticality etc). Once the need for a software item has been established, its life cycle, as defined in this standard, can begin. Each of the software items identified in the system will have its individual life cycle.

Many of the user requirements may well exist in the system documentation. It is a false economy to assume that system requirements are sufficient input to the development of a software subsystem. To ensure some consistency in the input to a software project, a User Requirements Document (URD) should always be produced. The URD should be traceable to the system and/or subsystem documentation.

The responsibilities for the production and change control of the URD should be agreed between ‘system’ and ‘software’ project management, and recorded in the Software Project Management Plan.

4.3 Methods and tools

These standards do not make the use of any particular software engineering method or tool mandatory. The Standards describe the mandatory practices, recommended practices and guidelines for software engineering projects, and allow each project to decide the best way of implementing them.

References to methods and tools appear, however, in these standards for two reasons. Firstly, terminology from particular methods becomes, with time, part of computing vocabulary. Secondly, examples of possible ways of implementing the Standards are useful.

Part 1 Product Standards

[Figure 1.2: The Software Life Cycle Model — a diagram showing, for each phase (UR, SR, AD, DD, TR, OM), the major activities (determination of the operational environment, identification of user and software requirements, construction of the logical and physical models, definition of major components, module design, coding, unit/integration/system tests, installation, provisional acceptance tests, final acceptance, operations, and maintenance of code and documentation), the deliverable items (URD, SRD, ADD, DDD, SUM, STD, PHD and code), the reviews (technical reviews, walkthroughs, inspections) and the major milestones (approval of the URD, SRD, ADD and code/DDD/SUM; provisional and final acceptance; delivery of the STD and PHD). An arrow implies that the item is under change control.]


CHAPTER 1 THE SOFTWARE LIFE CYCLE

1.1 INTRODUCTION

This chapter defines the overall software life cycle. The individual phases of the life cycle are described in more detail in subsequent chapters.

In these Standards the term ‘software development project’ is often used. Clearly the development of software also involves computer hardware aspects. Trade-offs between hardware and software are part of designing a computerised system, except when the hardware configuration is predefined and constrains the software design.

1.2 PHASES, ACTIVITIES AND MILESTONES

The software life cycle starts when a software product is conceived and ends when it is no longer available for use, i.e. it contains the whole of the development, operations and maintenance activities.

The products of a software development project shall be delivered in a timely manner and be fit for their purpose. Software development activities shall be systematically planned and carried out. A ‘life cycle model’ structures project activities into ‘phases’ and defines what activities occur in which phase. Figure 1.2 shows the life cycle model used in these Standards.

A ‘life cycle approach’ is a combination of the basic phases of the life cycle model. Section 1.3 describes three possible life cycle approaches which cover most of the needs of the Agency.

All software projects shall have a life cycle approach which includes the basic phases shown in Figure 1.2:
• UR phase - Definition of the user requirements
• SR phase - Definition of the software requirements
• AD phase - Definition of the architectural design
• DD phase - Detailed design and production of the code
• TR phase - Transfer of the software to operations
• OM phase - Operations and maintenance


The first four phases end with a review, denoted by ‘/R’ (e.g. UR/R is the User Requirements Review). These phases must occur whatever the size, the application (e.g. scientific, administrative, real time, batch), the hardware, the operating system or programming language used, or whether the project is carried out by in-house staff or by industry. Each of these factors, however, influences the development approach, and the style and content of the deliverable items.

In Figure 1.2 the heavy black line marks the boundary of the software life cycle. This begins with the delivery of the User Requirements Document (URD) to the developer for review. The UR/R is the first activity of the software life cycle. Following the approval of the URD, three ‘development’ phases must take place before the software is transferred to the users for operations. The deliverables of each phase must be reviewed and approved before proceeding to the next. After a period of operations, the software is retired. This event marks the end of the software life cycle.

There are six major milestones that mark progress in the software life cycle. These milestones, shown in Figure 1.2 as filled triangles, are the:
• approval of the User Requirements Document (URD);
• approval of the Software Requirements Document (SRD);
• approval of the Architectural Design Document (ADD);
• approval of the Detailed Design Document (DDD), the Software User Manual (SUM), the code, and the statement of readiness for provisional acceptance testing;
• statement of provisional acceptance and the delivery of the Software Transfer Document (STD);
• statement of final acceptance and the delivery of the Project History Document (PHD).

The last milestone does not fall at the end of a phase, but at the end of the warranty period.

These milestones have been selected as the minimum necessary for a workable contractual relationship. They must be present in all projects. In long projects, additional milestones should be added to measure the progress of deliverables.


1.2.1 UR phase: user requirements definition

The UR phase is the ‘problem definition phase’ of a software project. The scope of the system must be defined. The user requirements must be captured. This may be done by interview or survey, or by building prototypes. Specific user requirements must be identified and documented in the User Requirements Document (URD).

The involvement of the developers in this phase varies according to the familiarity of the users with software. Some users can produce a high-quality URD, while others may need help from the developers.

The URD must always be produced. The review of the URD is done by the users, the software and hardware engineers and the managers concerned. The approved URD is the input to the SR phase.

Before the completion of the User Requirements Review (UR/R), a Software Project Management Plan outlining the whole project must be produced by the developer. This plan must contain a cost estimate for the project. Detailed plans for the SR phase must also be produced.

1.2.2 SR phase: software requirements definition

The SR phase is the ‘analysis’ phase of a software project. A vital part of the analysis activity is the construction of a ‘model’ describing ‘what’ the software has to do, and not ‘how’ to do it. Building prototypes to clarify the software requirements may be necessary.

The principal deliverable of this phase is the Software Requirements Document (SRD). The SRD must always be produced for every software project.

Implementation terminology should be omitted from the SRD. The SRD must be reviewed formally by the users, by the computer hardware and software engineers, and by the managers concerned, during the Software Requirements Review (SR/R).

During the SR phase, the section of the Software Project Management Plan outlining the rest of the project must be updated. The plan must contain an estimate of the total project cost, and best efforts should be made to achieve an accuracy of 30% or better. Detailed plans for the AD phase must also be produced.


1.2.3 AD phase: architectural design

The purpose of the AD phase is to define the structure of the software. The model constructed in the SR phase is the starting point. This model is transformed into the architectural design by allocating functions to software components and defining the control and data flow between them.

This phase may involve several iterations of the design. Technically difficult or critical parts of the design have to be identified. Prototyping of these parts of the software may be necessary to confirm the basic design assumptions. Alternative designs may be proposed, one of which must be selected.

The deliverable item which constitutes the formal output of this phase is the Architectural Design Document (ADD). The ADD must always be produced for every software project. The ADD must be reviewed formally by the computer hardware and software engineers, by the users, and by the management concerned, during the Architectural Design Review (AD/R).

During the AD phase, a Software Project Management Plan outlining the rest of the project must be produced. This plan must contain an estimate of the project cost, and best efforts should be made to achieve an accuracy of 10% or better. Detailed plans for the DD phase must also be produced.

1.2.4 DD phase: detailed design and production

The purpose of the DD phase is to detail the design of the software, and to code, document and test it.

The Detailed Design Document (DDD) and the Software User Manual (SUM) are produced concurrently with coding and testing. Initially, the DDD and SUM contain the sections corresponding to the top levels of the system. As the design progresses to lower levels, related subsections are added. At the end of the phase, the documents are completed and, with the code, constitute the deliverable items of this phase.

During this phase, unit, integration and system testing activities are performed according to verification plans established in the SR and AD phases. As well as these tests, there should be checks on software quality.


The three deliverable items (Code, DDD, SUM), which have already been the subject of intermediate reviews during the DD phase, must be formally reviewed by the software engineers and the management concerned, during the Detailed Design Review (DD/R). At the end of the review process, the software can be declared ready for provisional acceptance testing.

1.2.5 TR phase: transfer

The purpose of this phase is to establish that the software fulfils the requirements laid down in the URD. This is done by installing the software and conducting acceptance tests.

When the software has been demonstrated to provide the required capabilities, it can be provisionally accepted and operations started.

The Software Transfer Document (STD) must be produced during the TR phase to document the transfer of the software to the operations team.

1.2.6 OM phase: operations and maintenance

Once the software has entered into operation, it should be carefully monitored to confirm that it meets all the requirements defined in the URD. Some of the requirements, for example those for availability, may take a period of time to validate. When the software has passed all the acceptance tests, it can be finally accepted.

The Project History Document (PHD) summarises the significant managerial information accumulated in the course of the project. This document must be issued after final acceptance. It should be reissued at the end of the life cycle, with information gathered in the OM phase.

After final acceptance, the software may be modified to correct errors undetected during earlier phases, or because new requirements arise. This is called ‘maintenance’.

For the whole period of operation, particular attention should be paid to keeping the documentation up to date. Information on faults and failures should be recorded to provide the raw data for the establishment of software quality metrics for subsequent projects. Tools should be used to facilitate the collection and analysis of quality data.
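The Standards do not prescribe any particular tooling for this data collection, but the kind of record keeping implied can be illustrated with a short sketch. The record fields and the fault-density metric below are assumptions chosen for illustration, not mandated practices:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class FaultRecord:
        """One fault or failure observed during the OM phase (fields assumed)."""
        report_id: str      # problem report identifier
        date_raised: date
        component: str      # configuration item the fault was traced to
        severity: str       # e.g. 'critical', 'major', 'minor'

    def fault_density(records: list[FaultRecord], ksloc: float) -> float:
        """Faults per thousand source lines: one simple quality metric
        derivable from the raw OM-phase data."""
        return len(records) / ksloc

    log = [
        FaultRecord("PR-001", date(1991, 3, 2), "telemetry-decoder", "major"),
        FaultRecord("PR-002", date(1991, 4, 17), "display", "minor"),
    ]
    print(f"fault density: {fault_density(log, ksloc=12.5):.2f} faults/KSLOC")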


1.3 LIFE CYCLE APPROACHES

The software life cycle model, shown in Figure 1.2, summarises the phases and activities which must occur in any software project. A life cycle approach, based upon this model, should be defined, for each project, in the Software Project Management Plan.

This section defines three common approaches. In the diagrams, the phases of Figure 1.2 have been reduced to boxes. Arrows connecting the boxes represent permitted transitions.

1.3.1 The waterfall approach

[Figure 1.3.1: The waterfall approach — the phases UR, SR, AD, DD, TR and OM executed in sequence.]

The ‘waterfall’ approach, shown in Figure 1.3.1, is the simplest interpretation of the model shown in Figure 1.2. The phases are executed sequentially, as shown by the heavy arrows. Each phase is executed once, although iteration of part of a phase is allowed for error correction, as shown by the dashed arrows. Delivery of the complete system occurs at a single milestone at the end of the TR phase. The approach allows the contractual relationship to be simple.
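As an illustration only (the Standards mandate the phases, not any encoding of them), the permitted transitions of the waterfall approach can be captured in a few lines:

    # The six basic phases of Figure 1.2, in waterfall order.
    PHASES = ["UR", "SR", "AD", "DD", "TR", "OM"]

    def permitted(current: str, proposed: str) -> bool:
        """A transition may go one phase forward (the heavy arrows of
        Figure 1.3.1) or one phase back for error correction (the
        dashed arrows)."""
        i, j = PHASES.index(current), PHASES.index(proposed)
        return j == i + 1 or j == i - 1

    assert permitted("AD", "DD")       # normal forward progression
    assert permitted("DD", "AD")       # iteration back for error correction
    assert not permitted("UR", "AD")   # skipping a phase is not permitted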


1.3.2 The incremental delivery approach

[Figure 1.3.2: The incremental delivery approach — the UR, SR and AD phases are performed once; the DD, TR and OM phases are then repeated for each numbered delivery.]

The ‘incremental delivery’ approach, shown in Figure 1.3.2, is characterised by splitting the DD, TR and OM phases into a number of more manageable units, once the complete architectural design has been defined. The software is delivered in multiple releases, each with increased functionality and capability. This approach is beneficial for large projects, where a single delivery would not be practical. This may occur for a number of reasons, such as:
• certain functions may need to be in place before others can be effective;
• the size of the development team may necessitate subdivision of the project into a number of deliveries;
• budgeting considerations may only allow partial funding over a number of years.

In all cases, each deliverable should be usable, and provide a subset of the required capabilities.

A disadvantage of the incremental delivery approach is that regression testing is required to confirm that existing capabilities of the software are not impaired by any new release. The increased amount of testing required increases the cost of the software.


1.3.3 The evolutionary development approach

[Figure 1.3.3: The evolutionary development approach — three successive development cycles (DEV 1/OM 1, DEV 2/OM 2, DEV 3/OM 3), with overlapping OM phases. The ‘DEV’ box is equivalent to the UR, SR, AD, DD and TR phases shown in Figure 1.2.]

The ‘evolutionary’ approach, shown in Figure 1.3.3, is characterised by the planned development of multiple releases. All phases of the life cycle are executed to produce a release. Each release incorporates the experience of earlier releases. The evolutionary approach may be used because, for example:
• some user experience is required to refine and complete the requirements (shown by the dashed line within the OM boxes);
• some parts of the implementation may depend on the availability of future technology;
• some new user requirements are anticipated but not yet known;
• some requirements may be significantly more difficult to meet than others, and it is decided not to allow them to delay a usable delivery.

The dashed extensions to the boxes in Figure 1.3.3 show that some overlap of OM phases will occur until each new delivery is finally accepted.

In an evolutionary development, the developer should recognise the user’s priorities and produce the parts of the software that are both important to the user and possible to develop with minimal technical problems or delays.

The disadvantage of the evolutionary approach is that if the requirements are very incomplete to start with, the initial software structure may not bear the weight of later evolution. Expensive rewrites may be necessary. Even worse, temporary solutions may become embedded in the system and distort its evolution. Further, users may become impatient with the teething troubles of each new release. In each development cycle, it is important to aim for a complete statement of requirements (to reduce risk) and an adaptable design (to ensure later modifiability). In an evolutionary development, not all requirements need to be fully implemented in each development cycle. However, the architectural design should take account of all known requirements.

1.4 PROTOTYPING

The use of prototypes to test customer reaction and design ideas is common to many engineering disciplines. A software prototype implements selected aspects of proposed software so that tests, the most direct kind of verification, can be carried out.

Prototyping is the process of building prototypes. Prototyping within a single phase is a useful means of reducing the risk in a project through practical experience. The output of a prototyping exercise is the knowledge that is gained from implementing or using the prototype software.

The objective of the prototyping activity should be clearly stated before the process starts. Prototyping to define requirements is called ‘exploratory’ prototyping, while that for investigating the feasibility of proposed solutions is called ‘experimental’ prototyping.

Prototypes usually implement high-risk functional, performance or user interface requirements, and usually ignore quality, reliability, maintainability and safety requirements. Prototype software is therefore ‘pre-operational’ and should never be delivered as part of an operational system.


1.5 HANDLING REQUIREMENTS CHANGE

The URD and SRD must be ‘complete’ documents. This means that all known requirements must be included when they are produced. Nevertheless, it is possible that new requirements may arise after the URD and SRD have been approved. Procedures for handling new requirements should be established at the beginning of the project.

Design integrity should not be compromised when new requirements are incorporated. Such requirements, if accepted by both user and developer, should be handled in the same way as the original requirements. The procedure for handling a new user requirement is therefore to:
• generate a new draft of the URD,
• convene a UR review and, if the change is accepted, then
• repeat the SR, AD and DD phases to incorporate the new requirement and its consequences.

A new software requirement is handled in a similar way.
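The three-step procedure is, in effect, a controlled loop back through the life cycle. A minimal sketch of that control flow follows; the function and argument names are invented for illustration, and no such procedural interface is defined by the Standards:

    def handle_new_user_requirement(urd_issue: int, accepted_at_ur_review: bool) -> int:
        """Sketch of the change procedure: draft a new URD issue, review it,
        and only on acceptance repeat the SR, AD and DD phases."""
        draft_issue = urd_issue + 1            # 1. generate a new draft of the URD
        if not accepted_at_ur_review:          # 2. convene a UR review
            return urd_issue                   #    change rejected: current URD stands
        for phase in ("SR", "AD", "DD"):       # 3. repeat the affected phases
            print(f"repeat {phase} phase to incorporate the requirement")
        return draft_issue                     # the new URD issue becomes current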

An alternative method for handling new requirements is to institute a Software Review Board after the UR/R instead of after the DD/R. Another method is to use the evolutionary development life cycle approach. However, this merely defers the handling of new requirements to the release following the one that is in preparation, and this may not be sufficient.

The quality of the work done in the UR and SR phases can be measured by the number of new requirements that appear in later phases. Especially important is the trend in the occurrence of new requirements. An upward trend is a sure sign that the software is unlikely to be a success.
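One way to make such a trend visible, offered purely as an illustration (the Standards do not define this metric), is to count the new requirements raised in each reporting period and fit a slope through the counts:

    def new_requirements_trend(counts_per_period: list[int]) -> float:
        """Least-squares slope of new-requirement counts over successive
        reporting periods; a positive slope indicates an upward trend."""
        n = len(counts_per_period)
        mean_x = (n - 1) / 2
        mean_y = sum(counts_per_period) / n
        num = sum((x - mean_x) * (y - mean_y)
                  for x, y in enumerate(counts_per_period))
        den = sum((x - mean_x) ** 2 for x in range(n))
        return num / den

    # Four periods after SR approval with 2, 3, 5 and 8 new requirements:
    slope = new_requirements_trend([2, 3, 5, 8])   # 2.0 per period
    if slope > 0:
        print(f"upward trend ({slope:+.1f}/period): review UR and SR quality")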

The availability of software engineering tools may be critical to the success of a project with frequently changing requirements. In projects where requirements are agreed and frozen at the end of the SR phase, the use of paper-based methods for requirements analysis and design specification may be sufficient. In projects where the freezing of requirements is not possible, software engineering tools that allow new requirements and design changes to be assimilated quickly may be essential to avoid serious delays.


CHAPTER 2 THE USER REQUIREMENTS DEFINITION PHASE

2.1 INTRODUCTION

The UR phase can be called the ‘problem definition phase’ of the life cycle. The purpose of this phase is to refine an idea about a task to be performed, using computing equipment, into a definition of what is expected from the computer system.

The definition of the user requirements shall be the responsibility of the user. The expertise of software engineers, hardware engineers and operations personnel should be used to help define and review the user requirements.

An output of the UR phase is the User Requirements Document (URD). This is a critical document for the whole software project because it defines the basis upon which the software is accepted.

The UR phase terminates with the formal approval of the URD by the User Requirements Review (UR/R).

2.2 INPUTS TO THE PHASE

No formal inputs are required, although the results of interviews, surveys, studies and prototyping exercises are often helpful in formulating the user requirements.

2.3 ACTIVITIES

The main activity of the UR phase is to capture the user requirements and document them in the URD. The scope of the software has to be established and the interfaces with external systems identified.

Plans of SR phase activities must be drawn up in the UR phase. These plans should cover project management, configuration management, verification, validation and quality assurance. These activities are described in more detail in Part 2.


2.3.1 Capture of user requirements

While user requirements originate in the spontaneous perception of needs, they should be clarified through the criticism and experience of existing software and prototypes. The widest possible agreement about the user requirements should be established through interviews and surveys. The knowledge and experience of the potential development organisations should be used to advise on implementation feasibility and, perhaps, to build prototypes. User requirements definition is an iterative process, and requirements capture activities may have to be repeated a number of times before the URD is ready for review.

2.3.2 Determination of operational environment

Determining the operational environment should be the first step in defining the user requirements. A clear account should be developed of the real world in which the software is to operate. This narrative description may be supported by context diagrams, to summarise the interfaces with external systems (often called ‘external interfaces’), and system block diagrams, to show the role of the software in a larger system.

The nature of exchanges with external systems should be specified and controlled from the start of the project. The information may reside in an Interface Control Document (ICD), or in the design documentation of the external system. If the external system already exists, then the exchanges may already be defined in some detail, and constrain the design. Alternatively, the definition of the external interfaces may develop throughout the UR, SR and AD phases.

2.3.3 Specification of user requirements

When the operational environment has been established, specific user requirements are extracted and organised. Implementation considerations are omitted, unless they are the essence of the requirement.

2.3.3.1 Classification of user requirements

User requirements fall into two categories:
• capabilities needed by users to solve a problem or achieve an objective;
• constraints placed by users on how the problem is to be solved or the objective achieved.

Capability requirements describe functions and operations needed by users. Quantitative statements that specify performance and accuracy attributes should form part of the specification of a capability.

Space and time dimensions can be useful for organising capability requirements. It is often convenient to describe capability requirements in terms of a sequence of operations.

Constraint requirements place restrictions on how software can be built and operated. For example, definitions of external communications, hardware and software interfaces may already exist, either because the software is a part of a larger system, or because the user requires that certain protocols, standards, computers, operating systems, library or kernel software be used.

The Human-Computer Interaction (HCI) requirements will vary according to the type of software under consideration. For interactive systems, the users may wish to provide examples of the dialogue that is required, including the hardware to be used (e.g. keyboard, mouse, colour display etc) and the assistance provided by the software (e.g. online help). For batch systems, it may be sufficient to indicate the parameters that need to be varied and the output medium and format required.

Constraints that users may wish to place on the software includethe quality attributes of adaptability, availability, portability and security.The user shall describe the consequences of losses of availability, orbreaches of security, so that developers can fully appreciate the criticalityof each function.

The user may choose to make additional standards applicable;such requirements are additional constraints on the development.

2.3.3.2 Attributes of user requirements

Each user requirement must include the attributes listed below.
a) Identifier - each user requirement shall include an identifier, to facilitate tracing through subsequent phases.
b) Need - essential user requirements shall be marked as such. Essential user requirements are non-negotiable; others may be less vitally important and subject to negotiation.
c) Priority - for incremental delivery, each user requirement shall include a measure of priority so that the developer can decide the production schedule.
d) Stability - some user requirements may be known to be stable over the expected life of the software; others may be more dependent on feedback from the SR, AD and DD phases, or may be subject to change during the software life cycle. Such unstable requirements should be flagged.
e) Source - the source of each user requirement shall be stated. This may be a reference to an external document (e.g. system requirement document) or the name of the user, or user group, that provided the user requirement.
f) Clarity - a user requirement is clear if it has one, and only one, interpretation. Clarity implies lack of ambiguity. If a term used in a particular context has multiple meanings, the term should be qualified or replaced with a more specific term.
g) Verifiability - each user requirement shall be verifiable. This means that it must be possible to:
• check that the requirement has been incorporated in the design;
• prove that the software will implement the requirement;
• test that the software does implement the requirement.
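To make these attributes concrete, here is a minimal sketch, in Python and purely illustrative (the Standards prescribe no representation), of how a user requirement and its mandatory attributes might be recorded so that tracing and completeness checks can be automated; the field names and the ‘UR-nnn’ identifier scheme are assumptions, not part of the Standards:

    from dataclasses import dataclass

    @dataclass
    class UserRequirement:
        identifier: str    # unique tag for tracing, e.g. "UR-042" (assumed scheme)
        description: str   # the requirement text itself
        essential: bool    # 'need' attribute: essential requirements are non-negotiable
        priority: int      # drives the production schedule for incremental delivery
        stable: bool       # False flags requirements likely to change
        source: str        # external document reference or originating user/group
        verifiable: bool   # can it be checked in design, proved and tested?

    ur = UserRequirement(
        identifier="UR-042",
        description="The operator shall be able to display housekeeping "
                    "telemetry within 2 seconds of receipt.",
        essential=True, priority=1, stable=True,
        source="User group: spacecraft operations", verifiable=True)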

2.3.4 Reviews

The outputs of the UR phase shall be formally reviewed during the User Requirements Review (UR/R). This should be a technical review (see Part 2, Chapter 4). Participants should include the users, operators, developers (hardware and software engineers) and the managers concerned.

User requirements which are rejected in the review process do not have to be removed from the URD, especially if it is anticipated that resources may be available at some later date to implement them. Non-applicable user requirements shall be clearly flagged in the URD.

2.4 OUTPUTS FROM THE PHASE

The main outputs of the phase are the URD and the plans for the SR phase.


2.4.1 User Requirements Document

An output of the UR phase shall be the User Requirements Document (URD). The URD shall always be produced before a software project is started. The recommended table of contents for the URD is provided in Appendix C.

The URD shall provide a general description of what the user expects the software to do. All known user requirements shall be included in the URD. The URD should state the specific user requirements as clearly and consistently as possible.

The URD shall describe the operations the user wants to perform with the software system. The URD shall define all the constraints that the user wishes to impose upon any solution. The URD shall describe the external interfaces to the software system, or reference them in ICDs that exist or are to be written.

Change control of the URD should be the responsibility of the user. If a user requirement changes after the URD has been approved, then the user should ensure that the URD is changed and resubmitted to the UR/R board for approval.

2.4.2 Acceptance test plans

Acceptance test plans must be defined in the acceptance test section of the Software Verification and Validation Plan (SVVP/AT/Plans, see Part 2, Chapter 4). These plans outline the approach to demonstrating that the software will meet the user requirements.

2.4.3 Project management plan for the SR phase

The outline project plan, the estimate of the total project cost, and the management plan for the SR phase, must be documented in the SR phase section of the Software Project Management Plan (SPMP/SR, see Part 2, Chapter 2).

2.4.4 Configuration management plan for the SR phase

The configuration management procedures for the documents, CASE tool products and prototype software, to be produced in the SR phase, must be documented in the Software Configuration Management Plan (SCMP/SR, see Part 2, Chapter 3).


2.4.5 Verification and validation plan for the SR phase

The SR phase review and traceability procedures must be documented in the Software Verification and Validation Plan (SVVP/SR, see Part 2, Chapter 4).

2.4.6 Quality assurance plan for the SR phase

The SR phase quality monitoring procedures must be defined in the Software Quality Assurance Plan (SQAP/SR, see Part 2, Chapter 5).


CHAPTER 3 THE SOFTWARE REQUIREMENTS DEFINITION PHASE

3.1 INTRODUCTION

The SR phase can be called the ‘problem analysis phase’ of the life cycle. The purpose of this phase is to analyse the statement of user requirements in the URD and produce a set of software requirements as complete, consistent and correct as possible.

The definition of the software requirements is the responsibility of the developer. Participants in this phase should include users, software engineers, hardware engineers and operations personnel. They all have a different concept of the end product, and these concepts must be analysed, and then synthesised, into a complete and consistent statement of requirements about which everyone can agree. Project management should ensure that all parties are consulted, so that the risk of incompleteness and error is minimised.

An output of this phase is the Software Requirements Document (SRD). As well as defining ‘what’ the product must do, it is also the reference against which both the design and the product will be verified. Although ‘how’ aspects may have to be addressed, they should be eliminated from the SRD, except those that constrain the software.

The software requirements definition phase terminates with formal approval of the SR phase outputs by the Software Requirements Review (SR/R).

3.2 INPUTS TO THE PHASE

The inputs to the SR phase are the:
• User Requirements Document (URD);
• Software Project Management Plan for the SR phase (SPMP/SR);
• Software Configuration Management Plan for the SR phase (SCMP/SR);
• Software Verification and Validation Plan for the SR phase (SVVP/SR);
• Software Quality Assurance Plan for the SR phase (SQAP/SR).

3.3 ACTIVITIES

SR phase activities shall be carried out according to the plans defined in the UR phase. Progress against plans should be continuously monitored by project management and documented at regular intervals in progress reports.

The main SR phase activity is to transform the user requirements stated in the URD into the software requirements stated in the SRD. This is achieved by analysing the problem, as stated in the URD, and building a coherent, comprehensive description of what the software is to do. The SRD contains a developer’s view of the problem, rather than the user’s. This view should be based upon a model of the system, built according to a recognised, documented method.

Software requirements may require the construction of prototypes to clarify or verify them. Requirements which cannot be justified by modelling, or whose correctness cannot be demonstrated in a formal way, may need to be prototyped. User interface requirements often need this kind of ‘exploratory prototyping’.

Plans of AD phase activities must be drawn up in the SR phase. These plans must cover project management, configuration management, verification, validation and quality assurance. These activities are described in more detail in Part 2.

3.3.1 Construction of the logical model

The developer shall construct an implementation-independent model of what is needed by the user. This is called a ‘logical model’, and it is used to produce the software requirements.

A recognised method for software requirements analysis shall be adopted and applied consistently in the SR phase. The logical model may be constructed by top-down decomposition of the main function, as inferred from the URD, into a hierarchy of functions. Modelling is an iterative process. Parts of the model may need to be respecified many times before a complete, coherent and consistent description is achieved.

Walkthroughs, reviews and inspections should be used to ensure that the specification of each level has been agreed before proceeding to the next level of detail. A good quality logical model should satisfy the rules listed below.


1. Functions should have a single definite purpose. Function names should have a declarative structure (e.g. ‘Validate Telecommands’), and say ‘what’ is to be done rather than ‘how’. Good naming allows design components with strong cohesion to be easily derived (see Part 1, Section 4.3.1.3).
2. Functions should be appropriate to the level at which they appear (e.g. ‘Calculate Checksum’ should not appear at the same level as ‘Verify Telecommands’).
3. Interfaces should be minimised. This allows design components with weak coupling to be easily derived (see Part 1, Section 4.3.1.3).
4. Each function should be decomposed into no more than seven sub-functions.
5. The model should omit implementation information (e.g. file, record, task, module).
6. The performance attributes of each function (capacity, speed etc.) should be stated.
7. Critical functions should be identified.

In all but the smallest projects, CASE tools should be used for building a logical model. They make consistent models easier to construct and modify.
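As an illustration of rules 1 and 4, the following sketch (hypothetical; the Standards do not prescribe any tool or notation) represents a fragment of a function hierarchy and checks that no function is decomposed into more than seven sub-functions:

    # Illustrative sketch only: a logical model as a dictionary mapping each
    # function to its sub-functions, with a check for rule 4 (no more than
    # seven sub-functions). The function names are invented examples.
    model = {
        "Process Telecommands": ["Validate Telecommands", "Execute Telecommands"],
        "Validate Telecommands": ["Check Syntax", "Calculate Checksum"],
        "Execute Telecommands": [],
        "Check Syntax": [],
        "Calculate Checksum": [],
    }

    for function, subfunctions in model.items():
        if len(subfunctions) > 7:
            print(f"Rule 4 violated: '{function}' has {len(subfunctions)} sub-functions")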

3.3.2 Specification of software requirements

The software requirements are obtained by examining the model and classifying them in terms of:
(a) Functional requirements
(b) Performance requirements
(c) Interface requirements
(d) Operational requirements
(e) Resource requirements
(f) Verification requirements
(g) Acceptance testing requirements
(h) Documentation requirements
(i) Security requirements
(j) Portability requirements
(k) Quality requirements
(l) Reliability requirements
(m) Maintainability requirements
(n) Safety requirements


While other classifications of requirements can be conceived, developers should use this classification, with the definitions described in Section 3.3.2.1.

Software requirements should be rigorously described. Various alternatives to natural language are available and their use is encouraged. Wherever possible, software requirements should be stated in quantitative terms to increase their verifiability.

As the requirements are compiled, they must include identifiers, references and measures of need, priority and stability. The requirements must be complete and consistent. Duplication is to be avoided.

3.3.2.1 Classification of software requirements

(a) Functional requirements. These specify ‘what’ the software has to do. They define the purpose of the software. The functional requirements are derived from the logical model, which is in turn derived from the user’s capability requirements. In order that they may be stated quantitatively, the functional requirements may include performance attributes.

(b) Performance requirements. These specify numerical values for measurable variables (e.g. rate, frequency, capacity and speed). Performance requirements may be incorporated in the quantitative specification of each function, or stated as separate requirements. Qualitative statements are unacceptable (e.g. replace ‘quick response’ with ‘response time must be less than x seconds for y% of the cases, with an average response time of less than z seconds’). The performance attributes may be presented as a range of values, for example the:
• worst case that is acceptable;
• nominal value, to be used for planning;
• best case value, to indicate where growth potential is needed.
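For example, a quantified performance requirement might be recorded as a range of values like the following sketch; all figures are invented for illustration:

    # Illustrative only: one performance requirement stated as a range of
    # values, as suggested above. All figures are invented.
    response_time_s = {
        "worst":   5.0,   # worst case that is acceptable
        "nominal": 2.0,   # value to be used for planning
        "best":    0.5,   # indicates where growth potential is needed
    }

    measured = 1.8
    assert measured <= response_time_s["worst"], "requirement not met"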

(c) Interface requirements. These specify hardware, software or database elements with which the system, or system component, must interact or communicate. Interface requirements should be classified into software, hardware and communications interfaces. Software interfaces could include operating systems, software environments, file formats, database management systems and other software applications. Hardware interface requirements may specify the hardware configuration.


Communications interface requirements constrain the nature of the interface to other hardware and software. They may demand the use of a particular network protocol, for example. External interface requirements should be described, or referenced in ICDs. User interface requirements should be specified under ‘Operational Requirements’ (see below). Interface requirements can be illustrated with system block diagrams (e.g. to show the hardware configuration).

(d) Operational requirements. These specify how the system will run and how it will communicate with the human operators. Operational requirements include all user interface, usability and human-computer interaction requirements, as well as the logistical and organisational requirements. Examples are: the screen layout, the content of error messages, help systems etc. It is often useful to define the semantics and syntax of commands.

(e) Resource requirements. These specify upper limits on physical resources such as processing power, main memory, disc space etc. These are especially needed when extension of processing hardware late in the life cycle becomes too expensive, as in many embedded systems.

(f) Verification requirements. These specify the constraints on how the software is to be verified. The verification requirements constrain the SVVP. They might include requirements for simulation, emulation, live tests with simulated inputs, live tests with real inputs, and interfacing with the testing environment.

(g) Acceptance testing requirements. These specify the constraints on how the software is to be validated. The acceptance testing requirements constrain the SVVP.

(h) Documentation requirements. These specify project-specific requirements for documentation in addition to those contained in these Standards (e.g. the detailed format of the Software User Manual).

(i) Security requirements. These specify the requirements for securing the system against threats to confidentiality, integrity and availability. Examples of security requirements are interlocking operator commands, inhibiting of commands, read-only access, password systems and computer virus protection. The level of physical protection needed for the computer facilities may also be stated (e.g. backups are to be kept in a fire-proof safe off-site).


(j) Portability requirements. These specify the ease of modifying the software to execute on other computers and operating systems. Possible computers and operating systems, other than those of the target system, should be stated.

(k) Quality requirements. These specify attributes of the software that ensure that it will be fit for its purpose (other than the major quality attributes of reliability, maintainability and safety, which should always be specified). Where appropriate, software quality attributes should be specified in measurable terms (i.e. with the use of metrics).

(l) Reliability requirements. These specify the acceptable mean time interval between failures of the software (MTBF), averaged over a significant period. They may also specify the minimum time between failures that is ever acceptable. Reliability requirements may have to be derived from the user’s availability requirements.

(m) Maintainability requirements. These specify how easy it is to repair faults and adapt the software to new requirements. The ease of performing these tasks should be stated in quantitative terms, such as mean time to repair a fault (MTTR). They may include constraints imposed by the potential maintenance organisation. Maintainability requirements may be derived from the user’s availability and adaptability requirements.

(n) Safety requirements. These specify any requirements to reduce the possibility of damage that can follow from software failure. Safety requirements may identify critical functions whose failure may be hazardous to people or property.
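Reliability (l) and maintainability (m) targets connect to the user’s availability requirement through the standard relation availability = MTBF / (MTBF + MTTR). A worked sketch with invented figures:

    # Illustrative only: deriving reliability/maintainability targets from an
    # availability requirement using availability = MTBF / (MTBF + MTTR).
    mtbf_hours = 1000.0   # mean time between failures (invented target)
    mttr_hours = 4.0      # mean time to repair (invented target)
    availability = mtbf_hours / (mtbf_hours + mttr_hours)
    print(f"availability = {availability:.4f}")   # 0.9960, i.e. about 99.6%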

3.3.2.2 Attributes of software requirements

Each software requirement must include the attributes listed below.
a) Identifier - each software requirement shall include an identifier, to facilitate tracing through subsequent phases.
b) Need - essential software requirements shall be marked as such. Essential software requirements are non-negotiable; others may be less vitally important and subject to negotiation.
c) Priority - for incremental delivery, each software requirement shall include a measure of priority so that the developer can decide the production schedule.
d) Stability - some requirements may be known to be stable over the expected life of the software; others may be more dependent on feedback from the design phase, or may be subject to change during the software life cycle. Such unstable requirements should be flagged.
e) Source - references that trace software requirements back to the URD shall accompany each software requirement.
f) Clarity - a requirement is clear if it has one, and only one, interpretation. Clarity implies lack of ambiguity. If a term used in a particular context has multiple meanings, the term should be qualified or replaced with a more specific term.
g) Verifiability - each software requirement shall be verifiable. This means that it must be possible to:
• check that the requirement has been incorporated in the design;
• prove that the software will implement the requirement;
• test that the software does implement the requirement.

3.3.2.3 Completeness of software requirements

Completeness has two aspects:
• no user requirement has been overlooked;
• an activity has been specified for every possible set of inputs.

For the SRD to be complete, each requirement in the URD must be accounted for. A traceability matrix must be inserted in the SRD to prove completeness.

The phrase ‘To Be Defined’ (TBD) indicates incompleteness. There must be no TBDs in the SRD.
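A minimal sketch of such a completeness check against the traceability matrix, assuming requirements are held as simple identifier lists (the identifiers and data layout are invented):

    # Illustrative only: prove completeness by checking that every user
    # requirement is traced to at least one software requirement.
    user_requirements = ["UR-001", "UR-002", "UR-003"]
    # traceability matrix: software requirement -> user requirements it covers
    trace = {
        "SR-010": ["UR-001"],
        "SR-011": ["UR-002", "UR-003"],
    }

    covered = {ur for urs in trace.values() for ur in urs}
    missing = [ur for ur in user_requirements if ur not in covered]
    if missing:
        print("SRD incomplete; untraced user requirements:", missing)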

3.3.2.4 Consistency of software requirements

A set of requirements is consistent if, and only if, no set of individual requirements conflicts. There are a number of types of inconsistency, for example:
• different terms used for the same thing;
• the same term used for different things;
• incompatible activities happening at the same time;
• activities happening in the wrong order.

The achievement of consistency is made easier by using methods and tools.

3.3.2.5 Duplication of software requirements

Duplication of requirements should be avoided, although some duplication may be necessary if the SRD is to be understandable. There is always a danger that a requirement that overlaps or duplicates another will be overlooked when the SRD is updated. This leads to inconsistencies. Where duplication occurs, cross-references should be inserted to enhance modifiability.

3.3.3 Reviews

The outputs of the SR phase shall be formally reviewed during the Software Requirements Review (SR/R). This should be a technical review (see Part 2, Chapter 4). Participants should include the users, the operations personnel, the developers and the managers concerned.

3.4 OUTPUTS FROM THE PHASE

The main outputs of the phase are the SRD and the plans for the AD phase. Progress reports, configuration status accounts, and audit reports are also outputs of the phase. These should always be archived by the project.

3.4.1 Software Requirements Document

An output of the SR phase shall be the Software Requirements Document (SRD).

The SRD shall be complete. The SRD shall cover all the requirements stated in the URD. A table showing how user requirements correspond to software requirements shall be placed in the SRD. This demonstrates forwards traceability and can be used to prove completeness.

The SRD shall be consistent. Software engineering methods and tools can help achieve consistency.

The functional requirements should be structured top-down in the SRD. Non-functional requirements should be attached to functional requirements and therefore can appear at all levels of the hierarchy, and apply to all functional requirements below them (inheritance of family attributes).

The SRD shall not include implementation details or terminology, unless it has to be present as a constraint. Descriptions of functions, therefore, shall say what the software is to do, and must avoid saying how it is to be done. The SRD shall avoid specifying the hardware, unless it is a constraint placed by the user.

The outputs of the analysis method, for example ‘data flow diagrams’ in the case of Structured Analysis, should be included, so as to provide the overview needed to permit an understanding of the specific requirements.

Each software requirement must have an identifier, and include measures of need and priority. Software requirements must reference the URD to facilitate backwards traceability.

The SRD may be written in a natural language. This has the important advantage that it presents no additional barriers between the people of different disciplines who are involved during this phase. On the other hand, natural languages have many properties that are undesirable in specifications (ambiguity, imprecision and inconsistency). Using requirements specification languages can eliminate many of these problems; such languages range in rigour from structured English to formal methods such as Z or VDM. Formal methods should be considered for the specification of safety-critical systems. If a requirements specification language is used, explanatory text, written in natural language, should be included in the SRD to enable it to be reviewed by those not familiar with the specification language.

The SRD shall be compiled according to the table of contents provided in Appendix C, which is derived from ANSI/IEEE Std 830-1984, Guide to Software Requirements Specifications.

3.4.2 System test plans

System test plans must be defined in the system test section of the Software Verification and Validation Plan (SVVP/ST/Plans, see Part 2, Chapter 4). These plans outline the approach to demonstrating that the software will meet the software requirements.


3.4.3 Project management plan for the AD phase

The outline project plan, the cost estimate for the complete project (accurate to 30%), and the management plan for the AD phase, must be documented in the AD phase section of the Software Project Management Plan (SPMP/AD, see Part 2, Chapter 2).

3.4.4 Configuration management plan for the AD phase

The configuration management procedures for the documents, CASE tool products and prototype software, to be produced in the AD phase, must be documented in the Software Configuration Management Plan (SCMP/AD, see Part 2, Chapter 3).

3.4.5 Verification and validation plan for the AD phase

The AD phase review and traceability procedures must be documented in the Software Verification and Validation Plan (SVVP/AD, see Part 2, Chapter 4).

3.4.6 Quality assurance plan for the AD phase

The AD phase quality monitoring procedures must be defined in the Software Quality Assurance Plan (SQAP/AD, see Part 2, Chapter 5).


CHAPTER 4 THE ARCHITECTURAL DESIGN PHASE

4.1 INTRODUCTION

The AD phase can be called the ‘solution phase’ of the life cycle. The purpose of this phase is to define a collection of software components and their interfaces, to establish a framework for developing the software. This is the ‘Architectural Design’, and it must cover all the requirements in the SRD.

The architectural design definition is the responsibility of the software engineers. Other kinds of engineers may be consulted during this phase, and representatives of users and operations personnel should review the architectural design.

An output of this phase is the Architectural Design Document (ADD). This should document each component, and its relationship with other components. The ADD is complete when the level of definition of components and interfaces is sufficient to enable individuals or small groups to work independently in the DD phase.

The architectural design phase terminates with formal approval of the AD phase outputs by the Architectural Design Review (AD/R).

4.2 INPUTS TO THE PHASE

The inputs to the AD phase are the:
• Software Requirements Document (SRD);
• Software Project Management Plan for the AD phase (SPMP/AD);
• Software Configuration Management Plan for the AD phase (SCMP/AD);
• Software Verification and Validation Plan for the AD phase (SVVP/AD);
• Software Quality Assurance Plan for the AD phase (SQAP/AD).

4.3 ACTIVITIES

AD phase activities shall be carried out according to the plans defined in the SR phase. Progress against plans should be continuously monitored by project management and documented at regular intervals in progress reports.

The principal activity of the AD phase is to develop the architectural design of the software and document it in the ADD. This involves:
• constructing the physical model;
• specifying the architectural design;
• selecting a programming language;
• reviewing the design.

A recognised method for software design shall be adopted and applied consistently in the AD phase. Where no single method provides all the capabilities required, a project-specific method may be adopted, which should be a combination of recognised methods.

Plans for the rest of the development must be drawn up in the AD phase. These plans cover project management, configuration management, verification, validation and quality assurance. These activities are described in more detail in Part 2.

4.3.1 Construction of the physical model

The developer shall construct a ‘physical model’ which describes the design of the software, using implementation terminology. The physical model should be derived from the logical model, described in the SRD. In transforming a logical model to a physical model, ‘design decisions’ are made in which functions are allocated to components and their inputs and outputs defined. Design decisions should also satisfy non-functional requirements, design quality criteria and implementation technology considerations. Design decisions should be recorded.

Modelling is an iterative process. Each part of the model needs to be specified and respecified until a coherent description of each component is achieved.

In all but the smallest projects, CASE tools should be used for building the physical model. They make consistent models easier to construct and modify.


4.3.1.1 Decomposition of the software into components

The software should be decomposed into a hierarchy of components according to a partitioning method. Examples of partitioning methods are ‘functional decomposition’ and ‘correspondence with real-world objects’. There should be distinct levels within the hierarchy, with each component occupying a well-defined place.

The method used to decompose the software into its component parts shall permit a top-down approach. Starting from the top-level component, the software is decomposed into a hierarchy of components. The architectural design should specify the upper levels of the hierarchy, typically the top three or four.

Top-down decomposition is vital for controlling complexity because it enforces ‘information hiding’ by demanding that lower-level components behave as ‘black boxes’. Only the function and interfaces of a lower-level component are required for the higher-level design. The information necessary to the internal workings of a lower-level component can remain hidden.

Top-down decomposition also demands that each level of the design be described using terms that have an appropriate degree of ‘abstraction’ (e.g. the terms ‘file’, ‘record’ and ‘byte’ ought to occur at different levels in the design of a file-handling system). The use of the right degree of abstraction at each level assists information hiding.

The bottom-level components in the ADD should be sufficiently independent to allow their detailed design and coding to proceed in parallel to that of other components, with a minimum of interaction between programmers. In multi-tasking systems, the lowest level of the Architectural Design should be the task level. At the task level, the timing relationships (i.e. before, after or concurrent) between functions are used to allocate them to tasks.

4.3.1.2 Implementation of non-functional requirements

The SRD contains a number of requirements in the non-functional category. These are:

Performance requirements
Interface requirements
Operational requirements
Resource requirements
Verification requirements
Acceptance testing requirements
Documentation requirements
Security requirements
Portability requirements
Quality requirements
Reliability requirements
Maintainability requirements
Safety requirements

The design of each component should be reviewed against each of these requirements. While some non-functional requirements may apply to all components in the system, other non-functional requirements may affect the design of only a few components.

4.3.1.3 Design quality criteria

Designs should be adaptable, efficient and understandable. Adaptable designs are easy to modify and maintain. Efficient designs make minimal use of available resources. Designs must be understandable if they are to be built, operated and maintained effectively.

Attainment of these goals is assisted by aiming for simplicity in form and function in every part of the design. There are a number of metrics that can be used for measuring complexity (e.g. number of interfaces per component), and their use should be considered.

Simplicity of function is achieved by maximising the ‘cohesion’ of individual components (i.e. the degree to which the activities internal to the component are related to one another).

Simplicity of form is achieved by:
• minimising the ‘coupling’ between components (i.e. the number of distinct items that are passed between components);
• ensuring that the function a component performs is appropriate to its level in the hierarchy;
• matching the software and data structures;
• maximising the number of components that use a given component;
• restricting the number of child components to 7 or less;
• removing duplication between components by making new components.

Designs should be ‘modular’, with minimal coupling between components and maximum cohesion within each component. There is minimal duplication between components in a modular design. Components of a modular design are often described as ‘black boxes’ because they hide internal information from other components. It is not necessary to know how a black box component works to know what to do with it.
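As a hedged illustration of the coupling metric mentioned above (the number of distinct items passed between components), the sketch below counts items per interface; all component and item names are invented:

    # Illustrative only: a crude coupling measure counting the distinct items
    # passed between each pair of components.
    interfaces = {
        ("TelemetryReader", "Decoder"):  ["frame", "timestamp"],
        ("Decoder", "Archiver"):         ["packet"],
        ("Decoder", "Display"):          ["packet", "status", "alarm_flags"],
    }

    for (src, dst), items in interfaces.items():
        print(f"{src} -> {dst}: coupling = {len(set(items))}")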

Understandable designs employ terminology in a consistent way and always use the same solution to the same problem. Where teams of designers collaborate to produce a design, understandability can be considerably impaired by permitting unnecessary variety. CASE tools, design standards and design reviews all help to enforce consistency and uniformity.

4.3.1.4 Trade-off between alternative designs

There is no unique design for any software system. Studies of the different options may be necessary. A number of criteria will be needed to choose the best option. The criteria depend on the type of system. For example, in a real-time situation, performance and response time could be important, whereas in an administrative system stability of the database might be more important.

Prototyping may be performed to verify assumptions in the design or to evaluate alternative design approaches. This is called ‘experimental prototyping’. For example, if a program requires fast access to data stored on disc, then various methods of file access could be coded and measured. Different access methods could alter the design approach quite significantly, and prototyping the access method would become an essential part of the design process.
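A sketch of such an experimental prototype, timing two candidate access methods (both methods and the data are invented for illustration):

    # Illustrative only: an experimental prototype comparing two candidate
    # data access methods by measuring elapsed time.
    import time

    def access_sequential(data):  return [x for x in data]
    def access_indexed(data):     return [data[i] for i in range(0, len(data), 10)]

    data = list(range(1_000_000))
    for method in (access_sequential, access_indexed):
        start = time.perf_counter()
        method(data)
        print(method.__name__, f"{time.perf_counter() - start:.3f} s")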

Only the selected design approach shall be reflected in the ADD (and DDD). However, the need for the prototyping, listing of code, trade-off criteria, and reasons for the chosen solution, should be documented in the Project History Document.

4.3.2 Specification of the architectural design

The architectural design is the fully documented physical model. This should contain diagrams showing, at each level of the architectural design, the data flow and control flow between the components. Block diagrams, showing entities such as tasks and files, may also be used to describe the design. The diagramming techniques used should be documented or referenced.

4.3.2.1 Functional definition of the components

The process of architectural design results in a set of components having defined functions and interfaces. The functions of each component will be derived from the SRD. The level of detail in the ADD will show which functional requirements are to be met by each component, but not necessarily how to meet them: this will only be known when the detailed design is complete. Similarly, the interfaces between components will be restricted to a definition of the information to exchange, and not how to exchange it (unless this contributes to the success or failure of the chosen design).

For each component the following information shall be defined in the ADD:
• data input;
• functions to be performed;
• data output.

Data inputs and outputs should be defined as data structures (see next section).

4.3.2.2 Definition of the data structures

Data structures that interface components shall be defined in the ADD. External interfaces may be separately documented in an ICD.

Data structure definitions shall include the:
• description of each element (e.g. name, type, dimension);
• relationships between the elements (i.e. the structure);
• range of possible values of each element;
• initial values of each element.
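A hedged sketch of one such data structure definition, with every element name, range and initial value invented for illustration:

    # Illustrative only: an interface data structure defined with element
    # descriptions, structure, ranges and initial values.
    from dataclasses import dataclass, field

    @dataclass
    class TelemetryFrame:               # structure: header followed by samples
        frame_id: int = 0               # range: 0..65535, initial value 0
        mode: str = "NOMINAL"           # range: {"NOMINAL", "SAFE"}, initial "NOMINAL"
        samples: list = field(default_factory=list)  # 0..128 floats, initially empty

    frame = TelemetryFrame()
    assert 0 <= frame.frame_id <= 65535 and frame.mode in ("NOMINAL", "SAFE")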


4.3.2.3 Definition of the control flow

The definition of the control flow between components is essential for the understanding of the software’s operation. The control flow between the components shall be defined in the ADD.

Control flow definitions may describe:
• sequential and parallel operations;
• synchronous and asynchronous behaviour.
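Purely as illustration (the ADD itself only defines these properties; it does not contain code), the sketch below shows the difference between a sequential, synchronous step and parallel, asynchronous steps:

    # Illustrative only: sequential versus parallel (asynchronous) operation.
    import asyncio

    async def acquire():  return "data"        # can run concurrently with process()
    async def process():  return "result"

    async def main():
        data = await acquire()                               # sequential, synchronous step
        results = await asyncio.gather(acquire(), process()) # parallel steps
        print(data, results)

    asyncio.run(main())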

4.3.2.4 Definition of the computer resource utilisation

The computer resources (e.g. CPU speed, memory, storage, system software) needed in the development environment and the operational environment shall be estimated in the AD phase and defined in the ADD. For many software projects, the development environment and the operational environment will be the same. Any resource requirements in the SRD will constrain the design.

4.3.3 Selection of programming languages

Programming languages should be selected that support top-down decomposition, structured programming, and concurrent production and documentation. The programming language and the AD method should be compatible.

Non-functional requirements may influence the choice of programming language. For example, portability and maintenance considerations suggest that assembler should be selected only for very specific and justifiable reasons.

The availability of reliable compilers and effective debuggers constrains the selection of a programming language.

4.3.4 Reviews

The architectural design should be reviewed and agreed layer by layer as it is developed during the AD phase. The design of any level invariably affects upper layers: a number of review cycles may be necessary before the design of a level can be finalised. Walkthroughs should be used to ensure that the architectural design is understood by all those concerned. Inspections of the design, by qualified software engineers, may be used to eliminate design defects.

The outputs of the AD phase shall be formally reviewed during the Architectural Design Review (AD/R). This should be a technical review (see Part 2, Chapter 4). Participants should include the users, the operations personnel, the hardware engineers, software engineers, and the managers concerned.

After the start of the DD phase, modifications to the architectural design can increase costs substantially. The DD phase should not be started if there are still doubts, major open points, or uncertainties in the architectural design.

4.4 OUTPUTS FROM THE PHASE

The main outputs of the phase are the ADD and the plans for the DD phase. Progress reports, configuration status accounts, software verification reports and audit reports are also outputs of the phase. These should always be archived by the project.

4.4.1 Architectural Design Document

The Architectural Design Document (ADD) is the key document that summarises the solution. It is the kernel from which the detailed design grows. The ADD shall define the major components of the software and the interfaces between them. The ADD shall define or reference all external interfaces. The ADD shall be an output from the AD phase.

The ADD shall be complete, covering all the software requirements described in the SRD. To demonstrate this, a table cross-referencing software requirements to parts of the architectural design shall be placed in the ADD.

The ADD shall be consistent. Software engineering methods and tools can help achieve consistency, and their output may be included in the ADD.

The ADD shall be sufficiently detailed to allow the project leader to draw up a detailed implementation plan and to control the overall project during the remaining development phases. The ADD should be detailed enough to enable the cost of the remaining development to be estimated to within 10%.

The ADD shall be compiled according to the table of contents provided in Appendix C, which is derived from ANSI/IEEE Std 1016-1987, Software Design Descriptions. This table of contents implements the approach described in Section 4.3.2.

4.4.2 Integration test plans

Integration test plans must be defined in the integration test section of the Software Verification and Validation Plan (SVVP/IT/Plans, see Part 2, Chapter 4). These plans outline the approach to demonstrating that the software subsystems conform to the ADD.

4.4.3 Project management plan for the DD phase

The estimate of the total project cost (accurate to 10%), and the management plan for the DD phase, must be documented in the DD phase section of the Software Project Management Plan (SPMP/DD, see Part 2, Chapter 2). An outline plan for the TR and OM phases must also be included.

4.4.4 Configuration management plan for the DD phase

The configuration management procedures for the documents, deliverable code, CASE tool products and prototype software, to be produced in the DD phase, must be documented in the Software Configuration Management Plan (SCMP/DD, see Part 2, Chapter 3).

4.4.5 Verification and validation plan for the DD phase

The DD phase review and traceability procedures must be documented in the Software Verification and Validation Plan (SVVP/DD, see Part 2, Chapter 4).

4.4.6 Quality assurance plan for the DD phase

The DD phase quality monitoring procedures must be defined in the Software Quality Assurance Plan (SQAP/DD, see Part 2, Chapter 5).


CHAPTER 5 THE DETAILED DESIGN AND PRODUCTION PHASE

5.1 INTRODUCTION

The DD phase can be called the ‘implementation phase’ of the life cycle. The purpose of the DD phase is to detail the design outlined in the ADD, and to code, document and test it.

The detailed design and production is the responsibility of the software engineers. Other kinds of engineers may be consulted during this phase, and representatives of users and operations personnel may observe system tests. The software may be independently verified by engineers not responsible for detailed design and coding.

Important considerations before starting code production are the adequacy and availability of computer resources for software development. There is no point in starting coding and testing if the computer, operating system and system software are not available or sufficiently reliable and stable. Productivity can drop dramatically if these resources are not adequate. Failure to invest in software tools and development hardware often leads to higher development costs.

The principal outputs of this phase are the code, the Detailed Design Document (DDD) and the Software User Manual (SUM). The DD phase terminates with formal approval of the code, DDD and SUM by the Detailed Design Review (DD/R).

5.2 INPUTS TO THE PHASE

The inputs to the DD phase are the:
• Architectural Design Document (ADD);
• Integration test plans (SVVP/IT/Plans);
• System test plans (SVVP/ST/Plans);
• Software Project Management Plan for the DD phase (SPMP/DD);
• Software Configuration Management Plan for the DD phase (SCMP/DD);
• Software Verification and Validation Plan for the DD phase (SVVP/DD);
• Software Quality Assurance Plan for the DD phase (SQAP/DD).

5.3 ACTIVITIES

DD phase activities shall be carried out according to the plans defined in the AD phase. Progress against plans should be continuously monitored by project management and documented at regular intervals in progress reports.

The detailed design and production of software shall be based on the following three principles:
• top-down decomposition;
• structured programming;
• concurrent production and documentation.

These principles are reflected in both the software design and the organisation of the work. They help ensure that the software is delivered on time and within budget, because emphasis is placed on ‘getting it right first time’. They also have a positive effect on the quality of the software, its reliability, maintainability and safety.

Top-down decomposition is vital for controlling complexity because it enforces ‘information hiding’ by demanding that lower-level components behave as ‘black boxes’. Only the function and interfaces of a lower-level component are required for the higher-level design. The information necessary to the internal workings of a lower-level component can remain hidden.

Structured programming aims to avoid making errors when a module is designed and when code is written. Stepwise refinement of a design into code, composed of the basic sequence, selection and iteration constructs, is the main feature of the technique. By reducing the number of coding errors, structured programming dramatically reduces time spent in testing and correcting software. Structured programming also makes code more understandable, reducing time spent in inspection and in maintenance.

Concurrent production and documentation of code is a side effect of stepwise refinement. Design information should be retained as commentary in the source code.


5.3.1 Detailed design

In detailed design, lower-level components of the architectural design are decomposed until they can be expressed as modules in the selected programming language. A module is a program unit that is discrete and identifiable with respect to compiling, combining with other units, and loading.

Starting from the bottom-level components in the ADD, the design proceeds to lower levels via stepwise refinement of each module specification.

The guidelines for stepwise refinement are:
• start from functional and interface specifications;
• concentrate on the control flow;
• defer data declarations until the coding phase;
• keep steps of refinement small so that verification is easier;
• review each step as it is made.

The review of each module may be by walkthrough or inspection. The review of a module is complete when it is approved for coding.

The methods and CASE tools used for architectural design should be used in the DD phase for the detailed design work.

Although design should normally proceed downwards, some of the lowest-level components may need to be designed (and coded) first. Examples are device drivers and utility libraries.

5.3.2 Production

5.3.2.1 Coding

When the design of each module is completed, reviewed and approved, it can be coded.

Coding conventions should be established and documented in the DDD. They should provide rules for:
• presentation (e.g. header information and comment layout);
• naming programs, subprograms, files, variables and data;
• limiting the size of modules;
• using library routines, especially:
  - operating system routines;
  - commercial library routines (e.g. numerical analysis);
  - project-specific utility routines;
• defining constants;
• using compiler-specific features not in the language standard;
• error handling.

The standard header (see Part 2, Chapter 3) should be made available so that it can be edited, completed and then inserted at the head of each module.
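For illustration only, since the actual header layout is defined in Part 2, Chapter 3 and by project conventions, such a header might look like:

    # -----------------------------------------------------------------
    # Module   : checksum.py          (all fields here are illustrative)
    # Purpose  : Calculate telecommand checksums
    # Author   : <author>
    # History  : <issue/revision/date/change records go here>
    # -----------------------------------------------------------------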

Code should be consistent, as this reduces complexity. Rigorous adherence to coding conventions is fundamental to ensuring consistency. Further, consistency is enhanced by adopting the same solutions to the same problems. To preserve code consistency, changes and modifications should follow the style of the original code, assuming it was produced to recognised standards.

Code should be structured, as this reduces errors and enhances maintainability. Generally, this means resolving it into the basic sequence, selection (i.e. condition) and iteration (i.e. loop) constructs. Practically, the ideas of structured programming require that:
• each module should have a single entry and exit point;
• control flow should proceed from the beginning to the end;
• related code should be blocked together rather than dispersed around the module;
• branching out of a module should only be performed under prescribed conditions (e.g. error exit).
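A minimal sketch of code resolved into these constructs, with a single entry and a single exit point (the function is invented for illustration):

    # Illustrative only: sequence, selection and iteration with a single
    # entry and a single exit point.
    def count_valid(frames):
        valid = 0                          # sequence
        for frame in frames:               # iteration
            if frame.get("checksum_ok"):   # selection
                valid += 1
        return valid                       # single exit

    print(count_valid([{"checksum_ok": True}, {"checksum_ok": False}]))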

Production of consistent, structured code is made easier by using tools such as language-sensitive editors, customised to suit the project conventions.

The coding process includes compilation; not only does this produce the object code needed for testing the run-time behaviour of a module, it is the first step in verifying the code. Compilation normally produces statistics that can be used for the static analysis of the module.

Supplementary code included to assist the testing process should be readily identifiable and easy to disable, or remove, after successful testing. Care should be taken to ensure that such code does not obscure the module logic.

As the coding of a module proceeds, documentation of the design assumptions, function, structure, interface, internal data and resource utilisation should proceed concurrently. This information should be recorded in Part 2 of the DDD. The inclusion of this information in the source code is recommended. To avoid the maintenance problem of having the same information in two places, tools to select information required for the DDD from the source code are desirable.

When a module has been documented and successfully compiled, unit testing can begin.

5.3.2.2 Integration

Integration is the process of building a software system by combining components into a working entity.

Integration of components should proceed in an orderly, function-by-function sequence. This allows the software’s operational capabilities to be demonstrated early, increasing management confidence that the project is progressing satisfactorily.

The integration process shall be controlled by the software configuration management procedures defined in the SCMP. Good SCM procedures are essential for correct integration.

The top-down approach to integration is to use stubs to represent lower-level modules. As modules are completed and tested, they replace the stubs. In many projects, the need to make shared components available at an early stage forces the integration to be organised initially on a bottom-up, and later on a top-down, basis. Whatever approach to integration is taken, it should minimise the time spent in testing, while ensuring that all source statements are verified.
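A hedged sketch of the stub technique (all names invented): the higher-level component is integrated and tested against a stub that is later replaced by the real lower-level module.

    # Illustrative only: top-down integration using a stub that is later
    # replaced by the real lower-level module.
    def decode_packet_stub(raw):
        return {"status": "STUBBED", "payload": None}   # fixed, predictable reply

    decode_packet = decode_packet_stub   # higher-level code is tested against this

    def process(raw):                    # higher-level component under test
        return decode_packet(raw)["status"]

    print(process(b"\x00\x01"))          # works before the real decoder exists
    # When the real module is complete and unit tested:
    # from decoder import decode_packet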


5.3.2.3 Testing

Procedures for developing and documenting the test approach are described in Part 2, Chapter 4.

5.3.2.3.1 Unit testing

Unit tests verify the design and implementation of all components, from the lowest level defined in the detailed design up to the lowest level defined in the architectural design (normally the task level). Modules that do not call other modules exist at the lowest level of the detailed design.

Unit tests verify not only that a module is doing what it is supposed to do (‘black box’ testing), but also that it is doing it in the way it was intended (‘white box’ testing). The most probable paths through a module should be identified and tests designed to ensure these paths are executed. In addition, before a module can be accepted, every statement shall be successfully executed at least once. Coverage analysers and symbolic debuggers can be very useful in observing the internal behaviour of a module. For larger systems, tools can ensure that modules are tested systematically.
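A minimal white-box sketch of the rule that every statement be executed at least once: the test cases below are chosen so that each branch of the (invented) module under test is exercised.

    # Illustrative only: unit tests chosen so that both branches, and hence
    # every statement, of the module are executed at least once.
    def clamp(value, low, high):
        if value < low:
            return low
        if value > high:
            return high
        return value

    assert clamp(-1, 0, 10) == 0    # exercises the first branch
    assert clamp(99, 0, 10) == 10   # exercises the second branch
    assert clamp(5, 0, 10) == 5     # exercises the fall-through path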

Unit testing is normally carried out by the individuals or teams responsible for the components’ production.

Unit test plans, test designs, test cases, test procedures and test reports are documented in the Unit Test section of the Software Verification and Validation Plan (SVVP/UT).

5.3.2.3.2 Integration testing

Integration testing is also done in the DD phase, when the major components are assembled to build the system. These major components are identified in the ADD. Integration tests should be directed at verifying that major components interface correctly. Integration testing should follow unit testing and precede system testing.

Integration testing shall check that all the data exchanged across an interface agree with the data structure specifications in the ADD. Integration testing shall confirm that the control flows defined in the ADD have been implemented.

Integration test designs, test cases, test procedures and test reports are documented in the Integration Test section of the Software Verification and Validation Plan (SVVP/IT).


5.3.2.3.3 System testing

System testing is the process of testing an integrated software system. This testing can be done in the development or target environment, or a combination of the two. System testing shall verify compliance with system objectives, as stated in the SRD. System testing should include such activities as:
• passing data into the system, and correctly processing and outputting it (i.e. end-to-end system tests);
• practice for acceptance tests (i.e. verification that user requirements will be met);
• stress tests (i.e. measurement of performance limits);
• preliminary estimation of reliability and maintainability;
• verification of the Software User Manual.

Trends in the occurrence of defects should be monitored in system tests; the behaviour of such trends is important for the estimation of potential acceptability.

For most embedded systems, as well as systems using special peripherals, it is often useful or necessary to build simulators for the hardware with which the deliverable system will interface. Such simulators are often required because of:
• late availability of the final system hardware;
• low available test time with the final system hardware;
• the desire to avoid damaging delicate and/or expensive hardware.

Simulators are normally a separate project in themselves. Effort should be made to ensure that they are available in time, and that they are certified as identical, from an interface point of view, with the target hardware.

System test designs, test cases, test procedures and test reports are documented in the System Test section of the Software Verification and Validation Plan (SVVP/ST).

5.3.3 Reviews

The detailed design should be reviewed and agreed layer by layer as it is developed during the DD phase. The design of any level invariably affects upper layers, and a number of review cycles may be necessary before the design of a level can be finalised. Walkthroughs should be used to ensure that the detailed design is understood by all concerned. Inspections of the design, by qualified software engineers, should be used to reduce the occurrence of design defects.

When the detailed design of a major component is finished, a critical design review shall be convened to certify its readiness for implementation. The project leader should participate in these reviews, together with the team leader and team members concerned.

After modules have been coded and successfully compiled, walkthroughs or inspections should be held to verify that the implementation conforms to the design.

After production, the DD Review (DD/R) shall consider the reports of the verification activities and decide whether to transfer the software. This should be a technical review (see Part 2, Chapter 4). Review participants should include engineers, user representatives and managers.

5.4 OUTPUTS FROM THE PHASE

The main outputs of the phase are the code, DDD, SUM and the plans for the TR phase.

Progress reports, configuration status accounts, software verification reports and audit reports are also outputs of the phase. These should always be archived by the project.

5.4.1 Code

The developer should deliver all the items needed to execute and modify any part of the software produced in the project, e.g.:
• source files;
• command procedure files;
• configuration management tools;
• source files for test software;
• test data;
• build and installation procedures.

All deliverable code shall be identified in a configuration item list.

5.4.2 Detailed Design Document

The DDD grows as the design proceeds to the lowest level of decomposition. Documentation should be produced concurrently with detailed design, coding and testing. In large projects, it may be convenient to organise the overall DDD into several volumes. The DDD shall be an output of the DD phase. A recommended table of contents of the DDD is provided in Appendix C.

Part 1 of the DDD defines design and coding standards and tools, and should be prepared as the first activity of the DD phase, before work starts on detailed design and coding.

Part 2 of the DDD expands as the design develops. Part 2 of the DDD shall have the same structure and identification scheme as the code itself, with a 1:1 correspondence between sections of the documentation and the software components.

The DDD shall be complete, accounting for all the software requirements in the SRD. A table cross-referencing software requirements to the detailed design components shall be placed in the DDD.

5.4.3 Software User Manual

A Software User Manual (SUM) shall be an output of the DD phase. The recommended table of contents for a SUM is provided in Appendix C. The rules for the style and content of the Software User Manual are based on ANSI/IEEE Std 1063-1987, ‘Software User Documentation’. Two styles of user documentation are useful: the ‘instruction’, or ‘tutorial’, style and the ‘reference’ style.

While the instruction style is oriented towards helping new users, the reference style is more suited to more experienced users who need information about specific topics.

In the instruction section of the SUM, material is ordered according to the learning path, with the simplest, most necessary operations appearing first and more advanced, complicated operations appearing later. The size of this section depends on the intended readership; some users may understand the software after a few examples (and can switch to using the reference section) whilst other users may require many worked examples.


The reference section of the SUM presents the basic operations, ordered for easy reference (e.g. alphabetically). Reference documentation should be more formal, rigorous and exhaustive than the instructional section. For example, a command may be described in the instruction section in concrete terms, with a specific worked example. The description in the reference section should describe all the parameters, qualifiers and keywords, with several examples.

The development of the SUM should start as early as possible. Establishing the potential readership for the SUM should be the first step. This information is critical for establishing the style of the document. Useful information may be found in the section ‘User Characteristics’ in the URD.

The Software User Manual may be large, spread over several volumes. The SUM may be made available electronically, for example as part of an online help facility. There should be specific software requirements for such facilities.

5.4.4 Project management plan for the TR phase

The management plan for the TR phase must be documented in the TR phase section of the Software Project Management Plan (SPMP/TR, see Part 2, Chapter 2). This plan may also cover the period up to final acceptance.

5.4.5 Configuration management plan for the TR phase

The TR phase procedures for the configuration management of the deliverables, in the operational environment, must be documented in the Software Configuration Management Plan (SCMP/TR, see Part 2, Chapter 3).

5.4.6 Acceptance test specification

Acceptance test designs, test cases and test procedures must be documented in the Software Verification and Validation Plan (SVVP/AT, see Part 2, Chapter 4).

5.4.7 Quality assurance plan for the TR phase

The TR phase quality monitoring procedures must be defined in the TR phase section of the Software Quality Assurance Plan (SQAP/TR, see Part 2, Chapter 5).


CHAPTER 6 THE TRANSFER PHASE

6.1 INTRODUCTION

The TR phase can be called the ‘handover phase’ of the life cycle. The purpose of the TR phase is to install the software in the operational environment and demonstrate to the initiator and users that the software has all the capabilities described in the User Requirements Document (URD).

Installation and checkout of the software is the responsibility of the developer. Representatives of users and operations personnel shall participate in acceptance tests. The Software Review Board (SRB) shall review the software’s performance in the acceptance tests and recommend, to the initiator, whether the software can be provisionally accepted or not.

The principal output of this phase is the Software Transfer Document (STD), which documents the acceptance testing activities.

The TR phase terminates with provisional acceptance of the software and the start of operations.

6.2 INPUTS TO THE PHASE

The inputs to the TR phase are the:
• code;
• Detailed Design Document (DDD);
• Software User Manual (SUM);
• Acceptance Test specification (SVVP/AT);
• Software Project Management Plan for the TR phase (SPMP/TR);
• Software Configuration Management Plan for the TR phase (SCMP/TR);
• Software Quality Assurance Plan for the TR phase (SQAP/TR).


6.3 ACTIVITIES

TR phase activities shall be carried out according to the plans defined in the DD phase.

6.3.1 Installation

The first activity of the TR phase is installation of the software. This is done by:
• checking the deliverables against the configuration item list;
• building a system executable in the target environment.

Procedures for building software may vary, depending on the type of software, but the capability of building the system from the components that are directly modifiable by the maintenance team shall be established (a sketch of such an installation check is given below).

Maintenance staff should exercise the procedures for modifying the software, especially if any unfamiliar software development tools have been supplied.
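
A minimal sketch of the installation steps listed above follows, in Python; the configuration item list format and the build command are assumptions, since each project defines its own in the SCMP/TR and STD.

    # Illustrative installation procedure: check the deliverables against
    # the configuration item list, then build the system executable.
    import os
    import subprocess
    import sys

    def install(ci_list_file):
        # Each line of the CI list is assumed to name one deliverable file.
        with open(ci_list_file) as f:
            items = [line.strip() for line in f if line.strip()]
        missing = [ci for ci in items if not os.path.exists(ci)]
        if missing:
            sys.exit("Installation aborted; missing items: " + ", ".join(missing))
        # Hypothetical build step; a real project supplies its own procedure.
        subprocess.run(["./build_system.sh"], check=True)
        print("All %d configuration items present; system built." % len(items))

    install("ci_list.txt")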

6.3.2 Acceptance tests

Acceptance tests validate the software, i.e. they demonstrate the capabilities of the software in its operational environment. Acceptance tests should be based on the user requirements, as stated in the URD. Acceptance test plans, test designs, test cases and test procedures are defined in the SVVP.

Acceptance tests are executed in the TR phase and the results recorded in the SVVP. A summary of the acceptance test reports should be inserted in the STD.

6.3.3 Provisional acceptance

Acceptance tests necessary for provisional acceptance shall be indicated in the SVVP. The criterion for provisional acceptance is whether the software is ready for operational use. A period of operations is usually required to show that the software meets all the requirements in the URD.

The provisional acceptance decision should be made by the initiator after consultations with the SRB, end-user representatives and operations staff.


6.4 OUTPUTS FROM THE PHASE

6.4.1 Statement of provisional acceptance

The statement of provisional acceptance shall be produced by the initiator, on behalf of the users, and sent to the developer. Provisional acceptance marks the end of the TR phase.

6.4.2 Provisionally accepted software system

The provisionally accepted software system shall consist of the outputs of all previous phases and modifications found necessary in the TR phase.

6.4.3 Software Transfer Document

The purpose of the Software Transfer Document (STD) is to identify the software that is being transferred and how to build and install it. An output of the TR phase shall be the STD. The STD shall be handed over from the developer to the maintenance organisation at provisional acceptance. The recommended table of contents for the STD is presented in Appendix C.

The STD shall contain a summary of the acceptance test reports and all documentation about software changes performed during the TR phase.


CHAPTER 7 THE OPERATIONS AND MAINTENANCE PHASE

7.1 INTRODUCTION

In the OM phase, the software first enters practical use. The operation of software is beyond the scope of these Standards, so this chapter only discusses maintenance.

The purpose of software maintenance is to ensure that the product continues to meet the real needs of the end-user. The available resources for maintenance should reflect the importance of the product.

Unlike hardware maintenance, which aims to return a hardware product to its original state, software maintenance always results in a change to a software product. Software maintenance staff should thoroughly understand the software which they have to alter. Training may be necessary.

The principal output of this phase is the Project History Document (PHD), which summarises the development, operations and maintenance of the product.

7.2 INPUTS TO THE PHASE

The inputs to the OM phase are the:
• statement of provisional acceptance;
• provisionally accepted software system;
• Software Transfer Document.

7.3 ACTIVITIES

Until final acceptance, OM phase activities that involve the developer shall be carried out according to the plans defined in the SPMP/TR.

The maintenance organisation may choose to adopt the Software Configuration Management Plan used in the development phases. Alternatively they may choose to produce a new one, specific to their needs. The effort to convert from one configuration management system to another should not be underestimated, nor the risks involved ignored (e.g. the loss of configuration items or the incorrect attachment of labels).

The Software Project Management Plans and Software Quality Assurance Plans continue to apply to the activities of development staff, but not to operations and maintenance staff, who should develop their own plans.

7.3.1 Final Acceptance

The early part of the OM phase should include a warranty period in which the developer should retain responsibility for correcting errors. The end of the warranty period is marked by final acceptance.

The criterion for final acceptance is whether the software meets all requirements stated in the URD. All the acceptance tests shall have been successfully completed before the software is finally accepted.

The final acceptance decision should be made by the initiator after consultations with the SRB, end-user representatives and operational staff.

Even when no contractor is involved, there shall be a final acceptance milestone to arrange the formal handover from software development to maintenance.

Whenever the handover is performed, the last document formally released by the engineering (as opposed to maintenance) project leader must be the first issue of the Project History Document (PHD).

7.3.2 Maintenance

After the warranty period, maintenance of the software may be transferred from the developer to a dedicated maintenance organisation. A maintenance organisation shall be designated for every software product in operational use. Resources shall be assigned to a product’s maintenance until it is retired.

Maintenance of software should be driven by the occurrence of problems and new requirements. The SRB controls problem handling activity, and shall authorise all modifications. Responsibility for minor modifications and emergency changes may be delegated to maintenance staff, depending on the level of criticality.


Procedures for software modification shall be defined. This is normally done by carrying over the configuration management and verification procedures from the development phase. Consistency between code and documentation shall be maintained.

Some software problems can give rise to new requirements. A user, after experience with the software, may propose a modification (in an SPR). The SRB, classifying the problem as a new or changed requirement, drafts changes to the URD and reviews them with the users concerned. Users may also draft changes to the URD and put them forward to the SRB.

New requirements may also arise because the original requirements defined in the UR and SR phases were not appropriate, or because the user’s needs change. The maintenance budget should support a realistic level of requirement-change activity. Major new requirements should be handled as a separate software development project, and be separately budgeted.

Users should be kept informed of problems. If possible, software items which are the subject of problem reports should be withdrawn from use while the problem is corrected. When withdrawal is not possible, temporary work-around solutions are permitted, provided the safety of the system is not impaired. All software modifications must be documented, even temporary ones.

When software is changed, regression tests should be performed to ensure that the change has not caused new faults. Regression test cases may be a subset of the acceptance test cases.
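
A minimal sketch of such a selection, assuming that each acceptance test case simply carries a flag marking it as part of the regression subset (the test identifiers and placeholder tests are invented):

    # Illustrative selection of regression tests from the acceptance tests.
    def at_001(): return True    # placeholder acceptance test functions
    def at_002(): return True
    def at_003(): return True

    ACCEPTANCE_TESTS = {
        # test case identifier: (test function, member of regression subset?)
        "AT-001": (at_001, True),
        "AT-002": (at_002, False),
        "AT-003": (at_003, True),
    }

    def run_regression_tests():
        for test_id, (test, in_subset) in sorted(ACCEPTANCE_TESTS.items()):
            if in_subset:
                print(test_id, "PASS" if test() else "FAIL")

    run_regression_tests()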

7.4 OUTPUTS FROM THE PHASE

7.4.1 Statement of final acceptance

The statement of final acceptance shall be produced by the initiator, on behalf of the users, and sent to the developer. Its delivery marks the formal handover of the software. The precondition of final acceptance is that all the acceptance tests have been executed satisfactorily.


7.4.2 Project History Document

The Project History Document (PHD) should be produced by the software project manager. It summarises the main events and outcome of the project. The PHD is useful to future projects for:
• estimating the effort required;
• setting up the organisation;
• finding successful methods;
• advising about problems and pitfalls.

The PHD is the place where estimates, made in the planning stages, are compared with actual events. The accuracy of predictions of the project schedule, software volume, manpower requirements, hardware requirements and cost should be measured. Productivity should be estimated using the measurements of software volume, resources and time.
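
For example, the comparison of estimates with actuals, and a simple productivity figure, might be computed as sketched below; all the numbers are invented for illustration.

    # Illustrative estimate-versus-actual comparison for the PHD.
    estimated = {"lines_of_code": 20000, "staff_months": 40, "schedule_months": 12}
    actual    = {"lines_of_code": 23500, "staff_months": 52, "schedule_months": 15}

    for key in estimated:
        error = 100.0 * (actual[key] - estimated[key]) / estimated[key]
        print("%-16s estimate %6d  actual %6d  error %+5.1f%%"
              % (key, estimated[key], actual[key], error))

    # Productivity measured as lines of code per staff-month.
    productivity = actual["lines_of_code"] / actual["staff_months"]
    print("Productivity: %.0f lines of code per staff-month" % productivity)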

Preparation of the PHD forces a project manager to consider progress carefully, and to draw personal and organisational conclusions when the events concerned are fresh in the project manager’s mind. Accordingly, the project manager should start making notes for the PHD at the beginning of the project. At the end of each phase, the plans made for the previous phase should be compared with what actually happened. This knowledge can also help in planning the next phase.

The PHD shall be delivered, after final acceptance, to the initiator, who should make it available to the maintenance organisation. The chapter describing the performance of the system should be added by the designated maintenance organisation during the OM phase and updated when the product is retired. The recommended table of contents for the PHD is presented in Appendix C.

7.4.3 Finally accepted software system

This consists of one or more sets of documentation, source, object and executable code corresponding to the current versions and releases of the product.


Part 2 Procedure Standards

Figure 1.1: Software Management Plans
(Figure 1.1 is a chart showing, for each management activity and each life cycle phase, the plan section produced. Software project management: an outline plan for the whole project and the SPMP/SR by the UR review, then the SPMP/AD (project cost estimated to 30% accuracy), the SPMP/DD (project cost estimated to 10% accuracy, with a detailed DD phase WBS) and the SPMP/TR, plus SPMP/DD updates during production. Software configuration management: the SCMP/SR, SCMP/AD, SCMP/DD and SCMP/TR, each defining procedures for documents, CASE tool products and prototype or deliverable code. Software verification and validation: the SVVP/SR, SVVP/AD and SVVP/DD define review and traceability procedures; acceptance tests are planned in the SR phase (SVVP/AT), system tests in the AD phase (SVVP/ST), and integration and unit tests in the DD phase (SVVP/IT, SVVP/UT), where the acceptance, system and integration test definitions are completed. Software quality assurance: the SQAP/SR, SQAP/AD, SQAP/DD and SQAP/TR define the monitoring activities for each phase.)

CHAPTER 1 MANAGEMENT OF THE SOFTWARE LIFE CYCLE

1.1 INTRODUCTION

Part 2 of these Standards describes the activities that are essential for managing the software life cycle. Whereas Part 1 describes the activities and products of each phase of the life cycle, Part 2 discusses management activities which are performed throughout all development phases.

The goal of the management activities is to build the product within budget, according to schedule, with the required quality. To achieve this, plans must be established for:
• software project management;
• software configuration management;
• software verification and validation;
• software quality assurance.

Figure 1.1 summarises how these plans must be documented. Plans for project management, configuration management, verification, validation and quality assurance are split into sections. Each section plans the activities for subsequent phases. While the same structure may be repeated in each section of a document, the actual contents may vary. Titles of documents are separated from their section names by ‘/’ (e.g. SPMP/SR is the SR phase section of the Software Project Management Plan).

Figure 1.1 does not include the plans required to manage the maintenance of the software in the period between final acceptance and retirement. The maintenance organisation may choose to reuse development plans or produce new plans.

1.2 SOFTWARE PROJECT MANAGEMENT

A project manager, or project management team, has to plan, organise, staff, monitor, control and lead a software project. The project manager is responsible for writing the Software Project Management Plan (SPMP). The project manager leads the development team and is the principal point of contact between them and the initiator, end-users and other parties.

1.3 SOFTWARE CONFIGURATION MANAGEMENT

Proper configuration management is essential for control of a software product. A component may function perfectly well, but either a fault in integration or a mistake in identification can result in obscure errors. This standard defines requirements for identifying, controlling, releasing and changing software items and recording their status. Procedures for managing the software configuration must be defined in the Software Configuration Management Plan (SCMP).

1.4 SOFTWARE VERIFICATION AND VALIDATION

These Standards adopt a general definition of verification as the process of reviewing, inspecting, testing, checking and auditing software products. Verification is essential to assure that the product will be fit for its purpose. The verification approach should be planned by project management and carried out by development staff.

In these Standards, ‘validation’ is the evaluation of software at the end of the development process to ensure compliance with user requirements. Validation is done in the TR phase.

All verification and validation activities must be documented in the Software Verification and Validation Plan (SVVP).

1.5 SOFTWARE QUALITY ASSURANCE

The quality assurance activity is the process of verifying that these Standards are being applied. In small projects this is done by the project manager, but in large projects specific staff should be allocated to the role. The Software Quality Assurance Plan is the document which describes how adherence to the Standards is to be verified.

Where ESA PSS-01-series documents are applicable, and as a consequence ESA PSS-01-21, ‘Software Product Assurance Requirements for ESA Space Systems’ is also applicable, Part 2, Chapter 5 of these Standards, ‘Software Quality Assurance’, ceases to apply.


CHAPTER 2 SOFTWARE PROJECT MANAGEMENT

2.1 INTRODUCTION

Software Project Management (SPM) is ‘the process of planning, organising, staffing, monitoring, controlling and leading a software project’ (ANSI/IEEE Std 1058.1-1987). The Software Project Management Plan (SPMP) is the controlling document for managing a software project. The SPMP defines the technical and managerial project functions, activities and tasks necessary to satisfy the requirements of a software project.

The SPMP is updated and refined, throughout the life cycle, as more accurate estimates of the effort involved become possible, and whenever changes in requirements or design occur. A number of methods and tools are available for software project planning and their use is recommended.

During all the phases of the life cycle, project management should review how a plan works out in practice. Important deviations between estimates and actuals have to be explained and documented in the Project History Document (PHD), which is issued in the OM phase.

2.2 ACTIVITIES

2.2.1 Organising the project

A key responsibility of software project management is organising all project activities. There are several possible models for the organisation of a software project (e.g. functional and matrix).

Once tasks have been defined, project management must define the team structure to carry them out. Positions in that structure should be adequately defined so that each team member has clear responsibilities and lines of authority. These responsibilities should be documented in terms of reference and work package descriptions.


2.2.2 Leading the project

Project management decide the objectives and priorities at each stage. They should document the assumptions, dependencies and constraints that influence their decisions in the SPMP.

2.2.3 Risk management

Risks threaten a project’s success. Project management should identify the risks to a project and assess the threat they pose. This is called ‘risk analysis’. Examples of potential risk areas are:
• quality and stability of user requirements;
• level of definition and stability of external interfaces;
• adequacy and availability of resources;
• availability and quality of tools;
• staff training and experience;
• definition of responsibilities;
• short time scales;
• technical novelty of the project.

Risks may be quantified by combining the probability of an event with the cost to the project if it does happen. The total risk to the project is the sum of all risks. Probabilities can be estimated from historical data (e.g. sickness rates of employees) or manufacturer’s data (e.g. the mean time between failure of a disk drive).
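
A minimal sketch of this quantification follows; the risks, probabilities and costs are invented for illustration.

    # Illustrative risk quantification: exposure = probability x cost;
    # the total risk to the project is the sum of the exposures.
    risks = [
        # (risk area, probability of occurrence, cost to project if it occurs)
        ("unstable user requirements",       0.30, 100000.0),
        ("late delivery of target hardware", 0.20,  50000.0),
        ("loss of key staff",                0.10,  80000.0),
    ]

    total_risk = 0.0
    for area, probability, cost in risks:
        exposure = probability * cost
        total_risk += exposure
        print("%-34s exposure %9.0f" % (area, exposure))
    print("Total risk to the project: %.0f" % total_risk)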

Project management should devise a plan for reducing risk levels and ensure that it is carried out. Achievements should be measured and the risks reevaluated throughout the project.

Decisions about priorities should be supported by risk analysis. Accurate assessment of the impact of decisions relies on quantitative estimation of the factors that should change when action is taken.

2.2.4 Technical management

There are many methods and tools that can be applied throughout the software life cycle; they can greatly enhance the quality of the end product, and their use is strongly recommended. Project management is responsible for selecting methods and tools, and for enforcing their use.


Technical management includes organising software configuration management, and verification, validation and test activities.

2.2.5 Planning, scheduling and budgeting the work

Estimating the resources and time scales required for activities is a key part of planning their execution. The basic approach to estimation is to analyse the project into tasks that are small enough for their costs to be evaluated easily and accurately. Estimates for the time scales and resources for the whole project are then synthesised from the estimates for the individual tasks. Each task should be linked to an appropriate part of the deliverable for that phase. For example, tasks in the SR phase might be based on requirements, whereas in the AD phase they might be based on components. Traditionally, estimates for detailed design and production have been based on lines of code. Other factors that affect estimates are the experience of the development team, the novelty of the technical area and the availability of software engineering tools.

The Work Breakdown Structure (WBS) is one of the fundamental tools for the planning and control of project activities. The WBS describes the hierarchy of tasks (grouped into ‘work packages’) to be carried out in a project. The WBS corresponds to the structure of the work to be performed, and reflects the way in which the project costs will be summarised and reported.

A work package defines a set of tasks to be performed in a project. Work package descriptions should define tasks in sufficient detail to allow individuals, or small groups of people, to work independently of the rest of the project. The start and end dates of a work package should be specified. The duration of a product-oriented work package should be sufficiently short to maintain visibility of the production process (e.g. a month in the DD phase). Procedure-oriented work packages, for example project management, may extend over the entire length of the project.

The work schedule should show when the work packages are to be started and finished. A milestone chart shows key events in the project; these should be related to work package completion dates.

An estimate of the cost of the whole project should be included in the SR phase section of the SPMP. Pending definition of the software requirements and the design, it will be difficult to provide accurate estimates. The actual costs of similar projects help in making initial estimates.


Cost models may be used for estimating the time required for detailed design and production. Careful consideration should be given to the applicability of any cost model. Parameter values attributed in making a cost model estimate should be clearly documented.
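
For illustration, a parametric cost model of the widely used form ‘effort = a x (size in KLOC) ^ b’ might be applied as sketched below. The coefficients shown are the published COCOMO values for a semi-detached project, used here only as an example; the values actually attributed should be calibrated from historical data and documented.

    # Illustrative parametric cost model: effort in staff-months as a
    # function of size in thousands of lines of code (KLOC).
    def estimate_effort(kloc, a=3.0, b=1.12):
        # a and b are example coefficients; document the values used.
        return a * kloc ** b

    def estimate_cost(kloc, cost_per_staff_month):
        return estimate_effort(kloc) * cost_per_staff_month

    kloc = 20.0  # estimated size: 20 000 lines of code
    print("Effort: %.1f staff-months" % estimate_effort(kloc))
    print("Cost: %.0f accounting units" % estimate_cost(kloc, 10000.0))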

2.2.6 Reporting project progress

Project reporting is essential for the proper control of a software project. Carried out by project management, it provides visibility of development activity at regular intervals during the project. Reports are necessary to assure people outside the development team that the project is proceeding satisfactorily.

Project management should ensure that material presented at progress reviews is sufficiently detailed, and in a consistent format that enables the PHD to be compiled simply from progress-review data.

Contracts for software procurement should require that progress data be collected and progress reports be generated during the development. When it is necessary to keep information confidential between ESA and the contractor, the progress reports and PHD may be marked ‘For ESA use only’.

2.3 THE SOFTWARE PROJECT MANAGEMENT PLAN

All software project management activities shall be documented in the Software Project Management Plan (SPMP). The SPMP is the controlling document for managing a software project. The SPMP is divided into four sections which contain the management plans for the SR, AD, DD and TR phases. The table of contents for each section of the SPMP is described in Appendix C. This table of contents is derived from the IEEE Standard for Software Project Management Plans (ANSI/IEEE Std 1058.1-1987).

2.4 EVOLUTION OF THE SPMP THROUGHOUT THE LIFE CYCLE

2.4.1 UR phase

By the end of the UR review, the SR phase section of the SPMP shall be produced (SPMP/SR). The SPMP/SR describes, in detail, the project activities to be carried out in the SR phase. As part of its introduction, the SPMP/SR shall outline a plan for the whole project.


A rough estimate of the total cost of the software project should be included in the SPMP/SR. Technical knowledge and experience gained on similar projects should be used in arriving at the cost estimate.

A precise estimate of the effort involved in the SR phase shall be included in the SPMP/SR. Specific factors affecting estimates for the work required in the SR phase are the:
• number of user requirements;
• level of user requirements;
• stability of user requirements;
• level of definition of external interfaces;
• quality of the URD.

An estimate based simply on the number of user requirements might be very misleading: a large number of detailed low-level user requirements might be more useful, and save more time in the SR phase, than a few high-level user requirements. A poor quality URD might imply that a lot of requirements analysis is required in the SR phase.

2.4.2 SR phase

During the SR phase, the AD phase section of the SPMP shall be produced (SPMP/AD). The SPMP/AD describes, in detail, the project activities to be carried out in the AD phase.

An estimate of the total project cost shall be included in the SPMP/AD. Every effort should be made to arrive at estimates with an accuracy better than 30%. Technical knowledge and experience gained on similar projects should be used in arriving at an estimate of the total project cost.

When no similar projects exist, it may be useful to build a prototype, to get a more precise idea of the complexity of the software. Prototyping activities should be properly planned and resourced.

A precise estimate of the effort involved in the AD phase shall be included in the SPMP/AD. Specific factors that affect estimates for the work required in the AD phase are:
• number of software requirements;
• level of software requirements;
• stability of software requirements;
• level of definition of external interfaces;
• quality of the SRD.

If an evolutionary development life cycle approach is to be used, then this should be stated in the SPMP/AD.

2.4.3 AD phase

During the AD phase, the DD phase section of the SPMP shall be produced (SPMP/DD). The SPMP/DD describes, in detail, the project activities to be carried out in the DD phase.

An estimate of the total project cost shall be included in the SPMP/DD. An accuracy of 10% should be aimed at. The number of lines of code should be estimated for each software component. This should be used to estimate the time required to write the software, and therefore its cost.

The SPMP/DD shall contain a WBS that is directly related to the decomposition of the software into components.

The SPMP/DD shall contain a planning network showing the relationships between the coding, integration and testing activities. Tools are available for this kind of planning.

2.4.4 DD phase

As the detailed design work proceeds to lower levels, the WBS and job schedule need to be refined to reflect this. To achieve the necessary level of visibility, no software production work package in the SPMP/DD shall last longer than one man-month.

During the DD phase, the TR phase section of the SPMP shall be produced (SPMP/TR). The SPMP/TR describes, in detail, project activities until final acceptance, in the OM phase.


CHAPTER 3 SOFTWARE CONFIGURATION MANAGEMENT

3.1 INTRODUCTION

As defined in ANSI/IEEE Std 729-1983, software configuration management (SCM) is the process of:
• identifying and defining the configuration items in a system;
• controlling the release and change of these items throughout the system life cycle;
• recording and reporting the status of configuration items and change requests;
• verifying the completeness and correctness of configuration items.

Software configuration management is both a managerial and a technical activity, and is essential for proper software quality control. The software configuration management activities for a project must be defined in the Software Configuration Management Plan (SCMP).

All software items, for example documentation, source code, executable code, files, tools, test software and data, shall be subjected to configuration management procedures. The configuration management procedures shall establish methods for identifying, storing and changing software items through development, integration and transfer. In large developments, spread across multiple hardware platforms, configuration management procedures may differ in physical details. However, a common set of configuration management procedures shall be used.

Tools for software configuration management are widely available. Their use is strongly recommended to ensure that SCM procedures are applied consistently and efficiently.

3.2 ACTIVITIES

3.2.1 Configuration identification

A ‘configuration item’ (CI) is a collection of software elements, treated as a unit, for the purpose of configuration management. Several factors may be relevant in deciding where to draw the boundaries of a configuration item. A configuration item may be any kind of software item, for example: a module, a document, or a set of CIs.

The key to effective software configuration management is unambiguous identification of the parts of the software. Every configuration item shall have an identifier that distinguishes it from other items with different:
• requirements, especially functionality and interfaces;
• implementation.

Each component defined in the design process shall be designated as a CI and possess an identifier. The identifier shall include a number or a name related to the purpose of the CI. The identifier shall include an indication of the type of processing the CI is intended for (e.g. filetype information).

The term ‘version’ is used to define a stage in the evolution of a CI. Each stage is marked by a ‘version number’. When the CI changes, the version number changes. The identifier of a CI shall include a version number.

The identifier of documents shall include an issue number and a revision number. Issue numbers are used to mark major changes and revision numbers are used to mark minor changes. Major changes usually require formal approval. The issue number and revision number together mark the version of the document.

The configuration identification method shall be capable of accommodating new CIs, without requiring the modification of the identifiers of any existing CIs.
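
A minimal sketch of an identification scheme meeting these requirements is given below; the naming convention itself is an assumption, since each project defines its own in the SCMP.

    # Illustrative configuration item identifier combining name, type and
    # version. New CIs can be added without modifying the identifiers of
    # existing CIs, because each identifier is self-contained.
    class ConfigurationItem:
        def __init__(self, name, ci_type, version):
            self.name = name        # related to the purpose of the CI
            self.ci_type = ci_type  # intended processing, e.g. 'ada', 'doc'
            self.version = version  # changes whenever the CI changes

        def identifier(self):
            return "%s.%s;%d" % (self.name, self.ci_type, self.version)

    ci = ConfigurationItem("TM_DECOM", "ada", 3)
    print(ci.identifier())  # prints: TM_DECOM.ada;3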

A ‘baseline’ is a document or a product that has been formally reviewed and agreed upon, and is a basis for further development. A baseline is an assembly of configuration items. Formal change control procedures are required to modify a baseline.

Integration of software should be coordinated by the identification and control of baselines. Figure 3.2.1 illustrates the relationship between units of modules, baselines and releases. Modules, after successful unit testing, are integrated into existing baselines. Incorporation into a baseline can only occur after successful integration tests. Baselines must be system tested before being transferred to users as a ‘release’ of the software. After delivery and installation, releases of the software undergo acceptance testing by users.

Figure 3.2.1: Baselines and releases
(Diagram: units, held in the development libraries, are integrated to form baselines in the master libraries; system-tested baselines are transferred to the static libraries as releases. The figure shows UNIT 1, UNIT 2 and UNIT 3 integrated into BASELINE 1, which evolves into BASELINE 2 and is transferred as RELEASE 1.)

In the TR phase, a list of configuration items in the first release shall be included in the STD. In the OM phase, a list of changed configuration items shall be included in each Software Release Note (SRN). An SRN shall accompany each release made in the OM phase.

As part of the configuration identification method, a module shall have a header that includes:
• configuration item identifier (name, type, version);
• original author;
• creation date;
• change history (version/date/author/description).

Note that a module header may also contain other component information from the DDD, Part 2.
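
An illustrative module header carrying the required information is shown below; the layout, names and dates are examples only.

    # Module header (illustrative layout)
    #
    # Configuration item : TM_DECOM, type: python module, version 3
    # Original author    : A. Author
    # Creation date      : 1991-02-21
    # Change history     :
    #   1  1991-02-21  A. Author  first issue
    #   2  1991-05-10  B. Author  frame synchronisation corrected (SPR-12)
    #   3  1991-07-02  A. Author  new requirement incorporated (SCR-7)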


All documentation and storage media shall be clearly labelled in a standard format, with at least the following data:
• project name;
• configuration item identifier (name, type, version);
• date;
• content description.

3.2.2 Configuration item storage

A software library is a controlled collection of configuration items. It may be a single file or a collection of files. A library may exist on various media (e.g. paper, magnetic disk).

To ensure the security and control of the software, as a minimum, the following software libraries shall be implemented for storing all the deliverable components (e.g. documentation, source and executable code, test files, command procedures):
• Development (or Dynamic) library;
• Master (or Controlled) library;
• Static (or Archive) library.

Tools for handling software libraries are essential for efficient configuration control. Such tools should allow CI identification, change tracking, and CI cross-referencing.

Software is coded and tested as a set of modules in the development library. After unit tests, modules are transferred to master libraries for integration testing and system testing. When changes to master library modules are necessary, the appropriate modules are transferred back to a development library from the master library.

A baseline includes a set of master libraries. When a baseline is released, copies of all master libraries should be taken. These copies, called ‘static’ libraries, shall not be modified.

Up-to-date security copies of master and static libraries shall always be available. Procedures for the regular backup of development libraries shall be established. This is called ‘media control’.


3.2.3 Configuration change control

Software configuration control is the process of evaluating proposed changes to configuration items and coordinating the implementation of approved changes. Software configuration control of an item can only occur after formal establishment of its configuration identification and inclusion in a baseline.

Proper software configuration control demands the definition of the:
• level of authority required to change each CI;
• methods for handling proposals for changing any CI.

At the top level, the software configuration control process identifies the procedures for handling changes to known baselines.

In addition, project management has the responsibility for the organisation of SCM activities, the definition of SCM roles (e.g. configuration control board and software librarian), and the allocation of staff to those roles. In the TR and OM phases of a software project, ultimate responsibility for software configuration control lies with the Software Review Board (SRB). The SRB should be composed of members with sufficient authority and expertise to resolve any problems or nonconformances of the software.

When a software item does not conform to its specification, it should be identified as non-conforming, and held for review action. Nonconformance should be classified as minor or major depending on its severity and its urgency. Minor nonconformances can be processed at a level below the SRB. Depending on the size and the management structure of the software development, a further classification should be performed in relation to the software life cycle, i.e. against the user requirements, software requirements, design, etc.

Changes to external interfaces or to software packages used by the system should be handled like changes to ordinary CIs.

3.2.3.1 Levels of change control

As a configuration item passes through unit, integration, system and acceptance tests, a higher level of authority is needed to approve changes. This is called the promotion of a CI. Just as programmers sign off unit tests, team leaders sign off integration tests, and project leaders sign off system tests, so change approval demands a level of authority corresponding to the verification status of the CI.

3.2.3.2 Change control procedures

Changes occur naturally in the evolution of the software system. This evolution should be planned and implemented using controlled software libraries. Changes can also occur because of problems in development or operation. Such changes require backtracking through the life cycle to ensure that the corrections are carried out with the same degree of quality control as was used for the original development. The level of authority for each change depends on the part to be changed, and on the phase in the life cycle that has been reached.

3.2.3.2.1 Documentation change procedures

The change procedure described below shall be observed when changes are needed to a delivered document.
1. A draft of a document is produced and submitted for review. If the document is not new, all changed text must be identified.
2. The reviewers record their comments about the draft document on Review Item Discrepancy (RID) forms. A recommended solution may be inserted on the RID form. The RIDs are then returned to the author(s) of the document.
3. The author(s) of the document record their response on the RID form.
4. Each RID is processed at a formal review meeting and an action decided (see Section 4.2.1).
5. The draft document and the approved RIDs are used to make the next revision, or issue if there are major changes, of the document.
6. Each revision or issue of a document must be accompanied by a Document Change Record (DCR) and an updated Document Status Sheet (DSS).

Up to the end of the TR phase, the formal review meeting is a UR/R, SR/R, AD/R or DD/R, depending on the document. In the OM phase the formal review is conducted by the SRB. Templates for RID, DCR and DSS forms are provided in Appendix E.


3.2.3.2.2 Problem reporting procedures

Software problems can be reported at any stage in the life cycle. Problems can fall into a number of categories according to the degree of regression in the life cycle.

Problem categories are:
• operations error;
• user documentation does not conform to code;
• code does not conform to design;
• design does not conform to requirements;
• new or changed requirements.

Selection of the problem category defines the phase of the life cycle at which corrective action needs to start.

Software problems and change proposals shall be handled by the procedure described below. This change procedure requires a formal review to be held (see Section 4.2.1).
1. A Software Problem Report (SPR) must be completed for each detected problem, giving all information about the symptoms, the operating environment and the software under test. Evidence, such as listings of results, may be attached. A problem does not formally exist until an SPR has been written.
2. The SPR is passed to the Software Review Board (SRB) who will assign it to the relevant authority for analysis. A Software Change Request form (SCR) must be completed for each software change found necessary. This describes the changes required and includes an assessment of the cost and schedule impacts.
3. The Software Review Board then reviews each SCR and, if appropriate, assigns someone to carry out the change.
4. Each software modification is documented in detail in a Software Modification Report (SMR), complete with items such as:
• source code changes;
• test reports;
• documentation changes;
• verification reports.


Templates of SPR, SCR and SMR forms are given in Appendix E.

3.2.4 Configuration status accounting

Software configuration status accounting is the administrative tracking and reporting of all configuration items.

The status of all configuration items shall be recorded. Configuration status accounting continues, as do all other configuration management activities, throughout the life cycle.

To perform software status accounting, each software project shall record the:
• date and version/issue of each baseline;
• date and status of each RID and DCR;
• date and status of each SPR, SCR and SMR;
• summary description of each Configuration Item.

Configuration status accounts should be produced at project milestones, and may be produced periodically between project milestones.

Information in configuration status accounts should be used to generate the SRNs and CI lists that must accompany each delivery of a software baseline.

3.2.5 Release

The first release of the software must be documented in the STD. Subsequent releases of software must be accompanied by a Software Release Note (SRN) that lists the CIs included in the release, and the procedures for installing them, so that they can be made available for use (see Appendix E). As a minimum, the SRN shall record the faults that have been repaired and the new requirements that have been incorporated.

For each release, documentation and code shall be consistent. Further, old releases shall be retained, for reference. Where possible, the previous release should be kept online during a change-over period, to allow comparisons, and as a fallback. Older releases may be archived. The number of releases in operational use at any time should be minimised.


Some form of software protection is desirable for controlled source and binary code to avoid use of an incorrect release. The strength of this protection depends on the criticality of use of the product. In general, each release should be self-identifying (e.g. checksum, operator dialogue or printed output).
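
One possible way of making a release self-identifying is sketched below: a checksum is computed over the released files and quoted in the SRN, so that any installation can be verified against it. The use of MD5 and the file names are illustrative assumptions.

    # Illustrative self-identification of a release by checksum.
    import hashlib

    def release_checksum(files):
        digest = hashlib.md5()
        for name in sorted(files):  # a fixed order keeps the result stable
            with open(name, "rb") as f:
                digest.update(f.read())
        return digest.hexdigest()

    RELEASE_FILES = ["tm_decom.py", "tc_encode.py"]  # hypothetical names
    print("Release 1 checksum:", release_checksum(RELEASE_FILES))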

Modified software shall be retested before release. Tests should be selected from the SVVP to demonstrate its operational capability.

While it is usually not necessary to repeat all the acceptance tests after a software change is made, a standard subset of the acceptance tests (often called ‘regression tests’) should be run when any new release is made. These tests are required to demonstrate that a modification has introduced no unexpected side effects.

3.3 THE SOFTWARE CONFIGURATION MANAGEMENT PLAN

All software configuration management activities shall be documented in the Software Configuration Management Plan (SCMP). The SCMP is divided into four sections which contain the configuration management plans for the SR, AD, DD and TR phases. The table of contents for each section of the SCMP is described in Appendix C. This table of contents is derived from the IEEE Standard for Software Configuration Management Plans (ANSI/IEEE Std 828-1983).

Additional information on configuration management may be found in ANSI/IEEE Std 1042-1987, Guide to Software Configuration Management.

3.4 EVOLUTION OF THE SCMP THROUGHOUT THE LIFE CYCLE

Configuration management procedures shall be in place before software production (code and documentation) starts. SCM procedures should be simple and efficient. Wherever possible, procedures should be capable of reuse in later phases. Instability in SCM procedures can be a major cause of poor progress in a software project.

3.4.1 UR phase

By the end of the UR review, the SR phase section of the SCMP shall be produced (SCMP/SR). The SCMP/SR shall cover the configuration management procedures for all documentation, CASE tool outputs or prototype code, to be produced in the SR phase.

3.4.2 SR phase

During the SR phase, the AD phase section of the SCMP shall be produced (SCMP/AD). The SCMP/AD shall cover the configuration management procedures for documentation, CASE tool outputs or prototype code, to be produced in the AD phase. Unless there is a good reason to change (e.g. different CASE tool used), SR phase procedures should be reused.

3.4.3 AD phase

During the AD phase, the DD phase section of the SCMP shall be produced (SCMP/DD). The SCMP/DD shall cover the configuration management procedures for documentation, deliverable code, CASE tool outputs or prototype code, to be produced in the DD phase. Unless there is a good reason to change, AD or SR phase procedures should be reused.

3.4.4 DD phase

During the DD phase, the TR phase section of the SCMP shall be produced (SCMP/TR). The SCMP/TR shall cover the procedures for the configuration management of the deliverables in the operational environment.


CHAPTER 4 SOFTWARE VERIFICATION AND VALIDATION

4.1 INTRODUCTION

In ANSI/IEEE Std 729-1983, three definitions of verification are given. Verification can mean the:
• act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether or not items, processes, services or documents conform to specified requirements (ANSI/ASQC A3-1978);
• process of determining whether or not the products of a given phase of the software development life cycle fulfil the requirements established during the previous phase;
• formal proof of program correctness.

The first definition of verification in the list above is the most general and includes the other two. In these Standards, the first definition applies.

Validation is, according to its ANSI/IEEE definition, ‘the evaluation of software at the end of the software development process to ensure compliance with the user requirements’. Validation is, therefore, ‘end-to-end’ verification.

Verification is essential for assuring the quality of a product. Software verification is both a managerial and a technical function, since the verification programme needs to be both defined and implemented. A project’s verification activities should reflect the software’s criticality, and the quality required of it. Verification can be the most time-consuming and expensive part of a project; verification activities should appear in the SPMP. Figure 4.1 shows the life cycle verification approach.


Figure 4.1: Life cycle verification approach
(Diagram: each product of the life cycle is verified against its predecessor: the SRD against the URD, the ADD against the SRD, the DDD against the ADD, and the code against the DDD. Unit tests (SVVP/UT) turn compiled modules into tested modules, integration tests (SVVP/IT) produce tested subsystems, system tests (SVVP/ST) produce a tested system, and acceptance tests (SVVP/AT) produce the accepted software, which is verified against the URD.)

4.2 ACTIVITIES

Verification activities include:
• technical reviews, walkthroughs and software inspections;
• checking that software requirements are traceable to user requirements;
• checking that design components are traceable to software requirements;
• checking formal proofs and algorithms;
• unit testing;
• integration testing;
• system testing;
• acceptance testing;
• audits.

The actual activities to be conducted in a project are described in the Software Verification and Validation Plan (SVVP).


As well as demonstrating that assertions about the software are true, verification can also show that assertions are false. The skill and ingenuity needed to identify defects should not be underestimated. Users will have more confidence in a product that has been through a rigorous verification programme than one subjected to minimal examination and testing before release.

4.2.1 Reviews

The procedures for software reviews are based closely on ANSI/IEEE Std 1028-1988. A software review is an evaluation of a software element to ascertain discrepancies from planned results and to recommend improvements.

Three kinds of reviews can be used for software verification:
• technical review;
• walkthrough;
• software inspection.

The three kinds of reviews are all ‘formal reviews’ in the sense that all have specific objectives and procedures. All kinds of review seek to identify defects and discrepancies of the software against specifications, plans and standards.

The software problem reporting procedure and document change procedure defined in Part 2, Section 3.2.3.2, call for a formal review process for all changes to code and documentation. Any of the three kinds of formal review procedure can be applied for change control. The SRB, for example, may choose to perform a technical review, software inspection or walkthrough as necessary.

4.2.1.1 Technical reviews

Technical reviews should be used for the UR/R, SR/R, AD/R, and DD/R. Technical reviews evaluate specific software elements to verify progress against plan.

The objective of a technical review is to evaluate a specific software element (e.g. document, source module), and provide management with evidence that:
• it conforms to specifications made in previous phases;
• the software element has been produced according to the project standards and procedures;
• changes have been properly implemented, and affect only those system areas identified by the change specification (described in a DCR or SCR).

4.2.1.2 Walkthroughs

Walkthroughs should be used for the early evaluation of documents, models, designs and code in the SR, AD and DD phases.

The objective of a walkthrough is to evaluate a specific software element (e.g. document, source module). A walkthrough should attempt to identify defects and consider possible solutions. In contrast with other forms of review, secondary objectives are to educate, and to resolve stylistic problems.

4.2.1.3 Software inspections

Software inspections should be used for the evaluation of documents and code in the SR, AD and DD phases, before technical review or testing.

The objective of a software inspection is to detect and identify defects. A software inspection is a rigorous peer examination that:
• identifies nonconformances with respect to specifications and standards;
• uses metrics to monitor progress;
• ignores stylistic issues;
• does not discuss solutions.

4.2.2 Tracing

Forwards traceability requires that each input to a phase shall be traceable to an output of that phase. Forwards traceability demonstrates completeness. Forwards tracing is normally done by constructing cross-reference matrices. Holes in the matrix demonstrate incompleteness quite vividly.

Backwards traceability requires that each output of a phase shall be traceable to an input to that phase. Outputs that cannot be traced to inputs are superfluous, unless it is acknowledged that the inputs themselves were incomplete. Backwards tracing is normally done by including with each item a statement of why it exists (e.g. the description of the function of a component may be a list of functional requirements).

During the software life cycle it is necessary to trace:
• user requirements to software requirements and vice versa;
• software requirements to component descriptions and vice versa;
• integration tests to major components of the architecture and vice versa;
• system tests to software requirements and vice versa;
• acceptance tests to user requirements and vice versa.

To support traceability, all components and requirements are identified. The SVVP should define how tracing is to be done. References to components and requirements should include identifiers.
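As an illustration only (the Standards do not prescribe any tooling), the following Python sketch shows one way to build a forwards cross-reference matrix and expose its holes; all requirement identifiers are hypothetical.

    # Illustrative forwards-tracing sketch (all identifiers are hypothetical).
    # Each software requirement lists the user requirements it traces back to;
    # the cross-reference matrix then exposes uncovered user requirements.
    sr_to_ur = {
        "SR-01": ["UR-01"],
        "SR-02": ["UR-01", "UR-03"],
        "SR-03": [],  # superfluous unless the inputs were incomplete
    }
    user_requirements = ["UR-01", "UR-02", "UR-03"]

    def forwards_matrix(urs, traces):
        """Return {UR: [SRs tracing to it]}; an empty list is a 'hole'."""
        matrix = {ur: [] for ur in urs}
        for sr, parents in traces.items():
            for ur in parents:
                matrix[ur].append(sr)
        return matrix

    for ur, srs in forwards_matrix(user_requirements, sr_to_ur).items():
        print(f"{ur}: {', '.join(srs) if srs else 'HOLE - not covered'}")

    # Backwards check: an SR with no parent UR is superfluous.
    for sr, parents in sr_to_ur.items():
        if not parents:
            print(f"{sr}: no parent user requirement")

Running the sketch prints UR-02 as a hole (incompleteness) and SR-03 as superfluous, which is exactly the information the forwards and backwards checks are meant to make vivid.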

4.2.3 Formal proof

Where practical, formal deductive proof of the correctness of software may be attempted.

Formal Methods, such as Z and VDM, possess an agreed notation, with well-defined semantics, and a calculus, which allows proofs to be constructed.

The first property is shared with other methods for software specification, but the second sets them apart. If a Formal Method can demonstrate with certainty that something is correct, then separate verification is not necessary. However, human errors are still possible, and ways should be sought to avoid them, for example by ensuring that all proofs are checked independently. CASE tools are available that support Formal Methods, and their use is recommended.

Refinement of a formal specification into executable code is generally not a deductive process; other forms of verification (e.g. testing) are necessary to verify the refinement.
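Purely as an illustrative sketch (not taken from the Standards), the kind of proof obligation such a calculus generates for an operation specified by a precondition and postcondition might be written as follows; pre-OP, post-OP and OP are hypothetical symbols in the style of model-based methods such as VDM.

    % Illustrative proof obligation (hypothetical symbols): for every state
    % satisfying the precondition, applying OP must yield a state
    % satisfying the postcondition.
    \forall \sigma \in \Sigma \cdot
      pre\text{-}OP(\sigma) \implies post\text{-}OP(\sigma,\, OP(\sigma))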

4.2.4 Testing

Testing is the process of exercising or evaluating a system or system component, by manual or automated means, to:


• confirm that it satisfies specified requirements;
• identify differences between expected and actual results.

The amount of time spent in testing software is frequently underestimated. Testing activities must be carefully specified so that they can be adequately budgeted for. The expense of testing increases with the number of errors present before it begins. Cheaper methods of removing errors, such as inspection, walkthrough and formal proof, should always be tried before testing is started.

Software testing includes the following activities:
• planning the general approach and allocating resources;
• detailing the general approach for various kinds of tests in a test design;
• defining the inputs, predicted results and execution conditions in a test case specification;
• stating the sequence of actions to be carried out by test personnel in a test procedure;
• logging the execution of a test procedure in a test report.

Four kinds of testing have been identified in these Standards: unit testing, integration testing, system testing and acceptance testing.

Test software should be produced to the same standards as the deliverable software. Test software and data must therefore be documented and subjected to configuration control procedures. This allows monitoring of the testing process, and permits test software and data to be reused later, to verify that the software's functionality and performance have not been impaired by modifications. This is called 'regression' testing.
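By way of illustration only (the Standards do not prescribe a test framework), a minimal regression test kept under configuration control might look like the following Python sketch; the routine under test and the recorded expected value are hypothetical.

    # Illustrative regression test (hypothetical routine and expected value).
    # Kept under configuration control alongside the deliverable software,
    # it can be re-run after each modification to show that functionality
    # has not been impaired.
    import math
    import unittest

    def orbit_period(semi_major_axis_km: float) -> float:
        # Stand-in for a deliverable routine under test (hypothetical).
        GM_EARTH = 398600.4418  # km^3/s^2
        return 2 * math.pi * math.sqrt(semi_major_axis_km ** 3 / GM_EARTH)

    class OrbitPeriodRegressionTest(unittest.TestCase):
        def test_known_value(self):
            # Expected result recorded when the test was first approved.
            self.assertAlmostEqual(orbit_period(7000.0), 5828.5, delta=1.0)

    if __name__ == "__main__":
        unittest.main()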

Test plans, test designs, test cases, test procedures and test reports for unit, integration, system and acceptance tests must be documented in the SVVP.

Figure 4.2.4 summarises when and where testing is documented in the SVVP.


[Figure 4.2.4 in the original is a matrix relating the SVVP test sections (Unit Tests, Integration Tests, System Tests, Acceptance Tests) to the life cycle phases (user requirements review, software requirements definition, architectural design, detailed design and production, transfer), showing the phase in which the Plans, Designs, Cases, Procedures and Reports of each kind of test are produced.]

Figure 4.2.4 Life cycle production of test documentation

In Figure 4.2.4, the entry 'Plans' in the System Tests row, for example, means that the System Test Plans are drawn up in the SR phase and placed in the SVVP section 'System Tests', subsection 'Plans'. This is abbreviated as 'SVVP/ST/Plans'.

Reports of Acceptance Tests must also be summarised in the Software Transfer Document.

4.2.5 Auditing

Audits are independent reviews that assess compliance with software requirements, specifications, baselines, standards, procedures, instructions, codes and contractual and licensing requirements. A 'physical audit' checks that all items identified as being part of the configuration are present in the product baseline. A 'functional audit' checks that unit, integration and system tests have been carried out and records their success or failure. Functional and physical audits shall be performed before the release of the software.
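As an illustration only, the core check of a physical audit can be expressed as a simple set comparison; the following Python sketch uses hypothetical configuration item names.

    # Illustrative 'physical audit' sketch (hypothetical items): check that
    # every item in the configuration item list is present in the baseline.
    configuration_item_list = {"TM_DECODER v2.1", "CMD_PARSER v1.3", "SUM issue 2"}
    product_baseline = {"TM_DECODER v2.1", "SUM issue 2"}

    missing = configuration_item_list - product_baseline
    if missing:
        print("Physical audit FAILED; missing items:")
        for item in sorted(missing):
            print(" ", item)
    else:
        print("Physical audit passed: all configuration items present.")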


4.3 THE SOFTWARE VERIFICATION AND VALIDATION PLAN

All software verification and validation activities shall be documented in the Software Verification and Validation Plan (SVVP). The SVVP is divided into seven sections, which contain the verification plans for the SR, AD and DD phases and the unit, integration, system and acceptance test specifications. The table of contents for each section of the SVVP is described in Appendix C. This table of contents is derived from the IEEE Standard for Verification and Validation Plans (IEEE Std 1012-1986) and the IEEE Standard for Software Test Documentation (IEEE Std 829-1983).

The SVVP shall ensure that the verification activities:
• are appropriate for the degree of criticality of the software;
• meet the verification and acceptance testing requirements (stated in the SRD);
• verify that the product will meet the quality, reliability, maintainability and safety requirements (stated in the SRD);
• are sufficient to assure the quality of the product.

4.4 EVOLUTION OF THE SVVP THROUGHOUT THE LIFE CYCLE

4.4.1 UR phase

By the end of the UR review, the SR phase section of the SVVP shall be produced (SVVP/SR). The SVVP/SR shall define how to trace user requirements to software requirements, so that each software requirement can be justified. It should describe how the SRD is to be evaluated by defining the review procedures. It may include specifications of the tests to be performed with prototypes.

The initiator(s) of the user requirements should lay down the principles upon which the acceptance tests should be based. The developer shall construct an acceptance test plan in the UR phase and document it in the SVVP. This plan should define the scope, approach, resources and schedule of acceptance testing activities.

4.4.2 SR phase

During the SR phase, the AD phase section of the SVVP shall be produced (SVVP/AD). The SVVP/AD shall define how to trace software requirements to components, so that each software component can be justified. It should describe how the ADD is to be evaluated by defining the review procedures. It may include specifications of the tests to be performed with prototypes.

During the SR phase, the developer analyses the user requirements and may insert 'Acceptance testing requirements' in the SRD. These requirements constrain the design of the acceptance tests. This must be recognised in the statement of the purpose and scope of the acceptance tests.

The planning of the system tests should proceed in parallel with the definition of the software requirements. The developer may identify 'Verification requirements' for the software: these are additional constraints on the unit, integration and system testing activities. These requirements are also stated in the SRD.

The developer shall construct a system test plan in the SR phase and document it in the SVVP. This plan should define the scope, approach, resources and schedule of system testing activities.

4.4.3 AD phase

During the AD phase, the DD phase section of the SVVP shall be produced (SVVP/DD). The SVVP/DD shall describe how the DDD and code are to be evaluated by defining the review and traceability procedures.

The developer shall construct an integration test plan in the AD phase and document it in the SVVP. This plan should describe the scope, approach, resources and schedule of intended integration tests. Note that the items to be integrated are the software components described in the ADD.

4.4.4 DD phase

In the DD phase, the SVVP sections on testing are developed as the detailed design and implementation information becomes available.

The developer shall construct a unit test plan in the DD phase and document it in the SVVP. This plan should describe the scope, approach, resources and schedule of intended unit tests. The items to be tested are the software components described in the DDD.


The unit, integration, system and acceptance test designs shall be described in the SVVP. These should specify the details of the test approach for a software feature, or combination of software features, and identify the associated tests.

The unit, integration, system and acceptance test cases shall be described in the SVVP. These should specify the inputs, predicted results and execution conditions for a test item.

The unit, integration, system and acceptance test procedures shall be described in the SVVP. These should provide a step-by-step description of how to carry out each test case.

The unit, integration, system and acceptance test reports shall be contained in the SVVP.


CHAPTER 5 SOFTWARE QUALITY ASSURANCE

NOTE: Where ESA PSS-01-series documents are applicable, and as a consequence ESA PSS-01-21, 'Software Product Assurance Requirements for ESA Space Systems', is also applicable, Part 2, Chapter 5 of these Standards, 'Software Quality Assurance', ceases to apply.

5.1 INTRODUCTION

Software Quality Assurance (SQA) is 'a planned and systematic pattern of all actions necessary to provide adequate confidence that the item or product conforms to established technical requirements' (ANSI/IEEE Std 730-1989). Software Quality Assurance is synonymous with Software 'Product Assurance' and the terms are used interchangeably in these Standards.

The quality assurance activity is the process of verifying that these Standards are being applied. In small projects this could be done by the development team, but in large projects specific staff should be allocated to the role.

The Software Quality Assurance Plan (SQAP) defines how adherence to these Standards will be monitored. The SQAP contents list is a checklist for activities that have to be carried out to assure the quality of the product.

For each activity, those with responsibility for SQA should describe the plans for monitoring it.

5.2 ACTIVITIES

Objective evidence of adherence to these Standards should be sought during all phases of the life cycle. Documents called for by these Standards should be obtained and examined. Source code should be checked for adherence to coding standards. Where possible, aspects of quality (e.g. complexity, reliability, maintainability, safety, number of defects, number of problems, number of RIDs) should be measured quantitatively, using well-established metrics.


Subsequent sections list activities derived from ANSI/IEEE Std 730-1989 that are necessary if a software item is to be fit for its purpose. Each section discusses how the activity can be verified.

5.2.1 Management

Analysis of the managerial structure that influences and controls the quality of the software is an SQA activity. The existence of an appropriate organisational structure should be verified. It should be confirmed that the individuals defined in that structure have defined tasks and responsibilities. The organisation, tasks and responsibilities will have been defined in the SPMP.

5.2.2 Documentation

The documentation plan that has been defined in the SPMP should be analysed. Any departures from the documentation plan defined in these Standards should be scrutinised and discussed with project management.

5.2.3 Standards, practices, conventions and metrics

Adherence to all standards, practices and conventions should be monitored. Deviations and non-conformance should be noted and brought to the attention of project management. SQA personnel may assist project management with the correct interpretation of standards, practices and conventions.

A 'metric' is a quantitative measure of the degree to which a system, component, or process possesses a given attribute. Metrics are essential for effective management. To be useful, metrics need to be simple to understand and apply.

Metrics for measuring quality, particularly reliability and maintainability, should be specified in the SRD. These metrics should be meaningful to users, and reflect their requirements. Additional metrics may be defined by the project. Values of complexity metrics may be defined in the design standards to limit design complexity, for example. Metrics may be defined in the SPMP to guide decision-making (e.g. if a software component exhibits more than three failures in integration testing then it will be reinspected).
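Purely as an illustration (not part of the Standards), such a decision rule could be mechanised as in the following Python sketch; the threshold and failure counts are hypothetical.

    # Illustrative SPMP-style decision rule (hypothetical data): a component
    # with more than three failures in integration testing is re-inspected.
    REINSPECTION_THRESHOLD = 3
    integration_failures = {"NAV-FILTER": 5, "TM-ENCODER": 1, "CMD-PARSER": 4}

    for component, failures in sorted(integration_failures.items()):
        action = ("schedule re-inspection" if failures > REINSPECTION_THRESHOLD
                  else "no action")
        print(f"{component}: {failures} failure(s) -> {action}")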

Metrics should relate to project objectives, so that they can be used for controlling the project. All objectives should have metrics attached to them, otherwise undue weight can be given to those for which metrics have been defined. A project that counts the number of lines of code written, but not the failure rate, is likely to concentrate on producing a large volume of code, and not reliability, for example.

5.2.4 Reviews and audits

These Standards call for reviews of the URD, the SRD, the ADD, the DDD, the SVVP and the SCMP. They also call for the review and audit of the code during production. The review and audit arrangements described in the SVVP should be examined. Many kinds of reviews are possible (e.g. technical, inspection and walkthrough). It should be verified that the review mechanisms are appropriate for the type of project. SQA personnel should participate in the review process.

5.2.5 Testing activities

Unit, integration, system and acceptance testing of executable software is essential to assure its quality. Test plans, test designs, test cases, test procedures and test reports are described in the SVVP. These should be reviewed by SQA personnel. They should monitor the testing activities carried out by the development team, including test execution. Additionally, other tests may be proposed in the SQAP. These may be carried out by SQA personnel.

5.2.6 Problem reporting and corrective action

The problem handling procedure described in these Standards is designed to report and track problems from identification until solution. SQA personnel should monitor the execution of the procedures, described in the SCMP, and examine trends in problem occurrence.

5.2.7 Tools, techniques and methods

These Standards call for tools, techniques and methods for software production to be defined at the project level. It is an SQA activity to check that appropriate tools, techniques and methods are selected and to monitor their correct application.

SQA personnel may decide that additional tools, techniques and methods are required to support their monitoring activity. These should be described in the SQAP.


5.2.8 Code and media control

These Standards require that the procedures for the methods and facilities used to maintain, store, secure and document controlled versions of the identified software be defined in the SCMP. SQA personnel should check that appropriate procedures have been defined in the SCMP and carried out.

5.2.9 Supplier control

Software items acquired from external suppliers must always be checked against the standards for the project. An SQAP shall be produced by each contractor developing software. An SQAP is not required for commercial software.

5.2.10 Records collection, maintenance and retention

These Standards define a set of documents that must be produced in any project. Additional documents, for example minutes of meetings and review records, may also be produced. SQA personnel should check that appropriate methods and facilities are used to assemble, safeguard, and maintain all this documentation for at least the life of the project. Documentation control procedures are defined in the SCMP.

5.2.11 Training

SQA personnel should check that development staff are properly trained for their tasks and identify any training that is necessary. Training plans are documented in the SPMP.

5.2.12 Risk management

All projects must identify the factors that are critical to their success and control these factors. This is called 'risk management'. Project management must always analyse the risks that affect the project. Their findings are documented in the SPMP. SQA personnel should monitor the risk management activity, and advise project management on the methods and procedures to identify, assess, monitor, and control areas of risk.


5.3 THE SOFTWARE QUALITY ASSURANCE PLAN

All software quality assurance activities shall be documented in the Software Quality Assurance Plan (SQAP). The recommended table of contents for the SQAP is presented in Appendix C. This table of contents is derived from the IEEE Standard for Software Quality Assurance Plans (ANSI/IEEE Std 730-1989).

5.4 EVOLUTION OF THE SQAP THROUGHOUT THE LIFE CYCLE

5.4.1 UR phase

By the end of the UR review, the SR phase section of the SQAP shall be produced (SQAP/SR). The SQAP/SR shall describe, in detail, the quality assurance activities to be carried out in the SR phase. The SQAP/SR shall outline the quality assurance plan for the rest of the project.

5.4.2 SR phase

During the SR phase, the AD phase section of the SQAP shall be produced (SQAP/AD). The SQAP/AD shall cover in detail all the quality assurance activities to be carried out in the AD phase.

In the SR phase, the SRD should be analysed to extract any constraints that relate to software quality assurance (e.g. standards and documentation requirements).

5.4.3 AD phase

During the AD phase, the DD phase section of the SQAP shall be produced (SQAP/DD). The SQAP/DD shall cover in detail all the quality assurance activities to be carried out in the DD phase.

5.4.4 DD phase

During the DD phase, the TR phase section of the SQAP shall be produced (SQAP/TR). The SQAP/TR shall cover in detail all the quality assurance activities to be carried out from the start of the TR phase until final acceptance in the OM phase.


Part 3 Appendices


APPENDIX A GLOSSARY

A.1 LIST OF TERMS

The terminology used in the ESA Software Engineering Standards conforms to ANSI/IEEE Std 729-1983, 'IEEE Standard Glossary of Software Engineering Terminology'. This section contains definitions of terms:
• that are not contained in ANSI/IEEE Std 729-1983;
• that have multiple definitions in ANSI/IEEE Std 729-1983, one of which is used in these Standards (denoted by );
• whose ISO definition is preferred (denoted by ISO and taken from ISO 2382, 'Glossary of Terms used in Data Processing').

Component

General term for a part of a software system. Components may be assembled and decomposed to form new components. They are implemented as modules, tasks or programs, any of which may be configuration items. This usage of the term is more general than in ANSI/IEEE parlance, which defines a component as a 'basic part of a system or program'; in ESA PSS-05-0, components may not be 'basic' as they can be decomposed.

Concurrent Production and Documentation

A technique of software development where the documentation of a module proceeds in parallel with its production, i.e. detailed design specifications, coding and testing.

Decomposition

The breaking down into parts.

Development

The period from URD delivery to final acceptance, during which the software is produced.


Developer

The person or organisation responsible for developing the software from a specification of the user requirements to an operational system.

Environment

Either: physical conditions such as temperature, humidity and cleanliness, within which a computer system operates; or, the support and utility software and hardware.

Formal Method

A mathematically precise means of specifying software, enabling specifications to be proven consistent.

Interface Control Document

A specification of an interface between a system and an external system.

Layer

One hierarchical level in the decomposition of a system.

Logical model

An implementation-independent representation of a real world process; contrast physical model.

Model

A representation of a real world process. A software model is composed of symbols organised according to some convention.

Maintainability

The ease with which software can be maintained ().

Nonconformance

A statement of conflict between descriptive documentation and the object described.

Physical model

An implementation-dependent representation of a real world process; contrast logical model.


Product Assurance

The totality of activities, standards, controls and procedures in the lifetime of a product which establishes confidence that the delivered product will conform adequately to customer requirements (Product Assurance and Safety Policy and Basic Requirements for ESA Space Systems, ESA PSS-01-0, May 1990).

Production

The process of writing and testing software, as opposed to design and operations.

Project

Either the development process or the organisation devoted to the development process.

Prototype

An executable model of selected aspects of a proposed system.

Recognised method

Describes the characteristics of an agreed process or procedure used in the engineering of a product or performing a service.

Release

A baseline made available for use.

Reliability

The probability that software will not cause the failure of a system for a specified time under specified conditions.

Response Time

The elapsed time between the end of an inquiry or demand on a computer system and the beginning of the response (ISO).

Review

An activity to verify that the outputs of a phase in the software life cycle conform to their specifications (contrast IEEE: 'Design Review').


Stability

Quality of not changing. Either in the short term, not breaking down, or in the longer term, not subject to design and/or requirements changes (contrast IEEE: (1) the ability to continue unchanged despite disturbing or disruptive events, (2) the ability to return to an original state after disturbing or disruptive events).

Software

The programs, procedures, rules and all associated documentation pertaining to the operation of a computerised system (ISO).

Task

A software component that executes sequentially. A task may execute in parallel with other tasks.

Traceability

The ability to relate an input to a phase of the software life cycle to an output from that phase. The item may be code or documentation.

Traceability Matrix

Matrix showing which outputs correspond to which inputs. More often showing which parts of a design satisfy which requirements.

Trade-off

Comparison of alternatives to determine the optimal solution.

Turnaround

The elapsed time between submission of a job and the return of the complete output (ISO).

User

Any individual or group ultimately making use of the output from a computer system, distinct from personnel responsible for building, operating or maintaining the system.

Verification

The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services or documents conform to specified requirements (ANSI/ASQC A3-1978).


Version

A stage in the evolution of a configuration item.

Work-Around

A temporary solution to a problem.

Work Breakdown Structure

The WBS describes the hierarchy of tasks (grouped into 'work packages') to be carried out in a project. The WBS corresponds to the structure of the work to be performed, and reflects the way in which the project costs will be summarised and reported.

Work Package

A detailed, short span, unit of work to be performed in a project.


A.2 LIST OF ACRONYMS

AD      Architectural Design
ADD     Architectural Design Document
AD/R    Architectural Design Review
ANSI    American National Standards Institute
AT      Acceptance Test
BSSC    Board for Software Standardisation and Control
CASE    Computer Aided Software Engineering
DCR     Document Change Record
DD      Detailed Design and production
DDD     Detailed Design Document
DD/R    Detailed Design and production Review
DSS     Document Status Sheet
ESA     European Space Agency
IEEE    Institute of Electrical and Electronics Engineers
ICD     Interface Control Document
ISO     International Standards Organisation
IT      Integration Test
PA      Product Assurance
PHD     Project History Document
PSS     Procedures, Specifications and Standards
QA      Quality Assurance
RID     Review Item Discrepancy
SCM     Software Configuration Management
SCMP    Software Configuration Management Plan
SCR     Software Change Request
SPM     Software Project Management
SPMP    Software Project Management Plan
SMR     Software Modification Report
SPR     Software Problem Report
SQA     Software Quality Assurance
SQAP    Software Quality Assurance Plan
SR      Software Requirement
SRB     Software Review Board
SRD     Software Requirements Document
SRN     Software Release Note
SR/R    Software Requirements Review
ST      System Test
STD     Software Transfer Document
SUM     Software User Manual
SVVP    Software Verification and Validation Plan
UR      User Requirements
URD     User Requirements Document
UR/R    User Requirements Review
UT      Unit Tests


APPENDIX B SOFTWARE PROJECT DOCUMENTS

This Appendix summarises the documents that can be produced in a software engineering project. Table B.1 summarises technical documents. Table B.2 summarises the plans. Table B.3 summarises reports, forms and other documentation.

URD (User Requirements Document): To state the needs of the users of the software system.

SRD (Software Requirements Document): To specify the requirements of the software system from the developer's point of view. The SRD incorporates the user requirements described in the URD.

ADD (Architectural Design Document): To specify the top-level components of the software. The ADD fulfils the software requirements stated in the SRD.

DDD (Detailed Design Document): To specify the lower-level components of the software. The DDD fulfils the requirements laid down in the SRD, following the top-level design described in the ADD.

SUM (Software User Manual): To state what the software does and how to operate the software.

STD (Software Transfer Document): To contain the checked configuration item list and the SPRs, SCRs and SMRs generated in the TR phase.

PHD (Project History Document): To record significant information about the specification, design, production and operation of the software.

Table B.1 Summary of technical documents


SPMP (Software Project Management Plan): To state the organisation, WBS, schedule and budget for each development phase.

SCMP (Software Configuration Management Plan): To state the procedures for identifying, controlling, and recording the status of software items.

SVVP (Software Verification and Validation Plan): To state the procedures for testing the software and for verifying that the products of each phase are consistent with their inputs.

SQAP (Software Quality Assurance Plan): To state the procedures for assuring the quality of the software products.

Table B.2 Summary of plans required

DCR (Document Change Record): To record a set of changes to a document.

DSS (Document Status Sheet): To summarise the issues and revisions of a document.

RID (Review Item Discrepancy): To state issues to be addressed in a review and record decisions made.

SCR (Software Change Request): To describe changes required in the software and its documentation in the TR and OM phases, including new requirements.

SMR (Software Modification Report): To describe changes made to the software and its documentation in the TR and OM phases.

SPR (Software Problem Report): To record a problem reported in the use or test of software and its documentation.

SRN (Software Release Note): To summarise changes made to software with respect to the previous release.

Table B.3 Summary of reports and forms


APPENDIX C DOCUMENT TEMPLATES

All documents should contain the following service information:

a - Abstract
b - Table of contents
c - Document Status Sheet
d - Document Change Records made since last issue

If there is no information pertinent to a section, the following should appear below the section heading: 'This section not applicable', with the appropriate reasons for this exclusion.

Guidelines on the contents of document sections are given in italics. Section titles which are to be provided by document authors are enclosed in square brackets.


C.1 URD TABLE OF CONTENTS

1 Introduction
  1.1 Purpose of the document
  1.2 Scope of the software
  1.3 Definitions, acronyms and abbreviations
  1.4 References
  1.5 Overview of the document

2 General Description
  2.1 Product perspective
  2.2 General capabilities
  2.3 General constraints
  2.4 User characteristics
  2.5 Operational environment
  2.6 Assumptions and dependencies

3 Specific Requirements
  3.1 Capability requirements
  3.2 Constraint requirements


C.2 SRD TABLE OF CONTENTS

1 Introduction
  1.1 Purpose of the document
  1.2 Scope of the software
  1.3 Definitions, acronyms and abbreviations
  1.4 References
  1.5 Overview of the document

2 General Description
  2.1 Relation to current projects
  2.2 Relation to predecessor and successor projects
  2.3 Function and purpose
  2.4 Environmental considerations
  2.5 Relation to other systems
  2.6 General constraints
  2.7 Model description

3 Specific Requirements
  3.1 Functional requirements
  3.2 Performance requirements
  3.3 Interface requirements
  3.4 Operational requirements
  3.5 Resource requirements
  3.6 Verification requirements
  3.7 Acceptance testing requirements
  3.8 Documentation requirements
  3.9 Security requirements
  3.10 Portability requirements
  3.11 Quality requirements
  3.12 Reliability requirements
  3.13 Maintainability requirements
  3.14 Safety requirements

4 User Requirements vs Software Requirements Traceability matrix


C.3 ADD TABLE OF CONTENTS

1 Introduction
  1.1 Purpose of the document
  1.2 Scope of the software
  1.3 Definitions, acronyms and abbreviations
  1.4 References
  1.5 Overview of the document

2 System Overview

3 System Context

4 System Design
  4.1 Design method
  4.2 Decomposition description

5 Component Description
  5.n [Component identifier]
    5.n.1 Type
    5.n.2 Purpose
    5.n.3 Function
    5.n.4 Subordinates
    5.n.5 Dependencies
    5.n.6 Interfaces
    5.n.7 Resources
    5.n.8 References
    5.n.9 Processing
    5.n.10 Data

6 Feasibility and Resource Estimates

7 Software Requirements vs Components Traceability matrix


C.4 DDD TABLE OF CONTENTS

Part 1 - General Description

1 Introduction
  1.1 Purpose of the document
  1.2 Scope of the software
  1.3 Definitions, acronyms and abbreviations
  1.4 References
  1.5 Overview of the document

2 Project Standards, Conventions and Procedures
  2.1 Design standards
  2.2 Documentation standards
  2.3 Naming conventions
  2.4 Programming standards
  2.5 Software development tools

Part 2 - Component Design Specifications

n [Component identifier]
  n.1 Type
  n.2 Purpose
  n.3 Function
  n.4 Subordinates
  n.5 Dependencies
  n.6 Interfaces
  n.7 Resources
  n.8 References
  n.9 Processing
  n.10 Data

Appendix A Source code listings
Appendix B Software Requirements vs Component Traceability matrix


C.5 SUM TABLE OF CONTENTS

1 Introduction
  1.1 Intended readership
  1.2 Applicability statement
  1.3 Purpose
  1.4 How to use this document
  1.5 Related documents
  1.6 Conventions
  1.7 Problem reporting instructions

2 [Overview section]

3 [Instruction section]
  (a) Functional description
  (b) Cautions and warnings
  (c) Procedures
  (d) Probable errors and possible causes

4 [Reference section]
  (a) Functional description
  (b) Cautions and warnings
  (c) Formal description
  (d) Examples
  (e) Possible error messages and causes
  (f) Cross references to other operations

Appendix A Error messages and recovery procedures
Appendix B Glossary
Appendix C Index (for manuals of 40 pages or more)


C.6 STD TABLE OF CONTENTS

1 Introduction
  1.1 Purpose of the document
  1.2 Scope of the software
  1.3 Definitions, acronyms and abbreviations
  1.4 References
  1.5 Overview of the document

2 Installation Procedures

3 Build Procedures

4 Configuration Item List

5 Acceptance Test Report Summary

6 Software Problem Reports

7 Software Change Requests

8 Software Modification Reports


C.7 PHD TABLE OF CONTENTS

1 Description of the project

2 Management of the project
  2.1 Contractual approach (if applicable)
  2.2 Project organisation
  2.3 Methods used
  2.4 Planning

3 Software Production
  3.1 Estimated vs. actual amount of code produced
  3.2 Documentation
  3.3 Estimated vs. actual effort
  3.4 Computer resources
  3.5 Analysis of productivity factors

4 Quality Assurance Review

5 Financial Review

6 Conclusions

7 Performance of the system in OM phase


C.8 SPMP TABLE OF CONTENTS

Software Project Management Plan for the SR Phase

1 Introduction
  1.1 Project overview
  1.2 Project deliverables
  1.3 Evolution of the SPMP
  1.4 Reference materials
  1.5 Definitions and acronyms

2 Project Organisation
  2.1 Process model
  2.2 Organisational structure
  2.3 Organisational boundaries and interfaces
  2.4 Project responsibilities

3 Managerial Process
  3.1 Management objectives and priorities
  3.2 Assumptions, dependencies and constraints
  3.3 Risk management
  3.4 Monitoring and controlling mechanisms
  3.5 Staffing plan

4 Technical Process
  4.1 Methods, tools and techniques
  4.2 Software documentation
  4.3 Project support functions

5 Work Packages, Schedule, and Budget
  5.1 Work packages
  5.2 Dependencies
  5.3 Resource requirements
  5.4 Budget and resource allocation
  5.5 Schedule

The Software Project Management Plans for the AD, DD and TR Phases have the same structure as the SPMP/SR.


C.9 SCMP TABLE OF CONTENTS

Software Configuration Management Plan for the SR Phase

1 Introduction
  1.1 Purpose of the plan
  1.2 Scope of the plan
  1.3 Glossary
  1.4 References

2 Management

3 Configuration Identification

4 Configuration Control
  4.1 Code control
  4.2 Media control
  4.3 Change control

5 Configuration Status Accounting

6 Tools, Techniques and Methods for SCM

7 Supplier Control

8 Records Collection and Retention

The Software Configuration Management Plans for the AD, DD and TR Phases have the same structure as the SCMP/SR.


C.10 SVVP TABLE OF CONTENTS

Software Verification and Validation Plan for the SR Phase

1 Purpose

2 Reference documents

3 Definitions

4 Verification overview
  4.1 Organisation
  4.2 Master schedule
  4.3 Resources summary
  4.4 Responsibilities
  4.5 Tools, techniques and methods

5 Verification Administrative Procedures
  5.1 Anomaly reporting and resolution
  5.2 Task iteration policy
  5.3 Deviation policy
  5.4 Control procedures
  5.5 Standards, practices and conventions

6 Verification Activities
  6.1 Traceability matrix template
  6.2 Formal proofs
  6.3 Reviews

7 Software Verification reporting

The Software Verification and Validation Plans for the AD and DD Phases have the same structure as the SVVP/SR.


Acceptance Test Specification

1 Test Plan
  1.1 Introduction
  1.2 Test items
  1.3 Features to be tested
  1.4 Features not to be tested
  1.5 Approach
  1.6 Item pass/fail criteria
  1.7 Suspension criteria and resumption requirements
  1.8 Test deliverables
  1.9 Testing tasks
  1.10 Environmental needs
  1.11 Responsibilities
  1.12 Staffing and training needs
  1.13 Schedule
  1.14 Risks and contingencies
  1.15 Approvals

2 Test Designs (for each test design...)
  2.n.1 Test design identifier
  2.n.2 Features to be tested
  2.n.3 Approach refinements
  2.n.4 Test case identification
  2.n.5 Feature pass/fail criteria

3 Test Case Specifications (for each test case...)
  3.n.1 Test case identifier
  3.n.2 Test items
  3.n.3 Input specifications
  3.n.4 Output specifications
  3.n.5 Environmental needs
  3.n.6 Special procedural requirements
  3.n.7 Intercase dependencies

4 Test Procedures (for each test case...)
  4.n.1 Test procedure identifier
  4.n.2 Purpose
  4.n.3 Special requirements
  4.n.4 Procedure steps

5 Test Reports (for each execution of a test procedure...)
  5.n.1 Test report identifier
  5.n.2 Description
  5.n.3 Activity and event entries

The System Test Specification, the Integration Test Specification and the Unit Test Specification (which may be in the DDD) have the same structure as the SVVP/AT.


C.11 SQAP TABLE OF CONTENTS

Software Quality Assurance Plan for the SR phase

1 Purpose of the plan
2 Reference documents
3 Management
4 Documentation
5 Standards, practices, conventions and metrics
  5.1 Documentation standards
  5.2 Design standards
  5.3 Coding standards
  5.4 Commentary standards
  5.5 Testing standards and practices
  5.6 Selected software quality assurance metrics
  5.7 Statement of how compliance will be monitored
6 Reviews and audits
7 Test
8 Problem reporting and corrective action
9 Tools, techniques and methods
10 Code control
11 Media control
12 Supplier control
13 Records collection, maintenance and retention
14 Training
15 Risk Management
16 Outline of the rest of the project

The Software Quality Assurance Plans for the AD, DD and TR Phases have the same structure as the SQAP/SR.


APPENDIX D SUMMARY OF MANDATORY PRACTICES

These mandatory practices have been extracted for ease of reference, and can be used as a review checklist. They are labelled with the chapter they come from, suffixed with a sequence number. This number corresponds to the order of the requirement in the chapter.

D.1 SOFTWARE LIFE CYCLE

SLC01 The products of a software development project shall be delivered in a timely manner and be fit for their purpose.
SLC02 Software development activities shall be systematically planned and carried out.
SLC03 All software projects shall have a life cycle approach which includes the basic phases shown in Part 1, Figure 1.2:
  UR phase - Definition of the user requirements
  SR phase - Definition of the software requirements
  AD phase - Definition of the architectural design
  DD phase - Detailed design and production of the code
  TR phase - Transfer of the software to operations
  OM phase - Operations and maintenance

D.2 UR PHASE

UR01 The definition of the user requirements shall be the responsibility of the user.
UR02 Each user requirement shall include an identifier.
UR03 Essential user requirements shall be marked as such.
UR04 For incremental delivery, each user requirement shall include a measure of priority so that the developer can decide the production schedule.
UR05 The source of each user requirement shall be stated.
UR06 Each user requirement shall be verifiable.


UR07 The user shall describe the consequences of losses of availability, or breaches of security, so that developers can fully appreciate the criticality of each function.
UR08 The outputs of the UR phase shall be formally reviewed during the User Requirements Review.
UR09 Non-applicable user requirements shall be clearly flagged in the URD.
UR10 An output of the UR phase shall be the User Requirements Document (URD).
UR11 The URD shall always be produced before a software project is started.
UR12 The URD shall provide a general description of what the user expects the software to do.
UR13 All known user requirements shall be included in the URD.
UR14 The URD shall describe the operations the user wants to perform with the software system.
UR15 The URD shall define all the constraints that the user wishes to impose upon any solution.
UR16 The URD shall describe the external interfaces to the software system or reference them in ICDs that exist or are to be written.

D.3 SR PHASE

SR01 SR phase activities shall be carried out according to the plans defined in the UR phase.
SR02 The developer shall construct an implementation-independent model of what is needed by the user.
SR03 A recognised method for software requirements analysis shall be adopted and applied consistently in the SR phase.
SR04 Each software requirement shall include an identifier.
SR05 Essential software requirements shall be marked as such.
SR06 For incremental delivery, each software requirement shall include a measure of priority so that the developer can decide the production schedule.
SR07 References that trace software requirements back to the URD shall accompany each software requirement.

SR08 Each software requirement shall be verifiable.


SR09 The outputs of the SR phase shall be formally reviewed during the Software Requirements Review.
SR10 An output of the SR phase shall be the Software Requirements Document (SRD).
SR11 The SRD shall be complete.
SR12 The SRD shall cover all the requirements stated in the URD.
SR13 A table showing how user requirements correspond to software requirements shall be placed in the SRD.
SR14 The SRD shall be consistent.
SR15 The SRD shall not include implementation details or terminology, unless it has to be present as a constraint.
SR16 Descriptions of functions ... shall say what the software is to do, and must avoid saying how it is to be done.
SR17 The SRD shall avoid specifying the hardware or equipment, unless it is a constraint placed by the user.
SR18 The SRD shall be compiled according to the table of contents provided in Appendix C.

D.4 AD PHASE

AD01 AD phase activities shall be carried out according to the plans defined in the SR phase.
AD02 A recognised method for software design shall be adopted and applied consistently in the AD phase.
AD03 The developer shall construct a 'physical model', which describes the design of the software using implementation terminology.
AD04 The method used to decompose the software into its component parts shall permit a top-down approach.
AD05 Only the selected design approach shall be reflected in the ADD.

For each component the following information shall be detailed in the ADD:
AD06 • data input;
AD07 • functions to be performed;
AD08 • data output.

AD09 Data structures that interface components shall be defined in the ADD.


Data structure definitions shall include the:
AD10 • description of each element (e.g. name, type, dimension);
AD11 • relationships between the elements (i.e. the structure);
AD12 • range of possible values of each element;
AD13 • initial values of each element.

AD14 The control flow between the components shall be defined in the ADD.
AD15 The computer resources (e.g. CPU speed, memory, storage, system software) needed in the development environment and the operational environment shall be estimated in the AD phase and defined in the ADD.
AD16 The outputs of the AD phase shall be formally reviewed during the Architectural Design Review.
AD17 The ADD shall define the major components of the software and the interfaces between them.
AD18 The ADD shall define or reference all external interfaces.
AD19 The ADD shall be an output from the AD phase.
AD20 The ADD shall be complete, covering all the software requirements described in the SRD.
AD21 A table cross-referencing software requirements to parts of the architectural design shall be placed in the ADD.
AD22 The ADD shall be consistent.
AD23 The ADD shall be sufficiently detailed to allow the project leader to draw up a detailed implementation plan and to control the overall project during the remaining development phases.
AD24 The ADD shall be compiled according to the table of contents provided in Appendix C.

D.5 DD PHASE

DD01 DD phase activities shall be carried out according to the plans defined in the AD phase.

The detailed design and production of software shall be based on the following three principles:
DD02 • top-down decomposition;
DD03 • structured programming;


DD04 • concurrent production and documentation.

DD05 The integration process shall be controlled by the software configuration management procedures defined in the SCMP.
DD06 Before a module can be accepted, every statement in a module shall be executed successfully at least once.
DD07 Integration testing shall check that all the data exchanged across an interface agrees with the data structure specifications in the ADD.
DD08 Integration testing shall confirm that the control flows defined in the ADD have been implemented.
DD09 System testing shall verify compliance with system objectives, as stated in the SRD.
DD10 When the design of a major component is finished, a critical design review shall be convened to certify its readiness for implementation.
DD11 After production, the DD Review (DD/R) shall consider the results of the verification activities and decide whether to transfer the software.
DD12 All deliverable code shall be identified in a configuration item list.
DD13 The DDD shall be an output of the DD phase.
DD14 Part 2 of the DDD shall have the same structure and identification scheme as the code itself, with a 1:1 correspondence between sections of the documentation and the software components.
DD15 The DDD shall be complete, accounting for all the software requirements in the SRD.
DD16 A table cross-referencing software requirements to the detailed design components shall be placed in the DDD.

DD17 A Software User Manual (SUM) shall be an output of the DD phase.

D.6 TR PHASE

TR01 Representatives of users and operations personnel shall participate in acceptance tests.
TR02 The Software Review Board (SRB) shall review the software's performance in the acceptance tests and recommend, to the initiator, whether the software can be provisionally accepted or not.
TR03 TR phase activities shall be carried out according to the plans defined in the DD phase.


TR04 The capability of building the system from the components that are directly modifiable by the maintenance team shall be established.
TR05 Acceptance tests necessary for provisional acceptance shall be indicated in the SVVP.
TR06 The statement of provisional acceptance shall be produced by the initiator, on behalf of the users, and sent to the developer.
TR07 The provisionally accepted software system shall consist of the outputs of all previous phases and modifications found necessary in the TR phase.
TR08 An output of the TR phase shall be the STD.
TR09 The STD shall be handed over from the developer to the maintenance organisation at provisional acceptance.
TR10 The STD shall contain the summary of the acceptance test reports, and all documentation about software changes performed during the TR phase.

D.7 OM PHASE

OM01 Until final acceptance, OM phase activities that involve the developer shall be carried out according to the plans defined in the SPMP/TR.
OM02 All the acceptance tests shall have been successfully completed before the software is finally accepted.
OM03 Even when no contractor is involved, there shall be a final acceptance milestone to arrange the formal hand-over from software development to maintenance.
OM04 A maintenance organisation shall be designated for every software product in operational use.
OM05 Procedures for software modification shall be defined.
OM06 Consistency between code and documentation shall be maintained.
OM07 Resources shall be assigned to a product's maintenance until it is retired.
OM08 The SRB ... shall authorise all modifications to the software.
OM09 The statement of final acceptance shall be produced by the initiator, on behalf of the users, and sent to the developer.
OM10 The PHD shall be delivered to the initiator after final acceptance.


D.8 SOFTWARE PROJECT MANAGEMENT

SPM01 All software project management activities shall be documented in the Software Project Management Plan (SPMP).
SPM02 By the end of the UR review, the SR phase section of the SPMP shall be produced (SPMP/SR).
SPM03 The SPMP/SR shall outline a plan for the whole project.
SPM04 A precise estimate of the effort involved in the SR phase shall be included in the SPMP/SR.
SPM05 During the SR phase, the AD phase section of the SPMP shall be produced (SPMP/AD).
SPM06 An estimate of the total project cost shall be included in the SPMP/AD.
SPM07 A precise estimate of the effort involved in the AD phase shall be included in the SPMP/AD.
SPM08 During the AD phase, the DD phase section of the SPMP shall be produced (SPMP/DD).
SPM09 An estimate of the total project cost shall be included in the SPMP/DD.
SPM10 The SPMP/DD shall contain a WBS that is directly related to the decomposition of the software into components.
SPM11 The SPMP/DD shall contain a planning network showing relationships of coding, integration and testing activities.
SPM12 No software production work packages in the SPMP/DD shall last longer than 1 man-month.
SPM13 During the DD phase, the TR phase section of the SPMP shall be produced (SPMP/TR).

D.9 SOFTWARE CONFIGURATION MANAGEMENT

SCM01 All software items, for example documentation, source code, object or relocatable code, executable code, files, tools, test software and data, shall be subjected to configuration management procedures.
SCM02 The configuration management procedures shall establish methods for identifying, storing and changing software items through development, integration and transfer.

SCM03 A common set of configuration management procedures shall be used.


Every configuration item shall have an identifier that distinguishes it from other items with different:
SCM04 • requirements, especially functionality and interfaces;
SCM05 • implementation.

SCM06 Each component defined in the design process shall be designated as a CI and include an identifier.
SCM07 The identifier shall include a number or a name related to the purpose of the CI.
SCM08 The identifier shall include an indication of the type of processing the CI is intended for (e.g. filetype information).
SCM09 The identifier of a CI shall include a version number.
SCM10 The identifier of documents shall include an issue number and a revision number.
SCM11 The configuration identification method shall be capable of accommodating new CIs, without requiring the modification of the identifiers of any existing CIs.
SCM12 In the TR phase, a list of configuration items in the first release shall be included in the STD.
SCM13 In the OM phase, a list of changed configuration items shall be included in each Software Release Note (SRN).
SCM14 An SRN shall accompany each release made in the OM phase.

As part of the configuration identification method, a software module shall have a standard header that includes:
SCM15 • configuration item identifier (name, type, version);
SCM16 • original author;
SCM17 • creation date;
SCM18 • change history (version/date/author/description).
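Purely as an illustration of SCM15 to SCM18 (the layout is not prescribed by the Standards), such a header might look like the following comment block; all names, dates and numbers are hypothetical.

    # Illustrative module header satisfying SCM15 to SCM18; the layout and
    # all names, dates and numbers are hypothetical, not prescribed.
    #
    # Configuration item : TM_DECODER (Python module), version 2.1
    # Original author    : A. Example
    # Creation date      : 1991-02-21
    # Change history     :
    #   1.0 / 1990-11-05 / A. Example / first issue
    #   2.0 / 1991-01-14 / B. Sample  / new frame format (SCR-042)
    #   2.1 / 1991-02-21 / B. Sample  / fault repaired (SPR-117)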

All documentation and storage media shall be clearly labelled in a standard format, with at least the following data:
SCM19 • project name;
SCM20 • configuration item identifier (name, type, version);
SCM21 • date;
SCM22 • content description.


To ensure security and control of the software, at a minimum, thefollowing software libraries shall be implemented for storing all thedeliverable components (e.g. documentation, source and executablecode, test files, command procedures):

SCM23 • Development (or Dynamic) library;SCM24 • Master (or Controlled) library;SCM25 • Static (or Archive) library.SCM26 Static libraries shall not be modified.SCM27 Up-to-date security copies of master and static libraries shall always be

available.SCM28 Procedures for the regular backup of development libraries shall be

established.SCM29 The change procedure described (in Part 2, Section 3.2.3.2.1) shall be

observed when changes are needed to a delivered document.SCM30 Software problems and change proposals shall be handled by the

procedure described (in Part 2, Section 3.2.3.2).SCM31 The status of all configuration items shall be recorded.

To perform software status accounting, each software project shall record:

SCM32 • the date and version/issue of each baseline;
SCM33 • the date and status of each RID and DCR;
SCM34 • the date and status of each SPR, SCR and SMR;
SCM35 • a summary description of each Configuration Item.

SCM36 As a minimum, the SRN shall record the faults that have been repaired and the new requirements that have been incorporated.

SCM37 For each release, documentation and code shall be consistent.

SCM38 Old releases shall be retained, for reference.

SCM39 Modified software shall be retested before release.

SCM40 All software configuration management activities shall be documented in the Software Configuration Management Plan (SCMP).

SCM41 Configuration management procedures shall be in place before the production of software (code and documentation) starts.

SCM42 By the end of the UR review, the SR phase section of the SCMP shall be produced (SCMP/SR).

SCM43 The SCMP/SR shall cover the configuration management procedures for documentation, and any CASE tool outputs or prototype code, to be produced in the SR phase.

SCM44 During the SR phase, the AD phase section of the SCMP shall be produced (SCMP/AD).

SCM45 The SCMP/AD shall cover the configuration management procedures for documentation, and CASE tool outputs or prototype code, to be produced in the AD phase.

SCM46 During the AD phase, the DD phase section of the SCMP shall be produced (SCMP/DD).

SCM47 The SCMP/DD shall cover the configuration management procedures for documentation, deliverable code, and any CASE tool outputs or prototype code, to be produced in the DD phase.

SCM48 During the DD phase, the TR phase section of the SCMP shall be produced (SCMP/TR).

SCM49 The SCMP/TR shall cover the procedures for the configuration management of the deliverables in the operational environment.

D.10 SOFTWARE VERIFICATION AND VALIDATION

SVV01 Forwards traceability requires that each input to a phase shall be traceable to an output of that phase.

SVV02 Backwards traceability requires that each output of a phase shall be traceable to an input to that phase.

SVV03 Functional and physical audits shall be performed before the release of the software.

SVV04 All software verification and validation activities shall be documented in the Software Verification and Validation Plan (SVVP).

The SVVP shall ensure that the verification activities:

SVV05 • are appropriate for the degree of criticality of the software;
SVV06 • meet the verification and acceptance testing requirements (stated in the SRD);
SVV07 • verify that the product will meet the quality, reliability, maintainability and safety requirements (stated in the SRD);

SVV08 • are sufficient to assure the quality of the product.

SVV09 By the end of the UR review, the SR phase section of the SVVP shall be produced (SVVP/SR).

SVV10 The SVVP/SR shall define how to trace user requirements to software requirements, so that each software requirement can be justified.

SVV11 The developer shall construct an acceptance test plan in the UR phase and document it in the SVVP.

SVV12 During the SR phase, the AD phase section of the SVVP shall be produced (SVVP/AD).

SVV13 The SVVP/AD shall define how to trace software requirements to components, so that each software component can be justified.

SVV14 The developer shall construct a system test plan in the SR phase and document it in the SVVP.

SVV15 During the AD phase, the DD phase section of the SVVP shall be produced (SVVP/DD).

SVV16 The SVVP/DD shall describe how the DDD and code are to be evaluated by defining the review and traceability procedures.

SVV17 The developer shall construct an integration test plan in the AD phase and document it in the SVVP.

SVV18 The developer shall construct a unit test plan in the DD phase and document it in the SVVP.

SVV19 The unit, integration, system and acceptance test designs shall be described in the SVVP.

SVV20 The unit, integration, system and acceptance test cases shall be described in the SVVP.

SVV21 The unit, integration, system and acceptance test procedures shall be described in the SVVP.

SVV22 The unit, integration, system and acceptance test reports shall be described in the SVVP.

D.11 SOFTWARE QUALITY ASSURANCE

SQA01 An SQAP shall be produced by each contractor developing software.

SQA02 All software quality assurance activities shall be documented in the Software Quality Assurance Plan (SQAP).

SQA03 By the end of the UR review, the SR phase section of the SQAP shall be produced (SQAP/SR).

SQA04 The SQAP/SR shall describe, in detail, the quality assurance activities to be carried out in the SR phase.

SQA05 The SQAP/SR shall outline the quality assurance plan for the rest of the project.

SQA06 During the SR phase, the AD phase section of the SQAP shall be produced (SQAP/AD).

SQA07 The SQAP/AD shall cover in detail all the quality assurance activities to be carried out in the AD phase.

SQA08 During the AD phase, the DD phase section of the SQAP shall be produced (SQAP/DD).

SQA09 The SQAP/DD shall cover in detail all the quality assurance activities to be carried out in the DD phase.

SQA10 During the DD phase, the TR phase section of the SQAP shall be produced (SQAP/TR).

SQA11 The SQAP/TR shall cover in detail all the quality assurance activities to be carried out from the start of the TR phase until final acceptance in the OM phase.

APPENDIX E
FORM TEMPLATES

Template forms are provided for:

DCR  Document Change Record
DSS  Document Status Sheet
RID  Review Item Discrepancy
SCR  Software Change Request
SMR  Software Modification Report
SPR  Software Problem Report
SRN  Software Release Note

DOCUMENT CHANGE RECORD DCR NO

DATE

ORIGINATOR

APPROVED BY

1. DOCUMENT TITLE:

2. DOCUMENT REFERENCE NUMBER:

3. DOCUMENT ISSUE/REVISION NUMBER:

4. PAGE 5. PARAGRAPH 6. REASON FOR CHANGE

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE:

2. DOCUMENT REFERENCE NUMBER:

3. ISSUE 4. REVISION 5. DATE 6. REASON FOR CHANGE

REVIEW ITEM DISCREPANCY RID NO
DATE
ORIGINATOR

1. DOCUMENT TITLE:

2. DOCUMENT REFERENCE NUMBER:
3. DOCUMENT ISSUE/REVISION NUMBER:
4. PROBLEM LOCATION:

5. PROBLEM DESCRIPTION:

6. RECOMMENDED SOLUTION:

7. AUTHOR’S RESPONSE:

8. REVIEW DECISION: CLOSE/UPDATE/ACTION/REJECT (underline choice)

SOFTWARE CHANGE REQUEST SCR NO
DATE
ORIGINATOR
RELATED SPRs

1. SOFTWARE ITEM TITLE:

2. SOFTWARE ITEM VERSION/RELEASE NUMBER:
3. PRIORITY: CRITICAL/URGENT/ROUTINE (underline choice)
4. CHANGES REQUIRED:

5. RESPONSIBLE STAFF:

6. ESTIMATED START DATE, END DATE AND MANPOWER EFFORT:

7. ATTACHMENTS:

8. REVIEW DECISION: CLOSE/UPDATE/ACTION/REJECT (underline choice)

SOFTWARE MODIFICATION REPORT SMR NO
DATE
ORIGINATOR
RELATED SCRs

1. SOFTWARE ITEM TITLE:

2. SOFTWARE ITEM VERSION/RELEASE NUMBER:

3. CHANGES IMPLEMENTED:

4. ACTUAL START DATE, END DATE AND MANPOWER EFFORT:

5. ATTACHMENTS (tick as appropriate)

SOFTWARE PROBLEM REPORT SPR NO
DATE
ORIGINATOR

1. SOFTWARE ITEM TITLE:

2. SOFTWARE ITEM VERSION/RELEASE NUMBER:

3. PRIORITY: CRITICAL/URGENT/ROUTINE (underline choice)

4. PROBLEM DESCRIPTION:

5. DESCRIPTION OF ENVIRONMENT:

6. RECOMMENDED SOLUTION:

7. REVIEW DECISION: CLOSE/UPDATE/ACTION/REJECT (underline choice)

8. ATTACHMENTS:

SOFTWARE RELEASE NOTE SRN NO
DATE
ORIGINATOR

1. SOFTWARE ITEM TITLE:

2. SOFTWARE ITEM VERSION/RELEASE NUMBER:

3. CHANGES IN THIS RELEASE:

4. CONFIGURATION ITEMS INCLUDED IN THIS RELEASE:

5. INSTALLATION INSTRUCTIONS:

APPENDIX F

INDEX

abstraction, 31
acceptance test, 52
  case, 49
  participation, 51
  plan, 30
  practice for, 45
  reports, 53
  requirement, 23, 31
AD phase, 6, 29
AD/R, 29, 36
adaptability requirement, 15
ADD, 6, 29, 36
ANSI, 1
architectural design, 29
  phase, 29
  specification, 34
audit, 35
  definition, 29
  functional audit, 29
  physical audit, 29
availability, 7
availability requirement, 15
backup, 16
baseline, 14, 20
block diagram
  SR phase, 23
  UR phase, 14
block diagrams
  AD phase, 34
budgeting, 7
capability requirement, 15
CASE tool
  AD phase, 31
  DD phase, 41
  SR phase, 21
change control
  documentation, 18
  general. See
  levels of, 17
  procedures, 18
  software, 19
checkout, 51
CI
  definition, 13
  identifier, 14
  promotion, 17
code, 46
  conventions, 42
  documentation, 43
  production, 43
  quality assurance of, 36
coding, 41
cohesion, 32
component
  function of, 34
  interface, 34
computer resource utilisation, 35
computer virus, 23
configuration item list, 52
configuration management, 43
configuration status accounting, 20
constraint requirement, 15
control flow, 35
cost model, 8
coupling, 33
coverage analyser, 44
critical design review, 46
criticality, 15, 30
data structure, 34
DCR, 18, 20
DD phase, 6
DD/R, 39, 46
DDD, 6, 47
debugger, 44
decomposition, xiv, 20
  functional, 31
  of components, 31
  top-down, 31
defect, 45
design quality criteria, 32
detailed design, 41
  phase, 39
documentation, 18
  labelling, 16
documentation plan
  quality assurance of, 34
documentation requirement, 23
DSS, 18
embedded system, 45
estimating, 7
  AD phase, 10
  for future projects, 58
  SR phase, 9
  UR phase, 9

evolutionary development, 10
external interface, 14, 17, 23, 34, 9, 10, 17
feasibility, 14
final acceptance, 7, 56
formal method, 27
formal proof, 27
functional audit, 29
functional decomposition, 31
functional requirement, 22
handover phase, 51
HCI, 15
ICD, 14, 23, 34
IEEE
  Std 1012-1986, 30
  Std 1016-1987, 37
  Std 1028-1988, 25
  Std 1042-1987, 21
  Std 1058.1-1987, 5
  Std 1063-1987, 47
  Std 729-1983, 13, 1
  Std 730-1989, 33, 34
  Std 828-1983, 21
  Std 830-1984, 27
implementation, 39
incremental delivery, 9
information hiding, 31, 40
initiator, 56
installation, 51
integration, 43
integration test, 44
integration test plan, 31
integrity, 23
interface requirement, 22
interview, 14
library
  archive, 16
  controlled, 16
  definition, 16
  development, 16
  dynamic, 16
  master, 16
  static, 16
life cycle, 3
logical model, 20
maintainability requirement, 24
maintenance, 56
  phase, 55
  purpose, 55
media
  labelling, 16
media control
  procedure, 16
  quality assurance of, 36
method, 30
  quality assurance of, 35
metric, 24, 32, 26
  complexity, xiii
  data collection, 7
  definition, 34
  for specifying requirements, 34
  quality assurance of, 34
  tools for, 7
  use in inspection, 26
milestone, 4
module
  definition, 41
  header, 15
  review, 41
module header, 42
MTBF, 24
MTTR, 24
non-functional requirement, 32
OM phase, 7, 55
operational environment, 14, 52
operational requirement, 23
operations phase, 55
PA, 33
partitioning, 31
password, 23
performance requirement, 22
PERT chart. See planning network
phase, 3
PHD, 7, 34, 58
physical audit, 29
physical model, 30
planning, 7
planning network, 10
portability requirement, 15, 24
problem
  categories, 19
problem reporting
  procedures, 19
  quality assurance of, 35
product assurance, 33
production, 41
  procedures, xiii
programming language, 35
progress report, 8
prototype, 10
prototyping, 11

  AD phase, 33
  experimental, 33
  exploratory, 20
  SR phase, 20
  UR phase, 14
provisional acceptance, 7, 52
quality assurance, 33
quality requirement, 24
regression test, 57, 21
release, 20
reliability requirement, 24
requirement. See software requirement. See user requirement
  change, 12
requirements
  capture, 14
requirements.
  OM phase, 57
resource requirement, 23
review, 41, 25, 35. See software inspection. See walkthrough. See technical review
  AD/R, 36
  DD phase, 46
  SR/R, 26
  UR/R, 16
RID, 18, 20
risk analysis, 6
risk management, 6
  in quality assurance, 36
safety requirement, 24
safety-critical system, 27
scheduling, 7
SCM, 4, 13
SCMP, 21
  AD phase, 28, 22
  DD phase, 37, 22
  SR phase, 17, 21
  TR phase, 48, 22
SCR, 19, 20
security requirement, 15, 23
simulator, 45
SMR, 19, 20
software configuration management. See SCM
software inspection, 26
software project management. See SPM
software quality assurance, 33
software requirement
  attribute, 24
  clarity, 25
  classification, 22
  completeness, 25
  consistency, 25
  definition phase, 19
  duplication, 26
  identifier, 24
  need, 24
  priority, 24
  source, 25
  specification, 21
  stability, 25
  verifiability, 25
software verification and validation. See SVV
SPM, 3, 5
SPMP, 8
  AD phase, 5, 28, 9
  DD phase, 6, 37, 10
  SR phase, 5, 17, 9
  TR phase, 48, 11
SPR, 19, 20
SQA, 4, 33
  SR phase, 37
SQAP, 37
  AD phase, 28, 37
  DD phase, 38, 37
  SR phase, 18
  TR phase, 49, 37
SR phase, 5, 19
SR/R, 19, 26
SRB, 51, 56, 19
SRD, 5, 19, 26, 27
SRN, 15, 20
static analysis, 43
STD, 7, 53, 20
structured english, 27
structured programming, 40, 42
stub, 43
SUM, 6, 47
supplier control, 36
survey, 14
SVV, 23
SVVP, 30
  acceptance test, 17, 49
  acceptance test plan, 30
  AD phase, 28, 30
  DD phase, 37, 31
  integration test, 37, 45
  integration test plan, 31
  SR phase, 18, 30
  system test, 27, 45
  system test plan, 31

  unit test plan, 31
  unit tests, 44
system test, 45
system test plan, 31
systems engineering, xiii
tailoring, xii
technical management, 6
technical review, 25
test
  black box, 44
  coverage, 44
  DD phase, 44
  regression, 57
  unit test, 44
  white box, 44
testing, 27
  integration testing, 44
  quality assurance of, 35
  system test, 45
tool
  need for. See CASE tool
  planning, 10
  quality assurance of, 35
  SCM, 13
top-down
  decomposition, 31, 40
  design, 31
  integration, 43
TR phase, 7, 51
traceability
  backwards, 27, 26
  forwards, 26
traceability matrix, 26
trade-off, 33
training, 55, 6
  quality assurance of, 36
transfer
  phase, 51
unit test, 44
unit test plan, 31
UR phase, 5, 13
UR/R, 13, 16
URD, 5, 13, 17
user requirement
  attribute, 15
  clarity, 16
  classification, 14
  definition phase, 13
  identifier, 15
  need, 15
  priority, 16
  source, 16
  specification, 14
  verifiability, 16
validation, 23
verification requirement, 23
virus. See computer virus
V-model, 23
walkthrough, 26
waterfall, 8
WBS, 7
work breakdown structure, 7
work-around, 57


ESA PSS-05-01 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to the software engineering standards

Prepared by:
ESA Board for Software Standardisation and Control (BSSC)

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: ESA PSS-05-01 Guide to the software engineering standards

2. ISSUE 3. REVISION 4. DATE 5. REASON FOR CHANGE

1 0 1991 First issue

1 1 1995 Minor updates for publication

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved, 1st February 1992
Telematics Supervisory Board

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division,
ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E2
ISSN 0379-4059

Copyright © 1995 by European Space Agency

TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION
  1.1 PURPOSE
  1.2 OVERVIEW

CHAPTER 2 THE DOCUMENT TREE
  2.1 INTRODUCTION
  2.2 THE DOCUMENT IDENTIFIER
  2.3 THE STRUCTURE OF THE DOCUMENT TREE
    2.3.1 The Level 1 Document
    2.3.2 The Level 2 Documents
    2.3.3 The Level 3 Documents

APPENDIX A GLOSSARY


CHAPTER 1
INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency, either in house or by industry.

ESA PSS-05-0 is the top-level software engineering standard and is the root of a document tree. Lower level standards that branch from this root contain advisory and optional material. The purpose of this document is to define and explain the principles and conventions of the document tree.

1.2 OVERVIEW

Chapter 2 explains the document identification scheme and the structure of the document tree.


CHAPTER 2
THE DOCUMENT TREE

2.1 INTRODUCTION

The document tree currently planned is shown in Figure 1. Further documents may be added in the future.

[Figure 1: The ESA PSS-05-0 Document Tree. Level 1: ESA PSS-05-0, the ESA Software Engineering Standards. Level 2: PSS-05-01 Guide to the Software Engineering Standards; PSS-05-02 UR Guide; PSS-05-03 SR Guide; PSS-05-04 AD Guide; PSS-05-05 DD Guide; PSS-05-06 TR Guide; PSS-05-07 OM Guide; PSS-05-08 SPM Guide; PSS-05-09 SCM Guide; PSS-05-10 SVV Guide; PSS-05-11 SQA Guide. Level 3: PSS-05-05-01 Guide to Fortran coding; PSS-05-05-02 Guide to Ada coding; PSS-05-05-03 Guide to C coding.]

2.2 THE DOCUMENT IDENTIFIER

Each document is given an identifier that follows the protocol:

ESA PSS-05[-level 2 number][-level 3 number]

PSS stands for ‘Procedures, Specifications and Standards'. The first field, ‘05', defines the subject area as ‘software engineering'.

The number of numeric fields in the identifier defines the level (e.g. ESA PSS-05-05-02 is a level 3 document). A leading zero is added to numbers in the range 1 to 9 to allow even tabulation of document lists. Numbers are allocated sequentially from 1.

The exception to this protocol is the identifier for the level 1 document ESA PSS-05-0. The second numeric field ‘-0' exists to retain compatibility with earlier PSS numbering schemes.
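The numbering rule can be made concrete with a small sketch (illustrative only, not part of the standard; the function name is invented):

    # Derive the level of a PSS document from the number of numeric
    # fields in its identifier, following the protocol above.
    def pss_level(identifier: str) -> int:
        fields = identifier.replace("ESA PSS-", "").split("-")
        if fields == ["05", "0"]:
            # Special case: the '-0' field of the level 1 document is kept
            # only for compatibility with earlier PSS numbering schemes.
            return 1
        return len(fields)

    assert pss_level("ESA PSS-05-0") == 1      # the Standards themselves
    assert pss_level("ESA PSS-05-01") == 2     # this guide
    assert pss_level("ESA PSS-05-05-02") == 3  # Guide to Ada coding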

2.3 THE STRUCTURE OF THE DOCUMENT TREE

2.3.1 The Level 1 Document

Level 1 contains one and only one document, ESA PSS-05-0, which is a mandatory standard for all ESA projects.

2.3.2 The Level 2 Documents

Level 2 documents offer guidance in carrying out the standard practices described in ESA PSS-05-0. They repeat mandatory practices stated in ESA PSS-05-0 (with cross-references) and do not define any new mandatory practices.

The first level 2 document, ESA PSS-05-01 (this document), exists to define and explain the document tree.

ESA PSS-05-0 divides the software engineering activity into two parts: the products themselves and the procedures used to make them.

The process of production is partitioned into six phases:
• User Requirements Definition;
• Software Requirements Definition;
• Architectural Design;
• Detailed Design and Production;
• Transfer;
• Operations and Maintenance.

The related level 2 documents are:

Identifier      Title
ESA PSS-05-02   Guide to the User Requirements Definition Phase
ESA PSS-05-03   Guide to the Software Requirements Definition Phase
ESA PSS-05-04   Guide to the Architectural Design Phase
ESA PSS-05-05   Guide to the Detailed Design and Production Phase
ESA PSS-05-06   Guide to the Transfer Phase
ESA PSS-05-07   Guide to the Operations and Maintenance Phase

Each document contains:
• an overview of the phase;
• guidelines on methods that can be used in the phase;
• guidelines on tools that can be used in the phase;
• instructions for documentation.

The procedures to be followed in the life cycle are governed by plans. The various types of planning envisaged are:
• project management;
• configuration management;
• verification and validation;
• quality assurance.

The related level 2 documents are:

Identifier      Title
ESA PSS-05-08   Guide to Software Project Management
ESA PSS-05-09   Guide to Software Configuration Management
ESA PSS-05-10   Guide to Software Verification and Validation
ESA PSS-05-11   Guide to Software Quality Assurance

2.3.3 The Level 3 Documents

Level 3 documents exist to:
• contain optional material;
• partition material in a higher level document into more manageable quantities.

The level 3 documents that have been defined to date are:

Identifier        Title
ESA PSS-05-05-01  Guide to Fortran coding
ESA PSS-05-05-02  Guide to Ada coding
ESA PSS-05-05-03  Guide to C coding

Level 3 documents can be made applicable. If Ada is selected as the programming language to be used for a system, for example, the project's management may choose to make ESA PSS-05-05-02, ‘Guide to Ada Coding', applicable.

APPENDIX A
GLOSSARY

AD    Architectural Design
BSSC  Board for Software Standardisation and Control
DD    Detailed Design and Production
ESA   European Space Agency
OM    Operations and Maintenance
PSS   Procedures, Specifications and Standards
SA    Structured Analysis
SCM   Software Configuration Management
SPM   Software Project Management
SQA   Software Quality Assurance
SR    Software Requirements
SVV   Software Verification and Validation
TR    Transfer
UR    User Requirements


ESA PSS-05-02 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to the user requirements definition phase

Prepared by:
ESA Board for Software Standardisation and Control (BSSC)

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: ESA PSS-05-02 Guide to the user requirements definition phase

2. ISSUE 3. REVISION 4. DATE 5. REASON FOR CHANGE

1 0 1991 First issue

1 1 1995 Minor updates for public version

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved, 1st February 1992
Telematics Supervisory Board

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division,
ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E0
ISSN 0379-4059

Copyright © 1995 by European Space Agency

TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION
  1.1 PURPOSE
  1.2 OVERVIEW

CHAPTER 2 THE USER REQUIREMENTS DEFINITION PHASE
  2.1 INTRODUCTION
  2.2 CAPTURE OF USER REQUIREMENTS
  2.3 DETERMINATION OF OPERATIONAL ENVIRONMENT
  2.4 SPECIFICATION OF USER REQUIREMENTS
    2.4.1 Capability requirements
      2.4.1.1 Capacity
      2.4.1.2 Speed
      2.4.1.3 Accuracy
    2.4.2 Constraint requirements
      2.4.2.1 Communications interfaces
      2.4.2.2 Hardware interfaces
      2.4.2.3 Software interfaces
      2.4.2.4 Human-Computer Interaction
      2.4.2.5 Adaptability
      2.4.2.6 Availability
      2.4.2.7 Portability
      2.4.2.8 Security
      2.4.2.9 Safety
      2.4.2.10 Standards
      2.4.2.11 Resources
      2.4.2.12 Timescales
  2.5 ACCEPTANCE TEST PLANNING
  2.6 PLANNING THE SOFTWARE REQUIREMENTS DEFINITION PHASE
  2.7 THE USER REQUIREMENTS REVIEW

CHAPTER 3 METHODS FOR USER REQUIREMENTS DEFINITION
  3.1 INTRODUCTION
  3.2 METHODS FOR USER REQUIREMENTS CAPTURE
    3.2.1 Interviews and surveys
    3.2.2 Studies of existing software
    3.2.3 Study of system requirements
    3.2.4 Feasibility studies
    3.2.5 Prototyping
  3.3 METHODS FOR REQUIREMENTS SPECIFICATION
    3.3.1 Natural language
    3.3.2 Mathematical formalism
    3.3.3 Structured English
    3.3.4 Tables
    3.3.5 System block diagrams
    3.3.6 Timelines
    3.3.7 Context diagrams

CHAPTER 4 TOOLS FOR USER REQUIREMENTS DEFINITION
  4.1 INTRODUCTION
  4.2 TOOLS FOR USER REQUIREMENTS CAPTURE
  4.3 TOOLS FOR USER REQUIREMENTS SPECIFICATION
    4.3.1 User requirements management
    4.3.2 Document production

CHAPTER 5 THE USER REQUIREMENTS DOCUMENT
  5.1 INTRODUCTION
  5.2 STYLE
    5.2.1 Clarity
    5.2.2 Consistency
    5.2.3 Modifiability
  5.3 EVOLUTION
  5.4 RESPONSIBILITY
  5.5 MEDIUM
  5.6 CONTENT

CHAPTER 6 LIFE CYCLE MANAGEMENT ACTIVITIES
  6.1 INTRODUCTION
  6.2 PROJECT MANAGEMENT PLAN FOR THE SR PHASE
  6.3 CONFIGURATION MANAGEMENT PLAN FOR THE SR PHASE
  6.4 VERIFICATION AND VALIDATION PLAN FOR THE SR PHASE
  6.5 QUALITY ASSURANCE PLAN FOR THE SR PHASE
  6.6 ACCEPTANCE TEST PLANS

APPENDIX A GLOSSARY
APPENDIX B REFERENCES
APPENDIX C UR PHASE MANDATORY PRACTICES
APPENDIX D INDEX

PREFACE

This document is one of a series of guides to software engineering produced by the Board for Software Standardisation and Control (BSSC), of the European Space Agency. The guides contain advisory material for software developers conforming to ESA's Software Engineering Standards, ESA PSS-05-0. They have been compiled from discussions with software engineers, research of the software engineering literature, and experience gained from the application of the Software Engineering Standards in projects.

Levels one and two of the document tree at the time of writing are shown in Figure 1. This guide, identified by the shaded box, provides guidance about implementing the mandatory requirements for the user requirements definition phase described in the top level document ESA PSS-05-0.

[Figure 1: ESA PSS-05-0 document tree, levels 1 and 2. Level 1: ESA PSS-05-0, the ESA Software Engineering Standards. Level 2: PSS-05-01 Guide to the Software Engineering Standards; PSS-05-02 UR Guide (this guide); PSS-05-03 SR Guide; PSS-05-04 AD Guide; PSS-05-05 DD Guide; PSS-05-06 TR Guide; PSS-05-07 OM Guide; PSS-05-08 SPM Guide; PSS-05-09 SCM Guide; PSS-05-10 SVV Guide; PSS-05-11 SQA Guide.]

The Guide to the Software Engineering Standards, ESA PSS-05-01, contains further information about the document tree. The interested reader should consult this guide for current information about the ESA PSS-05-0 standards and guides.

The following past and present BSSC members have contributed to the production of this guide: Carlo Mazza (chairman), Gianfranco Alvisi, Michael Jones, Bryan Melton, Daniel de Pablo and Adriaan Scheffer.

The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and Guides, and all those software engineers in ESA and Industry who have made contributions.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat
Attention of Mr C Mazza
ESOC
Robert Bosch Strasse 5
D-64293 Darmstadt
Germany

BSSC/ESTEC Secretariat
Attention of Mr B Melton
ESTEC
Postbus 299
NL-2200 AG Noordwijk
The Netherlands

CHAPTER 1
INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA), either in house or by industry [Ref 1].

ESA PSS-05-0 defines a preliminary phase to the software development life cycle called the ‘User Requirements Definition Phase' (UR phase). Activities and products are examined in the ‘UR review' (UR/R) at the end of the phase.

The UR phase can be called the ‘problem definition phase' of the life cycle. The phase refines an idea about a task to be performed using computing equipment into a definition of what is expected from the computer system.

This document provides a definition of what user requirements are, suggests how they can be captured and gives guidelines on how they should be stated in a URD. This guide should be read by all active participants in the user requirements phase, i.e. initiators, user representatives, software project managers and authors and reviewers of the URD.

1.2 OVERVIEW

Chapter 2 discusses the UR phase. Chapters 3 and 4 discuss methods and tools for user requirements definition. Chapter 5 describes how to write the URD, in particular how to fill out the document template. Chapter 6 summarises the life cycle management activities, which are discussed at greater length in other guides.

All the mandatory practices in ESA PSS-05-0 relevant to the UR phase are repeated in this document. The identifier of the practice is added in parentheses to mark a repetition. This document contains no new mandatory practices.


CHAPTER 2
THE USER REQUIREMENTS DEFINITION PHASE

2.1 INTRODUCTION

The UR phase can be called the ‘concept' or ‘problem definition' phase of the ESA PSS-05-0 life cycle. User requirements often follow directly from a spontaneous idea or thought. Even so, wide agreement and understanding of the user requirements is more likely if these guidelines are applied. The definition of user requirements is an iterative process.

User requirements are documented in the User Requirements Document (URD). The URD gives the user's view of the problem, not the developer's. A URD may have to go through several revisions before it is acceptable to everyone.

The main outputs of the UR phase are the:
• User Requirements Document (URD);
• Software Project Management Plan for the SR phase (SPMP/SR);
• Software Configuration Management Plan for the SR phase (SCMP/SR);
• Software Verification and Validation Plan for the SR phase (SVVP/SR);
• Software Quality Assurance Plan for the SR phase (SQAP/SR);
• Acceptance Test Plan (SVVP/AT).

2.2 CAPTURE OF USER REQUIREMENTS

The capture of user requirements is the process of gathering information about user needs. ESA PSS-05-0 recommends that:
• user requirements should be clarified through criticism and experience of existing software and prototypes;
• wide agreement should be established through interviews and surveys;
• knowledge and experience of the potential development organisations should be used to help decide on implementation feasibility, and perhaps to build prototypes.

Above all, user requirements should be realistic [Ref 5]. Realistic user requirements are:
• clear;
• verifiable;
• complete;
• accurate;
• feasible.

Clarity and verifiability help ensure that delivered systems will meet user requirements. Completeness and accuracy imply that the URD states the user's real needs. A URD is inaccurate if it requests something that users do not need, for example a superfluous capability or an unnecessary design constraint (see Section 2.4).

Realistic user requirements must be feasible. If the resources and timescales available for implementing a requirement are insufficient, it may be unrealistic to put it in a URD.

When a system is to replace an existing one, the best way to make the user requirements realistic is to describe the current way of doing things and then define the user requirements in terms of the changes needed. The description of the current system should use the concrete, physical terms familiar to the user [Ref 7, 8, 9].

Methods for capturing user requirements are discussed in Chapter 3.

2.3 DETERMINATION OF OPERATIONAL ENVIRONMENT

A clear description of the real world that the software will operate in should be built up, as the user requirements are captured. Chapter 3 describes several user requirements definition methods that can be used. This description of the operational environment must clearly establish the problem context.

In a system development, each subsystem will have interfaces to other, external, systems. The nature of these exchanges with external systems should be specified and controlled from the start of the project. The information may reside in an Interface Control Document (ICD), or in the design documentation of the external system.

The roles and responsibilities of the users and operators of the software should be established by defining the:
• characteristics of each group (e.g. experience, qualifications);
• operations they perform (e.g. the user of the data may not operate the software).

2.4 SPECIFICATION OF USER REQUIREMENTS

The specification of user requirements is the process of organising information about user needs and expressing them in a document.

A requirement is a ‘condition or capability needed by a user to solve a problem or achieve an objective' [Ref 2]. This definition leads to two principal categories of requirements: ‘capability requirements' and ‘constraint requirements'. These categories and their subcategories are described in detail in this chapter. Unless otherwise indicated, all types of requirements can be stated in the User Requirements Document.

2.4.1 Capability requirements

Capability requirements describe the process to be supported by software. Simply stated, they describe ‘what' the users want to do.

A capability requirement should define an operation, or sequence of related operations, that the software will be able to perform. If the sequence contains more than approximately five related operations, the capability requirement should be split.

The operations should be organised to describe the overall process from start to finish. Where there are many operations to describe, it is recommended that they are grouped hierarchically to help manage the complexity.

Operations may be routine (e.g. normal tasks) or non-routine (e.g. error handling, interruptions). Non-routine operations may be grouped separately from those related to the normal processing.

In the Software Requirements Definition Phase, capability requirements will be analysed to produce a set of functional requirements. If duplication of capability requirements occurs, the analyst may be able to replace them with a single functional requirement. A single function may support a process at many different times; therefore a function can map to many capability requirements.

Quantitative statements that specify performance and accuracy attributes should form part of the specification of capability. This means that a capability requirement should be qualified with values of:
• capacity;
• speed;
• accuracy.

The performance attribute is the combination of the capacity and speed attributes.

2.4.1.1 Capacity

The capacity attribute states ‘how much' of a capability is needed at any moment in time. Each capability requirement should be accompanied by a quantitative measure of the capacity required, for example the:
• number of users to be supported;
• number of terminals to be supported;
• number of satellites that can be controlled simultaneously;
• amount of data to be stored.

2.4.1.2 Speed

The speed attribute states how fast the complete operation, or sequence of operations, is to be performed. Each capability requirement should be accompanied by a quantitative measure of the speed required. There are various ways to do this, for example the:
• number of operations done per unit time interval;
• time taken to perform an operation.

For example, ‘95% of the transactions shall be processed in less than 1 second' is acceptable, whilst ‘95% of the transactions will be done as soon as possible' is not.
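Quantified speed requirements of this kind are directly checkable. The fragment below is a minimal sketch (the measurements are invented sample data, not taken from this guide) of how the percentile form of the requirement could be verified against measured transaction times:

    # Check: "95% of the transactions shall be processed in less than 1 second".
    # The timings below are invented sample data, in seconds.
    transaction_times = [0.31, 0.40, 0.45, 0.52, 0.58, 0.63, 0.70, 0.74, 0.81,
                         0.86, 0.90, 0.93, 0.97, 0.98, 0.99, 0.85, 0.66, 0.55,
                         0.72, 1.20]

    fraction = sum(t < 1.0 for t in transaction_times) / len(transaction_times)
    print(f"{fraction:.0%} of transactions within 1 s "
          f"-> {'PASS' if fraction >= 0.95 else 'FAIL'}")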

Note that a system may react quickly to a command but take quite a long time to complete the operations requested. Such ‘response' requirements should be stated as HCI requirements.

2.4.1.3 Accuracy

The accuracy of an operation is measured by the difference between what is intended and what happens when it is carried out.

Examples are:
• ‘the accuracy of accounting reports shall be one accounting unit';
• ‘the program shall predict the satellite's altitude to within 10 metres, seven days in advance'.

Accuracy attributes should take account of both systematic errors and random errors.

2.4.2 Constraint requirements

Constraint requirements place restrictions on how the user requirements are to be met. The user may place constraints on the software related to interfaces, quality, resources and timescales.

Users may constrain how communication is done with other systems, what hardware is to be used, what software it has to be compatible with, and how it must interact with human operators. These are all interface constraints.

An interface is a shared boundary between two systems; it may be defined in terms of what is exchanged across the boundary.

Interfaces are important kinds of constraints. The user may define external interfaces (i.e. state how interactions with other systems must be done) but should leave the developers to define the internal interfaces (i.e. to state how software components will interact with each other).

Users may constrain the quality required of the final product. Typical quality characteristics are: adaptability, availability, portability, security and safety.

2.4.2.1 Communications interfaces

A communications interface requirement may specify the networks and network protocols to be used. Performance attributes of the interface may be specified (e.g. data rate).

The ISO reference model for Open Systems Interconnection, with its seven layers of abstraction, can be used for describing communications interfaces. This means that a communications interface requirement should use terminology consistent with the model. Communications interface requirements should avoid mixing the layers of abstraction.

2.4.2.2 Hardware interfaces

A hardware interface requirement specifies all or part of the computer hardware the software is to execute on. This may be done by stating the make and model of the device, physical limitations (e.g. size, weight), performance (e.g. speed, memory), qualifications (e.g. project approved, space qualified) and also perhaps whether any hardware selected has to be derated (e.g. for operation at altitude). Environmental considerations that affect the selection of hardware may be stated (e.g. humidity, temperature and pressure).

2.4.2.3 Software interfaces

A software interface requirement specifies whether the software is to be compatible with other software (e.g. other applications, compilers, operating systems, programming languages and database management systems).

2.4.2.4 Human-Computer Interaction

A Human-Computer Interaction (HCI) requirement may specify any aspect of the user interface. This may include a statement about style (e.g. command language, menu system, icons), format (e.g. report content and layout), messages (e.g. brief, exhaustive) and responsiveness (e.g. time taken to respond to command). The hardware at the user interface (e.g. colour display and mouse) may be included either as an HCI requirement or as a hardware interface requirement.

2.4.2.5 Adaptability

Adaptability measures how easily a system copes with requirements changes. Adaptable (or flexible) systems are likely to live longer, although the extra design work needed may be extensive, especially for optimising modularity. An example of an adaptability requirement is: ‘it shall be possible to add new commands without retesting existing commands'.

In the operations and maintenance phase the software may undergo continuous adaptation as the user requirements are modified by experience.

When considering adaptability, note that any change involves some risk, and changing reliable parts of the system may not be acceptable.

2.4.2.6 Availability

Availability measures the ability of a system to be used during its intended periods of operation. Availability requirements may specify:
• mean and minimum capacity available (e.g. all terminals);
• start and end times of availability (e.g. from 0900 to 1730 daily);
• time period for averaging availability (e.g. 1 year).

Examples of availability requirements are:
• ‘the user shall be provided with 98% average availability over 1 year during working hours and never less than 50% of working hours in any one week';
• ‘all essential capabilities shall be at least 98% available in any 48 hour period and at least 75% available in every 3 hour period'.

When a system is unavailable, some, or even all, of its capabilities cannot be used. A loss of capability is called a ‘failure' and is caused by one or more ‘faults'. The average time between the occurrence of faults internal to the software (i.e. ‘bugs') measures the ‘reliability' of the software. The average time taken to fix such faults measures its ‘maintainability'. A system may also become unavailable due to external factors (e.g. loss of input service).

Users only need to state their availability requirements. The availability requirements are decomposed into specific reliability and maintainability requirements in the SR phase.
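These quantities are commonly related by the steady-state availability formula from reliability engineering (a standard model quoted here for illustration; it is not mandated by ESA PSS-05-0), where MTBF is the mean time between failures and MTTR the mean time to repair:

    A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}}

For example, an MTBF of 720 hours combined with an MTTR of 8 hours gives A = 720/728, i.e. about 98.9% availability.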

2.4.2.7 Portability

Software portability is measured by the ease with which it can be moved from one environment to another. Portable software tends to be long lived, but more code may have to be written and performance requirements may be more difficult to meet. An example of a portability requirement is: ‘the software shall be portable between environments X and Y'.

Portability can be measured in terms of the number of lines of code and/or the number of modules that do not have to be changed to port the software from one computer to another. Either absolute or relative measurements can be used.
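As an illustration of a relative measurement (an example metric, not one defined by the standard), the fraction of source lines that survive a port unchanged can be written as:

    P = \frac{\mathrm{LOC}_{\mathrm{total}} - \mathrm{LOC}_{\mathrm{changed}}}{\mathrm{LOC}_{\mathrm{total}}}

so that software needing changes to 500 of 10 000 lines when moved to a new environment scores P = 0.95.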

If migration to another hardware base or operating system is intended, then any requirements to run with different hardware and software interfaces should be stated as portability requirements. New interfaces should be described (e.g. name the new operating system or computer hardware).

2.4.2.8 Security

A system may need to be secured against threats to its confidentiality, integrity and availability. For example, a user may request that unauthorised users be unable to use the system, or that no single event such as a fire should cause the loss of more than 1 week's information. The user should describe threats that the system needs to be protected against, e.g. virus intrusions, hackers, fires, computer breakdowns.

The security of a system can be described in terms of the ownership of, and rights of access to, the capabilities of the system.

A secure system protects users from their own errors as well as the malicious interference, or illegal activities, of unauthorised users.

2.4.2.9 Safety

The consequences of software failure should be made clear to developers. Safety requirements define the needs of users to be protected against potential problems such as hardware or software faults. They may define scenarios that the system should handle safely (e.g. ‘the system should ensure that no data is lost when a power failure occurs').

2.4.2.10 Standards

Standards requirements normally reference the applicable documents that define the standard.

Two kinds of standards can be specified: process standards and product standards. Examples of product standards are export file formats and legal report formats. Examples of process standards are product assurance standards and accounting procedures to be followed. Adherence to process standards should be specified in the Software Project Management Plan.

A standards requirement may specify the methods that are to be employed by the developers in subsequent phases. Such methods must be compatible with the life cycle defined in ESA PSS-05-0.

2.4.2.11 Resources

The resources available for producing and operating the software are a constraint on the design. If this information is available then it should be stated in the Software Project Management Plan in terms of one or more of financial, manpower and material limits. As with any other product, the quality and sophistication of a software product are limited by the resources that are put into building it.

Resource requirements may include specifications of the computer resources available (e.g. main memory). They may define the minimum hardware that the system must run on (e.g. a 486 PC with 4 Mbytes of memory). Care should be taken to include only the necessary resource constraints.

2.4.2.12 Timescales

A constraint on the design of the software may be the acceptable timescales for its development and production. Requirements for the achievement of specific life cycle milestones may be stated in the Software Project Management Plan.

2.5 ACCEPTANCE TEST PLANNING

Validation confirms whether the user requirements are satisfied when the software is delivered. This is done by performing acceptance tests in the Transfer Phase.

Acceptance Test Plans must be generated in the UR phase and documented in the Acceptance Test section of the Software Verification and Validation Plan (SVVP/AT). The Acceptance Test Plan should describe the scope, approach and resources required for the acceptance tests, and take account of the user requirements. See Chapter 6.

2.6 PLANNING THE SOFTWARE REQUIREMENTS DEFINITION PHASE

Plans of SR phase activities must be drawn up in the UR phase by the developer. Planning of the SR phase is discussed in Chapter 6. Planning should cover project management, configuration management, verification, validation and quality assurance. Outputs are the:
• Software Project Management Plan for the SR phase (SPMP/SR);
• Software Configuration Management Plan for the SR phase (SCMP/SR);
• Software Verification and Validation Plan for the SR phase (SVVP/SR);
• Software Quality Assurance Plan for the SR phase (SQAP/SR).

2.7 THE USER REQUIREMENTS REVIEW

Producing the URD and the SVVP/AT is an iterative process. The initiator should organise internal reviews of a document before its formal review.

The outputs of the UR phase must be formally reviewed during the User Requirements Review (UR08). This should be a technical review. The recommended procedure is described in ESA PSS-05-10, and is derived from the IEEE standard for Technical Reviews [Ref 4].

Normally, only the URD and the Acceptance Test Plan undergo the full technical review procedure involving users, developers, management and quality assurance staff. The Software Project Management Plan (SPMP/SR), Software Configuration Management Plan (SCMP/SR), Software Verification and Validation Plan (SVVP/SR), and Software Quality Assurance Plan (SQAP/SR) are usually reviewed by management and quality assurance staff only.

The objective of the UR/R is to verify that:
• the URD states user requirements clearly and completely, and that a general description of the process the user expects to be supported is present;
• the SVVP/AT is an adequate plan for validating the software in the TR phase.

The UR/R should conclude with a statement about the project's readiness to proceed.

CHAPTER 3
METHODS FOR USER REQUIREMENTS DEFINITION

3.1 INTRODUCTION

This chapter discusses methods for user requirements capture and specification in current use. Methods can be combined to suit the needs of a particular project.

3.2 METHODS FOR USER REQUIREMENTS CAPTURE

While user requirements ultimately come from an original idea, one or more of the methods described below can be used to stimulate the creative process and record its output.

3.2.1 Interviews and surveys

Interviews should be structured to ensure that all issues are covered. When it is not practical to interview all the potential users, a representative sample should be selected and interviewed; this process is called a survey. Interviews and surveys can be useful for ensuring that:
• the user requirements are complete;
• there is wide agreement about the user requirements.

3.2.2 Studies of existing software

New software is often written to replace existing software. An investigation of the good and bad features of what exists can identify requirements for what is to be built. Examination of user manuals, requirements documentation and change proposals can be especially helpful.

3.2.3 Study of system requirements

If software is part of a larger system, many of the user requirements can be derived from the System Requirements Document.


3.2.4 Feasibility studies

A feasibility study is the analysis and design of the principal features of a system. The amount of detail in the design will not normally allow its implementation, but may show whether implementation is possible.

3.2.5 Prototyping

A prototype is a ‘concrete executable model of selected aspects of a proposed system' [Ref 5]. If requirements are unclear or incomplete, it can be useful to develop a prototype based on tentative requirements to explore what the user requirements really are. This is called ‘exploratory prototyping'. Hands-on experience can be an excellent way of deciding what is really wanted.

3.3 METHODS FOR REQUIREMENTS SPECIFICATION

3.3.1 Natural language

The obvious way to express a requirement is to use natural language (e.g. English). Natural language is rich and accessible, but inconsistency and ambiguity are more likely. For example, the statement:

‘The database will contain an address'

might be read as any of:
‘There will be one and only one address'
‘Some part of the database will be designated as an address'
‘There will be at least one address in the database'.

3.3.2 Mathematical formalism

Mathematical formulae should be described or referenced in the URD where they clarify the statement of a requirement. All symbols used in an expression should be defined or referenced.

3.3.3 Structured English

Structured English is a specification language that makes use of a limited vocabulary and a limited syntax [Ref 7]. The vocabulary of Structured English consists only of:
• imperative English language verbs;
• terms defined in a glossary;
• certain reserved words for logic formulation.

The syntax of a Structured English statement is limited to these possibilities:
• simple declarative sentence;
• closed end decision construct;
• closed end repetition construct.

Structured English is normally used to describe the basic processes of a system and is suitable for expressing capability requirements. Examples are:

Sequence:
    GET RAW DATA
    REMOVE INSTRUMENT EFFECTS
    CALIBRATE CORRECTED DATA

Condition:
    IF SAMPLE IS OF NOMINAL QUALITY
    THEN CALIBRATE SAMPLE
    ELSE STORE BAD SAMPLE

Repetition:
    FOR EACH SAMPLE
        GET POINTING DIRECTION AT TIME OF SAMPLE
        STORE POINTING DIRECTION WITH SAMPLE

Formalising the English structure may allow automated processing of requirements (e.g. automated checking, analysis, transformation and display) and makes it easier to define acceptance tests.
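As a simple illustration (not part of the standard), the restricted vocabulary makes such automated checking possible. In the Python sketch below, the verb list, reserved words and requirement statements are all hypothetical:

    # A minimal sketch: check that each Structured English statement starts
    # with an agreed imperative verb or a reserved word for logic formulation.
    ALLOWED_VERBS = {"GET", "REMOVE", "CALIBRATE", "STORE"}  # from the glossary (hypothetical)
    RESERVED = {"IF", "THEN", "ELSE", "FOR", "EACH"}         # logic formulation words

    def check_statement(statement):
        """Return None if the statement is acceptable, else a diagnostic."""
        first_word = statement.split()[0].upper()
        if first_word in ALLOWED_VERBS or first_word in RESERVED:
            return None
        return "not an agreed verb or reserved word: " + first_word

    for line in ["GET RAW DATA", "PROCESS IMAGE"]:
        problem = check_statement(line)
        if problem:
            print(line, "->", problem)   # flags 'PROCESS', absent from the vocabulary

A check of this kind only enforces the vocabulary and syntax; the meaning of each statement must still be reviewed by people.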

3.3.4 Tables

Tables are an effective method for describing requirements completely and concisely. Used extensively in later phases, they can summarise relationships more effectively than a plain text description.

3.3.5 System block diagrams

Block diagrams are the traditional way of depicting the processing required. They can also demonstrate the context the software operates in when it is part of a larger system.


3.3.6 Timelines

Timelines can describe sequences of operations that the software must perform, especially if there is a real-time aspect or processing schedule. They convey a sense of interval more powerfully than a text description.

3.3.7 Context diagrams

A context diagram contains one bubble, representing the system, and dataflow arrows, showing the inputs and outputs. Context diagrams show external interfaces.

(Figure: an example context diagram of a Ground System, with external flows such as TC, TM, Bad TM, Images, Targets and Selection Criteria.)


CHAPTER 4 TOOLS FOR USER REQUIREMENTS DEFINITION

4.1 INTRODUCTION

This chapter discusses tools for user requirements capture and specification. Tools can be combined to suit the needs of a particular project.

4.2 TOOLS FOR USER REQUIREMENTS CAPTURE

The questionnaire is the primary tool of a survey. To get useful data, careful consideration should be given to its contents and presentation.

A feasibility study may be performed using CASE tools for analysis and design. Similarly, prototyping of code may employ tools used for detailed design and production.

Capturing user requirements from studies of existing software may require building models that describe what the existing software does [Ref 7, 8, 9]. CASE tools are available for the construction of such models.

When the software is being developed as part of a system development, any tools used for system requirements analysis may also prove useful in identifying the user requirements.

4.3 TOOLS FOR USER REQUIREMENTS SPECIFICATION

4.3.1 User requirements management

Tools for managing user requirements information should support one or more of the following functions:
• insertion of new requirements;
• modification of existing requirements;
• deletion of requirements;
• storage of attributes (e.g. identifier) with the text;
• searching for requirements attributes and text strings;
• cross-referencing;
• change history recording;
• access control;
• display;
• printing, in a variety of formats.

Database Management Systems (DBMS), available on a variety of hardware platforms (e.g. PC, minicomputer, mainframe), provide many of these functions. For large systems, a requirements DBMS becomes essential.

The ability to export requirements data to the word processor used for URD production is essential for preserving consistency.
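As an illustration only (no particular tool or DBMS is implied), the functions listed above amount to maintaining a small database of attributed requirement records. A minimal Python sketch, with hypothetical identifiers and attribute names:

    # A minimal sketch of a requirements store supporting insertion,
    # modification with change-history recording, and text searching.
    import datetime

    class RequirementsStore:
        def __init__(self):
            self.records = {}   # identifier -> dictionary of attributes
            self.history = []   # chronological change records

        def insert(self, identifier, text, **attributes):
            self.records[identifier] = dict(text=text, **attributes)
            self._log("insert", identifier)

        def modify(self, identifier, **changes):
            self.records[identifier].update(changes)
            self._log("modify", identifier)

        def search(self, substring):
            return [i for i, r in self.records.items() if substring in r["text"]]

        def _log(self, action, identifier):
            self.history.append((datetime.datetime.now(), action, identifier))

    store = RequirementsStore()
    store.insert("UR-001", "The database will contain at least one address",
                 source="user group A", essential=True)
    print(store.search("address"))   # -> ['UR-001']

Exporting the stored text and attributes to the word processor used for the URD would then keep the document and the database consistent.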

4.3.2 Document production

A word processor or text processor should be used for producing a document. Tools for the creation of paragraphs, sections, headers, footers, tables of contents and indexes all facilitate the production of a document. A spell checker is desirable. An outliner may be found useful for the creation of sub-headings, for viewing the document at different levels of detail and for rearranging the document. The ability to handle diagrams is very important.

Documents invariably go through many drafts as they are created, reviewed and modified. Revised drafts should include change bars. Document comparison programs, which can mark changed text automatically, are invaluable for easing the review process.

Tools for communal preparation of documents are beginning to be available, allowing many authors to comment on and add to a single document in a controlled manner.


CHAPTER 5 THE USER REQUIREMENTS DOCUMENT

5.1 INTRODUCTION

The URD is a mandatory output of the UR phase (UR10) and must always be produced before the software project is started (UR11). The URD must:
• provide a general description of what the user wants to perform with the software system (UR12);
• contain all the known user requirements (UR13);
• describe the operations the user wants to perform with the software system (UR14);
• define all the constraints that the user wishes to impose on any solution (UR15);
• describe the external interfaces to the software system or reference them in ICDs that exist or are to be written (UR16).

The size and content of the URD should reflect the complexity of the problem and the degree of expertise and understanding shared by the initiator, users, URD author and software developer.

The URD needs to state the problem as completely and accurately as possible. The cost of changing the user requirements increases rapidly as the project proceeds through the life cycle.

When software is transferred to users after development, acceptance tests are held to determine whether it meets the user requirements. The URD should be detailed enough to allow the definition of acceptance tests.

The URD should be a balanced statement of the problem and should avoid over-constraining the solution. If the software described in the URD is part of a larger system (i.e. it is a subsystem), then the URD may replace the descriptive information with references to higher-level documents. The purpose of the software, however, should always be clear from the URD.

ESA PSS-05-0 defines the minimum required documents for a software project, and the URD has a definite role to play in this documentation scheme. URD authors should not go beyond the bounds of that role.


The URD should not:
• contain an exhaustive analysis of the requirements on the software (this is done in the SR phase);
• define any design aspects (this is done in the AD and DD phases);
• cover project management aspects (which form part of the SPMP/SR).

If definition of design aspects is unavoidable, then such definitions should be categorised as constraint requirements.

The URD should define needs accurately and leave the maximum scope for the software engineer to choose the most efficient solution.

5.2 STYLE

The style of a URD should be plain and concise. The URD should be clear, consistent and modifiable. Wherever possible, requirements should be stated in quantitative terms to increase their verifiability.

5.2.1 Clarity

A URD is ‘clear' if each requirement is unambiguous and understandable to project participants. A requirement is unambiguous if it has only one interpretation. To be understandable, the language used in a URD should be shared by all project participants and should be as simple as possible.

Each requirement should be stated in a single sentence. Justifications and explanations of a requirement should be clearly separated from the requirement itself.

Clarity is enhanced by grouping related requirements together. The capability requirements in a group should be structured to reflect any temporal or causal relationships between them. Groups containing more than about ten requirements should be broken down into subgroups. Subgroups should be organised hierarchically. Structuring the user requirements is one of the most important ways of making them understandable.

5.2.2 Consistency

A URD is consistent if no requirements conflict. Using different terms for what is really the same thing, or specifying two incompatible qualities, are examples of lack of consistency.


Where a term used in a particular context could have multiple meanings, a single meaning should be defined in a glossary, and only that meaning should be used throughout.

5.2.3 Modifiability

A URD is modifiable if any necessary requirements changes can be documented easily, completely and consistently.

A URD contains redundancy if there are duplicated or overlapping requirements. Redundancy itself is not an error, and it can help to make a URD more readable, but a problem arises when the URD is updated. If a requirement is stated in two places, and a change is made in only one place, the URD becomes inconsistent. When redundancy or overlapping is necessary, the URD should include cross-references to make it modifiable.

The removal of redundancy can lead to errors. Consider the situation of two similar requirements from separate users being combined. A change of mind on the part of one user may result in the removal of the combined requirement. The requirement of the other user has been lost, and this is an error. Source attributes should be retained when merging requirements, to show who needs to be consulted before an update is made.

5.3 EVOLUTION

Changes to the URD are the user's responsibility. The URD should be put under change control by the initiator as soon as it is first issued. The document change control procedure described in ESA PSS-05-0 Issue 2, Part 2, Section 3.2.3.2.1 is recommended. This requires that a change history be kept.

New user requirements may be added, and existing user requirements may be modified or deleted. If anyone wants to change the user requirements after the UR phase, the users should update the URD and resubmit it to the UR/R board for approval. Note that in the OM phase, the Software Review Board (SRB) replaces the UR/R board.

The initiator of the project should monitor the trend in the occurrence of new user requirements. An upward trend signals that the software is unlikely to be successful.


5.4 RESPONSIBILITY

The definition of the user requirements must be the responsibility of the user (UR01). This means that the URD must be written by the users, or by someone appointed by them. The expertise of software engineers, hardware engineers and operations personnel should be used to help define and review the user requirements.

Typically the capability requirements are generated by the people who will use the system, while the constraint requirements may come from hardware, software, communications or quality assurance experts. Human-computer interfaces are normally best defined by a joint effort of users and developers, ideally through prototypes.

In a system development, some of the user requirements for the software come from the System Requirements Document. The preferred approach is to refer to system requirements in the URD. Alternatively, relevant requirements can be extracted from the System Requirements Document, perhaps reformulated, and then inserted in the URD. This approach may pose problems from a change control point of view, but may also be the only possible alternative when the system requirements are not clearly identifiable, or when the requirements applicable to the software components are embedded in other requirements.

It should never be assumed that all the user requirements can be derived from system requirements. Other techniques for capturing user requirements should always be considered (see Chapter 3). In some cases there may be multiple user groups, each with their own set of requirements. A single URD with sections compiled by the different groups, or multiple URDs, one for each group, are both possible ways of documenting the user requirements.

In summary, there is no single scheme for producing a URD. Nevertheless:
• responsibilities should be clearly defined before URD production is started;
• the real users of the system are responsible for determining the capability requirements (UR01);
• the software engineers to be in charge of the development should take part in the URD creation process, so that they can advise the users on the real practicalities of requirements, point out the potential of existing software and technology, and possibly develop prototypes.


The roles and responsibilities of the various people must be clarified and accepted by everybody involved before the process starts. Whatever the organisation, users should avoid dictating solutions, while developers should avoid dictating capabilities.

5.5 MEDIUM

It is usually assumed that the URD is a paper document. There is no reason why the URD should not be distributed electronically to participants with the necessary equipment.

5.6 CONTENT

The URD should be compiled according to the table of contents provided in Appendix C of ESA PSS-05-0. This table of contents is derived from ANSI/IEEE Std 830-1984, ‘Software Requirements Specifications' [Ref 3].

Section 1 should briefly describe the purpose and scope of the software and provide an overview of the rest of the document. Section 2 should provide a general description of the world the software operates in. While rigour is not necessary, a clear physical picture should emerge. Section 3 should provide the formal requirements, upon which the acceptability of the software will be judged. Large URDs (forty pages or more) should contain an index.

References should be given where appropriate. A URD should not refer to documents that follow it in the ESA PSS-05-0 life cycle. A URD should contain no TBDs by the time of the User Requirements Review.

ESA PSS-05-0 recommends the following table of contents for a URD:

Service Information:

a - Abstract
b - Table of Contents
c - Document Status Sheet
d - Document Change Records made since last issue


1 INTRODUCTION
1.1 Purpose
1.2 Scope
1.3 Definitions, acronyms and abbreviations
1.4 References
1.5 Overview

2 GENERAL DESCRIPTION
2.1 Product perspective
2.2 General capabilities [1]
2.3 General constraints
2.4 User characteristics
2.5 Operational environment
2.6 Assumptions and dependencies

3 SPECIFIC REQUIREMENTS
3.1 Capability requirements
3.2 Constraint requirements

Material unsuitable for the above contents list should be inserted in additional appendices. If there is no material for a section, then the phrase ‘Not Applicable' should be inserted and the section numbering preserved.

[1] This section was inserted after ESA PSS-05-0 Issue 2 was published. Other General Description sections have been reordered.

5.6.1 URD/1 INTRODUCTION

This section should provide an overview of the entire document and a description of the scope of the software.

5.6.1.1 URD/1.1 Purpose (of the document)

This section should:
(1) define the purpose of the particular URD;
(2) specify the intended readership of the URD.

5.6.1.2 URD/1.2 Scope (of the software)

This section should:
(1) identify the software product(s) to be produced, by name;
(2) explain what the proposed software will do (and will not do, if necessary);
(3) describe relevant benefits, objectives and goals as precisely as possible;
(4) be consistent with similar statements in higher-level specifications, if they exist.

5.6.1.3 URD/1.3 Definitions, acronyms and abbreviations

This section should provide the definitions of all terms, acronyms and abbreviations, or refer to other documents where the definitions can be found.

5.6.1.4 URD/1.4 References

This section should provide a complete list of all the applicable and reference documents, identified by title, author and date. Each document should be marked as applicable or reference. If appropriate, report number, journal name and publishing organisation should be included.

5.6.1.5 URD/1.5 Overview

This section should:
(1) describe what the rest of the URD contains;
(2) explain how the URD is organised.

5.6.2 URD/2 GENERAL DESCRIPTION

This chapter should describe the general factors that affect the product and its requirements. This chapter does not state specific requirements but makes those requirements easier to understand.

5.6.2.1 URD/2.1 Product perspective

This section puts the product into perspective with other related systems. If the product is to replace an existing system, that system should be described and referenced. Ancestors of the product that are no longer in use might be mentioned. If the product is ‘standalone', it should be stated here.


5.6.2.2 URD/2.2 General capabilities

This section should describe the main capabilities and why they are needed. It should also describe the process to be supported by the software, indicating those parts of the process where the software is used.

5.6.2.3 URD/2.3 General constraints

This section should describe any items that will limit the developer's options for building the software.

This section should not be used to impose specific requirements or specific design constraints, but should state the reasons why certain requirements or constraints exist.

5.6.2.4 URD/2.4 User characteristics

This section should describe those general characteristics of the users that affect the specific requirements.

Many people may interact with the software during the operations and maintenance phase. Some of these people are users, operators and maintenance personnel. Certain characteristics of these people, such as educational level, language, experience and technical expertise, impose important constraints on the software.

Software may be frequently used, but individuals may use it only occasionally. Frequent users will become experts, whereas infrequent users may remain relative novices. It is important to classify the users and estimate the likely numbers in each category. If absolute numbers cannot be stated, relative numbers can still be useful.

5.6.2.5 URD/2.5 Operational environment

This section should describe the real world the software is to operate in. This narrative description may be supported by context diagrams, to summarise external interfaces, and system block diagrams, to show how the activity fits within the larger system. The nature of the exchanges with external systems should be specified.


If a URD defines a product that is a component of a parent system or project, then this section should:
• outline the activities that will be supported by external systems;
• reference the Interface Control Documents that define the external interfaces with the other systems;
• describe the computer infrastructure to be used.

5.6.2.6 URD/2.6 Assumptions and dependencies

This section should list the assumptions that the specific requirements are based on. Risk analysis should be used to identify assumptions that may not prove to be valid.

A constraint requirement, for example, might specify an interface with a system that does not yet exist. If the production of that system does not occur when expected, the URD may have to change.

5.6.3 URD/3 SPECIFIC REQUIREMENTS

Specific requirements should be described in this section, which is the core of the URD. The acceptability of the software will be assessed with respect to the specific requirements.

Each requirement must be uniquely identified (UR02). Forward traceability to subsequent phases in the life cycle depends upon each requirement having a unique identifier.

Essential requirements have to be met for the software to be acceptable. If a requirement is essential, it must be clearly flagged (UR03). Non-essential requirements should be marked with a measure of desirability (e.g. a scale of 1, 2, 3).

Some user requirements may be ‘suspended' pending resources becoming available. Such non-applicable user requirements must be clearly flagged (UR09).

The priority of a requirement measures the order, or the timing, of the related software becoming available. If the transfer is to be phased, so that some parts of the software come into operation before others, then each requirement must be marked with a measure of priority (UR04).


Unstable requirements should be flagged. These requirements may be dependent on feedback from the UR, SR and AD phases. The usual method for flagging unstable requirements is to attach the marker ‘TBC'.

The source of each user requirement must be stated (UR05). The source may be defined using the identifier of a system requirement, a document cross-reference or even the name of a person or group. Backwards traceability depends upon each requirement explicitly referencing its source.
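Taken together, UR02 to UR05 imply that each requirement statement carries a standard set of attributes. A hypothetical example (the identifier, wording and attribute values are invented for illustration only):

    UR-042 (Essential; Priority: 1; Source: system requirement SYS-17; TBC)
    The software shall display each received image within 10 seconds of its
    reception.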

Each user requirement must be verifiable (UR06). Clarity increases verifiability. Each statement of user requirement should contain one and only one requirement. A user requirement is verifiable if some method can be devised for objectively demonstrating that the software implements it. For example, statements such as:
• ‘the software will work well';
• ‘the product shall be user friendly';
• ‘the output of the program shall usually be given within 10 seconds';

are not verifiable, because the terms ‘well', ‘user friendly' and ‘usually' have no objective interpretation.

A statement such as: ‘the output of the program shall be given within 20 s of event X, 60% of the time; and shall be given within 30 s of event X, 99% of the time' is verifiable, because it uses concrete terms and measurable quantities. If a method cannot be devised to verify a requirement, the requirement is invalid.
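A quantitative statement of this kind can be checked mechanically against measured data, which is what makes it verifiable. A minimal Python sketch (the measured delays are invented for illustration):

    # Check the quantitative requirement above against measured output delays.
    def meets(delays, limit_s, fraction):
        """True if at least `fraction` of the delays are within `limit_s`."""
        return sum(1 for d in delays if d <= limit_s) >= fraction * len(delays)

    measured = [12.0, 18.5, 25.0, 19.9, 28.0, 15.2, 22.4, 17.8, 29.5, 14.1]  # seconds after event X
    satisfied = meets(measured, 20.0, 0.60) and meets(measured, 30.0, 0.99)
    print("requirement satisfied:", satisfied)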

The user must describe the consequences of losses of availability and breaches of security, so that the developers can fully appreciate the criticality of each function (UR07).

5.6.3.1 URD/3.1 Capability requirements

The organisation of the capability requirements should reflect the problem, and no single structure will be suitable for all cases.

The capability requirements can be structured around a processing sequence, for example:

a) RECEPTION OF IMAGE
b) PROCESSING OF IMAGE
c) DISPLAY OF IMAGE

Page 202: ESA Software Engineering Standards

ESA PSS-05-02 Issue 1 Revision 1 (March 1995) 29THE USER REQUIREMENTS DOCUMENT

perhaps followed by deviations from the baseline operation:

d) HANDLING LOW QUALITY IMAGES

Each capability requirement should be checked to see whether the inclusion of capacity, speed and accuracy attributes is appropriate.

5.6.3.2 URD/3.2 Constraint requirements

Constraint requirements may cover any topic that does not directly relate to the specific capabilities the users require.

Constraint requirements that relate to interfaces should be grouped around the headings:
• communications interfaces;
• hardware interfaces;
• software interfaces;
• human-computer interactions (user interfaces).

If the software is part of a larger system, then any documents (e.g. ICDs) that describe the interfaces should be identified.

Requirements that ensure the software will be fit for its purpose should be stated, for example:
• adaptability;
• availability;
• portability;
• security;
• safety;
• standards.


CHAPTER 6 LIFE CYCLE MANAGEMENT ACTIVITIES

6.1 INTRODUCTION

Plans of SR phase activities must be drawn up in the UR phase. These plans cover project management, configuration management, verification and validation, quality assurance and acceptance tests.

6.2 PROJECT MANAGEMENT PLAN FOR THE SR PHASE

By the end of the UR review, the SR phase section of the SPMP (SPMP/SR) must be produced (SPM02). The SPMP/SR describes, in detail, the project activities for the SR phase. As part of its introduction, the SPMP/SR must outline a plan for the whole project (SPM03).

A rough estimate of the total cost of the software project should be included in the SPMP/SR. Technical knowledge and experience gained on similar projects help make this estimate.

A precise estimate of the effort involved in the SR phase must be included in the SPMP/SR (SPM04). Specific factors affecting estimates for the work required in the SR phase are the:
• number of user requirements;
• level of user requirements;
• stability of user requirements;
• level of definition of external interfaces;
• quality of the URD.

An estimate based simply on the number of user requirements might be very misleading: a large number of detailed low-level user requirements might be more useful, and save more time in the SR phase, than a few high-level user requirements. A poor-quality URD might imply that a lot of requirements analysis is required in the SR phase.


6.3 CONFIGURATION MANAGEMENT PLAN FOR THE SR PHASE

By the end of the UR review, the SR phase section of the SCMP (SCMP/SR) must be produced (SCM42). The SCMP/SR must cover the configuration management procedures for all documentation, CASE tool outputs and prototype code to be produced in the SR phase (SCM43).

6.4 VERIFICATION AND VALIDATION PLAN FOR THE SR PHASE

By the end of the UR review, the SR phase section of the SVVP (SVVP/SR) must be produced (SVV09). The SVVP/SR must define how to trace user requirements to software requirements, so that each software requirement can be justified (SVV10). It should describe how the SRD is to be evaluated by defining the review procedures. It may include specifications of the tests to be performed with prototypes.
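As an illustration only (the SVVP/SR will normally prescribe a traceability matrix rather than a program), tracing can be checked mechanically once the correspondence is recorded. A minimal Python sketch with hypothetical identifiers:

    # Forward and backward traceability checks between user requirements (UR)
    # and software requirements (SR).
    trace = {                      # SR identifier -> parent UR identifiers
        "SR-001": ["UR-001"],
        "SR-002": ["UR-001", "UR-003"],
        "SR-003": [],              # unjustified: no parent user requirement
    }
    user_requirements = {"UR-001", "UR-002", "UR-003"}

    covered = {ur for parents in trace.values() for ur in parents}
    print("user requirements not yet traced:", user_requirements - covered)
    print("software requirements without justification:",
          [sr for sr, parents in trace.items() if not parents])

A check of this kind finds user requirements with no corresponding software requirement, and software requirements that cannot be justified by any user requirement.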

6.5 QUALITY ASSURANCE PLAN FOR THE SR PHASE

By the end of the UR review, the SR phase section of the SQAP (SQAP/SR) must be produced (SQA03). The SQAP/SR must describe, in detail, the quality assurance activities to be carried out in the SR phase (SQA04). The SQAP/SR must outline the quality assurance plan for the rest of the project (SQA05).

SQA activities include monitoring the following:
• management;
• documentation;
• standards, practices, conventions and metrics;
• reviews and audits;
• testing activities;
• problem reporting and corrective action;
• tools, techniques and methods;
• code and media control;
• supplier control;
• records collection, maintenance and retention;
• training;
• risk management.


6.6 ACCEPTANCE TEST PLANS

The initiator(s) of the user requirements should lay down the acceptance test principles. The developer must construct an acceptance test plan in the UR phase and document it in the SVVP (SVV11). This plan should define the scope, approach, resources and schedule of acceptance testing activities.

Specific tests for each user requirement are not formulated until the DD phase. The Acceptance Test Plan should deal with the general issues, for example:
• where will the acceptance tests be done?
• who will attend?
• who will carry them out?
• are tests needed for all user requirements?
• what kinds of tests are sufficient for provisional acceptance?
• what kinds of tests are sufficient for final acceptance?
• must any special test software be used?
• if the acceptance tests are to be done by users, what special documentation is required?
• how long is the acceptance testing programme expected to last?


APPENDIX A GLOSSARY

A.1 LIST OF TERMS

Except for the definitions listed below, the definitions of all terms used in this document conform to the definitions provided in or referenced by ESA PSS-05-0.

Capability requirement

A capability requirement describes an operation, or sequence of related operations, that the software must be able to perform.

Constraint requirement

A constraint requirement restricts the way the software is implemented, without altering or describing the capabilities of the software.

Initiator

The person, or group of people, who originates a project and is responsible for accepting its products.

Language

A symbolic method of expressing information. Symbols convey meaning and are used according to convention.

Outliner

A word processing program that allows the viewing of the section headings in isolation from the rest of the text.

Risk

The amount of uncertainty in being able to satisfy a requirement.


A.2 LIST OF ACRONYMS

AD Architectural Design
ANSI American National Standards Institute
AT Acceptance Tests
BSSC Board for Software Standardisation and Control
DD Detailed Design and production
ESA European Space Agency
OM Operations and Maintenance
PSS Procedures, Specifications and Standards
SCM Software Configuration Management
SCMP Software Configuration Management Plan
SPM Software Project Management
SPMP Software Project Management Plan
SQA Software Quality Assurance
SQAP Software Quality Assurance Plan
SR Software Requirements
SRB Software Review Board
SVV Software Verification and Validation
SVVP Software Verification and Validation Plan
TBC To Be Confirmed
TBD To Be Defined
TR Transfer
UR User Requirements
UR/R User Requirements Review


APPENDIX B REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0 Issue 2, February 1991.

2. IEEE Standard Glossary for Software Engineering Terminology, ANSI/IEEE Std 610.12-1990.

3. IEEE Guide to Software Requirements Specifications, ANSI/IEEE Std 830-1984.

4. IEEE Standard for Software Reviews and Audits, IEEE Std 1028-1988.

5. Realistic User Requirements, G. Longworth, NCC Publications, 1987.

6. Software Evolution Through Rapid Prototyping, Luqi, in COMPUTER, May 1989.

7. Structured Analysis and System Specification, T. DeMarco, Yourdon Press, 1978.

8. SSADM Version 4, NCC Blackwell publications, 1991.

9. Structured Systems Analysis and Design Methodology, G. Cutts, Paradigm, 1987.


APPENDIX C UR PHASE MANDATORY PRACTICES

This appendix is repeated from ESA PSS-05-0, appendix D.2.

UR01 The definition of the user requirements shall be the responsibility of the user.

UR02 Each user requirement shall include an identifier.

UR03 Essential user requirements shall be marked as such.

UR04 For incremental delivery, each user requirement shall include a measure of priority so that the developer can decide the production schedule.

UR05 The source of each user requirement shall be stated.

UR06 Each user requirement shall be verifiable.

UR07 The user shall describe the consequences of losses of availability, or breaches of security, so that developers can fully appreciate the criticality of each function.

UR08 The outputs of the UR phase shall be formally reviewed during the User Requirements Review.

UR09 Non-applicable user requirements shall be clearly flagged in the URD.

UR10 An output of the UR phase shall be the User Requirements Document (URD).

UR11 The URD shall always be produced before a software project is started.

UR12 The URD shall provide a general description of what the user expects the software to do.

UR13 All known user requirements shall be included in the URD.

UR14 The URD shall describe the operations the user wants to perform with the software system.

UR15 The URD shall define all the constraints that the user wishes to impose upon any solution.

UR16 The URD shall describe the external interfaces to the software system or reference them in ICDs that exist or are to be written.


ESA PSS-05-03 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to the software requirements definition phase

Prepared by: ESA Board for Software Standardisation and Control (BSSC)

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: ESA PSS-05-03 Guide to the software requirements definition phase

2. ISSUE 3. REVISION 4. DATE 5. REASON FOR CHANGE

1 0 1991 First issue

1 1 1995 Minor revisions for publication

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved, 1st February 1992
Telematics Supervisory Board

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division,
ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E1
ISSN 0379-4059

Copyright © 1995 by European Space Agency


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION
1.1 PURPOSE
1.2 OVERVIEW

CHAPTER 2 THE SOFTWARE REQUIREMENTS DEFINITION PHASE
2.1 INTRODUCTION
2.2 EXAMINATION OF THE URD
2.3 CONSTRUCTION OF THE LOGICAL MODEL
2.3.1 Functional decomposition
2.3.2 Performance analysis
2.3.3 Criticality analysis
2.3.4 Prototyping
2.4 SPECIFICATION OF THE SOFTWARE REQUIREMENTS
2.4.1 Functional requirements
2.4.2 Performance requirements
2.4.3 Interface requirements
2.4.4 Operational requirements
2.4.5 Resource requirements
2.4.6 Verification requirements
2.4.7 Acceptance-testing requirements
2.4.8 Documentation requirements
2.4.9 Security requirements
2.4.10 Portability requirements
2.4.11 Quality requirements
2.4.12 Reliability requirements
2.4.13 Maintainability requirements
2.4.14 Safety requirements
2.5 SYSTEM TEST PLANNING
2.6 THE SOFTWARE REQUIREMENTS REVIEW
2.7 PLANNING THE ARCHITECTURAL DESIGN PHASE

CHAPTER 3 METHODS FOR SOFTWARE REQUIREMENTS DEFINITION
3.1 INTRODUCTION
3.2 FUNCTIONAL DECOMPOSITION
3.3 STRUCTURED ANALYSIS
3.3.1 DeMarco/SSADM
3.3.2 Ward/Mellor
3.3.3 SADT
3.4 OBJECT-ORIENTED ANALYSIS
3.4.1 Coad and Yourdon
3.4.2 OMT
3.4.3 Shlaer-Mellor
3.4.4 Booch
3.5 FORMAL METHODS
3.5.1 Z
3.5.2 VDM
3.5.3 LOTOS
3.6 JACKSON SYSTEM DEVELOPMENT
3.7 RAPID PROTOTYPING

CHAPTER 4 TOOLS FOR SOFTWARE REQUIREMENTS DEFINITION
4.1 INTRODUCTION
4.2 TOOLS FOR LOGICAL MODEL CONSTRUCTION
4.3 TOOLS FOR SOFTWARE REQUIREMENTS SPECIFICATION
4.3.1 Software requirements management
4.3.2 Document production

CHAPTER 5 THE SOFTWARE REQUIREMENTS DOCUMENT
5.1 INTRODUCTION
5.2 STYLE
5.2.1 Clarity
5.2.2 Consistency
5.2.3 Modifiability
5.3 EVOLUTION
5.4 RESPONSIBILITY
5.5 MEDIUM
5.6 CONTENT

CHAPTER 6 LIFE CYCLE MANAGEMENT ACTIVITIES
6.1 INTRODUCTION
6.2 PROJECT MANAGEMENT PLAN FOR THE AD PHASE
6.3 CONFIGURATION MANAGEMENT PLAN FOR THE AD PHASE
6.4 VERIFICATION AND VALIDATION PLAN FOR THE AD PHASE
6.5 QUALITY ASSURANCE PLAN FOR THE AD PHASE
6.6 SYSTEM TEST PLANS

APPENDIX A GLOSSARY
APPENDIX B REFERENCES
APPENDIX C MANDATORY PRACTICES
APPENDIX D REQUIREMENTS TRACEABILITY MATRIX
APPENDIX E CASE TOOL SELECTION CRITERIA
APPENDIX F INDEX


PREFACE

This document is one of a series of guides to software engineering produced by the Board for Software Standardisation and Control (BSSC) of the European Space Agency. The guides contain advisory material for software developers conforming to ESA's Software Engineering Standards, ESA PSS-05-0. They have been compiled from discussions with software engineers, research of the software engineering literature, and experience gained from the application of the Software Engineering Standards in projects.

Levels one and two of the document tree at the time of writing are shown in Figure 1. This guide, identified by the shaded box, provides guidance about implementing the mandatory requirements for the software requirements definition phase described in the top-level document, ESA PSS-05-0.

(Figure 1: ESA PSS-05-0 document tree. Level 1: the ESA Software Engineering Standards, PSS-05-0. Level 2: PSS-05-01 Guide to the Software Engineering Standards; PSS-05-02 UR Guide; PSS-05-03 SR Guide; PSS-05-04 AD Guide; PSS-05-05 DD Guide; PSS-05-06 TR Guide; PSS-05-07 OM Guide; PSS-05-08 SPM Guide; PSS-05-09 SCM Guide; PSS-05-10 SVV Guide; PSS-05-11 SQA Guide.)

The Guide to the Software Engineering Standards, ESA PSS-05-01, contains further information about the document tree. The interested reader should consult this guide for current information about the ESA PSS-05-0 standards and guides.

The following past and present BSSC members have contributed to the production of this guide: Carlo Mazza (chairman), Bryan Melton, Daniel de Pablo, Adriaan Scheffer and Richard Stevens.


The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and Guides, and all those software engineers in ESA and Industry who have made contributions.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat
Attention of Mr C Mazza
ESOC
Robert Bosch Strasse 5
D-64293 Darmstadt
Germany

BSSC/ESTEC Secretariat
Attention of Mr B Melton
ESTEC
Postbus 299
NL-2200 AG Noordwijk
The Netherlands


CHAPTER 1 INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA), either in house or by industry [Ref. 1].

ESA PSS-05-0 defines a preliminary phase to the software development life cycle called the ‘User Requirements Definition Phase' (UR phase). The first phase of the software development life cycle is the ‘Software Requirements Definition Phase' (SR phase). Activities and products are examined in the ‘SR review' (SR/R) at the end of the phase.

The SR phase can be called the ‘problem analysis phase' of the life cycle. The user requirements are analysed, and software requirements are produced that must be as complete, consistent and correct as possible.

This document provides guidance on how to produce the software requirements. This document should be read by all active participants in the SR phase, e.g. initiators, user representatives, analysts, designers, project managers and product assurance personnel.

1.2 OVERVIEW

Chapter 2 discusses the SR phase. Chapters 3 and 4 discuss methods and tools for software requirements definition. Chapter 5 describes how to write the SRD, starting from the template. Chapter 6 summarises the life cycle management activities, which are discussed at greater length in other guides.

All the SR phase mandatory practices in ESA PSS-05-0 are repeated in this document. The identifier of the practice is added in parentheses to mark a repetition. No new mandatory practices are defined.


CHAPTER 2 THE SOFTWARE REQUIREMENTS DEFINITION PHASE

2.1 INTRODUCTION

In the SR phase, a set of software requirements is constructed. This is done by examining the URD and building a ‘logical model', using recognised methods and specialist knowledge of the problem domain. The logical model should be an abstract description of what the system must do and should not contain implementation terminology. The model structures the problem and makes it manageable.

A logical model is used to produce a structured set of software requirements that is consistent, coherent and complete. The software requirements specify the functionality, performance, interfaces, quality, reliability, maintainability, safety etc. (see Section 2.4). Software requirements are documented in the Software Requirements Document (SRD). The SRD gives the developer's view of the problem rather than the user's. The SRD must cover all the requirements stated in the URD (SR12). The correspondence between requirements in the URD and SRD is not necessarily one-to-one, and frequently is not. The SRD may also contain requirements that the developer considers necessary to ensure the product is fit for its purpose (e.g. product assurance standards and interfaces to test equipment).

The main outputs of the SR phase are the:
• Software Requirements Document (SRD);
• Software Project Management Plan for the AD phase (SPMP/AD);
• Software Configuration Management Plan for the AD phase (SCMP/AD);
• Software Verification and Validation Plan for the AD phase (SVVP/AD);
• Software Quality Assurance Plan for the AD phase (SQAP/AD);
• System Test Plan (SVVP/ST).

Progress reports, configuration status accounts and audit reports are also outputs of the phase. These should always be archived by the project.

Defining the software requirements is the developer's responsibility. Besides the developer, participants in the SR phase should include users, systems engineers, hardware engineers and operations personnel. Project management should ensure that all parties can review the requirements, to minimise incompleteness and error.

SR phase activities must be carried out according to the plans defined in the UR phase (SR01). Progress against plans should be continuously monitored by project management and documented at regular intervals in progress reports.

Figure 2.1 summarises activities and document flow in the SR phase. The following subsections describe the activities of the SR phase in more detail.

(Figure 2.1: SR phase activities. The approved URD is examined, a logical model is constructed using methods, CASE tools and prototyping, and the software requirements are specified in a draft SRD. A draft System Test Plan (SVVP/ST) and the AD phase plans (SPMP/AD, SCMP/AD, SVVP/AD, SQAP/AD) are also written. The SR/R reviews these outputs, and accepted RIDs lead to the approved SRD.)

2.2 EXAMINATION OF THE URD

If the developers have not taken part in the User Requirements Review, they should examine the URD and confirm that it is understandable. ESA PSS-05-02, ‘Guide to the User Requirements Definition Phase', contains material that should assist in making the URD understandable. Developers should also confirm that adequate technical skills are available for the SR phase.


2.3 CONSTRUCTION OF THE LOGICAL MODEL

A software model is:
• a simplified description of a system;
• hierarchical, with consistent decomposition criteria;
• composed of symbols organised according to some convention;
• built using recognised methods and tools;
• used for reasoning about the software.

A software model should be a ‘simplified description' in the sense that it describes the high-level essentials. A hierarchical presentation also makes the description simpler to understand, evaluate at various levels of detail, and maintain. A recognised method, not an undocumented ad-hoc assembly of ‘common sense ideas', should be used to construct a software model (SR03). A method, however, does not substitute for the experience and insight of developers, but helps developers apply those abilities better.

In the SR phase, the developers construct an implementation-independent model of what is needed by the user (SR02). This is called a ‘logical model' and it:
• shows what the system must do;
• is organised as a hierarchy, progressing through levels of abstraction;
• avoids using implementation terminology (e.g. workstation);
• permits reasoning from cause to effect and vice versa.

A logical model makes the software requirements understandable as a whole, not just individually.

A logical model should be built iteratively. Some tasks may need to be repeated until the description of each level is clear and consistent. Walkthroughs, inspections and technical reviews should be used to ensure that each level of the model is agreed before proceeding to the next level of detail.

CASE tools should be used in all but the smallest projects, because they make clear and consistent models easier to construct and modify.

The type of logical model built depends on the method selected. The method selected depends on the type of software required. Chapter 3 summarises the methods suitable for constructing a logical model. In the following sections, functional decomposition concepts and terminology are used to describe how to construct a logical model. This does not imply that this method shall be used, but only that it may be suitable.

2.3.1 Functional decomposition

The first step in building a logical model is to break the system down into a set of basic functions with a few simple inputs and outputs. This is called 'functional decomposition'.

Functional decomposition is called a 'top-down' method because it starts from a single high-level function and results in a set of low-level functions that combine to perform the high-level function. Each level therefore models the system at a different level of abstraction. The top-down approach produces the most general functions first, leaving the detail to be defined only when necessary.

Consistent criteria should be established for decomposing functions into subfunctions. The criteria should not have any 'implementation bias' (i.e. include design and production considerations). Examples of implementation bias are: 'by programming language', 'by memory requirements' or 'by existing software'.

ESA PSS-05-0 defines the guidelines for building a good logical model. These are repeated below.

(1) Functions should have a single definite purpose.

(2) Functions should be appropriate to the level they appear at (e.g. 'Calculate Checksum' should not appear at the same level as 'Verify Telecommands').

(3) Interfaces should be minimised. This allows design components with weak coupling to be easily derived.

(4) Each function should be decomposed into no more than seven lower-level functions.

(5) Implementation terminology should be absent (e.g. file, record, task, module, workstation).

(6) Performance attributes of each function (capacity, speed etc) should be stated wherever possible.

(7) Critical functions should be identified.

(8) Function names should reflect the purpose and say 'what' is to be done, not 'how' it must be done.

(9) Function names should have a declarative structure (e.g. 'Validate Telecommands').

The recommendation to minimise interfaces needs further qualification. The number of interfaces can be measured by the number of:
• inputs and outputs;
• different functions that interface with it.

High-level functions may have many inputs and outputs. In the first iteration of the logical model, the goal is a set of low-level functions that have a few simple interfaces. The processing of the low-level functions should be briefly described.

In the second and subsequent iterations, functions and data are restructured to reduce the number of interfaces at all levels. Data structures should match the functional decomposition. The data a function deals with should be appropriate to its level.

A logical model should initially cope with routine behaviour. Additional functions may be added later to handle non-routine behaviour, such as startup, shutdown and error handling.

Functional decomposition has reached a sufficient level of detail when the model:
• provides all the capabilities required by the user;
• follows the nine guidelines described above.
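Guidelines (4) and (5) lend themselves to automated checking. The following minimal sketch (Python; illustrative only, with hypothetical function names, and not part of the Standards) represents a decomposition as a tree and flags violations of those two guidelines:

    # Illustrative sketch only: checks two of the nine decomposition
    # guidelines (no more than seven subfunctions; no implementation
    # terminology in function names). All names are hypothetical.

    IMPLEMENTATION_TERMS = {"file", "record", "task", "module", "workstation"}

    class Function:
        def __init__(self, name, subfunctions=None):
            self.name = name
            self.subfunctions = subfunctions or []

    def check(function, path=""):
        """Return a list of guideline violations found in the tree."""
        violations = []
        where = f"{path}/{function.name}"
        if len(function.subfunctions) > 7:              # guideline (4)
            violations.append(f"{where}: more than seven subfunctions")
        for term in IMPLEMENTATION_TERMS:               # guideline (5)
            if term in function.name.lower():
                violations.append(f"{where}: implementation term '{term}'")
        for sub in function.subfunctions:
            violations.extend(check(sub, where))
        return violations

    model = Function("Process Telecommands", [
        Function("Validate Telecommands", [Function("Calculate Checksum")]),
        Function("Update Telecommand File"),            # violates guideline (5)
    ])
    print("\n".join(check(model)))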

When the use of commercial software looks feasible, developers should ensure that it meets all the user requirements. For example, suppose a high-level database management function is identified. The developer decides that decomposition of the function 'Manage Database' can be stopped because a constraint requirement demands that a particular DBMS is used. This would be wrong if there is a user requirement to select objects from the database. Decomposition should be continued until the user requirement has been included in the model. The designer then has to ensure that a DBMS with the select function is chosen.


2.3.2 Performance analysis

User requirements may contain performance attributes (e.g. capacity, speed and accuracy). These attributes define the performance requirements for a function or group of functions. The logical model should be checked to ensure that no performance requirements conflict; this is done by studying pathways through the data flow.

Decisions about how to partition the performance attributes between a set of functions may involve implementation considerations. Such decisions are best left to the designer and should be avoided in the SR phase.

2.3.3 Criticality analysis

Capability requirements in the URD may have their availability specified in terms of a 'Hazard Consequence Severity Category' (HCSC). This can range from 'Catastrophic' through 'Critical' and 'Marginal' to 'Negligible'. If this has been done, the logical model should be analysed to propagate the HCSC of each capability to all the requirements related to that capability. Reference 24 describes three criticality analysis techniques:
• Software Failure Modes, Effects and Criticality Analysis (software FMECA);
• Software Common Mode Failure Analysis (software CFMA);
• Software Fault Tree Analysis (software FTA).

2.3.4 Prototyping

Models are usually static. However, it may be useful to make parts of the model executable to verify them. Such an animated, dynamic model is a kind of prototype.

Prototyping can clarify requirements. The precise details of a user interface may not be clear from the URD, for example. The construction of a prototype for verification by the user is the most effective method of clarifying such requirements.

Data and control flows can be decomposed in numerous ways. For example a URD may say 'enter data into a form and produce a report'.

Only after some analysis does it become apparent what the contents of the form or report should be, and this ought to be confirmed with the user. The best way to do this is to make an example of the form or report, and this may require prototype software to be written.

The development team may identify some requirements as containing a high degree of risk (i.e. there is a large amount of uncertainty whether they can be satisfied). The production of a prototype to assess the feasibility or necessity of such requirements can be very effective in deciding whether they ought to be included in the SRD.

2.4 SPECIFICATION OF THE SOFTWARE REQUIREMENTS

ESA PSS-05-0 defines the following types of software requirements:

    Functional Requirements            Documentation Requirements
    Performance Requirements           Security Requirements
    Interface Requirements             Portability Requirements
    Operational Requirements           Quality Requirements
    Resource Requirements              Reliability Requirements
    Verification Requirements          Maintainability Requirements
    Acceptance-Testing Requirements    Safety Requirements

Functional requirements should be organised top-down, according to the structure of the logical model. Non-functional requirements should be attached to functional requirements and can therefore appear at all levels in the hierarchy, and apply to all functional requirements below them. They may be organised as a separate set and cross-referenced, where this is simpler.

2.4.1 Functional requirements

A function is a 'defined objective or characteristic action of a system or component' and a functional requirement 'specifies a function that a system or system component must be able to perform' [Ref. 2]. Functional requirements are matched one-to-one to nodes of the logical model. A functional requirement should:
• define 'what' a process must do, not 'how' to implement it;
• define the transformation to be performed on specified inputs to generate specified outputs;
• have performance requirements attached;
• be stated rigorously.

Functional requirements can be stated rigorously by using short, simple sentences. Several styles are possible, for example Structured English and the Precondition-Postcondition style [Ref. 7, 12].

Examples of functional requirements are:
• 'Calibrate the instrument data';
• 'Select all calibration stars brighter than the 6th magnitude'.

2.4.2 Performance requirements

Performance requirements specify numerical values for measurable variables used to define a function (e.g. rate, frequency, capacity, speed and accuracy). Performance requirements may be included in the quantitative statement of each function, or included as separate requirements. For example the requirements:

‘Calibrate the instrument data'

‘Calibration accuracy shall be 10%'

can be combined to make a single requirement:

‘Calibrate the instrument data to an accuracy of 10%'

The approach chosen has to trade off modifiability against duplication.

Performance requirements may be represented as a range of values [Ref. 21], for example the:
• acceptable value;
• nominal value;
• ideal value.

The acceptable value defines the minimum level of performance allowed; the nominal value defines a safe margin of performance above the acceptable value, and the ideal value defines the most desirable level of performance.
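The three values can be carried as attributes of a requirement. Here is a minimal illustrative sketch (Python; the requirement, figures and names are hypothetical, not drawn from the Standards) of representing such a range and classifying a measured value against it:

    # Illustrative sketch only: a performance requirement represented as a
    # range of values [Ref. 21], with a measured value classified against it.

    from dataclasses import dataclass

    @dataclass
    class PerformanceRange:
        acceptable: float   # minimum level of performance allowed
        nominal: float      # safe margin above the acceptable value
        ideal: float        # most desirable level of performance

    def classify(measured: float, req: PerformanceRange) -> str:
        if measured >= req.ideal:
            return "ideal"
        if measured >= req.nominal:
            return "nominal"
        if measured >= req.acceptable:
            return "acceptable"
        return "non-compliant"

    # Hypothetical figures: calibration accuracy as a fraction.
    calibration_accuracy = PerformanceRange(acceptable=0.90, nominal=0.95, ideal=0.99)
    print(classify(0.96, calibration_accuracy))    # prints: nominal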


2.4.3 Interface requirements

Interface requirements specify hardware, software or database elements that the system, or system component, must interact or communicate with. Accordingly, interface requirements should be classified into software, hardware and communications interfaces.

Interface requirements should also be classified into 'internal' and 'external' interface requirements, depending upon whether or not the interface coincides with the system boundary. Whereas the former should always be described in the SRD, the latter may be described in separate 'Interface Control Documents' (ICDs).

This ensures a common, self-contained definition of the interface.

An interface requirement may be stated by:
• describing the data flow or control flow across the interface;
• describing the protocol that governs the exchanges across the interface;
• referencing the specification of the component in another system that is to be used and describing:
  - when the external function is utilised;
  - what is transferred when the external function is utilised;
• defining a constraint on an external function;
• defining a constraint imposed by an external function.

Unless it is present as a constraint, an interface requirement should only define the logical aspects of an interface (e.g. the number and type of items that have to be exchanged), and not the physical details (e.g. byte location of fields in a record, ASCII or binary format etc.). The physical description of data structures should be deferred to the design phase.


Examples of interface requirements are:
• 'Functions X and Y shall exchange instrument data';
• 'Communications between the computer and remote instruments shall be via the IEEE-488 protocol';
• 'Transmit the amount of fuel remaining to the ground control system every fifteen seconds'.

2.4.4 Operational requirements

Operational requirements specify how the system will run (i.e. when it is to be operated) and how it will communicate with human operators (e.g. screen and keyboards etc.).

Operational requirements may describe physical aspects of the user interface. Descriptions of the dialogue, screen layouts and command language style are all types of operational requirements.

Operational requirements may define ergonomic aspects, e.g. thelevels of efficiency that users should be able to attain.

The user may have constrained the user interface in the URD. A function may require the input of some data, for example, and this may be implemented by a keyboard or a speech interface. The user may demand that a speech interface be employed, and this should be stated as an operational requirement.

2.4.5 Resource requirements

Resource requirements specify the upper limits on physical resources such as processing power, main memory, disk space etc. They may describe any requirements that the development or operational environment places upon the software. A resource requirement should state the facts about the resources, and not constrain how they are deployed.

At the system design level, specific computer resources may have been allocated to software subsystems. Software developers must be aware of resource constraints when they design the software. In some cases (e.g. embedded systems) resource constraints cannot be relaxed.


Examples of resource requirements are:
• 'All programs shall execute with standard user quotas.'
• '5 Mbytes of disk space are available.'

2.4.6 Verification requirements

Verification requirements constrain the design of the product. They may do this by requiring features that facilitate verification of system functions, or by saying how the product is to be verified.

Verification requirements may include specifications of any:
• simulations to be performed;
• requirements imposed by the test environment;
• diagnostic facilities.

Simulation is 'a model that behaves or operates like a given system when provided a set of controlled inputs' [Ref. 2]. A simulator is needed when a system cannot be exercised in the operational environment prior to delivery. The simulator reproduces the behaviour of the operational environment.

2.4.7 Acceptance-testing requirements

Acceptance-testing requirements constrain the design of the product. They are a type of verification requirement, and apply specifically to the TR phase.

2.4.8 Documentation requirements

Documentation requirements state project-specific requirements for documentation, in addition to those contained in ESA PSS-05-0. The format and style of the Interface Control Documents may be described in the documentation requirements, for example.

Documentation should be designed for the target readers (i.e. users and maintenance personnel). The URD contains a section called 'User Characteristics' that may help profile the readership.

2.4.9 Security requirements

Security requirements specify the requirements for securing the system against threats to confidentiality, integrity and availability. They should describe the level and frequency of access allowed to authorised users of the software. If prevention against unauthorised use is required, the type of unauthorised user should be described. The level of physical protection of the computer facilities may be stated (e.g. backups are to be kept in a fire-proof safe off-site).

Examples of security requirements are protection against:
• accidental destruction of the software;
• accidental loss of data;
• unauthorised use of the software;
• computer viruses.

2.4.10 Portability requirements

Portability requirements specify how easy it should be to move the software from one environment to another. Possible computer and operating systems, other than those of the target system, should be stated.

Examples of portability requirements are:
• 'it shall be possible to recompile this software to run on computer X without modifying more than 2% of the source code';
• 'no part of the software shall be written in assembler'.

Portability requirements can reduce performance of the software and increase the effort required to build it. For example, asking that the software be made portable between operating systems may require that operating system service routines be called from dedicated modules. This can affect performance, since the number of calls to execute a specific operation is increased. Further, use of computer-specific extensions to a programming language may be precluded.

Portability requirements should therefore be formulated after taking careful consideration of the probable life-time of the software, operating system and hardware.

The portability requirements may reflect a difference between the development and target environments.


2.4.11 Quality requirements

Quality requirements specify the attributes of the software that make it fit for its purpose. The major quality attributes of reliability, maintainability and safety should always be stated separately. Where appropriate, software quality attributes should be specified in measurable terms (i.e. with the use of metrics), for example requirements for the:
• use of specific procedures, standards and regulations;
• use of external software quality assurance personnel;
• qualifications of suppliers.

Any quality-related requirements stated by the user in the UR phase may be supplemented in the SR phase by the in-house standards of the development organisation. Such standards attempt to guarantee the quality of a product by using proper procedures for its production.

2.4.12 Reliability requirements

Software reliability is 'the ability of a system or component to perform its required functions under stated conditions for a specified period of time' [Ref. 2]. The reliability metric, 'Mean Time Between Failures' (MTBF), measures reliability according to this definition.

Reliability requirements should specify the acceptable Mean Time Between Failures of the software, averaged over a significant period. They may also specify the minimum time between failures that is ever acceptable. Reliability requirements may have to be derived from the user's availability requirements. This can be done from the relation:

Availability = MTBF / (MTBF + MTTR).

MTTR is the Mean Time To Repair (see Section 2.4.13). MTBF is the average time the software is available between failures, whereas the sum of MTBF and MTTR is the average time it should be operational (i.e. the full failure-and-repair cycle).
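As a worked illustration (the figures are hypothetical, not taken from the Standards), rearranging the relation gives:

    MTBF = Availability x MTTR / (1 - Availability)

so a user availability requirement of 98%, combined with an expected MTTR of 1 day, yields a derived reliability requirement of MTBF >= 0.98 x 1 / 0.02 = 49 days, before any margin is added.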

Adequate margins should be added to the availability requirements when deriving the reliability requirements. The specification of the reliability requirements should provide a classification of failures based on their severity. Table 2.4.12 provides an example classification, based on the 'able-to-continue?' criterion.


Failure Class    Definition
SEVERE           Operations cannot be continued
WARNING          Operations can be continued, with reduced capability
INFORMATION      Operations can be continued

Table 2.4.12: Failure classification example

Two examples of reliability requirements are:
• 'the MTBF for severe failures shall be 4 weeks, averaged over 6 months';
• 'the minimum time between severe failures shall be in excess of 5 days'.

2.4.13 Maintainability requirements

Maintainability is 'the ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment' [Ref. 2]. All aspects of maintainability should be covered in the specification of the maintainability requirements, and should be specified, where appropriate, in quantitative terms.

The idea of fault repair is used to formulate the more restricted definition of maintainability commonly used in software engineering, i.e. 'the ability of an item under stated conditions of use to be retained in, or restored to, within a given period of time, a specified state in which it can perform its required functions'. The maintainability metric, Mean Time To Repair (MTTR), measures maintainability according to this second definition.

Maintainability requirements should specify the acceptable MTTR, averaged over a significant period. They may also specify the maximum time to repair faults ever acceptable. Maintainability requirements may have to be derived from the user's availability requirements (see Section 2.4.12).

Adaptability requirements can affect the software design by ensuring that parameters that are likely to vary do not get 'hard-wired' into the code, or that certain objects are made generic.


Examples of maintainability requirements are:
• 'the MTTR shall be 1 day, averaged over 1 year';
• 'the time to repair shall never exceed 1 week'.

2.4.14 Safety requirements

Safety requirements specify any requirements to reduce the possibility of damage that can follow from software failure. Safety requirements may identify critical functions whose failure may be hazardous to people or property. Software should be considered safety-critical if the hardware it controls can cause injury to people or damage to property.

While reliability requirements should be used to specify the acceptable frequency of failures, safety requirements should be used to specify what should happen when failures of a critical piece of software actually do occur. In the safety category are requirements for:
• graceful degradation after a failure (e.g. warnings are issued to users before system shutdown and measures are taken to protect property and lives);
• continuation of system availability after a single-point failure.

2.5 SYSTEM TEST PLANNING

System Test Plans must be generated in the SR phase and documented in the System Test section of the Software Verification and Validation Plan (SVVP/ST). See Chapter 6 of this document. The System Test Plan should describe the scope, approach and resources required for the system tests, and address the verification requirements in the SRD.

2.6 THE SOFTWARE REQUIREMENTS REVIEW

The SRD and SVVP/ST are produced iteratively. Walkthroughs and internal reviews should be held before a formal review.

The outputs of the SR phase must be formally reviewed during the Software Requirements Review (SR09). This should be a technical review. The recommended procedure, described in ESA PSS-05-10, is based closely on the IEEE standard for Technical Reviews [Ref. 8].


Normally, only the SRD and System Test Plans undergo the full technical review procedure involving users, developers, management and quality assurance staff. The Software Project Management Plan (SPMP/AD), Software Configuration Management Plan (SCMP/AD), Software Verification and Validation Plan (SVVP/AD), and Software Quality Assurance Plan (SQAP/AD) are usually reviewed by management and quality assurance staff only.

In summary, the objective of the SR/R is to verify that the:
• SRD states the software requirements clearly, completely and in sufficient detail to enable the design process to be started;
• SVVP/ST is an adequate plan for system testing the software in the DD phase.

The documents are distributed to the participants in the formal review process for examination. A problem with a document is described in a 'Review Item Discrepancy' (RID) form that may be prepared by any participant in the review. Review meetings are then held which have the documents and RIDs as input. A review meeting should discuss all the RIDs and either accept or reject them. The review meeting may also discuss possible solutions to the problems raised by the RIDs.

The output of a formal review meeting includes a set of accepted RIDs. Each review meeting should end with a decision whether another review meeting is necessary. It is quite possible to proceed to the AD phase with some actions outstanding, which should be relatively minor or have agreed solutions already defined.

2.7 PLANNING THE ARCHITECTURAL DESIGN PHASE

Plans of AD phase activities must be drawn up in the SR phase. Generation of the plans for the AD phase is discussed in chapter 6 of this document. These plans should cover project management, configuration management, verification and validation and quality assurance. Outputs are the:
• Software Project Management Plan for the AD phase (SPMP/AD);
• Software Configuration Management Plan for the AD phase (SCMP/AD);
• Software Verification and Validation Plan for the AD phase (SVVP/AD);
• Software Quality Assurance Plan for the AD phase (SQAP/AD).


CHAPTER 3 METHODS FOR SOFTWARE REQUIREMENTS DEFINITION

3.1 INTRODUCTION

Analysis is the study of a problem, prior to taking some action. The SR phase may be called the 'analysis phase' of the ESA PSS-05-0 life cycle. The analysis should be carried out using a recognised method, or combination of methods, suitable for the project. The method selected should define techniques for:
• constructing a logical model;
• specifying the software requirements.

This guide does not provide an exhaustive, authoritative description of any particular method. The references should be consulted to obtain that information. This guide seeks neither to make any particular method a standard nor to define a complete set of acceptable methods. Each project should examine its needs, choose a method and define and justify the selection in the SRD. To assist in making this choice, this chapter summarises some well-known methods and indicates how they can be applied in the SR phase. Possible methods are:
• functional decomposition;
• structured analysis;
• object-oriented analysis;
• formal methods;
• Jackson System Development;
• rapid prototyping.

Although the authors of any particular method will argue for its general applicability, all of the methods appear to have been developed with a particular type of system in mind. It is necessary to look at the examples and case histories to decide whether a method is suitable.

3.2 FUNCTIONAL DECOMPOSITION

Functional decomposition is the traditional method of analysis. The emphasis is on 'what' functionality must be available, not 'how' it is to be implemented. The functional breakdown is constructed top-down, producing a set of functions, subfunctions and functional interfaces. See Section 2.3.1.

The functional decomposition method was incorporated into the structured analysis method in the late 1970s.

3.3 STRUCTURED ANALYSIS

Structured analysis is a name for a class of methods that analyse a problem by constructing data flow models. Members of this class are:
• Yourdon methods (DeMarco and Ward/Mellor);
• Structured Systems Analysis and Design Methodology (SSADM);
• Structured Analysis and Design Technique (SADT*).

Structured analysis includes all the concepts of functional decomposition, but produces a better functional specification by rigorously defining the functional interfaces, i.e. the data and control flow between the processes that perform the required functions. The ubiquitous 'Data Flow Diagram' is characteristic of structured analysis methods.

Yourdon methods [Ref. 7, 12] are widely used in the USA and Europe. SSADM [Ref. 4, 9] is recommended by the UK government for 'data processing systems'. It is now under the control of the British Standards Institute (BSI) and will therefore become a British Standard. SADT [Ref. 17, 20] has been successfully used within ESA for some time. Structured analysis methods are expected to be suitable for the majority of ESA software projects.

According to its early operational definition by DeMarco, structured analysis is the use of the following techniques to produce a specification of the system required:
• Data Flow Diagrams;
• Data Dictionary;
• Structured English;
• Decision Tables;
• Decision Trees.

* SADT is a trademark of SoftTech Inc, Waltham, Mass., USA.

These techniques are adequate for the analysis of 'information systems'. Developments of structured analysis for 'real-time' or 'embedded systems' have supplemented this list with:
• Transformation Schema;
• State-Transition Diagrams;
• Event Lists;
• Data Schema;
• Precondition-Postcondition Specifications.

SSADM, with its emphasis on data modelling, also includes:
• Entity-Relationship Diagrams (or Entity Models);
• Entity Life Histories.

The methods of structured analysis fall into the groups shown in Table 3.3, according to whether they identify, organise or specify functions or entities.

Activity                   Technique
Function Identification    Event Lists
                           Entity Life Histories
Function Organisation      Data Flow Diagrams
                           Transformation Schema
                           Actigrams
Function Specification     Structured English
                           Decision Tables
                           Decision Trees
                           State-Transition Diagrams
                           Transition Tables
                           Precondition-Postconditions
Entity Identification      "Spot the nouns in the description"
Entity Organisation        Data Structure Diagrams
                           Data Schema
                           Entity-Relationship Diagrams
Entity Specification       Data Dictionary

Table 3.3: Structured Analysis Techniques

Structured analysis aims to produce a 'Structured Specification' containing a systematic, rigorous description of a system. This description is in terms of system models. Analysis and design of the system is a model-making process.


3.3.1 DeMarco/SSADM modelling approach

The DeMarco and SSADM methods create and refine the system model through four stages:
• current physical model;
• current logical model;
• required logical model;
• required physical model.

In ESA projects, the goal of the SR phase should be to build a required logical model. The required physical model incorporates implementation considerations and its construction should be deferred to the AD phase.

The DeMarco/SSADM modelling approach assumes that a system is being replaced. The current physical model describes the present way of doing things. This must be rationalised, to make the current logical model, and then combined with the 'problems and requirements list' of the users to construct the required logical model.

This evolutionary concept can be applied quite usefully in ESA projects. Most systems have a clear ancestry, and much can be learned by studying their predecessors. This prevents people from 'reinventing wheels'. The URD should always describe or reference similar systems so that the developer can best understand the context of the user requirements. ESA PSS-05-0 explicitly demands that the relationship to predecessor projects be documented in the SRD. If a system is being replaced and a data processing system is required, then the DeMarco/SSADM approach is recommended.

The DeMarco/SSADM modelling approach is difficult to apply directly when the predecessor system was retired some time previously or does not exist. In such cases the developer of a data processing system should look at the DeMarco/SSADM approach in terms of the activities that must be done in each modelling stage. Each activity uses one or more of the techniques described in Table 3.3. When it makes no sense to think in terms of 'current physical models' and 'current logical models', the developer should define an approach that is suitable for the project. Such a tailored approach should include activities used in the standard modelling stages.


3.3.2 Ward/Mellor modelling approach

Ward and Mellor describe an iterative modelling process that first defines the top levels of an 'essential model'. Instead of proceeding immediately to the lower levels of the essential model, Ward and Mellor recommend that the top levels of an 'implementation model' are built. The cycle continues with definition of the next lower level of the essential model. This stops the essential model diverging too far from reality and prevents the implementation model losing coherence and structure.

The essential model is a kind of logical model. It describes what the system must do to be successful, regardless of the technology chosen to implement it. The essential model is built by first defining the system's environment and identifying the inputs, outputs, stimuli and responses it must handle. This is called 'environmental modelling'. Once the environment is defined, the innards of the system are defined, in progressive detail, by relating inputs to outputs and stimuli to responses. This is called 'behavioural modelling' and supersedes the top-down functional decomposition modelling approach.

ESA real-time software projects should consider using the Ward/Mellor method in the SR phase. The developer should not, however, attempt to construct the definitive implementation model in the SR phase. This task should be done in the AD phase. Instead, predefined design constraints should be used to outline the implementation model and steer the development of the essential model.

3.3.3 SADT modelling approach

There are two stages of requirements definition in SADT:
• context analysis;
• functional specification.

The purpose of context analysis is to define the boundary conditions of the system. 'Functional specification' defines what the system has to do. The output of the functional specification stage is a 'functional architecture', which should be expressed using SADT diagrams.

The functional architecture is a kind of logical model. It is derived from a top-down functional decomposition of the system, starting from the context diagram, which shows all the external interfaces of the system. Data should be decomposed in parallel with the functions.


In the SR phase, SADT diagrams can be used to illustrate the data flow between functions. Mechanism flows should be suppressed. Control flows may be necessary to depict real-time processing.

3.4 OBJECT-ORIENTED ANALYSIS

Object-oriented analysis is the name for a class of methods that analyse a problem by studying the objects in the problem domain. For example, some objects that might be important in a motor vehicle simulation problem are engines and gearboxes.

Object-oriented analysis can be viewed as a synthesis of the object concepts pioneered in the Simula67 and Smalltalk programming languages, and the techniques of structured analysis, particularly data modelling. Object-oriented analysis differs from structured analysis by:
• building an object model first, instead of the functional model (i.e. hierarchy of data flow diagrams);
• integrating objects, attributes and operations, instead of separating them between the data model and the functional model.

OOA has been quite successful in tackling problems that are resistant to structured analysis, such as user interfaces. OOA provides a seamless transition to OOD and programming in languages such as Smalltalk, Ada and C++, and is the preferred analysis method when object-oriented methods are going to be used later in the life cycle. Further, the proponents of OOA argue that the objects in a system are more fundamental to its nature than the functions it provides. Specifications based on objects will be more adaptable than specifications based on functions.

The leading OOA methods are:
• Coad-Yourdon;
• Rumbaugh et al's Object Modelling Technique (OMT);
• Shlaer-Mellor;
• Booch.

OOA methods are evolving, and analysts often combine the techniques of different methods when analysing problems. Users of OOA methods are recommended to adopt such a pragmatic approach.


3.4.1 Coad and Yourdon

Coad and Yourdon [Ref. 18] describe an Object-Oriented Analysis (OOA) method based on five major activities:
• finding classes and objects;
• identifying structures;
• identifying subjects;
• defining attributes;
• defining services.

These activities are used to construct each layer of a 'five-layer' object model.

Objects exist in the problem domain. Classes are abstractions of the objects. Objects are instances of classes. The first task of the method is to identify classes and objects.

The second task of the method is to identify structures. Two kinds of structures are recognised: 'generalisation-specialisation structures' and 'whole-part structures'. The former type of structure is like a family tree, and inheritance is possible between members of the structure. The latter kind of structure is used to model entity relationships (e.g. each motor contains one armature).

Large, complex models may need to be organised into 'subjects', with each subject supporting a particular view of the problem. For example, the object model of a motor vehicle might have a mechanical view and an electrical view.

Attributes characterise each class. For example, an attribute of an engine might be 'number of cylinders'. Each object will have a value for the attribute.

Services define what the objects do. Defining the services is equivalent to defining system functions.

The strengths of Coad and Yourdon's method are its brief, concise description and its use of general texts as sources of definitions, so that the definitions fit common sense and jargon is minimised. The main weakness of the method is its complex notation, which is difficult to use without tool support. Some users of the Coad-Yourdon method have used the OMT diagramming notation instead.


3.4.2 OMT

Rumbaugh et al's Object Modelling Technique (OMT) [Ref. 25] transforms the users' problem statement (such as that documented in a User Requirements Document) into three models:
• object model;
• dynamic model;
• functional model.

The three models collectively make up the logical model required by ESA PSS-05-0.

The object model shows the static structure in the real world. The procedure for constructing it is:
• identify objects;
• identify classes of objects;
• identify associations (i.e. relationships) between objects;
• identify object attributes;
• use inheritance to organise and simplify the class structure;
• organise tightly coupled classes and associations into modules;
• supply brief textual descriptions of each object.

Important types of association are 'aggregation' (i.e. is a part of) and 'generalisation' (i.e. is a type of).
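As an illustration only (Python; the class names are hypothetical and the code is not part of OMT, which uses diagrams), the motor vehicle example might yield an object model fragment in which generalisation is expressed as inheritance and aggregation as component attributes:

    # Illustrative sketch only: a fragment of an OMT-style object model.
    # Generalisation ('is a type of') is modelled by inheritance;
    # aggregation ('is a part of') by attributes holding component objects.

    class Engine:
        def __init__(self, cylinders: int):
            self.cylinders = cylinders      # attribute of the Engine class

    class PetrolEngine(Engine):             # generalisation: a PetrolEngine
        pass                                # is a type of Engine

    class Gearbox:
        def __init__(self, gears: int):
            self.gears = gears

    class MotorVehicle:                     # aggregation: a vehicle has an
        def __init__(self, engine: Engine, gearbox: Gearbox):
            self.engine = engine            # engine and a gearbox as parts
            self.gearbox = gearbox

    car = MotorVehicle(PetrolEngine(cylinders=4), Gearbox(gears=5))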

The dynamic model shows the behaviour of the system, especially the sequencing of interactions. The procedure for constructing it is:
• identify sequences of events in the problem domain and document them in 'event traces';
• build a state-transition diagram for each object that is affected by the events, showing the messages that flow, actions that are performed and object state changes that take place when events occur.


The functional model shows how values are derived, without regard for when they are computed. The procedure for constructing it is not to use functional decomposition, but to:
• identify input and output values that the system receives and produces;
• construct data flow diagrams showing how the output values are computed from the input values;
• identify objects that are used as 'data stores';
• identify the object operations that comprise each process.

The functional model is synthesised from object operations, rather than decomposed from a top-level function. The operations of objects may be defined at any stage in modelling.

The strengths of OMT are its simple yet powerful notation capabilities and its maturity. It was applied in several projects by its authors before it was published. The main weakness is the lack of techniques for integrating the object, dynamic and functional models.

3.4.3 Shlaer-Mellor

Shlaer and Mellor begin analysis by identifying the problem domains of the system. Each domain 'is a separate world inhabited by its own conceptual entities, or objects' [Ref. 26, 27]. Large domains are partitioned into subsystems. Each domain or subsystem is then separately analysed in three steps:
• information modelling;
• state modelling;
• process modelling.

The three modelling activities collectively make up the logical model required by ESA PSS-05-0.

The goal of information modelling is to identify the:
• objects in the subsystem;
• attributes of each object;
• relationships between the objects.

The information model is documented by means of diagrams and definitions of the objects, attributes and relationships.


The goal of state modelling is to identify the:
• states of each object, and the actions that are performed in them;
• events that cause objects to move from one state to another;
• sequences of states that form the life cycle of each object;
• sequences of messages communicating events that flow between objects and subsystems.

State models are documented by means of state model diagrams, showing the sequences of states, and object communication model diagrams, showing the message flows between states.
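A minimal illustrative sketch (Python; the 'Motor' object, its states and events are hypothetical, and the code stands in for diagrams that the method itself prescribes) of a state model expressed as a transition table:

    # Illustrative sketch only: a state model for a hypothetical 'Motor'
    # object, as a table mapping (state, event) to the next state.
    # Actions performed on entry to each state are omitted.

    TRANSITIONS = {
        ("off", "switch_on"): "starting",
        ("starting", "up_to_speed"): "running",
        ("running", "switch_off"): "off",
    }

    class Motor:
        def __init__(self):
            self.state = "off"

        def signal(self, event: str) -> None:
            """Move to the next state; unexpected events are errors."""
            try:
                self.state = TRANSITIONS[(self.state, event)]
            except KeyError:
                raise ValueError(f"event '{event}' not valid in state '{self.state}'")

    motor = Motor()
    motor.signal("switch_on")
    motor.signal("up_to_speed")
    print(motor.state)    # prints: running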

The goal of process modelling is to identify the:
• operations of each object required in each action;
• attributes of each object that are stored in each action.

Process models are documented by means of action data flow diagrams, showing the operations and data flows that occur in each action, and object access model diagrams, showing interobject data access. Complex processes should also be described.

The strengths of the Shlaer-Mellor method are its maturity (its authors claim to have been developing it since 1979) and the existence of techniques for integrating the information, state and process models. The main weakness of the method is its complexity.

3.4.4 Booch

Booch models an object-oriented design in terms of a logical view, which defines the classes, objects, and their relationships, and a physical view, which defines the module and process architecture [Ref. 28]. The logical view corresponds to the logical model that ESA PSS-05-0 requires software engineers to construct in the SR phase. The Booch object-oriented method has four steps:
• identify the classes and objects at a given level of abstraction;
• identify the semantics of these classes and objects;
• identify the relationships among these classes and objects;
• implement the classes and objects.


The first three steps should be completed in the SR phase. The last step is performed in the AD and DD phases. Booch asserts that the process of object-oriented design is neither top-down nor bottom-up but something he calls 'round-trip gestalt design'. The process develops a system incrementally and iteratively. Users of the Booch method are advised to bundle the SR and AD phases together into a single 'modelling phase'.

Booch provides four diagramming techniques for documenting the logical view:
• class diagrams, which are used to show the existence of classes and their relationships;
• object diagrams, which are used to show the existence of objects and their behaviour, especially with regard to message communication;
• state-transition diagrams, which show the possible states of each class, and the events that cause transitions from one state to another;
• timing diagrams, which show the sequence of the objects' operations.

Booch's books on object-oriented methods have been described by Stroustrup, the inventor of C++, as the only books worth reading on the subject. This compliment reflects the many insights into good analysis and design practice in his writings. However, Booch's notation is cumbersome and few tools are available.

3.5 FORMAL METHODS

A Formal Method should be used when it is necessary to be able to prove that certain consequences will follow specified actions. Formal Methods must have a calculus, to allow proofs to be constructed. This makes rigorous verification possible.

Formal Methods have been criticised for making specifications unintelligible to users. In the ESA PSS-05-0 life cycle, the URD provides the user view of the specification. While the primary user of the SRD is the developer, it does have to be reviewed by other people, and so explanatory notes should be added to ensure that the SRD can be understood by its readership.

Like mathematical treatises, formal specifications contain theorems stating truths about the system to be built. Verification of each theorem is done by proving it from the axioms of the specification. This can, unfortunately, lead to a large number of statements. To avoid this problem, the 'rigorous' approach can be adopted, where shorter, intuitive demonstrations of correctness are used [Ref. 11]. Only critical or problematical statements are explicitly proved.

Formal Methods should be considered for the specification of safety-critical systems, or where there are severe availability or security constraints. There are several Formal Methods available; Table 3.5 summarises the more common ones. Z, VDM and LOTOS are discussed in more detail below.

Method                             Reference   Summary
Z                                  10          Functional specification method for
                                               sequential programs.
VDM (Vienna Development Method)    11          Functional specification and development
                                               method for sequential programs.
LOTOS (Language Of Temporal        IS 8807     Formal Method with an International
Ordering Specification)                        Standard. Combination of CCS, CSP and
                                               the abstract data typing language
                                               ACT ONE. Tools are available.
CSP (Communicating Sequential      14          Design language for asynchronous
Processes)                                     parallelism with synchronous
                                               communication. Influenced by JSD and CCS.
OBJ                                15          Functional specification language and
                                               prototyping tool for sequential programs.
                                               Objects (i.e. abstract data types) are
                                               the main components of OBJ specifications.
CCS (Calculus of Communicating     16          Specification and design of concurrent
Systems)                                       behaviour of systems. Calculus for
                                               asynchronous parallelism with synchronous
                                               communication. Used in protocol and
                                               communications work. See CSP.
Petri Nets                         19          Modelling of concurrent behaviour of
                                               systems.

Table 3.5: Summary of Formal Methods

3.5.1 Z

Z is a model-oriented specification method based on set theory and first-order predicate calculus. The set theory is used to define and organise the entities the software deals with. The predicate calculus is used to define and organise the activities the entities take part in, by stating truths about their behaviour.


Z can be used in the SR phase to permit mathematical modelling of the functional behaviour of software. Z is suitable for the specification of sequential programs. Its inability to model concurrent processes makes it unsuitable for use in real-time software projects. To overcome this deficiency, Z is being combined with another Formal Method, CSP.

Z specifications can be easily adapted to the requirements of an SRD. Z specifications contain:
• an English language description of all parts of the system;
• a mathematical definition of the system's components;
• consistency theorems;
• other theorems stating important consequences.

3.5.2 VDM

The 'Vienna Development Method' (VDM) is a model-oriented specification and design method based on set theory and Precondition-Postcondition specifications. The set theory is used to define and organise the entities the software deals with. The condition specifications are used to define and organise the activities the entities take part in, by stating truths about their behaviour.

VDM uses a top-down method to develop a system model from a high-level definition, using abstract data types and descriptions of external operations, to a low-level definition in implementation-oriented terms. This 'contractual' process, whereby a given level is the implementation of the level above and the requirements for the level below, is called 'reification'. The ability to reason from specification through to implementation is a major feature of VDM.

VDM can be used in the SR phase to permit mathematical modelling of the functional behaviour of software. VDM specifications can be easily adapted to the requirements of an SRD.


VDM specifications contain a:
• description of the state of the system (a mathematical definition of the system components);
• list of data types and invariants;
• list of Precondition-Postcondition specifications.
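As an illustration of the Precondition-Postcondition style, a hypothetical square-root operation (an invented example, not taken from Ref. 11) might be specified as:

    SQRT (x: real) r: real
    pre   x >= 0
    post  r * r = x and r >= 0

The precondition states when the operation may legitimately be invoked; the postcondition states what must be true of the result, without saying how it is computed.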

VDM is generally considered to be deficient in some areas, specifically:
• it is not possible to define operations explicitly (i.e. the precondition-postcondition technique does not describe how state variables are changed);
• it is not possible to specify concurrency (making it unsuitable for real-time applications);
• the underlying structure is not modular;
• the specification language lacks abstract and generic features.

3.5.3 LOTOS

LOTOS (Language Of Temporal Ordering Specification) is a formal description technique defined by the International Organization for Standardization (ISO). It is the only analysis method that is an ISO standard. It was originally developed for use on the Open Systems Interconnection (OSI) standards, but is especially suitable for the specification of distributed processing systems.

LOTOS has two integrated components:
• a 'process algebra' component, which is based on a combination of the Calculus of Communicating Systems (CCS) [Ref. 16] and Communicating Sequential Processes (CSP) [Ref. 14];
• a 'data type' component that is based on the algebraic specification language ACT ONE.

LOTOS is both 'executable' (i.e. the specified behaviour may be simulated) and amenable to proof techniques (due to its algebraic properties).

LOTOS encourages developers to work in a structured manner (either top-down or bottom-up), and may be used in several different fashions. Various 'LOTOS specification styles' have been identified and documented, ranging from 'constraint-oriented', where constraints on, or properties of, the system are specified in isolation from any internal processing mechanisms, to 'monolithic', where the system's behaviour is captured as a tree of alternatives.

Each style has its own strengths; the 'constraint-oriented' style provides a high-level specification of the system and is a powerful way to impose separation of concerns. The 'monolithic' style may be viewed as a much lower-level system description that can be transformed into an implementation with relative ease. Two or more styles are often applied in concert, and a LOTOS specification may be refined in much the same way as an English language system requirements specification would be refined into an architectural design and then a detailed design. An obvious benefit of using LOTOS in this way is that each refinement may be verified (i.e. preservation of system properties from a high-level specification to a lower-level specification may be proven).

3.6 JACKSON SYSTEM DEVELOPMENT

Jackson System Development (JSD) analysis techniques are [Ref. 13]:
• Structure Diagrams;
• Structure Text;
• System Specification Diagrams.

Unlike structured analysis, which is generally known by its techniques, JSD is best characterised by:
• emphasis on the need to develop software that models the behaviour of the things in the real world it is concerned with;
• emphasis on the need to build in adaptability by devising a model that defines the possible system functions;
• avoidance of the top-down approach in favour of a subject-matter based approach.

The term 'model' has a specific meaning in JSD. A model is 'a realisation, in the computer, of an abstract description of the real world' [Ref. 13]. Since the real world has a time dimension, it follows that the model will be composed of one or more 'sequential' processes. Each process is identified by the entities that it is concerned with. The steps in each process describe the (sequential) events in the life cycle of the entity.

A JSD model is described in a 'System Specification' and is, in principle, directly executable. However, this will only be the case if each process can be directly mapped to a processor. For example, there may be a one-to-one relationship between a telescope and the computer that controls it, but there is a many-to-one relationship between a bank customer and the computer that processes customer transactions (many customers, one computer). JSD implementations employ a 'scheduling' process to coordinate many concurrent processes running on a single processor.

Structured analysis concentrates on devising a 'data model' (hence Data Flow Diagrams and Data Dictionaries). While this may be quite proper for information systems, it is sometimes not very helpful when building real-time systems. The JSD method attempts to correct this deficiency by concentrating on devising a 'process model' that is more suited to real-time applications. Application of JSD to information systems is likely to be more difficult than with structured methods, but should be considered where adaptability is a very high priority constraint requirement.

Table 3.6 shows what JSD development activities should take place in the SR phase.

JSD Activity
Develop Specification (Level 0)
    Specify Model of Reality (Level 1)
        Model Abstractly (Level 2)
            Write Entity-Action List (Level 3)
            Draw Entity-Structure Diagrams (Level 3)
            Define initial versions of model processes (Level 3)
    Specify System Functions (Level 1)
        Add functions to model processes
        Add timing constraints

Table 3.6: SR phase JSD development activities

3.7 RAPID PROTOTYPING

A prototype is a 'concrete executable model of selected aspects of a proposed system' [Ref. 5]. If the requirements are not clear, or suspected to be incomplete, it can be useful to develop a prototype based on tentative requirements to explore what the software requirements really are. This is called 'exploratory prototyping'. Prototypes can help define user interface requirements.

Rapid Prototyping is 'the process of quickly building and evaluating a series of prototypes' [Ref. 6]. Specification and development are iterative.

Rapid Prototyping can be incorporated within the ESA PSS-05-0 life cycle if the iteration loop is contained within a phase.

The development of an information system using 4GLs would contain the loop:

    repeat until the user signs off the SRD
        analyse requirements
        create data base
        create user interface
        add selected functions
        review execution of prototype with user

Requirements should be analysed using a recognised method, such as structured analysis, and properly documented in an SRD. Rapid Prototyping needs tool support, otherwise the prototyping may not be rapid enough to be worthwhile.

The prototype's operation should be reviewed with the user and the results used to formulate the requirements in the SRD. The review of the execution of a prototype with a user is not a substitute for the SR review.

Rapid Prototyping can also be used with an Evolutionary Development life cycle approach. A Rapid Prototyping project could consist of several short-period life cycles. Rapid Prototyping tools would be used extensively in the early life cycles, permitting the speedy development of the first prototypes. In later life cycles the product is optimised, which may require new system-specific code to be written.

Software written to support the prototyping activity should not be reused in later phases - the prototypes are 'throwaways'. To allow prototypes to be built quickly, design standards and software requirements can be relaxed. Since quality should be built into deliverable software from its inception, it is bad practice to reuse prototype modules in later phases. Such modules are likely to have interfaces inconsistent with the rest of the design. It is permissible, of course, to use ideas present in prototype code in the DD phase.


CHAPTER 4 TOOLS FOR SOFTWARE REQUIREMENTS DEFINITION

4.1 INTRODUCTION

This chapter discusses the tools for constructing a logical model and specifying the software requirements. Tools can be combined to suit the needs of a particular project.

4.2 TOOLS FOR LOGICAL MODEL CONSTRUCTION

In all but the smallest projects, CASE tools should be used during the SR phase. Like many general purpose tools, such as word processors and drawing packages, a CASE tool should provide:
• a windows, icons, menu and pointer (WIMP) style interface for the easy creation and editing of diagrams;
• a what you see is what you get (WYSIWYG) style interface that ensures that what is created on the display screen is an exact image of what will appear in the document.

Method-specific CASE tools offer the following advantages over general purpose tools:
• enforcement of the rules of the methods;
• consistency checking;
• ease of modification;
• automatic traceability of user requirements through to the software requirements;
• built-in configuration management.

Tools should be selected that have an integrated data dictionary or ‘repository' for consistency checking. Developers should check that a tool supports the method that they intend to use. Appendix E contains a more detailed list of desirable tool capabilities.

Configuration management of the model description is essential. The model should evolve from baseline to baseline during the SR phase, and the specification and enforcement of procedures for the identification, change control and status accounting of the model description are necessary. In large projects, configuration management tools should be used.

4.3 TOOLS FOR SOFTWARE REQUIREMENTS SPECIFICATION

4.3.1 Software requirements management

For large systems, a database management system (DBMS) for storing the software requirements becomes invaluable for maintaining consistency and accessibility. Desirable capabilities of a requirements DBMS are (a minimal sketch follows the list):
• insertion of new requirements;
• modification of existing requirements;
• deletion of requirements;
• storage of attributes (e.g. identifier) with the text;
• selecting by requirement attributes and text strings;
• sorting by requirement attributes and text strings;
• cross-referencing;
• change history recording;
• access control;
• display;
• printing, in a variety of formats.
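By way of illustration, a minimal requirements store covering several of these capabilities can be built on the sqlite3 module of the Python standard library (the schema and the example requirement are invented):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE requirement (
                      identifier TEXT PRIMARY KEY,  -- e.g. 'SR-FUN-010'
                      text       TEXT NOT NULL,
                      essential  INTEGER,           -- 1 = essential
                      priority   INTEGER,
                      source     TEXT               -- URD cross-reference
                  )""")
    db.execute("INSERT INTO requirement VALUES (?, ?, ?, ?, ?)",
               ("SR-FUN-010", "The software shall compute the orbit.",
                1, 1, "UR-010"))

    # Selection and sorting by requirement attributes.
    for row in db.execute("SELECT identifier, text FROM requirement "
                          "WHERE essential = 1 ORDER BY priority"):
        print(row)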

4.3.2 Document production

A word processor or text processor should be used for producing a document. Tools for the creation of paragraphs, sections, headers, footers, tables of contents and indexes all facilitate the production of a document. A spell checker is desirable. An outliner may be found useful for creation of subheadings, for viewing the document at different levels of detail and for rearranging the document. The ability to handle diagrams is very important.

Documents invariably go through many drafts as they are created, reviewed and modified. Revised drafts should include change bars. Document comparison programs, which can mark changed text automatically, are invaluable for easing the review process.
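For example, the difflib module of the Python standard library can mark the changed text between two drafts automatically (the draft lines below are invented for illustration):

    import difflib

    old = ["The software shall log all events.",
           "Logs shall be archived daily."]
    new = ["The software shall log all operator events.",
           "Logs shall be archived daily."]

    # ndiff prefixes removed lines with '-', added lines with '+',
    # and uses '?' guide lines to mark changes within a line.
    for line in difflib.ndiff(old, new):
        print(line)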

Tools for communal preparation of documents are now beginning to be available, allowing many authors to comment on and add to a single document in a controlled manner.


CHAPTER 5 THE SOFTWARE REQUIREMENTS DOCUMENT

5.1 INTRODUCTION

The purpose of an SRD is to be an authoritative statement of ‘what' the software is to do. An SRD must be complete (SR11) and cover all the requirements stated in the URD (SR12).

The SRD should be detailed enough to allow implementation of the software without user involvement. The size and content of the SRD should, however, reflect the size and complexity of the software product. The SRD does not need to cover any implementation aspects, and, provided that it is complete, the smaller the SRD, the more readable and reviewable it is.

The SRD is a mandatory output (SR10) of the SR phase and has a definite role to play in the ESA PSS-05-0 documentation scheme. SRD authors should not go beyond the bounds of that role. This means that:
• the SRD must not include implementation details or terminology, unless it has to be present as a constraint (SR15);
• descriptions of functions must say what the software is to do, and must avoid saying how it is to be done (SR16);
• the SRD must avoid specifying the hardware or equipment, unless it is a constraint placed by the user (SR17).

5.2 STYLE

The SRD should be systematic, rigorous, clear, consistent and modifiable. Wherever possible, software requirements should be stated in quantitative terms to increase their verifiability.

5.2.1 Clarity

An SRD is ‘clear' if each requirement is unambiguous and its meaning is clear to all readers.


If a requirements specification language is used, explanatory text, written in natural language, should be included in the SRD to make it understandable to those not familiar with the specification language.

5.2.2 Consistency

The SRD must be consistent (SR14). There are several types of inconsistency:
• different terms used for the same thing;
• the same term used for different things;
• incompatible activities happening at the same time;
• activities happening in the wrong order.

Where a term could have multiple meanings, a single meaning for the term should be defined in a glossary, and only that meaning should be used throughout.

An SRD is consistent if no set of individual requirements conflict. Methods and tools help consistency to be achieved.
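The glossary rule above can also be checked mechanically. The sketch below (the synonym table and requirement text are invented for illustration) flags uses of deprecated synonyms in place of the preferred glossary term:

    # One preferred term per concept; deprecated synonyms map onto it.
    synonyms = {"operator console": "workstation",
                "ground station": "ground segment"}

    requirements = {"SR-OPS-020": "The operator console shall display alarms."}

    for identifier, text in requirements.items():
        for deprecated, preferred in synonyms.items():
            if deprecated in text.lower():
                print(f"{identifier}: use '{preferred}', not '{deprecated}'")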

5.2.3 Modifiability

Modifiability enables changes to be made easily, completely and consistently.

When requirements duplicate or overlap one another, cross-references should be included to preserve modifiability.

5.3 EVOLUTION

The SRD should be put under formal change control by the developer as soon as it is approved. New requirements may need to be added and old requirements may have to be modified or deleted. If the SRD is being developed by a team of people, control of the document may need to start at the beginning of the SR phase.

The Software Configuration Management Plan for the SR phase should have defined a formal change process to identify, control, track and report projected changes as soon as they are initially identified. Approved changes in requirements must be recorded in the SRD by inserting document change records and a document status sheet at the start of the SRD.

5.4 RESPONSIBILITY

Whoever actually writes the SRD, the responsibility for it lies with the developer. The developer should nominate people with proven analytical skills to write the SRD. Members of the design and implementation team may take part in the SR/R, as they can advise on the technical feasibility of requirements.

5.5 MEDIUM

It is usually assumed that the SRD is a paper document. It may be distributed electronically to participants who have access to the necessary equipment.

5.6 CONTENT

The SRD must be compiled according to the table of contents provided in Appendix C of ESA PSS-05-0 (SR18). This table of contents is derived from ANSI/IEEE Std 830-1984, ‘Software Requirements Specifications' [Ref. 3]. The description of the model is the only significant addition.

Section 1 should briefly describe the purpose of both the SRD and the product. Section 2 should provide a general description of the project and the product. Section 3 should contain the definitive material about what is required. Appendix A should contain a glossary of terms. Large SRDs (forty pages or more) should also contain an index.

References should be given where appropriate. An SRD should not refer to documents that follow it in the ESA PSS-05-0 life cycle. An SRD should contain no TBCs or TBDs by the time of the Software Requirements Review.

Service Information:
a - Abstract
b - Table of Contents
c - Document Status Sheet
d - Document Change Records made since last issue


1 INTRODUCTION
  1.1 Purpose
  1.2 Scope
  1.3 Definitions, acronyms and abbreviations
  1.4 References
  1.5 Overview

2 GENERAL DESCRIPTION
  2.1 Relation to current projects
  2.2 Relation to predecessor and successor projects
  2.3 Function and purpose
  2.4 Environmental considerations
  2.5 Relation to other systems
  2.6 General constraints
  2.7 Model description

3 SPECIFIC REQUIREMENTS
  (The subsections may be regrouped around high-level functions)
  3.1 Functional requirements
  3.2 Performance requirements
  3.3 Interface requirements
  3.4 Operational requirements
  3.5 Resource requirements
  3.6 Verification requirements
  3.7 Acceptance testing requirements
  3.8 Documentation requirements
  3.9 Security requirements
  3.10 Portability requirements
  3.11 Quality requirements
  3.12 Reliability requirements
  3.13 Maintainability requirements
  3.14 Safety requirements

4 REQUIREMENTS TRACEABILITY MATRIX

Relevant material unsuitable for inclusion in the above contents list should be inserted in additional appendices. If there is no material for a section then the phrase ‘Not Applicable' should be inserted and the section numbering preserved.


5.6.1 SRD/1 Introduction

5.6.1.1 SRD/1.1 Purpose (of the document)

This section should:

(1) define the purpose of the particular SRD;

(2) specify the intended readership of the SRD.

5.6.1.2 SRD/1.2 Scope (of the software)

This section should:

(1) identify the software product(s) to be produced by name;

(2) explain what the proposed software will do (and will not do, if necessary);

(3) describe the relevant benefits, objectives and goals as precisely as possible;

(4) be consistent with similar statements in higher-level specifications, if they exist.

5.6.1.3 SRD/1.3 Definitions, acronyms and abbreviations

This section should provide definitions of all terms, acronyms and abbreviations needed for the SRD, or refer to other documents where the definitions can be found.

5.6.1.4 SRD/1.4 References

This section should provide a complete list of all the applicable and reference documents. Each document should be identified by its title, author and date. Each document should be marked as applicable or reference. If appropriate, report number, journal name and publishing organisation should be included.


5.6.1.5 SRD/1.5 Overview (of the document)

This section should:

(1) describe what the rest of the SRD contains;

(2) explain how the SRD is organised.

5.6.2 SRD/2 General Description

This chapter should describe the general factors that affect the product and its requirements. It does not state specific requirements; it only makes those requirements easier to understand.

5.6.2.1 SRD/2.1 Relationship to current projects

This section should describe the context of the project in relation to current projects, and identify any other relevant projects the developer is carrying out for the initiator. The project may be independent of other projects or part of a larger project.

Any parent projects should be identified. A detailed description of the interface of the product to the larger system should, however, be deferred until Section 2.5.

5.6.2.2 SRD/2.2 Relationship to predecessor and successor projects

This section should describe the context of the project in relation to past and future projects. It should identify projects that the developer has carried out for the initiator in the past, and any projects the developer may be expected to carry out in the future, if known.

If the product is to replace an existing product, the reasons for replacing that system should be summarised in this section.

5.6.2.3 SRD/2.3 Function and purpose

This section should discuss the purpose of the product. It should expand upon the points made in Section 1.2.


5.6.2.4 SRD/2.4 Environmental considerations

This section should summarise:
• the physical environment of the target system, i.e. where is the system going to be used and by whom? The URD section called ‘User Characteristics' may provide useful material on this topic;
• the hardware environment in the target system, i.e. what computer(s) does the software have to run on?
• the operating environment in the target system, i.e. what operating systems are used?
• the hardware environment in the development system, i.e. what computer(s) does the software have to be developed on?
• the operating environment in the development system, i.e. what software development environment is to be used, and what software tools are to be used?

5.6.2.5 SRD/2.5 Relation to other systems

This section should describe in detail the product's relationship to other systems, for example is the product:
• an independent system?
• a subsystem of a larger system?
• replacing another system?

If the product is a subsystem then this section should:
• summarise the essential characteristics of the larger system;
• identify the other subsystems this subsystem will interface with;
• summarise the computer hardware and peripheral equipment to be used.

A block diagram may be presented showing the major components of the larger system or project, interconnections, and external interfaces.

The URD contains a section ‘Product Perspective' that may provide relevant material for this section.


5.6.2.6 SRD/2.6 General constraints

This section should describe any items that will limit the developer's options for building the software. It should provide background information and seek to justify the constraints. The URD contains a section called ‘General Constraints' that may provide relevant material for this section.

5.6.2.7 SRD/2.7 Model description

This section should include a top-down description of the logical model. Diagrams, tables and explanatory text may be included.

The functionality at each level should be described, to enable the reader to ‘walkthrough' the model level-by-level, function-by-function, flow-by-flow. A bare-bones description of a system in terms of data flow diagrams and low-level functional specifications needs supplementary commentary. Natural language is recommended.

5.6.3 SRD/3 Specific Requirements

The software requirements are detailed in this section. Each requirement must include an identifier (SR04). Essential requirements must be marked as such (SR05). For incremental delivery, each software requirement must include a measure of priority so that the developer can decide the production schedule (SR06). References that trace the software requirements back to the URD must accompany each software requirement (SR07). Any other sources should be stated. Each software requirement must be verifiable (SR08).

The functional requirements should be structured top-down in this section. Non-functional requirements can appear at all levels of the hierarchy of functions and, by the inheritance principle, apply to all the functional requirements below them. Non-functional requirements may be attached to functional requirements by cross-references or by physically grouping them together in the document.

If a non-functional requirement appears at a lower level, it supersedes any requirement of that type that appears at a higher level. Critical functions, for example, may have more stringent reliability and safety requirements than those of non-critical functions.
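This supersession rule can be applied mechanically. In the sketch below (function names, hierarchy and values are invented for illustration), the effective requirement of a given type for a function is the nearest one at or above it in the hierarchy:

    # Function hierarchy: child -> parent (None marks the root).
    parent = {"root": None, "telemetry": "root", "critical_control": "root"}

    reliability = {"root": "MTBF > 1000 h",               # inherited default
                   "critical_control": "MTBF > 10000 h"}  # supersedes root value

    def effective(function, requirements, hierarchy):
        while function is not None:
            if function in requirements:
                return requirements[function]
            function = hierarchy[function]
        return None

    print(effective("critical_control", reliability, parent))  # stricter value
    print(effective("telemetry", reliability, parent))         # inherited value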

Specific requirements may be written in natural language. This makes them understandable to non-specialists, but permits inconsistencies, ambiguity and imprecision to arise. These undesirable properties can be avoided by using requirements specification languages. Such languages range in rigour from Structured English to Z, VDM and LOTOS.

Each software requirement must have a unique identifier (SR04). Forward traceability to subsequent phases in the life cycle depends upon each requirement having a unique identifier.

Essential software requirements have to be met for the software to be acceptable. If a software requirement is essential, it must be clearly flagged (SR05). Non-essential software requirements should be marked with a measure of desirability (e.g. a scale of 1, 2, 3).

The priority of a requirement measures the order, or the timing, of the related functionality becoming available. If the transfer is to be phased, so that some parts come into operation before others, each requirement must be marked with a measure of priority (SR06).

Unstable requirements should be flagged. These requirements may be dependent on feedback from the UR, SR and AD phases. The usual method for flagging unstable requirements is to attach the marker ‘TBC'.

The source of each software requirement must be stated (SR07), using the identifier of a user requirement, a document cross-reference, or even the name of a person or group. Backwards traceability depends upon each requirement explicitly referencing its source in the URD or elsewhere.

Each software requirement must be verifiable (SR08). Clarity increases verifiability. Clarity is enhanced by ensuring that each software requirement is well separated from the others. A software requirement is verifiable if some method can be devised for objectively demonstrating that the software implements it correctly.
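Taken together, practices SR04 to SR08 give each requirement a set of attributes that can be screened mechanically before the SR review. The following sketch is an invention for illustration, not a prescribed record format; it flags duplicate identifiers, missing backward traces and leftover TBC/TBD markers:

    from dataclasses import dataclass

    @dataclass
    class SoftwareRequirement:
        identifier: str        # unique identifier (SR04)
        text: str
        essential: bool        # flagged if essential (SR05)
        priority: int          # ordering for phased transfer (SR06)
        sources: list          # URD identifiers or other sources (SR07)

    def review_checks(requirements):
        seen = set()
        for r in requirements:
            if r.identifier in seen:
                print(f"duplicate identifier: {r.identifier}")
            seen.add(r.identifier)
            if not r.sources:
                print(f"{r.identifier}: no backward trace to the URD")
            if "TBC" in r.text or "TBD" in r.text:
                print(f"{r.identifier}: unstable requirement still flagged")

    review_checks([SoftwareRequirement(
        "SR-FUN-010", "Compute the orbit to an accuracy of TBD m.",
        True, 1, ["UR-010"])])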

5.6.4 SRD/Appendix A Requirements Traceability Matrix

This section should contain a table summarising how each user requirement is met in the SRD (SR13). See Appendix D.


CHAPTER 6 LIFE CYCLE MANAGEMENT ACTIVITIES

6.1 INTRODUCTION

SR phase activities must be carried out according to the plans defined in the UR phase (SR01). These are:
• Software Project Management Plan for the SR phase (SPMP/SR);
• Software Configuration Management Plan for the SR phase (SCMP/SR);
• Software Verification and Validation Plan for the SR phase (SVVP/SR);
• Software Quality Assurance Plan for the SR phase (SQAP/SR).

Progress against plans should be continuously monitored by project management and documented at regular intervals in progress reports.

Plans for AD phase activities must be drawn up in the SR phase. These plans cover project management, configuration management, verification and validation, quality assurance and system tests.

6.2 PROJECT MANAGEMENT PLAN FOR THE AD PHASE

During the SR phase, the AD phase section of the SPMP (SPMP/AD) must be produced (SPM05). The SPMP/AD describes, in detail, the project activities to be carried out in the AD phase.

An estimate of the total project cost must be included in the SPMP/AD (SPM06). Every effort should be made to arrive at a total project cost estimate with an accuracy better than 30%. In addition, a precise estimate of the effort involved in the AD phase must be included in the SPMP/AD (SPM07).

Technical knowledge and experience gained on similar projects should be used to produce the cost estimate. Specific factors affecting estimates for the work required in the AD phase are the:
• number of software requirements;
• level of software requirements;
• complexity of the software requirements;
• stability of software requirements;
• level of definition of external interfaces;
• quality of the SRD.

The function-point analysis method may be of use in the SR phase for costing data-processing systems [Ref. 23].
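As an indication of how the method works, the sketch below computes an unadjusted function-point total as a weighted sum of counted system features; the weights are the published average-complexity values of Albrecht's method [Ref. 23], and the counts are invented:

    # Average complexity weights from Albrecht-style function-point analysis.
    weights = {"external inputs": 4, "external outputs": 5,
               "inquiries": 4, "internal files": 10, "external interfaces": 7}

    counts = {"external inputs": 12, "external outputs": 8,
              "inquiries": 5, "internal files": 3, "external interfaces": 2}

    unadjusted = sum(weights[k] * counts[k] for k in weights)
    print(unadjusted)   # 48 + 40 + 20 + 30 + 14 = 152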

If an evolutionary software development or incremental delivery life cycle is to be used, the SPMP/AD should say so.

Guidance on writing the SPMP/AD is provided in ESA PSS-05-08, Guide to Software Project Management Planning.

6.3 CONFIGURATION MANAGEMENT PLAN FOR THE AD PHASE

During the SR phase, the AD phase section of the SCMP (SCMP/AD) must be produced (SCM44). The SCMP/AD must cover the configuration management procedures for documentation, and any CASE tool outputs or prototype code, to be produced in the AD phase (SCM45).

Guidance on writing the SCMP/AD is provided in ESA PSS-05-09, Guide to Software Configuration Management Planning.

6.4 VERIFICATION AND VALIDATION PLAN FOR THE AD PHASE

During the SR phase, the AD phase section of the SVVP (SVVP/AD) must be produced (SVV12). The SVVP/AD must define how to trace software requirements to components, so that each software component can be justified (SVV13). It should describe how the ADD is to be evaluated by defining the review procedures. It may include specifications of the tests to be performed with prototypes.

During the SR phase, the developer analyses the user requirements and may insert ‘acceptance-testing requirements' in the SRD. These requirements constrain the design of the acceptance tests, and this must be recognised when the acceptance tests are designed.

The planning of the system tests should proceed in parallel with the definition of the software requirements. The developer may identify ‘verification requirements' for the software: additional constraints on the verification activities. These requirements are also stated in the SRD.


Guidance on writing the SVVP/AD is provided in ESA PSS-05-10, Guide to Software Verification and Validation.

6.5 QUALITY ASSURANCE PLAN FOR THE AD PHASE

During the SR phase, the AD phase section of the SQAP (SQAP/AD) must be produced (SQA06). The SQAP/AD must describe, in detail, the quality assurance activities to be carried out in the AD phase (SQA07).

SQA activities include monitoring the following:
• management;
• documentation;
• standards, practices, conventions and metrics;
• reviews and audits;
• testing activities;
• problem reporting and corrective action;
• tools, techniques and methods;
• code and media control;
• supplier control;
• records collection, maintenance and retention;
• training;
• risk management.

Guidance on writing the SQAP/AD is provided in ESA PSS-05-11, Guide to Software Quality Assurance Planning.

The SQAP/AD should take account of all the software requirements related to quality, in particular:
• quality requirements;
• reliability requirements;
• maintainability requirements;
• safety requirements;
• standards requirements;
• verification requirements;
• acceptance-testing requirements.


The level of monitoring planned for the AD phase should be appropriate to the requirements and the criticality of the software. Risk analysis should be used to target areas for detailed scrutiny.

6.6 SYSTEM TEST PLANS

The developer must plan the system tests in the SR phase and document the plan in the SVVP (SVV14). This plan should define the scope, approach, resources and schedule of system testing activities.

Specific tests for each software requirement are not formulated until the DD phase. The System Test Plan should deal with the general issues, for example:
• where will the system tests be done?
• who will attend?
• who will carry them out?
• are tests needed for all software requirements?
• must any special test equipment be used?
• how long is the system testing programme expected to last?
• are simulations necessary?

Guidance on writing the SVVP/ST is provided in ESA PSS-05-10, Guide to Software Verification and Validation.


APPENDIX A GLOSSARY

A.1 LIST OF ACRONYMS

AD      Architectural Design
ANSI    American National Standards Institute
BSSC    Board for Software Standardisation and Control
CASE    Computer Aided Software Engineering
CCS     Calculus of Communicating Systems
CSP     Communicating Sequential Processes
DBMS    Database Management System
ESA     European Space Agency
IEEE    Institute of Electrical and Electronics Engineers
ISO     International Standards Organisation
ICD     Interface Control Document
LOTOS   Language Of Temporal Ordering Specification
OOA     Object-Oriented Analysis
OSI     Open Systems Interconnection
PA      Product Assurance
PSS     Procedures, Specifications and Standards
QA      Quality Assurance
RID     Review Item Discrepancy
SADT    Structured Analysis and Design Technique
SCM     Software Configuration Management
SCMP    Software Configuration Management Plan
SPM     Software Project Management
SPMP    Software Project Management Plan
SQA     Software Quality Assurance
SQAP    Software Quality Assurance Plan
SR      Software Requirements
SRD     Software Requirements Document
SR/R    Software Requirements Review
SSADM   Structured Systems Analysis and Design Methodology
ST      System Test
SUM     Software User Manual
SVVP    Software Verification and Validation Plan
TBC     To Be Confirmed
TBD     To Be Defined
UR      User Requirements
URD     User Requirements Document
VDM     Vienna Development Method


APPENDIX B REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0 Issue 2, February 1991.

2. IEEE Standard Glossary of Software Engineering Terminology, ANSI/IEEE Std 610.12-1990.

3. IEEE Guide to Software Requirements Specifications, ANSI/IEEE Std 830-1984.

4. SSADM Version 4, NCC Blackwell Publications, 1991.

5. Software Evolution Through Rapid Prototyping, Luqi, in COMPUTER, May 1989.

6. Structured Rapid Prototyping, J. Connell and L. Shafer, Yourdon Press, 1989.

7. Structured Analysis and System Specification, T. DeMarco, Yourdon Press, 1978.

8. IEEE Standard for Software Reviews and Audits, IEEE Std 1028-1988.

9. Structured Systems Analysis and Design Methodology, G. Cutts, Paradigm, 1987.

10. The Z Notation - a reference manual, J.M. Spivey, Prentice-Hall, 1989.

11. Systematic Software Development Using VDM, C.B. Jones, Prentice-Hall, 1986.

12. Structured Development for Real-Time Systems, P.T. Ward and S.J. Mellor, Yourdon Press, 1985 (three volumes).

13. System Development, M. Jackson, Prentice-Hall, 1983.

14. Communicating Sequential Processes, C.A.R. Hoare, Prentice-Hall International, 1985.

15. Programming with parameterised abstract objects in OBJ, J.A. Goguen, J. Meseguer and D. Plaisted, in Theory and Practice of Software Technology, North-Holland, 1982.

16. A Calculus of Communicating Systems, R. Milner, Lecture Notes in Computer Science No 92, Springer-Verlag, 1980.

17. Structured Analysis for Requirements Definition, D.T. Ross and K.E. Schoman, IEEE Transactions on Software Engineering, Vol SE-3, No 1, January 1977.

18. Object-Oriented Analysis, P. Coad and E. Yourdon, Second Edition, Yourdon Press, 1991.

19. Petri Nets, J.L. Peterson, ACM Computing Surveys, 9(3), September 1977.

20. Structured Analysis (SA): A Language for Communicating Ideas, D.T. Ross, IEEE Transactions on Software Engineering, Vol SE-3, No 1, January 1977.

21. Principles of Software Engineering Management, T. Gilb, Addison-Wesley, 1988.

22. The STARTS Guide - a guide to methods and software tools for the construction of large real-time systems, NCC Publications, 1987.

23. Software function, source lines of code, and development effort prediction: a software science validation, A.J. Albrecht and J.E. Gaffney, IEEE Transactions on Software Engineering, Vol SE-9, No 6, November 1983.

24. Software Reliability Assurance for ESA space systems, ESA PSS-01-230 Issue 1 Draft 8, October 1991.

25. Object-Oriented Modeling and Design, J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy and W. Lorensen, Prentice-Hall, 1991.

26. Object-Oriented Systems Analysis - Modeling the World in Data, S. Shlaer and S.J. Mellor, Yourdon Press, 1988.

27. Object Lifecycles - Modeling the World in States, S. Shlaer and S.J. Mellor, Yourdon Press, 1992.

28. Object-Oriented Design with Applications, G. Booch, Benjamin/Cummings, 1991.


APPENDIX C MANDATORY PRACTICES

This appendix is repeated from ESA PSS-05-0, Appendix D.3.

SR01 SR phase activities shall be carried out according to the plans defined in the UR phase.
SR02 The developer shall construct an implementation-independent model of what is needed by the user.
SR03 A recognised method for software requirements analysis shall be adopted and applied consistently in the SR phase.
SR04 Each software requirement shall include an identifier.
SR05 Essential software requirements shall be marked as such.
SR06 For incremental delivery, each software requirement shall include a measure of priority so that the developer can decide the production schedule.
SR07 References that trace software requirements back to the URD shall accompany each software requirement.
SR08 Each software requirement shall be verifiable.
SR09 The outputs of the SR phase shall be formally reviewed during the Software Requirements Review.
SR10 An output of the SR phase shall be the Software Requirements Document (SRD).
SR11 The SRD shall be complete.
SR12 The SRD shall cover all the requirements stated in the URD.
SR13 A table showing how user requirements correspond to software requirements shall be placed in the SRD.
SR14 The SRD shall be consistent.
SR15 The SRD shall not include implementation details or terminology, unless it has to be present as a constraint.
SR16 Descriptions of functions ... shall say what the software is to do, and must avoid saying how it is to be done.
SR17 The SRD shall avoid specifying the hardware or equipment, unless it is a constraint placed by the user.
SR18 The SRD shall be compiled according to the table of contents provided in Appendix C.


APPENDIX D REQUIREMENTS TRACEABILITY MATRIX

REQUIREMENTS TRACEABILITY MATRIX: SRD TRACED TO URD

PROJECT: <TITLE OF PROJECT>
DATE: <YY-MM-DD>
PAGE 1 OF <nn>

URD IDENTIFIER    SRD IDENTIFIER    SUMMARY OF USER REQUIREMENT


APPENDIX E CASE TOOL SELECTION CRITERIA

This appendix lists selection criteria for the evaluation of CASE tools for building a logical model.

The tool should:

1. enforce the rules of each method it supports;

2. allow the user to construct diagrams according to the conventions of the selected method;

3. support consistency checking (e.g. balancing a DFD set; see the sketch after this list);

4. store the model description;

5. be able to store multiple model descriptions;

6. support top-down decomposition (e.g. by allowing the user to create and edit lower-level DFDs by ‘exploding' processes in a higher-level diagram);

7. minimise line-crossing in a diagram;

8. minimise the number of keystrokes required to add a symbol;

9. allow the user to ‘tune' a method by adding and deleting rules;

10. support concurrent access by multiple users;

11. permit controlled access to the model database (usually called a repository);

12. permit reuse of all or part of existing model descriptions, to allow the bottom-up integration of models (e.g. rather than decompose a high-level function such as ‘Calculate Orbit', it should be possible to import a specification of this function from another model);

13. support document generation according to user-defined templates and formats;

14. support traceability of user requirements to software requirements (and software requirements through to components and code);


15. support conversion of a diagram from one style to another (e.g. Yourdon to SADT and vice-versa);

16. allow the user to execute the model (to verify real-time behaviour by animation and simulation);

17. support all configuration management functions (e.g. identification of items, control of changes to them, storage of baselines);

18. keep the users informed of changes when part of a design is concurrently accessed;

19. support consistency checking in any of three checking modes, selectable by the user:
    • interpretative, i.e. check each change as it is made, and reject any illegal changes;
    • compiler, i.e. accept all changes and check them all at once at the end of the session;
    • monitor, i.e. check each change as it is made, and issue warnings about illegal changes;

20. link the checking mode with the access mode when there are multiple users, so that local changes can be done in any checking mode, but changes that have non-local effects are checked immediately and rejected if they are in error;

21. support scoping and overloading of data item names;

22. provide context-dependent help facilities;

23. make effective use of colour to allow parts of a display to be easily distinguished;

24. have a consistent user interface;

25. permit direct hardcopy output;

26. permit data to be imported from other CASE tools;

27. permit the exporting of data in standard graphics file formats (e.g. CGM);

28. provide standard word processing functions;


29. permit the exporting of data to external word-processing applications and editors;

30. describe the format of the tool database (i.e. repository) in the user documentation, so that users can manipulate the database using external software and packages (e.g. tables of ASCII data).
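To illustrate the kind of consistency checking meant by criterion 3, the sketch below balances a DFD set: the flows attached to a parent process must equal the net external flows of the child diagram that explodes it. The diagram data are invented for the example; a real tool would read them from its repository:

    # Flows attached to the parent process on the higher-level diagram.
    parent_flows = {"orbit request", "orbit estimate"}

    # (flow, process) pairs on the child diagram, plus purely internal flows.
    child_flows = {("orbit request", "validate request"),
                   ("valid request", "propagate orbit"),
                   ("orbit estimate", "propagate orbit")}
    internal = {"valid request"}

    external = {flow for flow, _ in child_flows} - internal
    if external == parent_flows:
        print("DFD set balances")
    else:
        print("unbalanced flows:", external ^ parent_flows)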



ESA PSS-05-04 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to the software architectural design phase

Prepared by:
ESA Board for Software Standardisation and Control (BSSC)

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: ESA PSS-05-04 Guide to the software architectural design phase

2. ISSUE   3. REVISION   4. DATE   5. REASON FOR CHANGE

   1          0             1992      First issue
   1          1             1995      Minor updates for publication

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved, March 31st 1992
Telematics Supervisory Board

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division,
ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E1
ISSN 0379-4059

Copyright © 1995 by European Space Agency


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION
  1.1 PURPOSE
  1.2 OVERVIEW

CHAPTER 2 THE ARCHITECTURAL DESIGN PHASE
  2.1 INTRODUCTION
  2.2 EXAMINATION OF THE SRD
  2.3 CONSTRUCTION OF THE PHYSICAL MODEL
    2.3.1 Decomposition of the software into components
    2.3.2 Implementation of non-functional requirements
      2.3.2.1 Performance requirements
      2.3.2.2 Interface requirements
      2.3.2.3 Operational requirements
      2.3.2.4 Resource requirements
      2.3.2.5 Verification requirements
      2.3.2.6 Acceptance-testing requirements
      2.3.2.7 Documentation requirements
      2.3.2.8 Security requirements
      2.3.2.9 Portability requirements
      2.3.2.10 Quality requirements
      2.3.2.11 Reliability requirements
      2.3.2.12 Maintainability requirements
      2.3.2.13 Safety requirements
    2.3.3 Design quality
      2.3.3.1 Design metrics
        2.3.3.1.1 Cohesion
        2.3.3.1.2 Coupling
        2.3.3.1.3 System shape
        2.3.3.1.4 Data structure match
        2.3.3.1.5 Fan-in
        2.3.3.1.6 Fan-out
        2.3.3.1.7 Factoring
      2.3.3.2 Complexity metrics
      2.3.3.3 Abstraction
      2.3.3.4 Information hiding
      2.3.3.5 Modularity
    2.3.4 Prototyping
    2.3.5 Choosing a design approach
  2.4 SPECIFICATION OF THE ARCHITECTURAL DESIGN
    2.4.1 Assigning functions to components
    2.4.2 Definition of data structures
    2.4.3 Definition of control flow
      2.4.3.1 Sequential and parallel control flow
      2.4.3.2 Synchronous and asynchronous control flow
    2.4.4 Definition of the computer resource utilisation
    2.4.5 Selection of the programming language
  2.5 INTEGRATION TEST PLANNING
  2.6 PLANNING THE DETAILED DESIGN PHASE
  2.7 THE ARCHITECTURAL DESIGN PHASE REVIEW

CHAPTER 3 METHODS FOR ARCHITECTURAL DESIGN
  3.1 INTRODUCTION
  3.2 STRUCTURED DESIGN
    3.2.1 Yourdon methods
    3.2.2 SADT
    3.2.3 SSADM
  3.3 OBJECT-ORIENTED DESIGN
    3.3.1 Booch
    3.3.2 HOOD
    3.3.3 Coad and Yourdon
    3.3.4 OMT
    3.3.5 Shlaer-Mellor
  3.4 JACKSON SYSTEM DEVELOPMENT
  3.5 FORMAL METHODS

CHAPTER 4 TOOLS FOR ARCHITECTURAL DESIGN
  4.1 INTRODUCTION
  4.2 TOOLS FOR PHYSICAL MODEL CONSTRUCTION
  4.3 TOOLS FOR ARCHITECTURAL DESIGN SPECIFICATION

CHAPTER 5 THE ARCHITECTURAL DESIGN DOCUMENT
  5.1 INTRODUCTION
  5.2 STYLE
    5.2.1 Clarity
    5.2.2 Consistency
    5.2.3 Modifiability
  5.3 EVOLUTION
  5.4 RESPONSIBILITY
  5.5 MEDIUM
  5.6 CONTENT

CHAPTER 6 LIFE CYCLE MANAGEMENT ACTIVITIES
  6.1 INTRODUCTION
  6.2 PROJECT MANAGEMENT PLAN FOR THE DD PHASE
  6.3 CONFIGURATION MANAGEMENT PLAN FOR THE DD PHASE
  6.4 VERIFICATION AND VALIDATION PLAN FOR THE DD PHASE
  6.5 QUALITY ASSURANCE PLAN FOR THE DD PHASE
  6.6 INTEGRATION TEST PLANS

APPENDIX A GLOSSARY
APPENDIX B REFERENCES
APPENDIX C MANDATORY PRACTICES
APPENDIX D REQUIREMENTS TRACEABILITY MATRIX
APPENDIX E CASE TOOL SELECTION CRITERIA
APPENDIX F INDEX


PREFACE

This document is one of a series of guides to software engineering produced by the Board for Software Standardisation and Control (BSSC), of the European Space Agency. The guides contain advisory material for software developers conforming to ESA's Software Engineering Standards, ESA PSS-05-0. They have been compiled from discussions with software engineers, research of the software engineering literature, and experience gained from the application of the Software Engineering Standards in projects.

Levels one and two of the document tree at the time of writing are shown in Figure 1. This guide, identified by the shaded box, provides guidance about implementing the mandatory requirements for the software architectural design described in the top-level document ESA PSS-05-0.

Figure 1: ESA PSS-05-0 document tree
(Level 1: ESA Software Engineering Standards, ESA PSS-05-0. Level 2 guides: PSS-05-01 Guide to the Software Engineering Standards, PSS-05-02 UR Guide, PSS-05-03 SR Guide, PSS-05-04 AD Guide (this guide), PSS-05-05 DD Guide, PSS-05-06 TR Guide, PSS-05-07 OM Guide, PSS-05-08 SPM Guide, PSS-05-09 SCM Guide, PSS-05-10 SVV Guide, PSS-05-11 SQA Guide.)

The Guide to the Software Engineering Standards, ESA PSS-05-01, contains further information about the document tree. The interested reader should consult this guide for current information about the ESA PSS-05-0 standards and guides.

The following past and present BSSC members have contributed to the production of this guide: Carlo Mazza (chairman), Bryan Melton, Daniel de Pablo, Adriaan Scheffer and Richard Stevens.


The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and Guides, and all those software engineers in ESA and Industry who have made contributions.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat              BSSC/ESTEC Secretariat
Attention of Mr C Mazza            Attention of Mr B Melton
ESOC                               ESTEC
Robert Bosch Strasse 5             Postbus 299
D-64293 Darmstadt                  NL-2200 AG Noordwijk
Germany                            The Netherlands


CHAPTER 1 INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA), either in house or by industry [Ref. 1].

ESA PSS-05-0 defines the first phase of the software development life cycle as the ‘Software Requirements Definition Phase' (SR phase). The output of this phase is the Software Requirements Document (SRD). The second phase of the life cycle is the ‘Architectural Design Phase' (AD phase). Activities and products are examined in the ‘AD review' (AD/R) at the end of the phase.

The AD phase can be called the ‘solution phase' of the life cycle because it defines the software in terms of the major software components and interfaces. The ‘Architectural Design' must cover all the requirements in the SRD.

This document provides guidance on how to produce the architectural design. It should be read by all active participants in the AD phase, e.g. initiators, user representatives, analysts, designers, project managers and product assurance personnel.

1.2 OVERVIEW

Chapter 2 discusses the AD phase. Chapters 3 and 4 discuss methods and tools for architectural design definition. Chapter 5 describes how to write the ADD, starting from the template. Chapter 6 summarises the life cycle management activities, which are discussed at greater length in other guides.

All the mandatory practices in ESA PSS-05-0 relevant to the AD phase are repeated in this document. The identifier of the practice is added in parentheses to mark a repetition. No new mandatory practices are defined.


CHAPTER 2 THE ARCHITECTURAL DESIGN PHASE

2.1 INTRODUCTION

In the AD phase, the software requirements are transformed into definitions of software components and their interfaces, to establish the framework of the software. This is done by examining the SRD and building a ‘physical model' using recognised software engineering methods. The physical model should describe the solution in concrete, implementation terms. Just as the logical model produced in the SR phase structures the problem and makes it manageable, the physical model does the same for the solution.

The physical model is used to produce a structured set of component specifications that are consistent, coherent and complete. Each specification defines the functions, inputs and outputs of the component (see Section 2.4).

The major software components are documented in the Architectural Design Document (ADD). The ADD gives the developer's solution to the problem stated in the SRD. The ADD must cover all the requirements stated in the SRD (AD20), but avoid the detailed consideration of software requirements that do not affect the structure.

The main outputs of the AD phase are the:

• Architectural Design Document (ADD);
• Software Project Management Plan for the DD phase (SPMP/DD);
• Software Configuration Management Plan for the DD phase (SCMP/DD);
• Software Verification and Validation Plan for the DD phase (SVVP/DD);
• Software Quality Assurance Plan for the DD phase (SQAP/DD);
• Integration Test Plan (SVVP/IT).

Progress reports, configuration status accounts and audit reports are also outputs of the phase. These should always be archived.


While the architectural design is a responsibility of the developer, participants in the AD phase should also include user representatives, systems engineers, hardware engineers and operations personnel. In reviewing the architectural design, project management should ensure that all parties are consulted, to minimise the risk of incompleteness and error.

AD phase activities must be carried out according to the plans defined in the SR phase (AD01). Progress against plans should be continuously monitored by project management and documented at regular intervals in progress reports.

Figure 2.1 summarises activities and document flow in the AD phase. The following subsections describe the activities of the AD phase in more detail.

Figure 2.1: AD phase activities
(The figure shows the flow from the approved SRD to the approved ADD: examine the SRD, construct the physical model (supported by methods, CASE tools and prototyping), specify the design in the draft ADD, write the integration test plan (draft SVVP/IT) and write the DD phase plans (SPMP/DD, SCMP/DD, SVVP/DD, SQAP/DD), with the AD/R disposing of accepted RIDs and approving the ADD. Inputs include the SPMP/AD, SCMP/AD, SVVP/AD and SQAP/AD.)

2.2 EXAMINATION OF THE SRD

Normally the developers will have taken part in the SR/R, but if not, they should examine the SRD and confirm that it is understandable. ESA PSS-05-03, ‘Guide to the Software Requirements Definition Phase', contains material that should assist this. In particular, if a specific method has been used for the specification of the requirements, the development organisation should ensure that the examination is carried out by staff familiar with that method.

The developers should also confirm that adequate technical skill is available to produce the outputs of the AD phase.

2.3 CONSTRUCTION OF THE PHYSICAL MODEL

A software model is:

• a simplified description of a system;

• hierarchical, with consistent decomposition criteria;

• composed of symbols organised according to some convention;

• built with the aid of recognised methods and tools;

• used for reasoning about the software.

A 'simplified description' is obtained by abstracting the essentials and ignoring non-essentials. A hierarchical presentation makes the model simpler to understand.

A recognised method for software design must be adopted and used consistently in the AD phase (AD02). The key criterion for the 'recognition' of a method is full documentation. The availability of tools and training courses are also important.

A method does not substitute for the experience and insight of the developer but helps to apply those abilities more effectively.

In the AD phase, the developer constructs a 'physical model' describing the design in implementation terminology (AD03). The physical model:

• defines the software components;

• is presented as a hierarchy of control;

• uses implementation terminology (e.g. computer, file, record);

• shows control and data flow.

The physical model provides a framework for software development that allows independent work on the low-level components in the DD phase. The framework defines interfaces that programmers need to know to build their part of the software. Programmers can work in parallel, knowing that their part of the system will fit in with the others.

The logical model is the starting point for the construction of the physical model. The design method may provide a technique for transforming the logical model into the physical model. Design constraints, such as the need to reuse components, may cause departure from the structure of the logical model. The effect on adaptability of each modification of the structure should be assessed. A minor new requirement should not cause a major design change. The designer's goal is to produce an efficient design that has the structure of the logical model and meets all the software requirements.

The method used to decompose the software into its component parts shall permit the top-down approach (AD04). Top-down design starts by defining the top-level software components and then proceeds to define the lower-level components. The top-down approach is vital for controlling complexity because it enforces 'information hiding' by demanding that lower-level components behave as 'black boxes'. Even if a design method allows other approaches (e.g. bottom-up), the presentation of the design in the ADD must always be top-down.

The construction of the physical model should be an iterative process where some or all of the tasks are repeated until a clear, consistent definition has been formulated. Walkthroughs, inspections and technical reviews should be used to agree each level of the physical model before descending to the next level of detail.

In all but the smallest projects, CASE tools should be used for building a physical model. They make consistent models that are easier to construct and modify.

The type of physical model that is built depends upon the method selected. Chapter 3 summarises the methods for constructing a physical model. In the following subsections, concepts and terminology of structured methods are used to show how to define the architecture. This does not imply that these methods shall be used, but only that they may be suitable.

2.3.1 Decomposition of the software into components

The design process starts by decomposing the software into components. The decomposition should be done top-down, based on the functional decomposition in the logical model.

The correspondence between functional requirements in the SRD and components in the ADD is not necessarily one-to-one and frequently isn't. This is because of the need to work within the constraints forced by the non-functional requirements (see Section 2.3.2) and the capabilities of the software and hardware technology. Designs that meet these constraints are said to be 'feasible'. When there are two or more feasible solutions to a design problem, the designer should choose the most cost-effective one.

Correctness at each level can only be confirmed after demonstrating feasibility of the next level down. Such demonstrations may require prototyping. Designers rely on their knowledge of the technology, and experience of similar systems, to achieve a good design in just a few iterations.

The decomposition of the software into components should proceed until the architectural design is detailed enough to allow:

• individual groups or team members to commence the DD phase;

• the schedule and cost of the DD and TR phases to be estimated.

In a multitasking real-time system, the appropriate level to stop architectural design is when the processing has become purely sequential. This is the lowest level of the task hierarchy, and is the stage at which the control flow has been fully defined.

It is usually unnecessary to describe the architecture down to the module level. However, some consideration of module-level processing is usually necessary if the functionality at higher levels is to be allocated correctly.

2.3.2 Implementation of non-functional requirements

The SRD contains several categories of requirements that are non-functional. Such requirements can be found under the headings of:

Performance Requirements
Interface Requirements
Operational Requirements
Resource Requirements
Verification Requirements
Acceptance-Testing Requirements
Documentation Requirements
Security Requirements
Portability Requirements
Quality Requirements
Reliability Requirements
Maintainability Requirements
Safety Requirements

The specification of each component should be compared with all the requirements. This task is made easier if the requirements are grouped with their related functions in the SRD. Some kinds of non-functional requirements do not influence the architecture and their implementation can be deferred to the DD phase. Nevertheless, each requirement must be accounted for, even if this only involves including the non-functional requirement in a component's specification.

2.3.2.1 Performance requirements

Performance requirements may be stated for the whole system, or be attached to high-level functions. Whatever the approach, the designer should identify the components that are used to provide the service implied by the performance requirements and specify the performance of each component.

Performance attributes may be incorporated in the functional requirements in the SRD. Where there is a one-to-one correspondence between a component and a function, definition of the performance of the component is straightforward.

The performance of a component that has not yet been built can only be estimated from knowledge of the underlying technology upon which it is based. The designer may have to confirm the feasibility of the performance specification by prototyping. As an example, the execution time of an i/o routine can be predicted from knowledge of the required data flow volume and i/o rate to a peripheral. This prediction should be confirmed by prototyping the routine.

2.3.2.2 Interface requirements

A component design should satisfy all the interface requirements associated with a functional requirement.

Interface requirements may define internal or external control and data flows. The developer has to design the physical mechanism for the control and data flows. Mechanisms may include subroutine calls, interrupts, file and record formats and so on.

The interfaces to a component should match its level of abstraction. A component that processes files should have files flowing in and out. Processing of records, fields, bits and bytes should be delegated to lower-level components.

Interfaces should be implemented in such a way as to minimise the coupling. This can be done in several ways:

• minimise the number of separate items flowing into and out of the component;

• minimise the number of separate routes into and out of the component.

Minimisation of the number of separate items flowing into the component should not reduce the understandability of the interface. Data items should be aggregated into structures that have the appropriate level of abstraction, but not arbitrarily bunched together to reduce the numbers (e.g. by using an array).

Ideally, a single entry point for both the control and data flows should be provided. For a subroutine, this means providing all the external data the routine needs in the call argument list. Separating the control and data flow routes (e.g. by routing the data into a component via a common area) greatly increases the coupling between components.
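This contrast can be sketched in C (a minimal sketch; the routine and variable names are hypothetical, not taken from the Standards):

#include <stdio.h>

/* Single entry point: all the external data the routine needs arrives
   through its call argument list, so the data flow is visible at the
   call site. */
int write_record(FILE *file, const char *record, size_t length)
{
    return fwrite(record, 1, length, file) == length ? 0 : -1;
}

/* Higher coupling, to be avoided: the same data is routed into the
   component via a shared common area, separating the data flow from
   the control flow and exposing the data to corruption by any
   component. */
FILE  *g_file;
char   g_record[256];
size_t g_length;

int write_record_via_common(void)
{
    return fwrite(g_record, 1, g_length, g_file) == g_length ? 0 : -1;
}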

External interface requirements may be referenced by the SRD in an Interface Control Document (ICD). The physical mechanism of the interface may be predefined, and may constrain the software design. Distorting effects should be minimised by providing dedicated components to handle external interfaces. Incidentally, this approach enhances portability and maintainability.

Where an interface standard has to be used, software components that provide the interface can be reused from existing libraries or systems. The repercussions of reusing a component should be considered before making the decision to do it.

2.3.2.3 Operational requirements

The operational requirements of the software should be considered in the AD phase. This should result in a definition of the:

• ergonomic aspects;

• screen layouts;

• user interface language;

• help system.

2.3.2.4 Resource requirements

Resource requirements should be an important consideration in the AD phase, as they may affect the choice of programming language and the high-level architecture. For example, the design of the software for a single-processor system can be quite different from that of a multiple-processor system.

2.3.2.5 Verification requirements

Verification requirements may call for test harnesses or even a 'simulator'. The developer must consider the interfaces to external software used for verification.

A simulator may be used to exercise the system under development, and is external to it. The architectural design of a simulator should be entirely separate from the main design.

2.3.2.6 Acceptance-testing requirements

Like verification requirements, acceptance-testing requirements may call for test harnesses or even a 'simulator' (see Section 2.3.2.5).

2.3.2.7 Documentation requirements

Documentation requirements seldom affect the architectural design.

2.3.2.8 Security requirements

Security requirements may affect the selection of hardware, the operating system, the design of the interfaces among components, and the design of database components. Decisions about access to capabilities of the system may have to be made in the AD phase.

2.3.2.9 Portability requirements

Portability requirements can severely constrain the architectural design by forcing the isolation of machine-dependent features. Portability requirements can restrict the options for meeting performance requirements and make them more difficult to meet.

The designer must study the portability requirements carefully and isolate the features common to all the intended environments, and then design the software to use only these features. This may not always be possible, so the designer must explicitly define components to implement features not available in all the intended environments.

2.3.2.10 Quality requirements

Quality requirements can affect:

• the choice of design methods;

• the decisions about reuse, e.g. has the reused software been specified, designed and tested properly?

• the choice of tools;

• the type of design review, e.g. walkthroughs or inspections?

2.3.2.11 Reliability requirements

Reliability requirements can only be directly verified after the software has been written. The key to designing a system having a specified level of reliability is to:

• be aware of the design quality factors that are predictive of eventual software reliability (e.g. complexity);

• calibrate the relationship between predictive metrics and the indicative metrics of reliability (e.g. MTBF);

• ensure that the design scores well in terms of the predictive metrics.

For small projects, careful adherence to the design quality guidelines will be sufficient. This is the 'qualitative' approach to satisfying the reliability requirements. For larger projects, or projects with stringent reliability requirements, a 'quantitative' approach should measure adherence to design quality guidelines with reliability metrics. Metrics have been devised (e.g. 39 metrics are listed in Reference 17) to measure reliability at all phases in the life cycle.

An obvious way of meeting reliability requirements is to reuse software whose operational reliability has already been proven.

Engineering features specifically designed to enhance reliability include:

• fault tolerance;

• redundancy;

• voting.

Fault tolerance allows continued operation of a system after the occurrence of a fault, perhaps with a reduced level of capability or performance. Fault tolerance can be widely applied in software (e.g. software will frequently fail if it does not check its input).

Redundancy achieves fault tolerance with no reduction of capability because systems are replicated. There are two types of redundancy:

• passive;

• active.

Passive redundancy is characterised by one system being 'online' at a time. The remaining systems are on standby. If the online system fails, a standby is brought into operation. This is the 'backup' concept.

Active redundancy is characterised by multiple systems being 'online' simultaneously. Voting is required to control an active redundant system. An odd number of different implementations of the system, each meeting the same requirements, are developed and installed by different teams. Control flow is a majority decision between the different systems (a sketch of a simple voter follows this list). The systems may have:

• identical software and hardware;

• identical hardware and different software;

• different hardware and software.
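A 2-out-of-3 majority voter for an actively redundant system might look like the following C sketch (the function names and the agreement tolerance are assumptions, not requirements of these Standards):

#include <math.h>

#define AGREEMENT_TOLERANCE 1e-6   /* assumed tolerance for agreement */

static int agree(double a, double b)
{
    return fabs(a - b) <= AGREEMENT_TOLERANCE;
}

/* Returns 0 and stores the majority value, or -1 if no two of the
   three independently computed results agree. */
int vote(double r1, double r2, double r3, double *result)
{
    if (agree(r1, r2) || agree(r1, r3)) { *result = r1; return 0; }
    if (agree(r2, r3))                  { *result = r2; return 0; }
    return -1;   /* no majority: report a fault */
}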

2.3.2.12 Maintainability requirements

Maintainability requirements should be considered throughout the architectural design phase. However, it is only possible to verify the Mean Time To Repair (MTTR) directly in the operations phase. MTTR can be verified indirectly in the architectural design phase by relating it to design metrics, such as complexity.

The techniques used for increasing software reliability also tend to increase its maintainability. Smaller projects should follow the design quality guidelines to assure maintainability. Larger projects should adopt a quantitative approach based on metrics. Maintainability is also enhanced by:

• selecting a high-level programming language;

• minimising the volume of code that has to be maintained;

• providing software engineering tools that allow modifications to be made easily;

• assigning values to parameters at runtime;

• building in features that allow faults to be located quickly;

• provision of remote maintenance facilities.

Some software engineering tools may be for configuration management. The Software Configuration Management Plan should take account of the maintainability requirements, since the efficiency of these procedures can have a marked effect on maintainability.

Binding is 'the assignment of a value or referent to an identifier' [Ref. 2]. An example is the assignment of a value to a parameter. Static binding is 'binding performed prior to the execution of a program and not subject to change during program execution' [Ref. 2]. Contrast with dynamic binding, which is 'binding performed during the execution of a computer program' [Ref. 2]. Increasing dynamic binding and reducing static binding makes the software more adaptable and maintainable, usually at the expense of complexity and performance. The designer should examine these trade-offs in deciding when to bind values to parameters.
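The trade-off can be illustrated with a C sketch (the parameter, file name and function are hypothetical): the statically bound value is fixed when the program is built, while the dynamically bound value is assigned at run-time from a configuration file, making the software adaptable without recompilation at the cost of extra complexity and a small performance overhead.

#include <stdio.h>

/* Static binding: the value is fixed before execution and cannot
   change during it. */
#define POLL_INTERVAL_MS_DEFAULT 500

/* Dynamic binding: the value is assigned during execution by reading
   it from a configuration file when the program starts. */
int read_poll_interval(const char *path)
{
    int value = POLL_INTERVAL_MS_DEFAULT;
    FILE *cfg = fopen(path, "r");
    if (cfg != NULL) {
        if (fscanf(cfg, "%d", &value) != 1) {
            value = POLL_INTERVAL_MS_DEFAULT;   /* fall back on the default */
        }
        fclose(cfg);
    }
    return value;
}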

Features that allow errors to be located quickly and easily should be built into the software. Examples are tracebacks, error reporting systems and routines that check their input and report anomalies. The policy and procedures for error reporting should be formulated in the AD phase and applied consistently by the whole project from the start of the DD phase.
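A minimal C sketch of such a routine follows (the component name, error code and report_error function are hypothetical; a real project would route the report through its agreed error reporting system):

#include <stdio.h>

#define ERR_RANGE 1

static void report_error(const char *component, int code, const char *msg)
{
    /* Placeholder for the project-wide error reporting mechanism. */
    fprintf(stderr, "[%s] error %d: %s\n", component, code, msg);
}

int set_heater_power(double watts)
{
    if (watts < 0.0 || watts > 100.0) {
        report_error("HEATER_CONTROL", ERR_RANGE,
                     "requested power outside 0-100 W");
        return -1;   /* reject the anomaly rather than fail later */
    }
    /* ... apply the commanded power ... */
    return 0;
}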

Remote maintenance facilities can enhance maintainability in two ways. Firstly, operations can be monitored and the occurrence of faults prevented. Secondly, faults can be cured more quickly because it is not always necessary to travel to the site.

2.3.2.13 Safety requirements

Designing for safety ensures that people and property are not endangered:

• during normal operations;

• following a failure.

It would be unsafe, for example, to design a control system that opens and closes a lift door and omit a mechanism to detect and respond to people or property blocking the door.

A system may be very reliable and maintainable yet may still be unsafe when a fault occurs. The designer should include features to ensure safe behaviour after a fault, for example:

• transfer of control to an identical redundant system;

• transfer of control to a reduced capability backup system;

• graceful degradation, either to a useful, but reduced, level of capability, or even to complete shutdown.

Systems which behave safely when faults occur are said to be 'fail-safe'.

Prevention of accidents is achieved by adding 'active safety' features, such as the capability to monitor obstructions in the lift door control system example. Prevention of damage following an accident is achieved by adding 'passive safety' features, such as an error handling mechanism.

If safety requirements have been omitted from the SRD, the designer should alert management. Analysts and designers share the responsibility for the development of safe systems.

2.3.3 Design quality

Designs should be adaptable, efficient and understandable. Adaptable designs are easy to modify and maintain. Efficient designs make minimal use of available resources. Designs must be understandable if they are to be built, operated and maintained effectively.

Attaining these goals is assisted by aiming for simplicity. Metrics should be used for measuring simplicity/complexity (e.g. number of interfaces per component and cyclomatic complexity).

Simple components are made by maximising the degree to which the activities internal to the component are related to one another (i.e. 'cohesion'). Simple structures can be made by:

• minimising the number of distinct items that are passed between components (i.e. 'coupling');

• ensuring that the function a component performs is appropriate to its place in the hierarchy (i.e. 'system shape');

• ensuring the structure of the software follows the structure of the data it deals with (i.e. 'data structure match');

• designing components so that their use by other components can be maximised (i.e. 'fan-in');

• restricting the number of child components to seven or less (i.e. 'fan-out');

• removing duplication between components by making new components (i.e. 'factoring').

Designs should be 'modular', with minimal coupling between components and maximum cohesion within each component. Modular designs are more understandable, reusable and maintainable than designs consisting of components with complicated interfaces and poorly matched groupings of functions.

Each level of a design should include only the essential aspects and omit inessential detail. This means they should have the appropriate degree of 'abstraction'. This enhances understandability, adaptability and maintainability.

Understandable designs employ terminology in a consistent way and always use the same solution to the same problem. Where teams of designers collaborate to produce a design, understandability can be considerably impaired by permitting unnecessary variety. CASE tools, design standards and design reviews all help to enforce consistency and uniformity.

As each layer is defined, the design should be refined to increase the quality, reliability, maintainability and safety of the software. When the next lower layer is designed, unforeseen consequences of earlier design decisions are often spotted. The developer must be prepared to revise all levels of a design during its development. However, if the guidelines described in this section are applied throughout the design process, the need for major revisions in the design will be minimised.

2.3.3.1 Design metrics

2.3.3.1.1 Cohesion

The parts of a component should all relate to a single purpose. Cohesion measures the degree to which activities within a component are related to one another. The cohesion of each component should be maximised. High cohesion leads to robust, adaptable, maintainable software.

Cohesion should be evaluated externally (does the name signify a specific purpose?) and internally (does it contain unrelated pieces of logic?). Reference 15 identifies seven types of cohesion:

• functional cohesion;

• sequential cohesion;

• communicational cohesion;

• procedural cohesion;

• temporal cohesion;

• logical cohesion;

• coincidental cohesion.

Functional cohesion is most desirable and coincidental cohesion should definitely be avoided. The other types of cohesion are acceptable under certain conditions. The seven types are discussed below with reference to typical components that might be found in a statistics package.

Functionally cohesive components contain elements that all contribute to the execution of one and only one problem-related task. Names carry a clear idea of purpose (e.g. CALCULATE_MEAN). Functionally cohesive components should be easy to reuse. All the components at the bottom of a structure should be functionally cohesive.

Sequentially cohesive components contain chains of activities, with the output of an activity being the input to the next. Names consist of specific verbs and composite objects (e.g. CALCULATE_TOTAL_AND_MEAN). Sequentially cohesive components should be split into functionally cohesive components.

Communicationally cohesive components contain activities that share the same input data. Names have over-general verbs but specific objects (e.g. ANALYSE_CENSUS_DATA), showing that the object is used for a range of activities. Communicationally cohesive components should be split into functionally cohesive components.

Procedurally cohesive components contain chains of activities, with control passing from one activity to the next. Individual activities are only related to the overall goal rather than to one another. Names of procedurally cohesive components are often 'high level', because they convey an idea of their possible activities, rather than their actual activities (e.g. PROCESS_CENSUS_DATA). Procedurally cohesive components are acceptable at the highest levels of the structure.

Temporally cohesive components contain activities that all take place simultaneously. The names define a stage in the processing (e.g. INITIALISE, TERMINATE). Bundling unrelated activities together on a temporal basis is bad practice because it reduces modularity. An initialisation component may have to be related to other components by a global data area, for example, and this increases coupling and reduces the possibilities for reuse. Temporal cohesion should be avoided.

Logically cohesive components perform a range of similar activities. Much of the processing is shared, but special effects are controlled by means of flags or 'control couples'. The names of logically cohesive components can be quite general, but the results can be quite specialised (e.g. PROCESS_STATISTICS). Logical cohesion causes general sounding names to appear at the bottom levels of the structure, and this can make the design less understandable. Logical cohesion should be avoided, but this may require significant restructuring.

Coincidentally cohesive components contain wholly unrelated activities. Names give no idea of the purpose of the component (e.g. MISCELLANEOUS_ROUTINE). Coincidentally cohesive components are often invented at the end of the design process to contain miscellaneous activities that do not neatly fit with any others. Coincidental cohesion severely reduces understandability and maintainability, and should be avoided, even if the penalty is substantial redesign.

Practical methods for increasing cohesion (a sketch follows this list) are to:

• make sure that each component has a single clearly defined function;

• make sure that every part of the component contributes to that function;

• keep the internal processing short and simple.
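As a sketch (in C, with assumed signatures), the sequentially cohesive CALCULATE_TOTAL_AND_MEAN of the statistics package example can be split into two functionally cohesive components, each with a single, clearly named purpose:

/* Functionally cohesive: computes the total, and nothing else. */
double calculate_total(const double values[], int count)
{
    double total = 0.0;
    for (int i = 0; i < count; i++) {
        total += values[i];
    }
    return total;
}

/* Functionally cohesive: computes the mean, reusing calculate_total
   (an example of factoring and fan-in). */
double calculate_mean(const double values[], int count)
{
    return (count > 0) ? calculate_total(values, count) / count : 0.0;
}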

2.3.3.1.2 Coupling

A 'couple' is an item of information passed between two components. The 'coupling' between two components can be measured by the number of couples passed. The number of couples flowing into and out of a component should be minimised.

The term 'coupling' is also frequently used to describe the relative independence of a component. 'Tightly' coupled components have a high level of interdependence and 'loosely' coupled components have a low level of interdependence. Dependencies between components should be minimised to maximise reusability and maintainability.

Reducing coupling also simplifies interfaces. The information passed between components should be the minimum necessary to do the job.

High-level components can have high coupling when low-level information items are passed in and out. Low-level items should be grouped into data structures to reduce the coupling. The component can then refer to the data structure.

Reference 15 describes five types of coupling:

• data coupling;

• stamp coupling;

• control coupling;

• common coupling;

• content coupling.

Data coupling is most desirable and content coupling should beavoided. The other types of coupling are all acceptable under certainconditions, which are discussed below.

Data coupling is passing only the data needed. The couple name is a noun or noun phrase. The couple name can be the object in the component name. For example a component called 'CALCULATE_MEAN' might have an output data couple called 'MEAN'.

Stamp coupling is passing more data than is needed. The couple name is a noun or noun phrase. Stamp coupling should be avoided. Components should only be given the data that is relevant to their task. An example of stamp coupling is to pass a data structure called 'CENSUS_DATA' to a component that only requires the ages of the people in the census to calculate the average age.
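The difference can be sketched in C (the census structure and field names are hypothetical):

/* Data coupling: the component receives only the ages it needs. */
double calculate_mean_age(const int ages[], int count)
{
    double total = 0.0;
    for (int i = 0; i < count; i++) {
        total += ages[i];
    }
    return (count > 0) ? total / count : 0.0;
}

/* Stamp coupling, to be avoided: the whole CENSUS_DATA structure is
   passed although only the ages are used. */
struct census_data {
    int    count;
    int   *ages;
    char **names;       /* not needed by the calculation */
    char **addresses;   /* not needed by the calculation */
};

double calculate_mean_age_stamp(const struct census_data *census)
{
    return calculate_mean_age(census->ages, census->count);
}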

Control coupling is communication by flags. The couple name may contain a verb. It is bad practice to use control coupling for the primary control flow; this should be implied by the component hierarchy. It is acceptable to use control coupling for secondary control flows such as in error handling.

Common coupling is communication by means of global data, and causes data to be less secure because faults in components that have access to the data area may corrupt it. Common coupling should normally be avoided, although it is sometimes necessary to provide access to data by components that do not have a direct call connection (e.g. if A calls B calls C, and A has to pass data to C).

Content coupling is communication by changing the instructions and internal data in another component. Content coupling is bad because it prevents information hiding. Only older languages and assembler allow this type of coupling.

Practical methods of reducing coupling are to:

• structure the data;

• avoid using flags or semaphores for primary control flow and use messages or component calls;

• avoid passing data through components that do not use it;

• provide access to global or shared data by dedicated components;

• make sure that each component has a single entry point and exit point.

2.3.3.1.3 System shape

System shape describes the degree to which:

• the top components in a system are divorced from the physical aspects of the data they deal with;

• physical characteristics of the input and output data are independent.

If the system achieves these criteria to a high degree then the shape is said to be 'balanced'.

The system shape rule is: 'separate logical and physical functionality'.

2.3.3.1.4 Data structure match

In administrative data processing systems, the structure of the system should follow the structure of the data it deals with [Ref. 21]. A payroll package should deal with files at the highest levels, records at the middle levels and fields at the lowest levels.

The data structure match rule is: 'when appropriate, match the system structure to the data structure'.

2.3.3.1.5 Fan-in

The number of components that call a component measures its 'fan-in'. Reusability increases as fan-in increases.

Simple fan-in rules are: 'maximise fan-in' and 'reuse components as often as possible'.

2.3.3.1.6 Fan-out

The number of components that a component calls measures its 'fan-out'. Fan-out should usually be small, not more than seven. However, some kinds of constructs force much higher fan-out values (e.g. case statements) and this is often acceptable.

Simple fan-out rules are: 'make the average fan-out seven or less' or 'make components depend on as few others as possible'.

2.3.3.1.7 Factoring

Factoring measures lack of duplication in software. A component may be factorised to increase cohesiveness and reduce size. Factoring creates reusable components with high fan-in. Factoring should be performed in the AD phase to identify utility software common to all components.

2.3.3.2 Complexity metrics

Suitable metrics for the design phases are listed in Table 2.3.3.2. This list has been compiled from ANSI/IEEE Std 982.1-1988 and 982.2-1988 [Ref. 17 and 18]. For a detailed discussion, the reader should consult the references.

2.3.3.3 Abstraction

An abstraction is 'a view of an object that focuses on the information relevant to a particular purpose and ignores the remainder of the information' [Ref. 2]. Design methods and tools should support abstraction by providing techniques for encapsulating functions and data in ways best suited to the view required. It should be possible to view a hierarchical design at various levels of detail.

The level of abstraction of a component should be related to its place within the structure of a system. For example, a component that deals with files should delegate record handling operations to lower-level modules, as shown in Figures 2.3.3.3A and B.

1. Number of Entries and Exits: measures the complexity by counting the entry and exit points in each component [Ref. 17, Section 4.13].

2. Software Science Measures: measures the complexity using the counts of operators and operands [Ref. 17, Section 4.14].

3. Graph Theoretic Complexity: measures the design complexity in terms of numbers of components, interfaces and resources [Ref. 17, Section 4.15, and Ref. 23]. Similar to Cyclomatic Complexity.

4. Cyclomatic Complexity: measures the component complexity as the number of program flows between groups of program statements, minus the number of sequential groups of program statements, plus 2. A component with sequential flow from entry to exit has a cyclomatic complexity of 1 - 2 + 2 = 1 [Ref. 17, Section 4.16, and Ref. 23].

5. Minimal Unit Test Case Determination: measures the complexity of a component from the number of independent paths through it, so that a minimal number of covering test cases can be generated for unit test [Ref. 17, Section 4.17].

6. Design Structure: measures the design complexity in terms of its top-down profile, component dependence, component requirements for prior processing, database size and modularity, and number of entry and exit points per component [Ref. 17, Section 4.19].

7. Data Flow Complexity: measures the structural and procedural complexity in terms of component length, fan-in and fan-out [Ref. 17, Section 4.25].

Table 2.3.3.2: Complexity metrics
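As a small illustration of metric 4, the hypothetical C component below contains two decisions (the loop condition and the sign test), so it has three independent paths and a cyclomatic complexity of 1 + 2 = 3; a purely sequential component would score 1:

/* For structured code, cyclomatic complexity = number of decisions + 1. */
int count_negatives(const int values[], int count)
{
    int negatives = 0;
    for (int i = 0; i < count; i++) {   /* decision 1: loop condition */
        if (values[i] < 0) {            /* decision 2: sign test */
            negatives++;
        }
    }
    return negatives;
}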

[Figure: a single SEARCH component calls GET CRITERIA, OPEN FILE, READ RECORD, CHECK FIELDS, CLOSE FILE and DATABASE directly, mixing levels of abstraction.]

Figure 2.3.3.3A: Incorrectly layered abstractions

[Figure: a SEARCH DATABASE component delegates file handling to a SEARCH FILE component, which calls GET CRITERIA, OPEN FILE, READ RECORD, CHECK FIELDS and CLOSE FILE.]

Figure 2.3.3.3B: Correctly layered abstractions

2.3.3.4 Information hiding

Information hiding is 'the technique of encapsulating software design decisions in components so that the interfaces to the component reveal as little as possible about its inner working' (adapted from Reference 2). Components that hide information about their inner workings are 'black boxes'.

Information hiding is needed to enable developers to view the system at different layers of abstraction. Therefore it should not be necessary to understand any low-level processing to understand the high-level processing.

Sometimes information needs to be shared among components, and this goes against the 'information hiding' principle. The 'information clustering' technique restricts access to information to as few components as possible. A library of components is often built to provide the sole means of access to information clusters such as common blocks.
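A C sketch of an information cluster follows (the calibration table and routine names are hypothetical): the data area is private to one file, and a small set of access routines provides the sole means of reading and updating it.

#define N_CHANNELS 64

/* The data area is hidden: 'static' restricts it to this file. */
static double calibration_table[N_CHANNELS];

/* Access routines: the only way other components can reach the data. */
void set_calibration(int channel, double value)
{
    if (channel >= 0 && channel < N_CHANNELS) {
        calibration_table[channel] = value;
    }
}

double get_calibration(int channel)
{
    return (channel >= 0 && channel < N_CHANNELS)
               ? calibration_table[channel] : 0.0;
}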

2.3.3.5 Modularity

Modularity is 'the degree to which a system or computer program is composed of discrete components such that a change in one component has minimal impact on other components' [Ref. 2]. Component coupling is low in a modular design.

2.3.4 Prototyping

'Experimental prototypes' may be built to verify the correctness of a technical solution. If different designs meet the same requirements, prototypes can help identify the best choice. High-risk, critical technologies should be prototyped in the AD phase. This is essential if a workable design is to be selected. Just as in the requirements phases, prototyping in the AD phase can avoid expensive redesign and recoding in later phases.

Two types of experimental prototypes can be distinguished: 'partial-system' prototypes and 'whole-system' prototypes. Partial-system prototypes are built to demonstrate the feasibility of a particular feature of the design. They are usually programmed in the language to be used in the DD phase. Whole-system prototypes are built to test the complete design. They may be built with CASE tools that permit execution of the design, or off-the-shelf tools that can be adapted for the application. Whole-system prototypes can help identify behavioural problems, such as resource contention.

2.3.5 Choosing a design approach

There is no unique design for any software system. Studies of the different options may be necessary. Alternative designs should be outlined and the trade-off between promising candidates discussed. The choice depends upon the type of system. In a real-time situation, performance and response time could be important, whereas in an administrative system stability of the database might be more important.

The designer should consider the two major kinds of implementation for each software component:

• generating new components from scratch;

• reusing a component from another system or library.

New components are likely to need most maintenance. The expertise needed to maintain old components might not be available at all, and this can be an equally difficult problem.

Reuse generally leads to increased reliability because software failures peak at time of first release and decline thereafter. Unfortunately the logic of a system often has to be distorted to accommodate components not designed specifically for it, and this reduces adaptability. The structure of the system becomes over-strained and cannot stand the stresses imposed by additional features.

Developers should consider the requirements for maintainability, reliability, portability and adaptability in deciding whether to create new components or reuse old ones.

Only the selected design approach should be reflected in the ADD (and DDD). However, the reasons for each major design decision should be recorded in the Project History Document. Modification of the software in later phases may require design decisions to be re-examined.

2.4 SPECIFICATION OF THE ARCHITECTURAL DESIGN

The architectural design is specified by identifying the components, defining the control and data flow between them, and stating for each of them the:

• functions to be performed;

• data input;

• data output;

• resource utilisation.

2.4.1 Assigning functions to components

The function of a component is defined by stating its objective or characteristic actions [Ref. 2].

The functional definition of a component should be derived from the functional requirements in the SRD, and references should be added to the component definition to record this.

When the architectural design is terminated at the task level, each task may be associated with many functional requirements. Nevertheless, there should be a strong association between the functions carried out by a component (i.e. each component should have high cohesion).

Internal processing should be described in terms of the control and data flow to lower-level components. The control flow may be described diagrammatically, using structure charts, transformation schemas or object diagrams. It should describe both nominal and non-nominal behaviour.

At the lowest design level, the processing done by each component should be defined. This may be done in natural language, Structured English or pseudo-code. Parts of the functional specification may be reproduced or referenced to define what processing is required (e.g. state-transition diagrams, decision trees etc).

The data processing of each component can be defined by listing the data that the component deals with, and the operations that the component performs (e.g. create, read, update, delete). This information can be gathered from entity life histories [Ref. 4, 9]. The job of functional definition is then to describe how data processing operations are to be done. This approach ensures that no data processing functions are accidentally omitted.

2.4.2 Definition of data structures

Data structure definitions must include the (a sketch follows this list):

• characteristics of each part (e.g. name, type, dimension) (AD10);

• relationships between parts (i.e. structure) (AD11);

• range of possible values of each part (AD12);

• initial values of each part (AD13).
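A C sketch of a definition covering these four practices follows (the 'spectrum' structure, its fields, units and ranges are hypothetical):

#define MAX_SAMPLES 1024

struct spectrum {
    double min_frequency;           /* Hz; range 0.0 to 1.0e12 (AD10, AD12) */
    double max_frequency;           /* Hz; must exceed min_frequency (AD11) */
    int    sample_count;            /* range 0 to MAX_SAMPLES (AD12) */
    double samples[MAX_SAMPLES];    /* homogeneous one-dimensional array (AD10) */
};

/* Initial values (AD13), defined once so that every component uses
   the same ones. */
const struct spectrum SPECTRUM_INITIAL = { 0.0, 0.0, 0, { 0.0 } };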

The names of the parts of a data structure should be meaningful and unambiguous. They should leave no doubt about what the part is. Acronyms and obscure coding systems should be avoided in the architectural design.

The data type of each part of the data structure must be stated. Similarly, the dimensionality of each item must be defined. Effort should be made to keep the dimensionality of each part of the data structure as low as possible. Arrays should be homogeneous, with all the cells storing similar information.

The range of possible values of each part of the data structure should be stated. Knowledge about the possible values helps define the storage requirements. In the DD phase, defensive programming techniques use knowledge about the possible values to check the input to components.

The designer must define how data structures are initialised. Inputs that may change should be accessible to the user or operator (e.g. as in 'table-driven' software). Parameters that will never change (e.g. π) need to be stored so that all components use the same value (e.g. FORTRAN include files of PARAMETER statements).
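A C analogue of the FORTRAN include file of PARAMETER statements is a project-wide constants header (a sketch; the header and constant names are hypothetical):

/* constants.h: every component includes this file, so all components
   use the same values for parameters that never change. */
#ifndef CONSTANTS_H
#define CONSTANTS_H

#define PI             3.14159265358979323846
#define SPEED_OF_LIGHT 299792458.0    /* m/s */

#endif /* CONSTANTS_H */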

The accessibility of the data structures to components is an important consideration in architectural design, and has a crucial effect on the component coupling. Access to a data structure should be granted only on a 'need-to-know' basis and the temptation to define global data should be resisted.

Global data implies some form of data sharing, and even if access to the global data is controlled (e.g. by semaphores), a single component may corrupt the data area. Such failures are usually irrecoverable. It is, however, acceptable to make data global if it is the only way of achieving the necessary performance.

Data structures shared between a component and its immediate subordinates should be passed. When components are separated from one another in the hierarchy this technique becomes clumsy, and leads to the occurrence of 'tramp data', i.e. data that passes through components without being used. The best solution to the 'tramp data' problem is to make an 'information cluster' (see Section 2.3.3.4).

2.4.3 Definition of control flow

Control flow should always be described from initialisation to termination. It should be related to the system state, perhaps by the use of state-transition diagrams [Ref. 11].

The precise control flow mechanism should be described, for example:

• calls;

• interrupts;

• messages.

Control flow defines the component hierarchy. When control and data couples are removed from a structure chart, for example, the tree diagram that is left defines the component hierarchy.

2.4.3.1 Sequential and parallel control flow

Control flow may be sequential, parallel or a combination of the two. Decisions about sequential and parallel control flow are a crucial activity in architectural design. They should be based on the logical model. Functions that exchange data, even indirectly, must be arranged in sequence, whereas functions that have no such interface can execute in parallel.

Observations about sequential and parallel activities can affect how functions are allocated to components. For example, all the functions that execute in sequence may be allocated to a single task (or program), whereas functions that can execute in parallel may be allocated to different tasks.

2.4.3.2 Synchronous and asynchronous control flow

Control flow may be synchronous or asynchronous. The execution of two components is synchronous if, at any stage in the operation of one component, the state of the other is precisely determined. Sequential processing is always synchronous (when one component is doing something, the other must be doing nothing). Parallel control flow may also be synchronised, as in an array processor where multiple processors, each handling a different array element, are all synchronised by the same clock.

'Polling' is a synchronous activity. A component interrogates other lower-level components (which may be external) for information and takes appropriate action. Polling is a robust, easy-to-program mechanism but it can cause performance problems: to get an answer one has to ask the same question of everyone in turn, and this takes time.
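A C sketch of a polling cycle follows (poll_device and handle_status stand for hypothetical lower-level components, declared here and implemented elsewhere):

#define N_DEVICES 4

extern int  poll_device(int device);                /* returns a status code */
extern void handle_status(int device, int status);  /* acts on the status */

/* The component asks the same question of every device in turn, so
   the cycle time grows with the number of devices polled. */
void polling_cycle(void)
{
    for (int device = 0; device < N_DEVICES; device++) {
        int status = poll_device(device);
        handle_status(device, status);
    }
}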

Tasks running on separate computers run asynchronously, as do tasks running on a single processor under a multi-tasking operating system. Operations must be controlled by some type of handshake, such as a rendezvous. A 'scheduler' type task is usually necessary to coordinate all the other tasks.

An interrupt is an example of an asynchronous control flow. Interrupts force components to suspend their operation and pass control to other components. The exact component state when the interruption occurs cannot be predicted. Such non-deterministic behaviour makes verification and maintenance more difficult.

Another example of an asynchronous control flow is a 'one-way' message. A component sends a message and proceeds with its own processing without waiting for confirmation of reception. There is no handshake. Designers should only introduce this type of control flow if the rest of the processing of the sender makes no assumptions about the receiver state.

Asynchronous control flows can improve software performance; this advantage should be carefully traded off against the extra complexity they may add.

2.4.4 Definition of the computer resource utilisation

The computer resources (e.g. CPU speed, memory, storage, system software) needed for development and operations must be estimated in the AD phase and defined in the ADD (AD15). For many software projects, the development and operational environments will be the same. Any resource requirements in the SRD will constrain the design.

Virtual memory systems have reduced the problem of main memory utilisation. Even so, the packaging of the components into units that execute together may still be necessary in some systems. These issues may have to be addressed in the AD phase, as they influence the division of work and how the software is integrated.

2.4.5 Selection of the programming language

The chosen programming languages should support top-down decomposition, structured programming, and concurrent production and documentation. The programming language and the AD method should be compatible.

The relation between the AD method and programming language means that the choice should be made when the AD method is selected, at the beginning of the AD phase. However, the process of design may lead to this initial choice of programming language being revised.

The selection of the programming languages to be used should be reviewed when the architectural design is complete.

Other considerations will affect the choice of programming language. Portability and maintenance considerations imply that assembler should be avoided. Portability considerations argue against using FORTRAN for real-time software because operating system services have to be called; the system services are not standard language features. Programming languages that allow meaningful names rather than terse, abbreviated names should be favoured since readable software is easier to maintain. Standardised languages and high-quality compilers, debuggers, and static and dynamic analysis tools are important considerations.

While it may be necessary to use more than one programming language (e.g. in a mixed microprocessor and minicomputer configuration), maintenance is easier if a single programming language is used.

It is not sufficient just to define the language to be used. The particular language standard and compiler should also be defined during the AD phase. The rules for using non-standard features allowed by a compiler should be formulated. Language extensions should be avoided as this makes the software less portable. However, performance and maintenance requirements may sometimes outweigh those of portability and make some non-standard features acceptable.

2.5 INTEGRATION TEST PLANNING

Integration Test Plans must be generated in the AD phase and documented in the Integration Test section of the Software Verification and Validation Plan (SVVP/IT). The Integration Test Plan should describe the scope, approach and resources required for the integration tests, and take account of the verification requirements in the SRD. See Chapter 6.

2.6 PLANNING THE DETAILED DESIGN PHASE

Plans of DD phase activities must be drawn up in the AD phase. Generation of DD phase plans is discussed in Chapter 6. These plans cover project management, configuration management, verification and validation, and quality assurance. Outputs are the:

• Software Project Management Plan for the DD phase (SPMP/DD);

• Software Configuration Management Plan for the DD phase (SCMP/DD);

• Software Verification and Validation Plan for the DD phase (SVVP/DD);

• Software Quality Assurance Plan for the DD phase (SQAP/DD).

2.7 THE ARCHITECTURAL DESIGN PHASE REVIEW

Production of the ADD and SVVP/IT is an iterative process. The development team should hold walkthroughs and internal reviews of a document before its formal review.

The outputs of the AD phase shall be formally reviewed during the Architectural Design Review (AD16). This should be a technical review. The recommended procedure is described in ESA PSS-05-10, and is based closely on the IEEE standard for Technical Reviews [Ref. 8].

Normally, only the ADD and Integration Test Plans undergo the full technical review procedure involving users, developers, management and quality assurance staff. The Software Project Management Plan (SPMP/DD), Software Configuration Management Plan (SCMP/DD), Software Verification and Validation Plan (SVVP/DD), and Software Quality Assurance Plan (SQAP/DD) are usually reviewed by management and quality assurance staff only.

In summary, the objective of the AD/R is to verify that the:

• ADD describes the optimal solution to the problem stated in the SRD;

• ADD describes the architectural design clearly, completely and in sufficient detail to enable the detailed design process to be started;

• SVVP/IT is an adequate plan for integration testing of the software in the DD phase.

An AD/R begins when the documents are distributed to participants for review. A problem with a document is described in a 'Review Item Discrepancy' (RID) form that may be prepared by any participant. Review meetings are then held that have the documents and RIDs as input. A review meeting should discuss all the RIDs and either accept or reject them. The review meeting may also discuss possible solutions to the problems raised by them. The output of the meeting includes the processed RIDs.

An AD/R terminates when a disposition has been agreed for all the RIDs. Each AD/R must decide whether another review cycle is necessary, or whether the DD phase can begin.

CHAPTER 3 METHODS FOR ARCHITECTURAL DESIGN

3.1 INTRODUCTION

An architectural design method should provide a systematic way of defining the software components. Any method should facilitate the production of a high-quality, efficient design that meets all the requirements.

This guide summarises a range of design methods to enable the designer to make a choice. The details about each method should be obtained from the references. This guide does not seek either to make any particular method a standard or to define a set of acceptable methods. Each project should examine its needs, choose a method (or combination of methods) and justify the selection in the ADD. To help make this choice, this guide summarises some well-known methods and discusses how they can be applied in the AD phase.

Methods that may prove useful for architectural design are:

• Structured Design;

• Object-Oriented Design;

• Jackson System Development;

• Formal Methods.

The AD method can affect the choice of programming language. Early versions of Structured Design, for example, decompose the software to fit a control flow concept based on the call structure of a third generation programming language such as FORTRAN, C and COBOL. Similarly, the use of object-oriented design implies the use of an object-oriented programming language such as C++, Smalltalk and Ada.

3.2 STRUCTURED DESIGN

Structured Design is not a single method, but a name for a class of methods. Members of this class are:

• Yourdon methods (Yourdon/Constantine and Ward/Mellor);

• Structured Analysis and Design Technique (SADT™, a trademark of SoftTech Inc, Waltham, Mass, USA);

• Structured Systems Analysis and Design Methodology (SSADM).

Yourdon methods are widely used in the USA and Europe. SSADM is recommended by the UK government for 'data processing systems'. It is now under the control of the British Standards Institute (BSI). SADT has been successfully used within ESA for some time.

3.2.1 Yourdon methods

Structured Design is 'a disciplined approach to software design that adheres to a specified set of rules based on principles such as top-down design, stepwise refinement and data flow analysis' [Ref. 1]. In the Yourdon approach [Ref. 11, 16, 20], the technique can be characterised by the use of:

• Structure Charts to show hierarchy and control and data flow;

• Pseudo Code to describe the processing.

The well-defined path between the analysis and design phases is a major advantage of structured methods. DeMarco's version of structured analysis defines four kinds of model. Table 3.2.1 relates the models to the ESA life cycle phase in which they should be built.

Model No   Model Name                 ESA PSS-05-0 phase
1          current physical model     SR phase
2          current logical model      SR phase
3          required logical model     SR phase
4          required physical model    AD phase

Table 3.2.1: Structured Analysis Models

The required physical model should be defined at the beginning of the AD phase since it is input to the Structured Design process.

Structured Design defines concepts for assessing the quality of design produced by any method. It has also pioneered concepts, now found in more recent methods, such as:

• allowing the form of the problem to guide the solution;

• attempting to reduce complexity via partitioning and hierarchical organisation;

• making systems understandable through rigorous diagramming techniques.

Structured Design methods arose out of experience of building 'data processing' systems and so early methods could only describe sequential control flow. A more recent extension of structured methods developed by Ward and Mellor catered for real-time systems [Ref. 11] by adding extra features such as:

• control transformations and event flows in DFDs;

• time-continuous and time-discrete behaviour in DFDs;

• Petri Nets, to verify the executability of DFDs [Ref. 13];

• State-Transition Diagrams to describe control transformations.

Ward and Mellor define the purpose of the analysis phase as the creation of an 'essential model'. In the design phase the 'essential model' is transformed to a computer 'implementation model'. The implementation model is progressively designed in a top-down manner by allocating functions to processors, tasks and modules.

Transformation Schemas are used to describe the design at the processor level, where there may be parallelism. Structure Charts are used at the task level and below, where the control flow is sequential.

3.2.2 SADT

Structured Analysis and Design Technique (SADT) uses the same diagramming techniques for both analysis and design [Ref. 3]. Unlike the early versions of Yourdon, SADT has always catered for real-time systems modelling, because it permits the representation of control flows. However, SADT diagrams need to be supplemented with State-Transition Diagrams to define behaviour.

SADT diagram boxes should represent components. Each box should be labelled with the component identifier. Control flows should represent calls, interrupts and loading operations. Input and output data flows should represent the files, messages and calling arguments passed between components. Mechanism flows should carry special information about the implementation of the component.

3.2.3 SSADM

SSADM is a detailed methodology with six activities [Ref. 4, 9]:

1. analysis of the current situation;

2. specification of requirements;

3. selection of system option;

4. logical data design;

5. logical process design;

6. physical design.

In ESA projects, activities 1 and 2 should take place in the UR and SR phases respectively. The output of 2 is a required logical model of the software, which should be described in the SRD. Activities 3 to 6 should take place in the AD phase. The ADD should describe the physical design.

SSADM results in a process model and a data model. The models are built in parallel and each is used to check the consistency of the other. This built-in consistency checking approach is the major advantage that SSADM offers. Data modelling is treated more thoroughly by SSADM than by the other methods. SSADM does not support the design of real-time systems.

3.3 OBJECT-ORIENTED DESIGN

Object-Oriented Design (OOD) is an approach to software design based on objects and classes. An object is 'an abstraction of something in the domain of a problem or its implementation, reflecting the capabilities of a system to keep information about it, interact with it or both; an encapsulation of attribute values and their exclusive services' [Ref. 24]. A class describes a set of objects with common attributes and services. An object is an 'instance' of a class and the act of creating an object is called 'instantiation'.

Classes may be entities with abstract data types, such as 'Telemetry Packet' and 'Spectrum', as well as simpler entities with primitive data types such as real numbers, integers and character strings. A class is defined by its attributes and services. For example the 'Spectrum' class would have attributes such as 'minimum frequency', 'centre frequency', 'maximum frequency' and services such as 'calibrate'.
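
A minimal C++ sketch of the 'Spectrum' class described above (member names invented for the example; not part of the standard):

    // Illustrative only: a class encapsulates attribute values and the
    // services that operate on them.
    class Spectrum
    {
    public:
        void calibrate();          // service
    private:
        double minimum_frequency;  // attribute
        double centre_frequency;   // attribute
        double maximum_frequency;  // attribute
    };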

Classes can be decomposed into subclasses. There might be several types of Telemetry Packet for example, and subclasses of Telemetry Packet such as 'Photometer Packet' and 'Spectrometer Packet' may be created. Subclasses share family characteristics, and OOD provides for this by permitting subclasses to inherit operations and attributes from their parents. This leads to modular, structured systems that require less code to implement. Common code is automatically localised in a top-down way.

Object-oriented design methods offer better support for reuse than other methods. The traditional bottom-up reuse mechanism where an application module calls a library module is of course possible. In addition, the inheritance feature permits top-down reuse of superclass attributes and operations.

OOD combines information and services, leading to increased modularity. Control and data structures can be defined in an integrated way.

Other features of the object-oriented approach besides classes, objects and inheritance are message passing and polymorphism. Objects send messages to other objects to command their services. Messages are also used to pass information. Polymorphism is the capability, at run-time, to refer to instances of various classes. Polymorphism is often implemented by allowing 'dynamic binding'.
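
The sketch below (class and service names invented for illustration) shows both inheritance and polymorphism: 'PhotometerPacket' inherits from 'TelemetryPacket', and dynamic binding selects the service of the actual class at run-time.

    #include <iostream>

    class TelemetryPacket
    {
    public:
        virtual ~TelemetryPacket() {}
        virtual void decode() { std::cout << "generic packet\n"; }  // service
    };

    // Subclass: inherits the attributes and services of its parent,
    // and specialises the 'decode' service.
    class PhotometerPacket : public TelemetryPacket
    {
    public:
        void decode() { std::cout << "photometer packet\n"; }
    };

    // Polymorphism: this function can refer to an instance of any
    // subclass; the call is bound dynamically at run-time.
    void process(TelemetryPacket& packet)
    {
        packet.decode();
    }

Passing a PhotometerPacket to process() executes the photometer version of the service, even though process() is written purely in terms of the parent class.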

The object-oriented features such as polymorphism rely on dynamic memory allocation. This can make the performance of object-oriented software unpredictable. Care should be taken when programming real-time applications to ensure that functions that must complete by a defined deadline have their processing fully defined at compile-time, not run-time.

OOD is most effective when implemented through an object-oriented programming language that supports object definition, inheritance, message passing and polymorphism. Smalltalk, C++, Eiffel and Object Pascal support all these features.

Object-oriented techniques have been shown to be much more suitable for implementing event-driven software, such as Graphical User Interfaces (GUIs), than structured methods. Many CASE tools for structured methods are implemented by means of object-oriented techniques.


If object-oriented methods are to be used, they should be used throughout the life cycle. This means that OOD should only be selected for the AD phase if Object-Oriented Analysis (OOA) has been used in the SR phase. The seamless transition from analysis to design to programming is a major advantage of the object-oriented methods, as it facilitates iteration. An early paper by Booch [Ref. 13] describes a technique for transforming a logical model built using structured analysis to a physical model using object-oriented design. In practice the results have not been satisfactory. The structured analysis view is based on functions and data, and the object-oriented view is based on classes, objects, attributes and services. The views are quite different, and it is difficult to hold both in mind simultaneously.

Like Structured Design, Object-Oriented Design is not a single method, but a name for a class of methods. Members of this class include:

• Booch;

• Hierarchical Object-Oriented Design (HOOD);

• Coad-Yourdon;

• Object Modelling Technique (OMT) from Rumbaugh et al;

• Shlaer-Mellor.

These methods are discussed below.

3.3.1 Booch

Booch originated object-oriented design [Ref. 22], and continues to play a leading role in the development of the method [Ref. 26].

Booch models an object-oriented design in terms of a logical view, which defines the classes, objects, and their relationships, and a physical view, which defines the module and process architecture. The logical view corresponds to the logical model that ESA PSS-05-0 requires software engineers to construct in the SR phase (see ESA PSS-05-03, Guide to the Software Requirements Definition Phase). The physical view corresponds to the physical model that ESA PSS-05-0 requires software engineers to construct in the AD phase.


Booch provides two diagramming techniques for documenting the physical view:

• module diagrams, which are used to show the allocation of classes and objects to modules such as programs, packages and tasks in the physical design (the term 'module' in Booch's method is used to describe any design component);

• process diagrams, which show the allocation of modules to hardware processors.

Booch's books on Object-Oriented Design contain many insights into good design practice. However, Booch's notation is cumbersome and few tools are available.

3.3.2 HOOD

Hierarchical Object-Oriented Design (HOOD) [Ref. 14] is one of a family of object-oriented methods that try to marry object-orientation with structured design methods (see Reference 25 for another example). Hierarchy follows naturally from decomposition of the top-level 'root' object. As in structured design, data couples flow between software components. The main difference between HOOD and structured methods is that software components get their identity from their correspondence to things in the real world, rather than to functions that the system has to perform.

HOOD was originally designed to be used with Ada, although Ada does not support inheritance, and is not an object-oriented programming language. This is not a serious problem for the HOOD designer, because the method does not use classes to structure systems.

HOOD does not have a complementary analysis method. The logical model is normally built using structured analysis. Transformation of the logical model to the physical model is difficult, making it hard to construct a coherent design.

HOOD has been used for embedded Ada applications. It has a niche, but is unlikely to become as widespread and well supported by tools as, say, OMT.


3.3.3 Coad and Yourdon

Coad and Yourdon have published an integrated approach to object-oriented analysis and design [Ref. 24]. An object-oriented design is constructed from four components:

• problem domain component;

• human interaction component;

• task management component;

• data management component.

Each component is composed of classes and objects. The problem domain component is based on the (logical) model built with OOA in the analysis phase. It defines the subject matter of the system and its responsibilities. If the system is to be implemented in an object-oriented language, the correspondence between problem domain classes and objects will be one to one, and the problem domain component can be directly programmed. Reuse considerations, and the non-availability of a fully object-oriented programming language, may make the design of the problem domain component depart from the ideal represented by the OOA model.

The human interaction component handles sending and receiving messages to and from the user. The classes and objects in the human interaction component have names taken from the user interface language, e.g. window and menu.

Many systems will have multiple threads of execution, and the designer must construct a task management component to organise the processing. The designer needs to define tasks as event-driven or clock-driven, as well as their priority and criticality.

The data management component provides the infrastructure to store and retrieve objects. It may be a simple file system, a relational database management system, or even an object-oriented database management system.

The four components together make the physical model of the system. At the top level, all Coad and Yourdon Object-Oriented Designs have the same structure.


Classes and objects are organised into 'generalisation-specialisation' and 'whole-part' structures. Generalisation-specialisation structures are 'family trees', with children inheriting the attributes of their parents. Whole-part structures are formed when an object is decomposed.
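
A minimal sketch of the two structures (invented names, for illustration only): composition expresses 'whole-part', inheritance expresses 'generalisation-specialisation'.

    class Wheel { /* ... */ };

    // Whole-part structure: a Vehicle is decomposed into Wheels.
    class Vehicle
    {
        Wheel wheels[4];
    };

    // Generalisation-specialisation structure: a Truck is a specialised
    // Vehicle and inherits its attributes.
    class Truck : public Vehicle { /* ... */ };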

The strengths of Coad and Yourdon's method are its brief, concise description and its use of general texts as sources of definitions, so that the definitions fit common sense and jargon is minimised. The main weakness of the method is its complex notation, which is difficult to use without tool support. Some users of the Coad-Yourdon method have used the OMT diagramming notation instead.

3.3.4 OMT

The Object Modelling Technique (OMT) of Rumbaugh et al [Ref. 27] contains two design activities:

• system design;

• object design.

System design should be performed in the AD phase. The object design should be performed in the DD phase.

The steps of system design are conventional and are:

• organise the system into subsystems and arrange them in layers and partitions;

• identify concurrency inherent in the problem;

• allocate subsystems to processors;

• define the data management implementation strategy;

• identify global resources and define the mechanism for controlling access to them;

• choose an approach to implementing software control;

• consider boundary conditions;

• establish trade-off priorities.


Many systems are quite similar, and Rumbaugh et al suggest that system designs be based on one of several frameworks or 'canonical architectures'. The ones they propose are:

• batch transformation - a data transformation performed once on an entire input set;

• continuous transformation - a data transformation performed continuously as inputs change;

• interactive interface - a system dominated by external interactions;

• dynamic simulation - a system that simulates evolving real world objects;

• real-time system - a system dominated by strict timing constraints;

• transaction manager - a system concerned with storing and updating data.

The OMT system design approach contains many design ideas that are generally applicable.

3.3.5 Shlaer-Mellor

Shlaer and Mellor [Ref. 28] describe an Object-Oriented Design Language (OODLE), derived from the Booch and Buhr notation. There are four types of diagram:

• class diagram;

• class structure chart;

• dependency diagram;

• inheritance diagram.

There is a class diagram for each class. The class diagram defines the operations and attributes of the class.

The class structure chart defines the module structure of the class, and the control and data flow between the modules of the class. There is a class structure chart for each class.


Dependency diagrams illustrate the dependencies between classes, which may be:

• client-server;

• friends.

A client-server dependency exists when a class (the client) calls upon the operations of another class (the server).

A friendship dependency exists when one class accesses the internal data of another class. This is an information-hiding violation.
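
A minimal C++ sketch of the two dependency types (class names invented): the client calls a service of the server, while the friend reaches into the server's internal data.

    class Server
    {
        friend class Friend;               // friendship dependency
    public:
        Server() : state(0) {}
        int operation() { return state; }  // service offered to clients
    private:
        int state;                         // internal data
    };

    class Client
    {
    public:
        // Client-server dependency: uses only the public service.
        int use(Server& s) { return s.operation(); }
    };

    class Friend
    {
    public:
        // Friendship dependency: reads internal data directly,
        // violating information hiding.
        int peek(Server& s) { return s.state; }
    };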

Inheritance diagrams show the inheritance relationships between classes.

Shlaer and Mellor define a 'recursive design' method that uses the OODLE notation as follows:

• define how the generic computing processes will be implemented;

• implement the object model classes using the generic computing processes.

The Shlaer-Mellor design approach is more complex than that of other object-oriented methods.

3.4 JACKSON SYSTEM DEVELOPMENT

The Jackson System Development (JSD) method should only be used in the AD phase if it has been used in the SR phase [Ref. 12]. In this case the SRD will contain:

• definitions of the real-world entities with which the software is to be concerned;

• definitions of the actions of these entities in the real world;

• a description of that part of the real world that is to be simulated in software;

• specifications of the functions that the software will perform, i.e. a selection of real-world actions that the software will emulate.


Table 3.4 shows how the JSD activities map to the ESA PSS-05-0 life cycle.

The first JSD task in the AD phase is matching processes to processors, i.e. deciding what computer runs each process. In JSD there may be a process for each instance of an entity type. The designer must make sure that the computer has enough performance to run all the instances of the process type.

The second JSD task in the AD phase is to transform process specifications. Unless there is one computer available for each process described in the SRD (e.g. a dedicated control system), it will be necessary to transform the specification of each process into a form capable of execution on a computer. Rather than having many individual processes that run once, it is necessary to have a single generic process that runs many times. A computer program consists of data and instructions. To transform a process into a program, the instruction and data components of each process must be separated. In JSD, the data component of a process is called the 'state-vector'. The state-vector also retains information on the stage that processing has got to, so that individual processes can be resumed at the correct point. The state-vector of a process should be specified in the AD phase. A state-vector should reside in permanent storage. The executable portion of a process should be specified in the AD phase in pseudo code. This may be carried over from the SRD without major modification; introduction of detailed implementation considerations should be deferred until the DD phase.
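
The following sketch (names invented, not taken from the JSD reference) illustrates the separation of a process into a state-vector and a generic executable part: the state-vector records which entity instance is being processed and the stage reached, so that execution can be resumed at the correct point.

    // State-vector: the data component of one process instance.
    // In a real system it would reside in permanent storage.
    struct StateVector
    {
        int entity_id;  // which real-world entity this process models
        int step;       // stage the processing has reached (resume point)
    };

    // Generic process: a single program run many times, once per
    // state-vector, instead of many individual processes.
    void resume_process(StateVector& sv)
    {
        switch (sv.step)  // resume at the correct point
        {
        case 0: /* handle the first action */ sv.step = 1; break;
        case 1: /* handle the next action  */ sv.step = 2; break;
        default: /* processing complete    */ break;
        }
    }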

The other major JSD task in the AD phase is to schedule processing. The execution of the processing has to be coordinated. In JSD this is done by introducing a 'scheduling' process that activates and deactivates other processes and controls their execution.


JSD activities (levels 0 to 3 shown by indentation):

Specification (SR Phase)
  Specify Model of Reality
    Develop Model Abstractly
      Write Entity-Action List
      Draw Entity-Structure Diagrams
    Define initial versions of model processes
  Specify System Functions
    Add functions to model processes
    Add timing constraints

Implementation (AD Phase)
  Match processes to processors
  Transform processes
  Devise Scheduler

Implementation (DD Phase)
  Define Database

Table 3.4: JSD activities and the ESA PSS-05-0 life cycle

3.5 FORMAL METHODS

Formal methods aim to produce a rigorous specification of the software in terms of an agreed notation with well-defined semantics. Formal methods should only be applied in the AD phase if they have been used in the SR phase. See ESA PSS-05-03, Guide to the Software Requirements Definition Phase, for a discussion of Formal Methods. Formal methods may be used when the software is safety-critical, or when the availability and security requirements are very demanding.

Development of a formal specification should proceed by increasing the detail until direct realisation of the specification in a programming language is possible. This procedure is sometimes called 'reification' [Ref. 10].

The AD phase of a software development using a formal method is marked by the taking of the first implementation decisions about the software. Relevant questions are: 'how are the abstract data types to be represented?' and 'how are the operations to be implemented?' These decisions may constrain subsequent iterations of the specification. When a Formal Method is being used, the AD phase should end when the specification can be coded in a high-level programming language.


CHAPTER 4 TOOLS FOR ARCHITECTURAL DESIGN

4.1 INTRODUCTION

This chapter discusses the tools for constructing a physical model and specifying the architectural design. Tools can be combined to suit the needs of a particular project.

4.2 TOOLS FOR PHYSICAL MODEL CONSTRUCTION

In all but the smallest projects, CASE tools should be used during the AD phase. Like many general-purpose tools (e.g. word processors and drawing packages), CASE tools should provide:

• windows, icons, menus and pointer (WIMP) style interface for the easy creation and editing of diagrams;

• what you see is what you get (WYSIWYG) style interface that ensures that what is created on the display screen is an exact image of what will appear in the document.

Method-specific CASE tools offer the following features not offered by general-purpose tools:

• enforcement of the rules of the methods;

• consistency checking;

• easy modification;

• automatic traceability of components to software requirements;

• built-in configuration management;

• support for abstraction and information hiding;

• support for simulation.

CASE tools should have an integrated data dictionary or another kind of 'repository'. This is necessary for consistency checking. Developers should check that a tool supports the parts of the method that they intend to use. Appendix E contains a more detailed list of desirable tool capabilities.

Configuration management of the model is essential. The model should evolve from baseline to baseline as it develops in the AD phase, and enforcement of procedures for the identification, change control and status accounting of the model is necessary. In large projects, configuration management tools should be used for the management of the model database.

4.3 TOOLS FOR ARCHITECTURAL DESIGN SPECIFICATION

A word processor or text processor should be used. Tools for the creation of paragraphs, sections, headers, footers, tables of contents and indexes all ease the production of a document. A spelling checker is desirable. An outliner may be found useful for creation of sub-headings, for viewing the document at different levels of detail and for rearranging the document. The ability to handle diagrams is very important.

Documents invariably go through many drafts as they are created, reviewed and modified. Revised drafts should include change bars. Document-comparison programs, which can mark changed text automatically, are invaluable for easing the review process.

Tools for communal preparation of documents are now beginning to be available, allowing many authors to comment and add to a single document.


CHAPTER 5 THE ARCHITECTURAL DESIGN DOCUMENT

5.1 INTRODUCTION

The ADD defines the framework of the solution. The ADD must be an output from the AD phase (AD19). The ADD must be sufficiently detailed to allow the project leader to draw up a detailed implementation plan and to control the overall project during the remaining development phases (AD23). It should be detailed enough to define the integration tests. Components (especially interfaces) should be defined in sufficient detail so that programmers, or teams of programmers, can work independently. The ADD is incomplete if programmers have to make decisions that affect programmers working on other components. Provided it meets these goals, the smaller the ADD, the more readable and reviewable it is.

The ADD should:

• avoid extensive specification of module processing (this is done in the DD phase);

• not cover the project management, configuration management, verification and quality assurance aspects (which are covered by the SPMP, SCMP, SVVP and SQAP).

Only the selected design approach should be reflected in the ADD (AD05). Descriptions of options and analyses of trade-offs should be archived by the project and reviewed in the Project History Document (PHD).

5.2 STYLE

The style of an ADD should be systematic and rigorous. The ADD should be clear, consistent and modifiable.

5.2.1 Clarity

An ADD is clear if each component has definite inputs, outputs, function and relationships to other components. The natural language used in an ADD should be shared by all members of the development team.


Explanatory text, written in natural language, should be included to enable review by those not familiar with the design method.

5.2.2 Consistency

The ADD must be consistent (AD22). There are several types of inconsistency:

• different terms used for the same thing;

• the same term used for different things;

• incompatible activities happening simultaneously;

• activities happening in the wrong order;

• using two different components to perform the same function.

Where a term could have multiple meanings, a single meaning for the term should be defined in a glossary, and only that meaning should be used throughout.

Duplication and overlap lead to inconsistency. If the same functional requirement can be traced to more than one component, this will be a clue to inconsistency. Methods and tools help consistency to be achieved.

5.2.3 Modifiability

An ADD is modifiable if design changes can be documented easily, completely and consistently.

5.3 EVOLUTION

The ADD should be put under change control by the developer at the start of the Detailed Design Phase. New components may need to be added and old components modified or deleted. If the ADD is being developed by a team of people, the control of the document may be started at the beginning of the AD phase.

The Software Configuration Management Plan defines a formal change process to identify, control, track and report projected changes as soon as they are identified. Approved changes in components must be recorded in the ADD by inserting document change records and a document status sheet at the start of the ADD.


5.4 RESPONSIBILITY

The developer is responsible for producing the ADD. The developer should nominate people with proven design and implementation skills to write it.

5.5 MEDIUM

The ADD is usually a paper document. It may be distributed electronically when participants have access to the necessary equipment.

5.6 CONTENT

The ADD must define the major software components and the interfaces between them (AD17). Components may be systems, subsystems, data stores, programs and processes. The types of software components will reflect the AD method used. At the end of the AD phase the descriptions of the components are assembled into an ADD.

The ADD must be compiled according to the table of contents provided in Appendix C of ESA PSS-05-0 (AD24). This table of contents is derived from ANSI/IEEE Std 1016-1987, 'Software Design Descriptions' [Ref. 16]. This standard defines a Software Design Description as a 'representation or model of the software system to be created. The model should provide the precise design information needed for the planning, analysis and implementation of the software system'. The ADD should be such a Software Design Description.

References should be given where appropriate, but an ADD should not refer to documents that follow it in the ESA PSS-05-0 life cycle. An ADD should contain no TBCs or TBDs by the time of the Architectural Design Review.

Relevant material unsuitable for inclusion in the contents list should be inserted in additional appendices. If there is no material for a section then the phrase 'Not Applicable' should be inserted and the section numbering preserved.


Service Information:
a - Abstract
b - Table of Contents
c - Document Status Sheet
d - Document Change Records made since last issue

1 Introduction
  1.1 Purpose
  1.2 Scope
  1.3 Definitions, acronyms and abbreviations
  1.4 References
  1.5 Overview

2 System Overview

3 System Context
  3.n External interface definition

4 System Design
  4.1 Design method
  4.2 Decomposition description

5 Component Description
  5.n [Component identifier]
    5.n.1 Type
    5.n.2 Purpose
    5.n.3 Function
    5.n.4 Subordinates
    5.n.5 Dependencies
    5.n.6 Interfaces
    5.n.7 Resources
    5.n.8 References
    5.n.9 Processing
    5.n.10 Data

6 Feasibility and Resource Estimates

7 Software Requirements vs Components Traceability matrix

5.6.1 ADD/1 INTRODUCTION

This section should provide an overview of the entire document and a description of the scope of the software.


5.6.1.1 ADD/1.1 Purpose (of the document)

This section should:

(1) describe the purpose of the particular ADD;

(2) specify the intended readership of the ADD.

5.6.1.2 ADD/1.2 Scope (of the software)

This section should:

(1) identify the software products to be produced;

(2) explain what the proposed software will do (and will not do, if necessary);

(3) define relevant benefits, objectives and goals as precisely as possible;

(4) be consistent with similar statements in higher-level specifications, if they exist.

5.6.1.3 ADD/1.3 Definitions, acronyms and abbreviations

This section should define all terms, acronyms and abbreviations used in the ADD, or refer to other documents where the definitions can be found.

5.6.1.4 ADD/1.4 References

This section should list all the applicable and reference documents, identified by title, author and date. Each document should be marked as applicable or reference. If appropriate, report number, journal name and publishing organisation should be included.

5.6.1.5 ADD/1.5 Overview (of the document)

This section should:

(1) describe what the rest of the ADD contains;

(2) explain how the ADD is organised.

This section is not included in the table of contents provided in ESA PSS-05-0, and is therefore optional.


5.6.2 ADD/2 SYSTEM OVERVIEW

This section should briefly introduce the system context and design, and discuss the background to the project.

This section may summarise the costs and benefits of the selected architecture, and may refer to trade-off studies and prototyping exercises.

5.6.3 ADD/3 SYSTEM CONTEXT

This section should define all the external interfaces (AD18). This discussion should be based on a system block diagram or context diagram to illustrate the relationship between this system and other systems.

5.6.4 ADD/4 SYSTEM DESIGN

5.6.4.1 ADD/4.1 Design Method

The design method used should be named and referenced. A brief description may be added to aid readers not familiar with the method. Any deviations and extensions of the method should be explained and justified.

5.6.4.2 ADD/4.2 Decomposition description

The software components should be summarised. This should be presented as structure charts or object diagrams showing the hierarchy, control flow (AD14) and data flow between the components.

Components can be organised in various ways to provide the views needed by different members of the development organisation. ANSI/IEEE Std 1016-1987 describes three possible ways of presenting the software design. They are the 'Decomposition View', the 'Dependency View' and the 'Interface View'. Ideally, all views should be provided.

The Decomposition View shows the component breakdown. It defines the system using one or more of the identity, type, purpose, function and subordinate parts of the component description. Suitable methods are tables listing component identities with a one-line summary of their purpose. The intended readership consists of managers, for work package definition, and development personnel, to trace or cross-reference components and functions.

The Dependency View emphasises the relationships among the components. It should define the system using one or more of the identity, type, purpose, dependency and resource parts of the component description. Suitable methods of presentation are Structure Charts and Object Diagrams. The intended readership of this description consists of managers, for the formulation of the order of implementation of work packages, and maintenance personnel, who need to assess the impact of design changes.

The Interface View emphasises the functionality of the components and their interfaces. It should define the system using one or more of the identity, functions and interface parts of the component description. Suitable methods of presentation are interface files and parameter tables. The intended readership of this description comprises designers, programmers and testers, who need to know how to use the components in the system.

5.6.5 ADD/5 COMPONENT DESCRIPTION

The descriptions of the components should be laid out hierarchically. There should be subsections dealing with the following aspects of each component:

5.n Component identifier
5.n.1 Type
5.n.2 Purpose
5.n.3 Function
5.n.4 Subordinates
5.n.5 Dependencies
5.n.6 Interfaces
5.n.7 Resources
5.n.8 References
5.n.9 Processing
5.n.10 Data

The number 'n' should relate to the place of the component in the hierarchy.

5.6.5.1 ADD/5.n Component Identifier

Each component should have a unique identifier (SCM06) for effective configuration management. The component should be named according to the rules of the programming language or operating system to be used. Where possible, a hierarchical naming scheme should be used that identifies the parent of the component (e.g. ParentName_ChildName).


The identifier should reflect the purpose and function of the component and be brief yet meaningful. If abbreviation is necessary, abbreviations should be applied consistently and without ambiguity. Abbreviations should be documented. Component identifiers should be mutually consistent (e.g. if there is a routine called READ_RECORD then one might expect a routine called WRITE_RECORD, not RECORD_WRITING_ROUTINE).

5.6.5.1.1 ADD/5.n.1 Type

The component type should be defined by stating its logical and physical characteristics. The logical characteristics should be defined by stating the package, library or class that the component belongs to. The physical characteristics should be defined by stating the type of component, using the implementation terminology (e.g. task, subroutine, subprogram, package, file).

The contents of some component description sections depend on the component type. For the purpose of this guide, the categories 'executable' (i.e. contains computer instructions) and 'non-executable' (i.e. contains only data) are used.

5.6.5.1.2 ADD/5.n.2 Purpose

The purpose of a component should be defined by tracing it to the software requirements that it implements.

Backwards traceability depends upon each component description explicitly referencing the requirements that justify its existence.

5.6.5.1.3 ADD/5.n.3 Function

The function of a component must be defined in the ADD (AD07). This should be done by stating what the component does.

The function description depends upon the component type. Therefore it may be a description of:

• the process;

• the information stored or transmitted.


Process descriptions may use such techniques as Structured English, Precondition-Postcondition specifications and State-Transition Diagrams.
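
For example, a Precondition-Postcondition specification of a hypothetical 'calibrate' service might be recorded alongside its declaration (illustrative only; names invented, using the Spectrum class sketched in section 3.3):

    class Spectrum;  // as sketched in section 3.3

    // Precondition:  the spectrum contains raw counts and the
    //                calibration table has been loaded.
    // Postcondition: every element of the spectrum is expressed in
    //                calibrated flux units; its size is unchanged.
    void calibrate(Spectrum& spectrum);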

5.6.5.1.4 ADD/5.n.4 Subordinates

The subordinates of a component should be defined by listing the immediate children. The subordinates of a module are the modules that are 'called by' it. The subordinates of a database could be the files that 'compose' it. The subordinates of an object are the objects that are 'used by' it.

5.6.5.1.5 ADD/5.n.5 Dependencies

The dependencies of a component should be defined by listing the constraints placed upon its use by other components. For example:

• 'what operations have to have taken place before this component is called?'

• 'what operations are excluded when this operation is taking place?'

• 'what components have to be executed after this one?'.

5.6.5.1.6 ADD/5.n.6 Interfaces

Both control flow and data flow aspects of an interface need to be specified in the ADD for each 'executable' component. Data aspects of 'non-executable' components should be defined in subsection 5.n.10.

The control flow to and from a component should be defined in terms of how execution of the component is to be started (e.g. subroutine call) and how it is to be terminated (e.g. return). This may be implicit in the definition of the type of component, and a description may not be necessary. Control flows may also take place during execution (e.g. interrupt) and these should be defined, if they exist.

The data flow input to and output from each component must be detailed in the ADD (AD06, AD08, AD09). Data structures should be identified that:

• are associated with the control flow (e.g. call argument list);

• interface components through common data areas and files.


One component's input may be another's output, and to avoid duplication of interface definitions, specific data components should be defined and described separately (e.g. files, messages). The interface definition should only identify the data component and not define its contents.

The interfaces of a component should be defined by explaining 'how' the component interacts with the components that use it. This can be done by describing the mechanisms for:

• invoking or interrupting the component's execution;

• communicating through parameters, common data areas or messages.
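
As an illustration (component and parameter names invented, not from the standard), the interface of an 'executable' component might be recorded like this:

    class TelemetryPacket;  // as sketched in section 3.3

    // Component  : TM_ARCHIVE_STORE (hypothetical)
    // Invocation : synchronous subroutine call; terminates by return.
    // Data flow  : 'packet' is an input (passed by reference, unmodified);
    //              the returned status code is the output.
    // Common data: none; the archive file itself is identified as a
    //              separate data component, not defined here.
    int tm_archive_store(const TelemetryPacket& packet);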

If a component interfaces to components in the same system then the interface description should be defined in the ADD. If a component interfaces to components in other systems, the interface description should be defined in an Interface Control Document (ICD).

5.6.5.1.7 ADD/5.n.7 Resources

The resources a component requires should be defined by itemising what the component needs from its environment to perform its function. Items that are part of the component interface are excluded. Examples of resources that might be needed by a component are displays, printers and buffers.

5.6.5.1.8 ADD/5.n.8 References

Explicit references should be inserted where a component description uses or implies material from another document.

5.6.5.1.9 ADD/5.n.9 Processing

The processing a component needs to do should be defined by summarising the control and data flow within it. For some kinds of component (e.g. files) there is no such flow. In practice it is often difficult to separate the description of function from the description of processing. Therefore a detailed description of function can compensate for a lack of detail in the specification of the processing. Techniques of process specification more oriented towards software design are Program Design Language, Pseudo Code and Flow Charts.


The ADD should not provide a complete, exhaustive specification of the processing to be done by bottom-level components; this should be done in the Detailed Design and Production Phase. The processing of higher-level components, especially the control flow, data flow and state transitions, should be specified.

Software constraints may specify that the processing be performed by means of a particular algorithm (which should be stated or referenced).

5.6.5.1.10 ADD/5.n.10 Data

The data internal to a component should be defined. The amount of detail required depends strongly on the type of component. The logical and physical data structure of files that interface major components should be defined in detail. The specification of the data internal to a major component should be postponed to the DD phase.

Data structure definitions must include the:

• description of each element (e.g. name, type, dimension) (AD10);

• relationships between the elements (i.e. the structure, AD11);

• range of possible values of each element (AD12);

• initial values of each element (AD13).
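
A sketch of how the four mandatory elements might be captured for a simple interface record (names, ranges and values invented for illustration):

    // Data structure definition for a hypothetical interface record.
    // Each field states its name, type and dimension (AD10); the
    // struct itself fixes the relationships between elements (AD11);
    // ranges (AD12) and initial values (AD13) are in the comments.
    struct HousekeepingRecord
    {
        int    channel_id;    // range 0..255;         initial value 0
        double temperature;   // range -50..+150 degC; initial value 0.0
        double voltages[8];   // dimension 8; range 0..35 V; initially 0.0
    };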

5.6.6 ADD/6 FEASIBILITY AND RESOURCE ESTIMATES

This section should contain a summary of the computer resources required to build, operate and maintain the software (AD15).

5.6.7 ADD/7 SOFTWARE REQUIREMENTS TRACEABILITY MATRIX

This section should contain a table that summarises how each software requirement has been met in the ADD (AD21). The tabular format permits one-to-one and one-to-many relationships to be shown. A template is provided in Appendix D.


CHAPTER 6 LIFE CYCLE MANAGEMENT ACTIVITIES

6.1 INTRODUCTION

AD phase activities must be carried out according to the plans defined in the SR phase (AD01). These are:

• Software Project Management Plan for the AD phase (SPMP/AD);

• Software Configuration Management Plan for the AD phase (SCMP/AD);

• Software Verification and Validation Plan for the AD phase (SVVP/AD);

• Software Quality Assurance Plan for the AD phase (SQAP/AD).

Progress against plans should be continuously monitored by project management and documented at regular intervals in progress reports.

Plans of DD phase activities must be drawn up in the AD phase. These plans should cover project management, configuration management, verification and validation, quality assurance and integration tests.

6.2 PROJECT MANAGEMENT PLAN FOR THE DD PHASE

By the end of the AD review, the DD phase section of the SPMP (SPMP/DD) must be produced (SPM08). The SPMP/DD describes, in detail, the project activities to be carried out in the DD phase.

An estimate of the total project cost must be included in the SPMP/DD (SPM09). Every effort should be made to arrive at a total project cost estimate with an accuracy better than 10%. The SPMP/DD must contain a Work-Breakdown Structure (WBS) that is directly related to the breakdown of the software into components (SPM10).

The SPMP/DD must contain a planning network showing the relationships between the coding, testing and integration activities (SPM11). No software production package in the SPMP/DD must last longer than one man-month (SPM12).


Cost estimation methods can be used to estimate the total number of man-months and the implementation schedule needed for the DD phase. Estimates of the number of lines of code, supplemented by many other parameters, are the usual input. When properly calibrated (using data collected from previous projects done by the same organisation), such models can yield predictions that are accurate to within 20%, 68% of the time (TRW's initial calibration of the COCOMO model). Even if cost models are used, technical knowledge and experience gained on similar projects need to be used to arrive at the desired 10% accurate cost estimate.
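
To show the shape of such a model, the sketch below evaluates the published basic COCOMO effort equation, effort = a x KLOC^b man-months, with the nominal 'organic mode' coefficients (a = 2.4, b = 1.05). The size figure is invented and, as noted above, the coefficients must be recalibrated with local project data before the results mean anything.

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double a = 2.4, b = 1.05;  // nominal organic-mode coefficients
        const double kloc = 32.0;        // estimated size in KLOC (invented)

        // Basic COCOMO effort equation: effort = a * KLOC^b man-months.
        const double effort = a * std::pow(kloc, b);
        std::printf("Estimated effort: %.0f man-months\n", effort);
        return 0;
    }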

Guidance on writing the SPMP/DD is provided in ESA PSS-05-08, Guide to Software Project Management.

6.3 CONFIGURATION MANAGEMENT PLAN FOR THE DD PHASE

During the AD phase, the DD phase section of the SCMP (SCMP/DD) must be produced (SCM46). The SCMP/DD must cover the configuration management procedures for documentation, deliverable code, and any CASE tool outputs or prototype code, to be produced in the DD phase (SCM47).

Guidance on writing the SCMP/DD is provided in ESA PSS-05-09, Guide to Software Configuration Management.

6.4 VERIFICATION AND VALIDATION PLAN FOR THE DD PHASE

During the AD phase, the DD phase section of the SVVP (SVVP/DD) must be produced (SVV15). The SVVP/DD must define how the DDD and the code are evaluated by defining the review and traceability procedures. It may include specifications of the tests to be performed with prototypes.

The planning of the integration tests should proceed in parallel with the definition of the architectural design.

Guidance on writing the SVVP/DD is provided in ESA PSS-05-10, Guide to Software Verification and Validation.

6.5 QUALITY ASSURANCE PLAN FOR THE DD PHASE

During the AD phase, the DD phase section of the SQAP (SQAP/DD) must be produced (SQA08). The SQAP/DD must describe, in detail, the quality assurance activities to be carried out in the DD phase (SQA09).

SQA activities include monitoring the following:

• management;
• documentation;
• standards, practices, conventions and metrics;
• reviews and audits;
• testing activities;
• problem reporting and corrective action;
• tools, techniques and methods;
• code and media control;
• supplier control;
• record collection, maintenance and retention;
• training;
• risk management.

Guidance on writing the SQAP/DD is provided in ESA PSS-05-11, Guide to Software Quality Assurance.

The SQAP/DD should take account of all the software requirements related to quality, in particular:

• quality requirements;
• reliability requirements;
• maintainability requirements;
• safety requirements;
• verification requirements;
• acceptance-testing requirements.

The level of monitoring planned for the DD phase should be appropriate to the requirements and the criticality of the software. Risk analysis should be used to target areas for detailed scrutiny.


6.6 INTEGRATION TEST PLANS

The developer must construct an integration test plan in the AD phase and document it in the SVVP (SVV17). This plan should define the scope, approach, resources and schedule of integration testing activities.

Specific tests for each software requirement are not formulated until the DD phase. The Integration Test Plan should deal with the general issues, for example:

• where will the integration tests be done?

• who will attend?

• who will carry them out?

• are tests needed for all software requirements?

• must any special test software be used?

• how long is the integration testing programme expected to last?

• are simulations necessary?

Guidance on writing the SVVP/IT is provided in ESA PSS-05-10, Guide to Software Verification and Validation.


APPENDIX A GLOSSARY

A.1 LIST OF ACRONYMS

Terms used in this document are consistent with ESA PSS-05-0 [Ref. 1] and ANSI/IEEE Std 610.12-1990 [Ref. 2].

AD     Architectural Design
ADD    Architectural Design Document
AD/R   Architectural Design Review
ANSI   American National Standards Institute
BSSC   Board for Software Standardisation and Control
CASE   Computer Aided Software Engineering
DD     Detailed Design and production
DFD    Data Flow Diagram
ESA    European Space Agency
GUI    Graphical User Interface
HOOD   Hierarchical Object Oriented Design
IEEE   Institute of Electrical and Electronics Engineers
ISO    International Standards Organisation
ICD    Interface Control Document
JSD    Jackson System Development
MTBF   Mean Time Between Failures
MTTR   Mean Time To Repair
OOA    Object-Oriented Analysis
OOD    Object-Oriented Design
OSI    Open Systems Interconnection
PA     Product Assurance
PSS    Procedures, Specifications and Standards
QA     Quality Assurance
RID    Review Item Discrepancy
SADT   Structured Analysis and Design Technique
SCM    Software Configuration Management
SCMP   Software Configuration Management Plan
SPM    Software Project Management
SPMP   Software Project Management Plan
SQA    Software Quality Assurance
SQAP   Software Quality Assurance Plan
SR     Software Requirements
SRD    Software Requirements Document


SR/R   Software Requirements Review
SSADM  Structured Systems Analysis and Design Methodology
ST     System Test
SUM    Software User Manual
SVVP   Software Verification and Validation Plan
TBC    To Be Confirmed
TBD    To Be Defined
UR     User Requirements
URD    User Requirements Document
UR/R   User Requirements Review


APPENDIX B REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0 Issue 2, February 1991.

2. IEEE Standard Glossary for Software Engineering Terminology, ANSI/IEEE Std 610.12-1990.

3. Structured Analysis (SA): A Language for Communicating Ideas, D.T. Ross, IEEE Transactions on Software Engineering, Vol SE-3, No 1, January 1977.

4. SSADM Version 4, NCC Blackwell Publications, 1991.

5. The STARTS Guide - a guide to methods and software tools for the construction of large real-time systems, NCC Publications, 1987.

6. Structured Rapid Prototyping, J. Connell and L. Shafer, Yourdon Press, 1989.

7. Structured Analysis and System Specification, T. DeMarco, Yourdon Press, 1978.

8. IEEE Standard for Software Reviews and Audits, IEEE Std 1028-1988.

9. Structured Systems Analysis and Design Methodology, G. Cutts, Paradigm, 1987.

10. Systematic Software Development Using VDM, C.B. Jones, Prentice-Hall, 1986.

11. Structured Development for Real-Time Systems, P.T. Ward & S.J. Mellor, Yourdon Press, 1985 (three volumes).

12. System Development, M. Jackson, Prentice-Hall, 1983.

13. Object Oriented Development, G. Booch, IEEE Transactions on Software Engineering, Vol SE-12, February 1986.

14. HOOD Reference Manual, Issue 3, Draft C, Reference WME/89-173/JB, HOOD Working Group, ESTEC, 1989.


15. The Practical Guide to Structured Systems Design, M. Page-Jones, Yourdon Press, 1980.

16. IEEE Recommended Practice for Software Design Descriptions, ANSI/IEEE Std 1016-1987.

17. IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE Std 982.1-1988.

18. IEEE Guide for the Use of IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE Std 982.2-1988.

19. Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design, E. Yourdon and L. Constantine, Yourdon Press, 1978.

20. Structured Analysis and System Specification, T. DeMarco, Yourdon Press, 1978.

21. Principles of Program Design, M. Jackson, Academic Press, 1975.

22. Software Engineering with Ada, second edition, G. Booch, Benjamin/Cummings Publishing Company Inc, 1986.

23. A Complexity Measure, T.J. McCabe, IEEE Transactions on Software Engineering, Vol SE-2, No 4, December 1976.

24. Object-Oriented Design, P. Coad and E. Yourdon, Prentice-Hall, 1991.

25. The Object-Oriented Structured Design Notation for Software Design Representation, A.I. Wasserman, P.A. Pircher and R.J. Muller, Computer, March 1990.

26. Object-Oriented Design with Applications, G. Booch, Benjamin-Cummings, 1991.

27. Object-Oriented Modeling and Design, J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy and W. Lorensen, Prentice-Hall, 1991.

28. Object-Oriented Systems Analysis - Modeling the World in Data, S. Shlaer and S.J. Mellor, Yourdon Press, 1988.

29. Object Lifecycles - Modeling the World in States, S. Shlaer and S.J. Mellor, Yourdon Press, 1992.


APPENDIX C MANDATORY PRACTICES

This appendix is repeated from ESA PSS-05-0, Appendix D.4.

AD01 AD phase activities shall be carried out according to the plans defined in the SR phase.

AD02 A recognised method for software design shall be adopted and applied consistently in the AD phase.

AD03 The developer shall construct a 'physical model', which describes the design of the software using implementation terminology.

AD04 The method used to decompose the software into its component parts shall permit a top-down approach.

AD05 Only the selected design approach shall be reflected in the ADD.

For each component the following information shall be detailed in the ADD:

AD06 • data input;

AD07 • functions to be performed;

AD08 • data output.

AD09 Data structures that interface components shall be defined in the ADD.

Data structure definitions shall include the:

AD10 • description of each element (e.g. name, type, dimension);

AD11 • relationships between the elements (i.e. the structure);

AD12 • range of possible values of each element;

AD13 • initial values of each element.

AD14 The control flow between the components shall be defined in the ADD.

AD15 The computer resources (e.g. CPU speed, memory, storage, system software) needed in the development environment and the operational environment shall be estimated in the AD phase and defined in the ADD.


AD16 The outputs of the AD phase shall be formally reviewed during the Architectural Design Review.

AD17 The ADD shall define the major components of the software and the interfaces between them.

AD18 The ADD shall define or reference all external interfaces.

AD19 The ADD shall be an output from the AD phase.

AD20 The ADD shall be complete, covering all the software requirements described in the SRD.

AD21 A table cross-referencing software requirements to parts of the architectural design shall be placed in the ADD.

AD22 The ADD shall be consistent.

AD23 The ADD shall be sufficiently detailed to allow the project leader to draw up a detailed implementation plan and to control the overall project during the remaining development phases.

AD24 The ADD shall be compiled according to the table of contents provided in Appendix C.


APPENDIX D REQUIREMENTS TRACEABILITY MATRIX

REQUIREMENTS TRACEABILITY MATRIX:      DATE: <YY-MM-DD>
ADD TRACED TO SRD                      PAGE 1 OF <nn>
PROJECT: <TITLE OF PROJECT>

SRD IDENTIFIER    ADD IDENTIFIER    SOFTWARE REQUIREMENT


APPENDIX E CASE TOOL SELECTION CRITERIA

This appendix lists selection criteria for the evaluation of CASE tools for building a physical model.

The tool should:

1. enforce the rules of each method it supports;

2. allow the user to construct diagrams according to the conventions of the selected method;

3. support consistency checking (e.g. balancing);

4. store the model description;

5. be able to store multiple model descriptions;

6. support top-down decomposition (e.g. by allowing the user to create and edit lower-level component designs by 'exploding' those in a higher-level diagram);

7. minimise line-crossing in a diagram;

8. minimise the number of keystrokes required to add a symbol;

9. allow the user to 'tune' a method by adding and deleting rules;

10. support concurrent access by multiple users;

11. permit access to the model database (usually called a repository) to be controlled;

12. permit reuse of all or part of existing model descriptions, to allow the bottom-up integration of models (e.g. rather than decompose a component such as 'READ_ORBIT_FILE', it should be possible to import a specification of this component from another model);

13. support document generation according to user-defined templates and formats;

14. support traceability of software requirements to components and code;


15. support conversion of a diagram from one style to another (e.g. Yourdon toSADT and vice-versa);

16. allow the user to execute the model (to verify real-time behaviour by animationand simulation);

17. support all configuration management functions (e.g. to identify items, controlchanges to them, storage of baselines etc);

18. keep the users informed of changes when part of a design is concurrentlyaccessed;

19. support consistency checking in any of three checking modes, selectable bythe user:

• interpreter, i.e. check each change as it is made; reject any illegalchanges;

• compiler, i.e. accept all changes and check them all at once at the end ofthe session;

• monitor, i.e. check each change as it is made; issue warnings aboutillegal changes.

20. link the checking mode with the access mode when there are multiple users, so that local changes can be done in any checking mode, but changes that have non-local effects are checked immediately and rejected if they are in error;

21. support scoping and overloading of data item names;

22. support the production of module shells from component specifications;

23. support the insertion and editing of a description of the processing in the module shell;

24. provide context-dependent help facilities;

25. make effective use of colour to allow parts of a display to be easily distinguished;

26. have a consistent user interface;

27. permit direct hardcopy output;

28. permit data to be imported from other CASE tools;


29. permit the exporting of data in standard graphics file formats (e.g. CGM);

30. permit the exporting of data to external word-processing applications and editors;

31. describe the format of the tool database or repository in the user documentation, so that users can manipulate the database by means of external software and packages (e.g. tables of ASCII data).


APPENDIX F INDEX

abstraction, 21
acceptance-testing requirement, 10
active safety, 14
AD/R, 1, 31
AD01, 4, 61
AD02, 5
AD03, 5
AD04, 6
AD05, 49
AD06, 57
AD07, 56
AD08, 57
AD09, 57
AD10, 26, 59
AD11, 26, 59
AD12, 26, 59
AD13, 26, 59
AD14, 54
AD15, 29, 59
AD16, 31
AD17, 51
AD18, 54
AD19, 49
AD20, 3
AD21, 59
AD22, 50
AD23, 49
AD24, 51
ADD, 3
ANSI/IEEE Std 1016-1987, 51
architectural design, 25
asynchronous control flow, 28
audit, 63
baseline, 48
Booch, 38
Buhr, 42
CASE tool, 6, 47, 62
change control, 48, 50
class diagram, 42
class structure chart, 42
Coad and Yourdon, 40
cohesion, 15, 16
complexity metric, 21
configuration management, 47
control flow, 27
corrective action, 63
couple, 18
coupling, 15, 18
cyclomatic complexity, 15, 22
data dictionary, 47
data management component, 40
data structure, 26
data structure match, 15, 20
DD phase, 31, 61, 64
decomposition, 7
dependency diagram, 42
design metric, 16
design quality, 15
documentation requirement, 10
experimental prototype, 24
factoring, 15, 21
fail safe, 14
fan-in, 15, 20
fan-out, 15, 21
formal method, 45
formal review, 31
HOOD, 39
human interaction component, 40
ICD, 9, 58
information clustering, 23
information hiding, 23
inheritance diagram, 42
inspection, 6
integration test, 61
Integration Test Plan, 30
interface requirement, 8
internal review, 31
JSD, 43
layers, 41
maintainability requirement, 13
media control, 63
method, 33, 63
metric, 15, 63
model, 5
modularity, 23
module diagram, 39
MTBF, 11
non-functional, 7
Object-Oriented Design, 36
OMT, 41
operational requirement, 10
parallel control flow, 28
partitions, 41
passive safety, 14
performance requirement, 8
physical model, 5
portability requirement, 11
problem domain component, 40
problem reporting, 63
process diagram, 39
programming language, 29
progress report, 4, 61
Project History Document, 25
prototype, 62
prototyping, 24
quality assurance, 61
quality requirement, 11
recursive design, 43
reliability requirement, 11
resource, 64
resource requirement, 10
reuse, 24
review, 63
RID, 31
risk analysis, 63
risk management, 63
SADT, 35
safety requirement, 14
SCM06, 55
SCM46, 62
SCM47, 62
SCMP/AD, 61
SCMP/DD, 3, 62
security requirement, 10
sequential control flow, 28
Shlaer-Mellor, 42
simulation, 64
SPM08, 61
SPM09, 61
SPM10, 61
SPM11, 61
SPM12, 61
SPMP/AD, 61
SPMP/DD, 3, 61, 62
SQA, 63
SQA08, 62
SQA09, 63
SQAP/AD, 61
SQAP/DD, 3, 62
SRD, 4
SSADM, 36
SVV15, 62
SVV17, 64
SVVP/AD, 61
SVVP/DD, 3, 62
SVVP/IT, 3
synchronous control flow, 28
system shape, 15, 20
task management component, 40
technical review, 6
test, 62, 63
tool, 47, 48, 63
traceability, 47
training, 63
validation, 61
verification, 61
verification requirement, 10
walkthrough, 6, 31
Yourdon, 34


ESA PSS-05-05 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to the software detailed design and production phase

Prepared by:
ESA Board for Software Standardisation and Control (BSSC)


DOCUMENT STATUS SHEET


1. DOCUMENT TITLE: ESA PSS-05-05 Guide to the Software Detailed Design and Production Phase

2. ISSUE 3. REVISION 4. DATE 5. REASON FOR CHANGE

1 0 1992 First issue

1 1 1995 Minor revisions for publication

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved, 10th June 1992
Telematics Supervisory Board

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division,
ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E1
ISSN 0379-4059

Copyright © 1992 by European Space Agency


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION .... 1
1.1 PURPOSE .... 1
1.2 OVERVIEW .... 1
CHAPTER 2 THE DETAILED DESIGN AND PRODUCTION PHASE .... 3
2.1 INTRODUCTION .... 3
2.2 EXAMINATION OF THE ADD .... 4
2.3 DETAILED DESIGN .... 5
2.3.1 Definition of design standards .... 5
2.3.2 Decomposition of the software into modules .... 5
2.3.3 Reuse of the software .... 6
2.3.4 Definition of module processing .... 7
2.3.5 Defensive design .... 8
2.3.6 Optimisation .... 10
2.3.7 Prototyping .... 11
2.3.8 Design reviews .... 11
2.3.9 Documentation .... 12
2.4 TEST SPECIFICATIONS .... 13
2.4.1 Unit test planning .... 13
2.4.2 Test designs, test cases and test procedures .... 13
2.5 CODING AND UNIT TESTING .... 14
2.5.1 Coding .... 14
2.5.1.1 Module headers .... 15
2.5.1.2 Declarations of variables .... 16
2.5.1.3 Documentation of variables .... 16
2.5.1.4 Names of variables .... 16
2.5.1.5 Mixing data types .... 16
2.5.1.6 Temporary variables .... 17
2.5.1.7 Parentheses .... 17
2.5.1.8 Layout and presentation .... 17
2.5.1.9 Comments .... 17
2.5.1.10 Diagnostic code .... 18
2.5.1.11 Structured programming .... 18
2.5.1.12 Consistent programming .... 18
2.5.1.13 Module size .... 18
2.5.1.14 Code simplicity .... 18
2.5.2 Coding standards .... 19
2.5.3 Unit testing .... 19
2.6 INTEGRATION AND SYSTEM TESTING .... 20
2.6.1 Integration .... 20
2.6.2 Integration testing .... 21
2.6.3 System testing .... 21
2.7 PLANNING THE TRANSFER PHASE .... 22
2.8 THE DETAILED DESIGN PHASE REVIEW .... 23
CHAPTER 3 METHODS FOR DETAILED DESIGN AND PRODUCTION .... 25
3.1 INTRODUCTION .... 25
3.2 DETAILED DESIGN METHODS .... 25
3.2.1 Flowcharts .... 26
3.2.2 Stepwise refinement .... 26
3.2.3 Structured programming .... 26
3.2.4 Program Design Languages .... 28
3.2.5 Pseudo-code .... 29
3.2.6 Jackson Structured Programming .... 29
3.3 PRODUCTION METHODS .... 30
3.3.1 Programming languages .... 30
3.3.1.1 Feature classification .... 30
3.3.1.1.1 Procedural languages .... 30
3.3.1.1.2 Object-oriented languages .... 32
3.3.1.1.3 Functional languages .... 32
3.3.1.1.4 Logic programming languages .... 33
3.3.1.2 Applications .... 34
3.3.1.3 FORTRAN .... 34
3.3.1.4 COBOL .... 35
3.3.1.5 Pascal .... 36
3.3.1.6 C .... 36
3.3.1.7 Modula-2 .... 37
3.3.1.8 Ada .... 37
3.3.1.9 Smalltalk .... 38
3.3.1.10 C++ .... 38
3.3.1.11 LISP .... 38
3.3.1.12 ML .... 39
3.3.1.13 Prolog .... 39
3.3.1.14 Summary .... 40
3.3.2 Integration methods .... 40
3.3.2.1 Function-by-function .... 41
3.3.2.2 Top-down integration .... 42
3.3.2.3 Bottom-up integration .... 42
CHAPTER 4 TOOLS FOR DETAILED DESIGN AND PRODUCTION .... 43
4.1 INTRODUCTION .... 43
4.2 DETAILED DESIGN .... 43
4.2.1 CASE tools .... 43
4.2.2 Configuration managers .... 44
4.2.3 Precompilers .... 44
4.3 PRODUCTION .... 44
4.3.1 Modelling tools .... 45
4.3.2 Code generators .... 45
4.3.3 Editors .... 45
4.3.4 Language-sensitive editors .... 46
4.3.5 Static analysers .... 46
4.3.6 Compilers .... 47
4.3.7 Linkers .... 48
4.3.8 Debuggers .... 49
4.3.9 Dynamic analysers .... 49
4.3.10 Test tools .... 50
4.3.11 Word processors .... 51
4.3.12 Documentation generators .... 51
4.3.13 Configuration managers .... 52
CHAPTER 5 THE DETAILED DESIGN DOCUMENT .... 53
5.1 INTRODUCTION .... 53
5.2 STYLE .... 53
5.2.1 Clarity .... 53
5.2.2 Consistency .... 54
5.2.3 Modifiability .... 54
5.3 EVOLUTION .... 55
5.4 RESPONSIBILITY .... 55
5.5 MEDIUM .... 55
5.6 CONTENT .... 55
CHAPTER 6 THE SOFTWARE USER MANUAL .... 67
6.1 INTRODUCTION .... 67
6.2 STYLE .... 67
6.2.1 Matching the style to the user characteristics .... 68
6.2.1.1 The end user's view .... 68
6.2.1.2 The operator's view .... 69
6.2.1.3 The trainee's view .... 69
6.2.2 Matching the style to the HCI characteristics .... 69
6.2.2.1 Command-driven systems .... 70
6.2.2.2 Menu-driven systems .... 70
6.2.2.3 Graphical User Interface systems .... 70
6.2.2.4 Natural-language and speech interface systems .... 71
6.2.2.5 Embedded systems .... 71
6.3 EVOLUTION .... 71
6.4 RESPONSIBILITY .... 72
6.5 MEDIUM .... 72
6.5.1 Online help .... 72
6.5.2 Hypertext .... 73
6.6 CONTENT .... 74
CHAPTER 7 LIFE CYCLE MANAGEMENT ACTIVITIES .... 85
7.1 INTRODUCTION .... 85
7.2 PROJECT MANAGEMENT PLAN FOR THE TR PHASE .... 85
7.3 CONFIGURATION MANAGEMENT PLAN FOR THE TR PHASE .... 85
7.4 QUALITY ASSURANCE PLAN FOR THE TR PHASE .... 86
7.5 ACCEPTANCE TEST SPECIFICATION .... 87
APPENDIX A GLOSSARY .... A-1
APPENDIX B REFERENCES .... B-1
APPENDIX C MANDATORY PRACTICES .... C-1
APPENDIX D REQUIREMENTS TRACEABILITY MATRIX .... D-1
APPENDIX E INDEX .... E-1


PREFACE

This document is one of a series of guides to software engineering produced by the Board for Software Standardisation and Control (BSSC), of the European Space Agency. The guides contain advisory material for software developers conforming to ESA's Software Engineering Standards, ESA PSS-05-0. They have been compiled from discussions with software engineers, research of the software engineering literature, and experience gained from the application of the Software Engineering Standards in projects.

Levels one and two of the document tree at the time of writing are shown in Figure 1. This guide, identified by the shaded box, provides guidance about implementing the mandatory requirements for the software detailed design and production phase described in the top level document ESA PSS-05-0.

[Figure 1 shows the ESA PSS-05-0 document tree. Level 1: the ESA Software Engineering Standards, PSS-05-0. Level 2: PSS-05-01 Guide to the Software Engineering Standards; PSS-05-02 UR Guide (Guide to the User Requirements Definition Phase); PSS-05-03 SR Guide; PSS-05-04 AD Guide; PSS-05-05 DD Guide; PSS-05-06 TR Guide; PSS-05-07 OM Guide; PSS-05-08 SPM Guide (Guide to Software Project Management); PSS-05-09 SCM Guide; PSS-05-10 SVV Guide; PSS-05-11 SQA Guide.]

Figure 1: ESA PSS-05-0 document tree

The Guide to the Software Engineering Standards, ESA PSS-05-01, contains further information about the document tree. The interested reader should consult this guide for current information about the ESA PSS-05-0 standards and guides.

The following past and present BSSC members have contributed to the production of this guide: Carlo Mazza (chairman), Bryan Melton, Daniel de Pablo, Adriaan Scheffer and Richard Stevens.


The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and Guides, and all those software engineers in ESA and Industry who have made contributions.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat            BSSC/ESTEC Secretariat
Attention of Mr C Mazza          Attention of Mr B Melton
ESOC                             ESTEC
Robert Bosch Strasse 5           Postbus 299
D-64293 Darmstadt                NL-2200 AG Noordwijk
Germany                          The Netherlands


CHAPTER 1 INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA), either in house or by industry [Ref 1].

ESA PSS-05-0 defines the second phase of the software development life cycle as the ‘Architectural Design Phase' (AD phase). The output of this phase is the Architectural Design Document (ADD). The third phase of the life cycle is the ‘Detailed Design and Production Phase' (DD phase). Activities and products are examined in the ‘DD review' (DD/R) at the end of the phase.

The DD phase can be called the ‘implementation phase' of the life cycle because the developers code, document and test the software after detailing the design specified in the ADD.

This document provides guidance on how to produce the Detailed Design Document (DDD), the code and the Software User Manual (SUM). This document should be read by all active participants in the DD phase, e.g. designers, programmers, project managers and product assurance personnel.

1.2 OVERVIEW

Chapter 2 discusses the DD phase. Chapters 3 and 4 discuss methods and tools for detailed design and production. Chapters 5 and 6 describe how to write the DDD and SUM, starting from the templates. Chapter 7 summarises the life cycle management activities, which are discussed at greater length in other guides.

All the mandatory practices in ESA PSS-05-0 relevant to the DD phase are repeated in this document. The identifier of the practice is added in parentheses to mark a repetition. No new mandatory practices are defined.


CHAPTER 2 THE DETAILED DESIGN AND PRODUCTION PHASE

2.1 INTRODUCTION

Software design is the ‘process of defining the architecture, components, interfaces, and other characteristics of a system or component' [Ref 2]. Detailed design is the process of defining the lower-level components, modules and interfaces. Production is the process of:
• programming - coding the components;
• integrating - assembling the components;
• verifying - testing modules, subsystems and the full system.

The physical model outlined in the AD phase is extended to produce a structured set of component specifications that are consistent, coherent and complete. Each specification defines the functions, inputs, outputs and internal processing of the component.

The software components are documented in the Detailed Design Document (DDD). The DDD is a comprehensive specification of the code. It is the primary reference for maintenance staff in the Transfer phase (TR phase) and the Operations and Maintenance phase (OM phase).

The main outputs of the DD phase are the:
• source and object code;
• Detailed Design Document (DDD);
• Software User Manual (SUM);
• Software Project Management Plan for the TR phase (SPMP/TR);
• Software Configuration Management Plan for the TR phase (SCMP/TR);
• Software Quality Assurance Plan for the TR phase (SQAP/TR);
• Acceptance Test specification (SVVP/AT).

Progress reports, configuration status accounts, and audit reports are also outputs of the phase. These should always be archived.

The detailed design and production of the code is the responsibility of the developer. Engineers developing systems with which the software interfaces may be consulted during this phase. User representatives and operations personnel may observe system tests.

DD phase activities must be carried out according to the plans defined in the AD phase (DD01). Progress against plans should be continuously monitored by project management and documented at regular intervals in progress reports.

Figure 2.1 is an ideal representation of the flow of software products in the DD phase. The reader should be aware that some DD phase activities can occur in parallel as separate teams build the major components and integrate them. Teams may progress at different rates; some may be engaged in coding and testing while others are designing. The following subsections discuss the activities shown in Figure 2.1.

[Figure 2.1 shows the flow of software products in the DD phase: the approved ADD is examined; detailed design produces the draft DDD and SUM; test specifications are written (draft SVVP/AT, SVVP/ST, SVVP/IT, SVVP/UT); code is produced and tested, then integrated and tested, yielding verified subsystems and a verified system; TR plans are written (SPMP/TR, SCMP/TR, SQAP/TR, SVVP/AT); the DD/R, with accepted RIDs, leads to the approved DDD and SUM. Supporting inputs include the SPMP/DD, SCMP/DD, SQAP/DD and SVVP/DD, accepted SPRs and code, and the methods and tools: CASE tools, prototyping, walkthroughs and inspections, test tools, static and dynamic analysers, debuggers and CM tools.]

Figure 2.1: DD phase activities

2.2 EXAMINATION OF THE ADD

If the developers have not taken part in the AD/R they should examine the ADD and confirm that it is understandable. Developers should consult ESA PSS-05-04, ‘Guide to the Software Architectural Design Phase', for help on ADDs. The examination should be carried out by staff familiar with the architectural design method. The developers should also confirm that adequate technical skill is available to produce the outputs of the DD phase.


2.3 DETAILED DESIGN

Design standards must be set at the start of the DD phase by project management to coordinate the collective efforts of the team. This is especially necessary when development team members are working in parallel.

The developers must first complete the top-down decomposition of the software started in the AD phase (DD02) and then outline the processing to be carried out by each component. Developers must continue the structured approach and not introduce unnecessary complexity. They must build defences against likely problems.

Developers should verify detailed designs in design reviews, level by level. Review of the design by walkthrough or inspection before coding is a more efficient way of eliminating design errors than testing.

The developer should start the production of the user documentation early in the DD phase. This is especially important when the HCI component is significantly large: writing the SUM forces the developer to keep the user's view continuously in mind.

The following subsections discuss these activities in more detail.

2.3.1 Definition of design standards

Wherever possible, standards and conventions used in the AD phase should be carried over into the DD phase. They should be documented in part one of the DDD. Standards and conventions should be defined for:
• design methods;
• documentation;
• naming components;
• Computer Aided Software Engineering (CASE) tools;
• error handling.

2.3.2 Decomposition of the software into modules

As in architectural design, the first stage of detailed design is to define the functions, inputs and outputs of each software component. Whereas architectural design only considered the major, top-level components, this stage of detailed design must specify all the software components.

The developer starts with the major components defined in the ADD and continues to decompose them until the components can be expressed as modules in the selected programming languages.

Decomposition should be carried out using the same methods and tools employed in the AD phase. CASE tools and graphical methods employed in architectural design can still be used.

Component processing is only specified to the level of detail necessary to answer the question: ‘is further decomposition necessary?'. Decomposition criteria are:
• will the module have too many statements?
• will the module be too complex?
• does the module have low cohesion?
• does the module have high coupling?
• does the module contain similar processing to other modules?

ESA PSS-05-04, ‘Guide to the Software Architectural Design Phase', discusses complexity, cohesion and coupling.

2.3.3 Reuse of the software

Software reuse questions can arise at all stages of design. In the AD phase decisions may have been taken to reuse software for all or some major components, such as:
• application generators;
• database management systems;
• human-computer interaction utilities;
• mathematical utilities;
• graphical utilities.

In the DD phase developers may have to:
• decide which library modules to use;
• build shells around the library modules to standardise interfaces (e.g. for error handling) and enhance portability;
• define conventions for the library modules to use (more than one combination of library modules might do the same job).

Developers should resist the temptation to clone a library module and make minor modifications, because of the duplication that results. Instead they should put any extra processing required in a shell module that calls the library module.

A common reason for reusing software is to save time at the coding and unit testing stage. Reused software is not normally unit tested because this has already been carried out in a previous project or by an external organisation. Nevertheless developers should convince themselves that the software to be reused has been tested to the standards appropriate for their project. Integration testing of reused software is always necessary to check that it correctly interfaces with the rest of the software. Integration testing of reused software can identify performance and resource problems.

2.3.4 Definition of module processing

The developer defines the processing steps in the second stage of detailed design. The developer should first outline the module processing in a Program Design Language (PDL) or pseudo-code and refine it, step-by-step, into a detailed description of the processing in the selected programming language.

The processing description should reflect the type of programming language. When using a procedural language, the description of the processing should contain only:
• sequence constructs (e.g. assignments, invocations);
• selection constructs (e.g. conditions, case statements);
• iteration constructs (e.g. do loops).

The definition of statements that do not affect the logic (e.g. i/o statements, local variable declarations) should be deferred to the coding stage.
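As a simple illustration of this refinement (an invented fragment, not an example taken from the Standards), the PDL outline of a module can be carried into the code as comments, with the i/o details added only at the coding stage:

    #include <stdio.h>

    /* Counts the data records in a file, skipping comment records.
       The PDL outline is preserved as comments; only sequence,
       selection and iteration constructs were used at design time. */
    int count_data_records(FILE *in)
    {
        char line[256];
        int count = 0;

        /* WHILE records remain in the input DO */
        while (fgets(line, sizeof line, in) != NULL) {
            /* IF the record is a data record THEN */
            if (line[0] != '#') {
                /*   count the record */
                count++;
            } else {
                /* ELSE ignore the comment record */
            }
            /* ENDIF */
        }
        /* ENDWHILE */
        return count;
    }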

Each module should have a single entry point and exit point. Control should flow from the entry point to exit point. Control should flow back only in an iteration construct, i.e. a loop. Branching, if used at all, should be restricted to a few standard situations (e.g. on error), and should always jump forward, not backward, in the control flow.


Recursion is a useful technique for processing a repeating data structure such as a tree, or a list, and for evaluating a query made up of arithmetic, relational, or logical expressions. Recursion should only be used if the programming language explicitly supports it.
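By way of illustration (an invented example), a recursive function mirrors a repeating data structure directly; C supports recursion explicitly:

    #include <stddef.h>

    struct node {
        int value;
        struct node *next;   /* NULL marks the end of the list */
    };

    /* Sums a linked list recursively: the empty list is the base
       case; each call processes one element of the repeating
       data structure. */
    long list_sum(const struct node *head)
    {
        if (head == NULL)
            return 0;                               /* base case */
        return head->value + list_sum(head->next);  /* recurse   */
    }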

PDLs and pseudo-code can be included in the code as comments, easing maintenance, whereas flowcharts cannot. Flowcharts are not compatible with the stepwise refinement technique, and so PDLs and pseudo-code are to be preferred to flowcharts for detailed design.

2.3.5 Defensive design

Developers should anticipate possible problems and include defences against them. Myers [Ref. 14] describes three principles of defensive design:
• mutual suspicion;
• immediate detection;
• redundancy.

The principle of mutual suspicion says that modules should assume other modules contain errors. Modules should be designed to handle erroneous input and error reports from other modules.

Every input from another module or component external to the program (e.g. a file) should be checked. When input data is used in a condition (e.g. CASE statement or IF... THEN... ELSE...), an outcome should be defined for every possible input case. Every IF condition should have an ELSE clause.

When modules detect errors and return control to the caller, they should always inform the caller that an error has occurred. The calling module can then check the error flag for successful completion of the called module. It should be unnecessary for the caller to check the other module outputs.
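A minimal sketch of this convention in C (the function names and the validity rule are invented for illustration):

    #include <stdio.h>

    typedef enum { STATUS_OK = 0, STATUS_ERROR = 1 } status_t;

    /* Called module: checks its input (mutual suspicion) and
       returns an error flag rather than stopping the program. */
    static status_t celsius_to_kelvin(double celsius, double *kelvin)
    {
        if (kelvin == NULL || celsius < -273.15)
            return STATUS_ERROR;       /* impossible temperature */
        *kelvin = celsius + 273.15;
        return STATUS_OK;
    }

    int main(void)
    {
        double k;

        /* Calling module: checks the error flag for successful
           completion; it need not re-check the output value. */
        if (celsius_to_kelvin(-300.0, &k) != STATUS_OK)
            fprintf(stderr, "celsius_to_kelvin: invalid input\n");
        return 0;
    }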

It is possible for a subordinate to fail to return control. Modules should normally set a ‘timeout' when waiting for a message, a rendezvous to be kept, or an event flag to be set. Modules should always act appropriately after a timeout (e.g. retry the operation). For a full discussion of error recovery actions, see reference 14.


The principle of immediate detection means that possible errors should be checked for immediately. If the reaction is not immediate, the error should be flagged for later action.

Conventions for taking action after detection of an error should be established. Normally error control responsibilities of a module should be at the same level as normal control responsibilities. Error logging, if done at all, should be done at the point where action is taken, and not at the point of error detection, since the significance of the error may not be apparent at the point of detection. It may be appropriate to insert diagnostic code at the point of detection, however.

Only the top level module should have responsibility for stopping the program. Developers should always check that there are no ‘STOP' statements lurking in an otherwise apparently useful module. It may be impossible to reuse modules that suddenly take over responsibility for the control flow of the whole system. Halting the program and giving a traceback may be acceptable in prototype software (because it helps fault diagnosis), but not in operational software.

Redundancy has been discussed in ESA PSS-05-04, ‘Guide to the Software Architectural Design Phase'. In detailed design, redundancy considerations can lead designers to include checksums in records and identity tags to confirm that an item is really what it is assumed to be (e.g. header record).

Myers also makes a useful distinction between ‘passive fault detection' and ‘active fault detection'. The passive fault detection approach is to check for errors in the normal flow of execution. Examples are modules that always check their input, and status codes returned from system calls. The active fault detection approach is to go looking for problems instead of waiting for them to arise. Examples are ‘monitor' programs that continuously check disk integrity and attempts to violate system security.

Often most system code is dedicated to error handling. Library modules for error reporting should be made available to prevent duplication of error handling code. When modules detect errors, they should call error library modules to perform standard error handling functions such as error logging and display.

Defensive design principles have influenced the design of most modern languages. Strong type-checking languages automatically check that the calling data type matches the called data type, for example. Ada goes further and builds range checking into the language. The degree to which a language supports defensive design can be a major factor in its favour.

2.3.6 Optimisation

Conventionally, optimisation means to make the best compromise between opposing tendencies. Improvement in one area is often associated with degradation in another. Software performance is often traded-off against maintainability and portability, for example.

The optimisation process is to:
• define the attributes to change (e.g. execution time);
• measure the attribute values before modifying the software;
• measure the attribute values after modifying the software;
• analyse the change in attribute values before deciding whether to modify the software again.
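A minimal sketch of the 'measure before and after' steps, using the standard C library clock() function; busy_work() is an invented stand-in for the code being optimised:

    #include <stdio.h>
    #include <time.h>

    /* Stand-in for the operation whose execution time is the
       attribute selected for optimisation. */
    static long busy_work(void)
    {
        long i, sum = 0;
        for (i = 0; i < 1000000L; i++)
            sum += i % 7;
        return sum;
    }

    int main(void)
    {
        clock_t start, end;

        /* measure the attribute value before, and again after each
           modification, so the change can be analysed */
        start = clock();
        busy_work();
        end = clock();
        printf("elapsed: %.3f s\n",
               (double)(end - start) / CLOCKS_PER_SEC);
        return 0;
    }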

Optimisation can stop when the goals set in the SRD have been met. Every change has some risk, and the costs and benefits of each change should be clearly defined.

The ‘law of diminishing returns' can also be used to decide when to stop optimisation. If there are only slight improvements in the attribute values after an optimisation, the developers should stop trying to seek improvements.

Failure to get a group of people to agree about the solution to an optimisation problem is itself significant. It means that the attribute is probably optimised, and any improvement in one attribute results in an unacceptable degradation in another.

The structured programming method discourages optimisation because of its effect on reliability and maintainability. Code should be clear and simple, and its optimisation should be left to the compiler. Compilers are more likely to do a better job of optimisation than programmers, because compilers incorporate detailed knowledge of the machine. Often the actual causes of inefficiency are quite different from what programmers might suspect, and can only be revealed with a dynamic analysis tool (see Section 4.3.9).


In summary, developers should define what they are trying to optimise and why, before starting to do it. If in doubt, remember Jackson's two rules of optimisation [Ref. 6]:
• don't do it, but if you must:
• don't do it yet.

2.3.7 Prototyping

Experimental prototyping can be useful for:
• comparing alternative designs;
• checking the feasibility of the design.

The high-level design will normally have been identified during the AD phase. Detailed designs may have to be prototyped in the DD phase to find out which designs best meet the requirements.

The feasibility of a novel design idea should be checked by prototyping. This ensures that an idea not only works, but also that it works well enough to meet non-functional requirements for quality and performance.

2.3.8 Design reviews

Detailed designs should be reviewed top-down, level by level, as they are generated during the DD phase. Reviews may take the form of walkthroughs or inspections. Walkthroughs are useful on all projects for informing and passing on expertise. Inspections are efficient methods for eliminating defects before production begins.

Two types of walkthrough are useful:
• code reading;
• ‘what-if?' analysis.

In a code reading, reviewers trace the logic of a module from beginning to end. In ‘what-if?' analysis, component behaviour is examined for specific inputs.

Static analysis tools evaluate modules without executing them. Static analysis functions are built in to some compilers. Output from static analysis tools may be input to a code review.


When the detailed design of a major component is complete, a critical design review must certify its readiness for implementation (DD10). The project leader should participate in these reviews, with the team leader and team members concerned.

2.3.9 Documentation

The developers must produce the DDD and the SUM. While the ADD specifies tasks, files and programs, the DDD specifies modules. The SUM describes how to use the software, and may be affected by documentation requirements in the SRD.

The recommended approach to module documentation is:
• create the module template to contain headings for the standard DDD entries:

  n     Component identifier
  n.1   Type
  n.2   Purpose
  n.3   Function
  n.4   Subordinates
  n.5   Dependencies
  n.6   Interfaces
  n.7   Resources
  n.8   References
  n.9   Processing
  n.10  Data

• detail the design by filling in the sections, with the processing section containing the high-level definition of the processing in a PDL or pseudo-code;
• assemble the completed templates for insertion in the DDD.

A standard module template enables the DDD component specifications to be generated automatically. Tools to extract the module header from the source code and create an entry for the DDD greatly simplify maintenance. When a module is modified, the programmer edits, compiles and verifies the source module, and then runs the tool to generate the new DDD entry.

The SUM contains the information needed by the users of the software to operate it. The SUM should be gradually prepared during the DD phase. The developers should exercise the procedures described in the SUM when testing the software.

2.4 TEST SPECIFICATIONS

The purpose of testing is to prove empirically that the system, or component under test, meets specified requirements. Normally it is not possible to prove by testing that a component contains no errors.

According to Myers, an important aspect of testing is to execute a program with the intention of finding errors [Ref. 14]. Testers should adopt this ‘falsificationist' approach because it encourages testers to devise tests that the software fails, not passes. This idea originates from Karl Popper, the influential philosopher who first proposed the ‘falsificationist' approach as a scientific method.

Testers must be critical and objective for testing to be effective. On large projects, system and acceptance test specifications should not be written by the analysts, designers and programmers responsible for the SRD, ADD, DDD and code. On small and medium-size projects, it is acceptable for the developer to write the test specifications and for the user or product assurance representatives to review them. Users should run the acceptance tests, not the developers.

2.4.1 Unit test planning

Unit test plans must be generated in the DD phase and documented in the Unit Test section of the Software Verification and Validation Plan (SVVP/UT). The Unit Test Plan should describe the scope, approach and resources required for the unit tests, and take account of the verification requirements in the SRD (see Section 2.5.3 and Chapter 7).

2.4.2 Test designs, test cases and test procedures

The developer must write specifications for the:
• acceptance tests,
• system tests,
• integration tests,
• unit tests
in the DD phase and document them in the SVVP. The specifications should be based on the test plans, and comply with the verification and acceptance testing requirements in the SRD. The specifications should define the:
• test designs (SVV19);
• test cases (SVV20);
• test procedures (SVV21);
see Section 2.6 below and Chapter 7.

When individual modules have been coded and unit tested, developers have to integrate them into larger components, test the larger components, integrate the larger components and so on. Integration is therefore inextricably linked with testing.

The Software Project Management Plan for the DD phase (SPMP/DD) should contain a delivery plan for the software based on the life cycle approach adopted. The delivery plan influences the integration tests defined in the Software Verification and Validation Plan (SVVP/IT). For a given delivery plan, the SVVP/IT should make integration testing efficient by minimising the number of test aids (e.g. drivers and stubs) and test data files required.

2.5 CODING AND UNIT TESTING

Coding is both the final stage of the design process and also the first stage of production. Coding produces modules, which must be unit tested. Unit testing is the second stage of the production process.

2.5.1 Coding

The transition from the detailed design stage to the coding stage comes when the developer begins to write modules that compile in the programming language. This transition is obvious when detailed design has been performed with flowcharts, but is less so when the developer has used a PDL or pseudo-code.

Coding must be based on the principles of:
• structured programming (DD03);
• concurrent production and documentation (DD04).


Every module should be understandable to a reviewer or maintenance programmer, moderately familiar with the programming language and unfamiliar with the program, the compiler and operating system. Understandability can be achieved in a variety of ways:
• including an introductory header for each module;
• declaring all variables;
• documenting all variables;
• using meaningful, unambiguous names;
• avoiding mixing data types;
• avoiding temporary variables;
• using parentheses in expressions;
• laying out code legibly;
• adding helpful comments;
• avoiding obscuring the module logic with diagnostic code;
• adhering to structured programming rules;
• being consistent in the use of the programming language;
• keeping modules short;
• keeping the code simple.

2.5.1.1 Module headers

Each module should have a header that introduces the module. This should include:
• title;
• configuration item identifier (SCM15);
• original author (SCM16);
• creation date (SCM17);
• change history (SCM18).

If tools to support consistency between source code and DDD are available, the introduction should be followed by explanatory comments from the component description in the DDD part 2.

The header usually comes after the module title statement (e.g. SUBROUTINE) and before the variable declarations.


The standard header should be made available so that it can be edited, completed and inserted at the head of each module.
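A standard header might look like the following C comment block; the field layout is illustrative only, and the module details are invented:

    /*------------------------------------------------------------------
     * TITLE        : TM_DECODER  (invented module name)
     * CI IDENTIFIER: EX-SW-0042  (configuration item identifier, SCM15)
     * AUTHOR       : A. Programmer  (original author, SCM16)
     * CREATED      : 1994-11-02  (creation date, SCM17)
     * CHANGE HISTORY (SCM18):
     *   1994-11-02  Issue 1.0  first version
     *   1995-01-17  Issue 1.1  corrected frame synchronisation
     *-----------------------------------------------------------------*/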

2.5.1.2 Declarations of variables

Programmers should declare the type of all variables, whatever the programming language. The possible values of variables should either be stated in the variable declaration (as in Ada), or documented. Some languages, such as FORTRAN and Prolog, allow variables to be used without explicit declaration of their type.

Programmers using weakly typed languages should declare all variables. Where strong typing is a compiler option, it should always be used (e.g. IMPLICIT NONE in some FORTRAN compilers). Declarations should be grouped, with argument list variables, global variables and local variable declarations clearly separated.
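In a C module the grouping rule might be applied as follows (an invented fragment), with file-scope, argument and local declarations clearly separated and each variable documented where it is declared:

    /* file-scope variable, global to this module */
    static int error_count = 0;        /* errors seen since start-up */

    double mean(const double values[], /* in: samples to average     */
                int n)                 /* in: number of samples (>0) */
    {
        /* local variables */
        double sum = 0.0;              /* running total of samples   */
        int i;                         /* loop index                 */

        for (i = 0; i < n; i++)
            sum = sum + values[i];
        return sum / n;
    }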

2.5.1.3 Documentation of variables

Programmers should document the meaning of all variables. Documentation should be integrated with the code (e.g. as a comment to the declaration). The possible values of variables should either be stated in the variable declaration (as in Ada), or documented.

2.5.1.4 Names of variables

Finding meaningful names for variables exercises the imagination of every programmer, but it is effort well worth spending. Names should reflect the purpose of the variable. Natural language words should be used wherever possible. Abbreviations and acronyms, if used, should be defined, perhaps as part of the comment to a variable's declaration.

Similar names should be avoided so that a single typing error in the name of a variable does not identify another variable. Another reason not to use similar names is to avoid ambiguity.

2.5.1.5 Mixing data types

Some programming languages prevent the mixing of data types by the strong type checking feature (e.g. Ada). Mixing data types in expressions should always be avoided, even if the programming language allows it (e.g. FORTRAN). Examples of mixing data types include:
• mixing data types in expressions;
• mismatching data types in an argument list;
• equivalencing different data types.
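Two common C pitfalls illustrate the risk (invented examples): an integer expression assigned to a floating-point variable, and a signed value compared with an unsigned one:

    #include <stdio.h>

    int main(void)
    {
        int done = 1, total = 4;
        unsigned int limit = 10;
        int step = -1;
        double fraction;

        /* integer division happens before the conversion to
           double, so fraction is 0.0, not 0.25 */
        fraction = done / total;

        /* step is converted to a large unsigned value, so the
           condition is false and the message is not printed */
        if (step < limit)
            printf("step is below the limit\n");

        printf("fraction = %f\n", fraction);
        return 0;
    }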

2.5.1.6 Temporary variables

The temptation to define temporary variables to optimise code should be avoided. Although the use of temporary variables can simplify a statement, more statements are required, and more variables have to be declared. The net effect is that the module as a whole appears more complex.

2.5.1.7 Parentheses

Parentheses should be used in programming language expressions to avoid ambiguity, and to help the reader identify the sequences of operations to be performed. They should not be used solely to override the precedence rules of the language.

2.5.1.8 Layout and presentation

The layout of the code should allow the control logic to be easily appreciated. The usual technique is to separate blocks of sequential code from other statements by blank lines, and to indent statements in condition or iteration blocks. Statements that begin and end sequence, iteration and condition constructs should be aligned vertically (e.g. BEGIN... END, DO... ENDDO, and IF... ELSEIF... ELSE... ENDIF). Long statements should be broken and continued on the next line at a clause in the logic, not at the end of the line. There should never be more than one statement on a line.

2.5.1.9 Comments

Comments increase understandability, but they are no substitute for well-designed, well-presented and intelligible code. Comments should be used to explain difficult or unusual parts of the code. Trivial comments should be avoided.

PDL statements and pseudo-code may be preserved in the code as comments to help reviewers. Comments should be clearly distinguishable from code. It is not adequate only to use a language keyword; comments should be well separated from code by means of blank lines, and perhaps written in mixed case (if upper case is used for the code).


2.5.1.10 Diagnostic code

Diagnostic code is often inserted into a module to:
• make assertions about the state of the program;
• display the contents of variables.

Care should be taken to prevent diagnostic code obscuring the module logic. There is often a temptation to remove diagnostic code to present a clean ‘final' version of the source code. However, routine diagnostic code that allows verification of correct execution can be invaluable in the maintenance phase. It is therefore recommended that routine diagnostic code be commented out or conditionally compiled (e.g. included as ‘debug lines'). Ad hoc diagnostic code added to help discover the cause of a particular problem should be removed after the problem has been solved.
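In C, for instance, routine diagnostic code can be conditionally compiled so that it is present in development builds but absent from the operational build (an illustrative sketch):

    #include <stdio.h>

    /* Compile with -DDEBUG to include the 'debug lines';
       without it the macro expands to nothing. */
    #ifdef DEBUG
    #define DIAG(name, value) \
        fprintf(stderr, "DIAG: %s = %d\n", (name), (value))
    #else
    #define DIAG(name, value) ((void)0)
    #endif

    int main(void)
    {
        int pressure = 42;   /* invented variable for illustration */

        DIAG("pressure", pressure);
        return 0;
    }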

2.5.1.11 Structured programming

The rules of structured programming are given in section 3.2.3. These rules should always be followed when a procedural language is used (such as FORTRAN, COBOL, Pascal or Ada). It is easy to break the rules of structured programming when the older procedural languages (e.g. FORTRAN, COBOL) are used, but less so with the more modern ones (Pascal, Ada).

2.5.1.12 Consistent programming

Language constructs should be used consistently. Inconsistency often occurs when modules are developed and maintained by different people. Coding standards can help achieve consistency, but it is not realistic for them to cover every situation. Modifications to the code should preserve the style of the original.

2.5.1.13 Module size

Modules should be so short that the entire module can be seen at once. This allows the structure, and the control logic, to be appreciated easily. The recommended maximum size of modules is about 50 lines, excluding the header and diagnostic code.

2.5.1.14 Code simplicity

Modules should be ‘simple'. The principle of ‘Occam's razor' (i.e. that an idea should be expressed by means of the minimum number of entities) should be observed in programming. Simplicity can be checked formally by applying complexity measures [Ref. 11]. Simplicity can be checked informally using the rule of seven: the number of separate things that have to be held in mind when examining a part of the module should not exceed seven. Whatever method of evaluation is used, all measurements of simplicity should be confirmed by peer review.

2.5.2 Coding standards

Coding standards should be established for all the languages used, and documented or referenced in the DDD. They should provide rules for:
• presentation (e.g. header information and comment layout);
• naming programs, subprograms, files, variables and data;
• limiting the size of modules;
• using library routines, especially:
  - operating system routines;
  - commercial library routines (e.g. numerical analysis);
  - project-specific utility routines;
• defining constants;
• defining data types;
• using global data;
• using compiler specific features not in the language standard;
• error handling.

2.5.3 Unit testing

Unit tests verify the design and implementation of all components from the lowest level defined in the detailed design up to the lowest level in the architectural design (normally the task level). Test harnesses, composed of a collection of drivers and stubs, need to be constructed to enable unit testing of modules below the top level.
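A test driver can be as simple as the following sketch (the module under test and the test values are invented); in practice the test cases would come from the unit test specifications in the SVVP/UT:

    #include <stdio.h>

    /* Module under test (normally in its own source file). */
    int clamp(int value, int low, int high)
    {
        if (value < low)  return low;
        if (value > high) return high;
        return value;
    }

    /* Test driver: exercises the module below the top level and
       reports failures. */
    int main(void)
    {
        int failures = 0;

        if (clamp(5, 0, 10)  != 5)  failures++;
        if (clamp(-3, 0, 10) != 0)  failures++;
        if (clamp(99, 0, 10) != 10) failures++;
        printf("%d failure(s)\n", failures);
        return failures;
    }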

Unit tests verify that a module is doing what it is supposed to do (‘black box' testing), and that it is doing it in the way it was intended (‘white box' testing). The traditional technique for white box testing is to insert diagnostic code. Although this may still be necessary for testing real-time behaviour, debuggers are now the preferred tool for white box testing. The usual way to test a new module is to step through a few test cases with the debugger and then to run black box tests for the rest. Later, black box tests are run to fully exercise the module. The input test data should be realistic and sample the range of possibilities. Programmers revert to ‘white box' testing mode when they have to debug a problem found in a black box test.

Before a module can be accepted, every statement shall be successfully executed at least once (DD06). Coverage data should be collected during unit tests. Tools and techniques for collecting coverage data are:
• debuggers;
• dynamic analysers;
• diagnostic code.
The diagnostic-code technique is sketched below.
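A minimal sketch of coverage collection by diagnostic code in C; the COVERAGE symbol, COVER macro and branch labels are hypothetical:

    #include <stdio.h>

    /* Diagnostic code: records that a labelled point was executed,
       so the printout documents which branches were covered. */
    #ifdef COVERAGE
    #define COVER(label) (void)fprintf(stderr, "covered: %s\n", (label))
    #else
    #define COVER(label) ((void)0)
    #endif

    int sign(int x)
    {
        if (x < 0) {
            COVER("sign: negative branch");
            return -1;
        }
        COVER("sign: non-negative branch");
        return x > 0 ? 1 : 0;
    }

Running the unit tests with COVERAGE defined yields a printout from which statement coverage can be confirmed.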

The inclusion of diagnostics can clutter up the code (see Section 2.5.1.10), and debuggers and dynamic analysers are much preferable. Coverage should be documented in a debug log, a coverage report, or a printout produced by the diagnostic code.

Unit testing is normally carried out by the individuals or teams responsible for producing the component.

Unit test plans, test designs, test cases, test procedures and test reports are documented in the Unit Test section of the Software Verification and Validation Plan (SVVP/UT).

2.6 INTEGRATION AND SYSTEM TESTING

The third stage of the production process is to integrate the major components resulting from coding and unit testing into the system. In the fourth stage of production, the fully integrated system is tested.

2.6.1 Integration

Integration is the process of building a software system by combining components into a working entity. Integration of components should proceed in an orderly, function-by-function sequence. This allows the operational capabilities of the software to be demonstrated early, and thus gives visible evidence that the project is progressing.

Normally, the first components to be integrated support input and output. Once these components have been integrated and tested, they can be used to test others. Whatever sequence of functions is used, it should minimise the resources required for testing (e.g. testing effort and test tools), while ensuring that all source statements are verified. The Software Verification and Validation Plan section for the integration tests (SVVP/IT) should define the assembly sequence for integration.

Within the functional groupings, components may be integrated and tested top-down or bottom-up. In top-down integration, ‘stubs' simulate lower-level modules. In bottom-up integration, ‘drivers' simulate the higher-level modules. Stubs and drivers can be used to implement test cases, not just enable the software to be linked and loaded.

The integration process must be controlled by the software configuration management procedures defined in the SCMP (DD05). Good SCM procedures are essential for correct integration. All deliverable code must be identified in a configuration item list (DD12).

2.6.2 Integration testing

Integration testing is done in the DD phase when the major components defined in the ADD are assembled. Integration tests should verify that the major components interface correctly.

Integration testing must check that all the data exchanged across an interface comply with the data structure specifications in the ADD (DD07). Integration testing must confirm that the control flows defined in the ADD have been implemented (DD08).

Integration test designs, test cases, test procedures and test reports are documented in the Integration Test section of the Software Verification and Validation Plan (SVVP/IT).

2.6.3 System testing

System testing is the process of testing an integrated software system. This testing can be done in the development or target environment, or a combination of the two. System testing must verify compliance with system objectives, as stated in the SRD (DD09). System testing should include such activities as:
• passing data into the system, correctly processing and outputting it (i.e. end-to-end system tests);
• practice for acceptance tests (i.e. verification that user requirements will be met);
• stress tests (i.e. measurement of performance limits);
• preliminary estimation of reliability and maintainability;
• verification of the Software User Manual.

Trends in the occurrence of defects should be monitored in system tests; the behaviour of such trends is important for the estimation of potential acceptability.

For most embedded systems, as well as systems using special peripherals, it is often useful or necessary to build simulators for the systems with which the software will interface. Such simulators are often required because of:
• late availability of the other systems;
• limited test time with the other systems;
• the desire to avoid damaging delicate and/or expensive systems.

Simulators are normally a separate project in themselves. They should be available on time, and certified as identical, from an interface point of view, with the systems they simulate.

System test designs, test cases, test procedures and test reports are documented in the System Test section of the Software Verification and Validation Plan (SVVP/ST).

The SUM is a key document during system (and acceptance) testing. The developers should verify it when testing the system.

2.7 PLANNING THE TRANSFER PHASE

Plans of TR phase activities must be drawn up in the DD phase. Generation of TR phase plans is discussed in Chapter 7. These plans cover project management, configuration management, verification and validation, and quality assurance. Outputs are the:
• Software Project Management Plan for the TR phase (SPMP/TR);
• Software Configuration Management Plan for the TR phase (SCMP/TR);
• Software Quality Assurance Plan for the TR phase (SQAP/TR).


2.8 THE DETAILED DESIGN PHASE REVIEW

The development team should hold walkthroughs and internal reviews of a product before its formal review. After production, the DD Review (DD/R) must consider the results of the verification activities and decide whether to transfer the software (DD11). This should be a technical review. The recommended procedure is described in ESA PSS-05-10, and is based closely on the IEEE standard for Technical Reviews [Ref. 4].

Normally, only the code, DDD, SUM and SVVP/AT undergo the full technical review procedure involving users, developers, management and quality assurance staff. The Software Project Management Plan (SPMP/TR), Software Configuration Management Plan (SCMP/TR), and Software Quality Assurance Plan (SQAP/TR) are usually reviewed by management and quality assurance staff only.

In summary, the objective of the DD/R is to verify that:
• the DDD describes the detailed design clearly, completely and in sufficient detail to enable maintenance and development of the software by qualified software engineers not involved in the project;
• modules have been coded according to the DDD;
• modules have been verified according to the unit test specifications in the SVVP/UT;
• major components have been integrated according to the ADD;
• major components have been verified according to the integration test specifications in the SVVP/IT;
• the software has been verified against the SRD according to the system test specifications in the SVVP/ST;
• the SUM explains what the software does and instructs the users how to operate the software correctly;
• the SVVP/AT specifies the test designs, test cases and test procedures so that all the user requirements can be validated.

The DD/R begins when the DDD, SUM and SVVP, including the test results, are distributed to participants for review. A problem with a document is described in a ‘Review Item Discrepancy' (RID) form. A problem with code is described in a Software Problem Report (SPR). Review meetings are then held that have the documents, RIDs and SPRs as input. A review meeting should discuss all the RIDs and SPRs and decide an action for each. The review meeting may also discuss possible solutions to the problems raised by them. The output of the meeting includes the processed RIDs, SPRs and Software Change Requests (SCR).

The DD/R terminates when a disposition has been agreed for all the RIDs. Each DD/R must decide whether another review cycle is necessary, or whether the TR phase can begin.


CHAPTER 3 METHODS FOR DETAILED DESIGN AND PRODUCTION

3.1 INTRODUCTION

This chapter summarises a range of design methods and programming languages. The details about each method and programming language should be obtained from the references. This guide does not make any particular method or language a standard, nor does it define a set of acceptable methods and languages. Each project should examine its needs, choose appropriate methods and programming languages, and justify the selection in the DDD.

3.2 DETAILED DESIGN METHODS

Detailed design first extends the architectural design to the bottom-level components. Developers should use the same design method that they employed in the AD phase. ESA PSS-05-04, ‘Guide to the Software Architectural Design Phase', discusses:
• Structured Design;
• Object Oriented Design;
• Jackson System Development;
• Formal Methods.

The next stage of design is to define the module processing. This is done by methods such as:
• flowcharts;
• stepwise refinement;
• structured programming;
• program design languages (PDLs);
• pseudo-coding;
• Jackson Structured Programming (JSP).


3.2.1 Flowcharts

A flowchart is ‘a control flow diagram in which suitably annotated geometrical figures are used to represent operations, data, equipment, and arrows are used to indicate the sequential flow from one to another' [Ref. 2]. It should represent the processing.

Flowcharts are an old software design method, dating from a time when the only tools available to a software designer were a pencil, paper and stencil. A box is used to represent process steps, and diamonds are used to represent decisions. Arrows are used to represent control flow.

Flowcharts predate structured programming and they are difficult to combine with a stepwise refinement approach. Flowcharts are not well supported by tools, and so their maintenance can be a burden. Although directly related to module internals, they cannot be integrated with the code, unlike PDLs and pseudo-code. For all these reasons, flowcharts are no longer a recommended technique for detailed design.

3.2.2 Stepwise refinement

Stepwise refinement is the most common method of detailed design. The guidelines for stepwise refinement are:
• start from functional and interface specifications;
• concentrate on the control flow;
• defer data declarations until the coding phase;
• keep the steps of refinement small to ease verification;
• review each step as it is made.

Stepwise refinement is closely associated with structured programming (see Section 3.2.3). A short worked example is given below.
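As a minimal illustration of the method, a routine might be refined in small, reviewable steps from its functional specification down to code; the function and the refinement steps recorded in its comments are hypothetical:

    /* Step 1: functional specification.
     *   Find the largest reading in a table (assumes count >= 1).
     * Step 2: refine the control flow.
     *   examine each reading in turn;
     *   remember the largest seen so far.
     * Step 3: encode the refined steps (the data declarations
     *   were deferred to this final, coding step).
     */
    int largest(const int readings[], int count)
    {
        int max = readings[0];
        for (int i = 1; i < count; i++)
            if (readings[i] > max)
                max = readings[i];
        return max;
    }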

3.2.3 Structured programming

Structured programming is commonly associated with the name of E.W. Dijkstra [Ref. 10]. It is the original ‘structured method' and proposed:
• hierarchical decomposition;
• the use of only sequence, selection and iteration constructs;
• avoiding jumps in the program.


Myers emphasises the importance of writing code with the intention of communicating with people instead of machines [Ref. 14].

The Structured Programming method emphasises that simplicity is the key to achieving correctness, reliability, maintainability and adaptability. Simplicity is achieved through using only three constructs: sequence, selection and iteration. Other constructs are unnecessary.

Structured programming and stepwise refinement (see Section 3.2.2) are inextricably linked. The goal of refinement is to define a procedure that can be encoded in the sequence, selection and iteration constructs of the selected programming language.

Structured programming also lays down the following rules for module construction:
• each module should have a single entry point and a single exit point;
• control flow should proceed from the beginning to the end;
• related code should be blocked together, not dispersed around the module;
• branching should only be performed under prescribed conditions (e.g. on error).

The use of control structures other than sequence, selection and iteration introduces unnecessary complexity. The whole point about banning ‘GOTO' was to prevent the definition of complex control structures. Jumping out of loops causes control structures to be only partially contained within others, and makes the code fragile.

Modern block-structured languages, such as Pascal and Ada, implement the principles of structured programming, and enforce the three basic control structures. Ada supports branching only at the same logical level, and not to arbitrary points in the program.

The basic rules of structured programming can lead to control structures being nested too deeply. It can be quite difficult to follow the logic of a module when the control structures are nested more than three or four levels. Three common ways to minimise this problem are to:
• define more lower-level modules;
• put the error-handling code in blocks separate from the main code;
• branch to the end of the module on detecting an error.
The last approach is sketched below.
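A minimal sketch in C of branching to the end of the module on error, which keeps the main logic free of deep nesting; the function, error value and the routines it calls are hypothetical:

    #include <stddef.h>

    /* Project-specific routines (hypothetical). */
    extern int  validate(const char *record);
    extern int  store(const char *record);
    extern void log_failure(const char *record);

    /* Each check branches forward to the error-handling block at
       the end of the module, instead of nesting the main logic
       one level deeper per check; the branch is performed only
       under the prescribed condition (an error). */
    int process_record(const char *record)
    {
        if (record == NULL)
            goto error;
        if (!validate(record))
            goto error;

        return store(record);      /* main processing, unnested */

    error:
        log_failure(record);       /* error handling, kept separate */
        return -1;
    }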


3.2.4 Program Design Languages

Program Design Languages (PDLs) are used to develop, analyse and document a program design [from Ref. 7]. A PDL is often obtained from the essential features of a high-level programming language. A PDL may contain special constructs and verification protocols.

It is possible to use a complete programming language (e.g. Smalltalk, Ada) as a PDL [Ref. 7]. ANSI/IEEE Std 990-1987, ‘IEEE Recommended Practice for Ada As a Program Design Language', provides recommendations ‘reflecting the state of the art and alternative approaches to good practice for characteristics of PDLs based on the syntax and semantics of Ada' [Ref. 9]. Using Ada as a model, it says that a PDL should provide support for:
• abstraction;
• decomposition;
• information hiding;
• stepwise refinement;
• modularity;
• algorithm design;
• data structure design;
• connectivity;
• adaptability.

Adoption of a standard PDL makes it possible to define interfaces to CASE tools and programming languages. The ability to generate executable statements from a PDL is desirable.

Using an entire language as a PDL increases the likelihood of tool support. However, it is important that a PDL be simple. Developers should establish conventions for the features of a language that are to be used in detailed design.

PDLs are the preferred detailed design method on larger projects, where the existence of standards and the possibility of tool support make them more attractive than pseudo-code.


3.2.5 Pseudo-code

Pseudo-code is a combination of programming language constructs and natural language used to express a computer program design [Ref. 2]. Pseudo-code is distinguished from the code proper by the presence of statements that do not compile. Such statements only indicate what needs to be coded. They do not affect the module logic.
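For illustration, pseudo-code for a hypothetical table-search routine might mix compilable C constructs with natural-language statements (this fragment is deliberately not compilable):

    int find_reading(int wanted)
    {
        open the telemetry file                /* natural language */
        for each reading in the file {         /* natural language */
            if (reading == wanted)             /* real construct   */
                return the record number;      /* mixed            */
        }
        return -1;                             /* real construct   */
    }

The natural-language lines do not compile; they only indicate what remains to be coded.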

Pseudo-code is an informal PDL (see Section 3.2.4) that gives the designer greater freedom of expression than a PDL, at the sacrifice of tool support. Pseudo-code is acceptable for small projects and in prototyping, but on larger projects a PDL is definitely preferable.

3.2.6 Jackson Structured Programming

Jackson Structured Programming (JSP) is a program design technique that derives a program's structure from the structures of its input and output data [Ref. 6]. The JSP dictum is that ‘the program structure should match the data structure'.

In JSP, the basic procedure is to:
• consider the problem environment and define the structures for the data to be processed;
• form a program structure based on these data structures;
• define the tasks to be performed in terms of the elementary operations available, and allocate each of those operations to suitable components in the program structure.

The elementary operations (i.e. statements in the programming language) must be grouped into one of the three composite operations: sequence, iteration and selection. These are the standard structured programming constructs, giving the technique its name.

JSP is suitable for the detailed design of software that processes sequential streams of data whose structure can be described hierarchically. JSP has been quite successful for information systems applications.

Jackson System Development (JSD) is a descendant of JSP. If used, JSD should be started in the SR phase (see ESA PSS-05-03, ‘Guide to the Software Requirements Definition Phase').


3.3 PRODUCTION METHODS

Software production involves writing code in a programming language, verifying the code, and integrating it with other code to make a working system. This section therefore discusses programming languages and integration methods.

3.3.1 Programming languages

Programming languages are best classified by their features and application domains. Classification by ‘generation' (e.g. 3GL, 4GL) can be very misleading, because the generation of a language can be completely unrelated to its age (e.g. Ada, LISP). Even so, study of the history of programming languages can give useful insights into the applicability and features of particular languages [Ref. 13].

3.3.1.1 Feature classification

The following classes of programming languages are widely recognised:
• procedural languages;
• object-oriented languages;
• functional languages;
• logic programming languages.

Application-specific languages based on database management systems are not discussed here because of their lack of generality. Control languages, such as those used to command operating systems, are also not discussed, for similar reasons.

Procedural languages are sometimes called ‘imperative languages' or ‘algorithmic languages'. Functional and logic programming languages are often collectively called ‘declarative languages', because they allow programmers to declare ‘what' is to be done rather than ‘how'.

3.3.1.1.1 Procedural languages

A ‘procedural language' should support the following features:
• sequence (composition);
• selection (alternation);
• iteration;
• division into modules.


The traditional procedural languages, such as COBOL and FORTRAN, support these features.

The sequence construct, also known as the composition construct, allows programmers to specify the order of execution. This is trivially done by placing one statement after another, but can imply the ability to branch (e.g. GOTO).

The sequence construct is used to express the dependencies between operations. Statements that come later in the sequence depend on the results of previous statements. The sequence construct is the most important feature of procedural languages, because the program logic is embedded in the sequence of operations, instead of in a data model (e.g. the trees of Prolog, the lists of LISP and the tables of RDBMS languages).

The selection construct, also known as the condition or alternation construct, allows programmers to evaluate a condition and take appropriate action (e.g. IF... THEN and CASE statements).

The iteration construct allows programmers to construct loops (e.g. DO...). This saves repetition of instructions.

The module construct allows programmers to identify a group of instructions and utilise them elsewhere (e.g. CALL...). It saves repetition of instructions and permits hierarchical decomposition.

Some procedural languages also support:
• block structuring;
• strong typing;
• recursion.

Block structuring enforces the structured programming principle that modules should have only one entry point and one exit point. Pascal, Ada and C support block structuring.

Strong typing requires the data type of each data object to be declared [Ref. 2]. This stops operators being applied to inappropriate data objects, and prevents the interaction of data objects of incompatible data types (e.g. when the data type of a calling argument does not match the data type of a called argument). Ada and Pascal are strongly typed languages. Strong typing helps a compiler to find errors and to compile efficiently.


Recursion allows a module to call itself (e.g. module A calls module A), permitting greater economy in programming. Pascal, Ada and C support recursion.
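A minimal illustration of recursion in C; the factorial function is a standard textbook example, not drawn from this guide:

    /* factorial calls itself on a smaller argument until it
       reaches the base case, replacing an explicit loop. */
    unsigned long factorial(unsigned int n)
    {
        return (n <= 1) ? 1UL : n * factorial(n - 1);
    }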

3.3.1.1.2 Object-oriented languages

An object-oriented programming language should support all structured programming language features plus:
• inheritance;
• polymorphism;
• messages.

Examples of object-oriented languages are Smalltalk and C++. Reference 14 provides a useful review of object-oriented programming languages.

Inheritance is the technique by which modules can acquire capabilities from higher-level modules, i.e. simply by being declared as members of a class, they have all the attributes and services of that class.

Polymorphism is the ability of a process to work on different data types, or of an entity to refer at runtime to instances of specific classes. Polymorphism cuts down the amount of source code required. Ideally, a language should be completely polymorphic, so that there is no need to formulate sections of code for each data type. Polymorphism implies support for dynamic binding.

Object-oriented programming languages use ‘messages' to implement interfaces. A message encapsulates the details of an action to be performed. A message is sent from a ‘sender object' to a ‘receiver object' to invoke the services of the latter.

3.3.1.1.3 Functional languages

Functional languages, such as LISP and ML, support declarative structuring. Declarative structuring allows programmers to specify only ‘what' is required, without stating how it is to be done. It is an important feature, because it means that standard processing capabilities are built into the language (e.g. information retrieval).

With declarative structuring, procedural constructs are unnecessary. In particular, the sequence construct is not used for the program logic. An underlying information model (e.g. a tree or a list) is used to define the logic.


If some information is required for an operation, it is automatically obtained from the information model. Although it is possible to make one operation depend on the result of a previous one, this is not the usual style of programming.

Functional languages work by applying operators (functions) to arguments (parameters). The arguments themselves may be functional expressions, so that a functional program can be thought of as a single expression applying one function to another. For example, if DOUBLE is the function defined as DOUBLE(X) = X + X, and APPLY is the function that executes another function on each member of a list, then the expression APPLY(DOUBLE, [1, 2, 3]) returns [2, 4, 6].
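The DOUBLE/APPLY example above can be imitated in a procedural language by passing functions as arguments. This C sketch is only an analogy, since a true functional language would avoid the explicit loop and assignment:

    #include <stdio.h>

    static int double_it(int x) { return x + x; }

    /* apply: executes f on each member of a list, in place. */
    static void apply(int (*f)(int), int list[], int n)
    {
        for (int i = 0; i < n; i++)
            list[i] = f(list[i]);
    }

    int main(void)
    {
        int list[] = { 1, 2, 3 };
        apply(double_it, list, 3);
        printf("%d %d %d\n", list[0], list[1], list[2]);  /* 2 4 6 */
        return 0;
    }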

Programs written in functional languages appear very different from those written in procedural languages, because assignment statements are absent. Assignment is unnecessary in a functional language, because all relationships are implied by the information model.

Functional programs are typically short, clear and specification-like, and are suitable both for specification and for rapid implementation, typically of design prototypes. Modern compilers have reduced the performance problems of functional languages.

A special feature of functional languages is their inherent suitability for parallel implementation, but in practice this has been slow to materialise.

3.3.1.1.4 Logic programming languages

Logic programming languages implement some form of classical logic. Like functional languages, they have a declarative structure. In addition they support:
• backtracking;
• backward chaining;
• forward chaining.

Prolog is the foremost logic programming language.

Backtracking is the ability to return to an earlier point in a chain of reasoning when an earlier conclusion is subsequently found to be false. It is especially useful when traversing a knowledge tree. Backtracking is incompatible with assignment, since assignment cannot be undone because it erases the contents of variables. Languages which support backtracking are, of necessity, non-procedural.

Backward chaining starts from a hypothesis and reasons backwards to the facts that cause the hypothesis to be true. For example, if the fact A and hypothesis B are chained in the expression IF A THEN B, backward chaining enables the truth of A to be deduced from the truth of B (note that A may be only one of a number of reasons for B to be true).

Forward chaining is the opposite of backward chaining. Forward chaining starts from a collection of facts and reasons forward to a conclusion. For example, if the fact X and conclusion Y are chained in the expression IF X THEN Y, forward chaining enables the truth of Y to be deduced from the truth of X.

Forward chaining means that a change to a data item is automatically propagated to all the dependent items. It can be used to support ‘data-driven' reasoning.

3.3.1.2 Applications

Commonly recognised application categories for programming languages are:
• real-time systems, including control systems and embedded systems;
• transaction processing, including business and information systems;
• numerical analysis;
• simulation;
• human-computer interaction (HCI) toolkits;
• artificial intelligence (AI);
• operating systems.

The application area of a language strongly influences the features that are built into it. There is no true ‘general purpose language', although some languages are suitable for more than one of the application categories listed above.

3.3.1.3 FORTRAN

FORTRAN was the first widely-used high-level programming language. Its structuring concepts include the now-familiar IF... THEN alternation, DO iteration, primitive data types such as integer, and procedure composition. FORTRAN does not support recursion.

FORTRAN is familiar to generations of programmers who have learned to use it safely. It is a simple language, compared with Ada, and can be compiled rapidly into efficient object code. Compilers are available for most hardware, but these are not always compatible, both because of the addition of machine-dependent features, especially input/output, and because of successive versions of FORTRAN itself. However, good standards exist.

The language includes some features now generally considered risky, such as:
• the GOTO statement, which allows the rules of structured programming to be easily broken;
• the EQUIVALENCE statement, which permits different names to be used for the same storage location;
• local storage, which permits possibly undesired values to be used from a previous call;
• multiple entry and exit points.

FORTRAN remains the primary language for scientific software. Most compilers allow access to operating system features, which, at the expense of portability, allows FORTRAN to be used for developing real-time control systems.

3.3.1.4 COBOL

COBOL remains the most widely used programming language for business and administration systems. As a procedural language it provides the usual sequence, selection (IF... OTHERWISE...) and iteration (PERFORM...) constructs. COBOL has powerful data structuring and file handling mechanisms (e.g. hierarchically arranged records, direct access and indexed files). Although its limited data manipulation facilities severely restrict the programmers' ability to construct complex arithmetic expressions, it does allow fixed-point arithmetic, necessary for accurate accounting.

COBOL was conceived as the first ‘end-user' programming language. It relies on verbal expressions, not symbols (e.g. ADD, SUBTRACT, MULTIPLY and DIVIDE instead of +, -, x and ÷). It has therefore been criticised as forcing a verbose programming style. While this does help make COBOL programs self-explanatory, COBOL programs can still be difficult to understand if they are not well-structured. Jackson's Principles of Program Design are explained with the aid of examples of good and bad COBOL programming [Ref. 6].

3.3.1.5 Pascal

Pascal was conceived by a single designer (Niklaus Wirth) to teach the principles of structured programming. It is accordingly simple and direct, though perhaps idiosyncratic. It contains very clear constructs: IF... THEN... ELSE, WHILE... DO, REPEAT... UNTIL, CASE... OF, and so on. Sets, procedures, functions, arrays, records and other constructs are matched by a versatile and safe data typing system, which permits the compiler to detect many kinds of error.

Wirth omitted several powerful features to make Pascal compile efficiently. For example, functions may not return more than one argument, and this must be of a primitive type (although some extensions permit records to be returned). String handling and input/output functions are simplistic and have been the source of portability problems.

Pascal handles recursive data structures, such as lists and trees, by means of explicit pointers and (sometimes) variant records. These defeat the type checking mechanism and are awkward to handle.

Pascal is now well-defined in an ISO standard, and is very widely available with good compilers and supporting tools. To a considerable extent the language enforces good programming practice.

3.3.1.6 C

C is a language in the same family as Pascal, with its roots in Algol. The language is less strongly typed than Pascal or Algol. It is easy and efficient to compile, and because of its low-level features, such as bit manipulation, it can exploit machine-dependent facilities.

C has a rich variety of operators that allow programmers to adopt a very terse style. This can unfortunately make programs difficult to read and understand. Discipline is required to produce code that is portable and maintainable. Coding standards are especially necessary to make C code consistent and readable.

One advantage of C is its close connection with the Unix operating system, which guarantees C a place as a (or the) major systems programming language. A very wide range of tools and library modules are available for the language.

3.3.1.7 Modula-2

Modula-2 is derived from Pascal. It offers a few new constructs and considerably more power and generality. Two important features are modularity, including incremental compilation, and the simulation of concurrency and inter-process communication (via co-routining).

Niklaus Wirth, the inventor of both Pascal and Modula-2, intended that Modula-2 should replace Pascal. However, Modula-2 is not widely used, because of the scarcity of development tools for popular platforms. In contrast to Ada, its main competitor, it is a ‘small' language (its compiler is only about 5000 lines, as opposed to Ada's several hundred thousand lines [Ref. 13]).

3.3.1.8 Ada

Ada is a powerful language, well suited to the creation of large, reliable and maintainable systems. Unlike most languages, it was systematically developed (from a US Department of Defense specification [Ref. 7]). Its many features include tasks (processes) able to communicate asynchronously with each other, strong typing and type checking, and generic programming.

Ada derives ideas from many sources, including several of the languages mentioned here. Although it is a ‘large' language, and therefore difficult to master, it can seem relatively familiar to C and Pascal programmers. Applications written in Ada may often be slower than comparable applications written in other languages, perhaps largely due to the immaturity of the compilers. This may become less of a problem in time (as has been seen with other languages).

Ada provides features normally called from the operating system by other languages, providing a valuable degree of device independence and portability for Ada programs. However, Ada real-time control features may not be adequate for some applications, and direct access to operating system services may be necessary.

Ada does not support inheritance, so it is not an object-oriented language. Ada allows programmers to create ‘generic packages' which act as classes. However, it is not possible to structure generic packages and inherit package attributes and services.


3.3.1.9 Smalltalk

Smalltalk is the leading member of the family of object-oriented languages. The language is based on the idea of classes of objects, which communicate by passing messages, and inherit attributes from the classes to which they belong.

Smalltalk is inherently modular and extensible; indeed, every Smalltalk program is an extension of the core language. Large libraries are available, offering a wide range of predefined classes for graphics, data handling, arithmetic, communications and so on. Programs can often be created by minor additions to, or modifications of, existing objects.

The language is thus very suitable for rapid iterative development and prototyping, but it is also efficient enough for many applications. It is the language of choice for educating software engineers in object-oriented programming.

Smalltalk differs from other programming languages in being a complete programming environment. This feature is also the language's Achilles heel: using Smalltalk is very much an ‘all or nothing' decision. Interfacing to software written in other languages is not usually possible.

3.3.1.10 C++

C++ is an object-oriented and more strongly typed version of C. It therefore enforces greater discipline in programming than C. It is a noticeably more modern language, exploiting the strongest concepts of Pascal and Smalltalk, while providing all the features of its parent language C. The rapid emergence of C++ has been helped by the appearance of tools and standards.

3.3.1.11 LISP

LISP was the first list processing language. It is mostly a functional language, but with some procedural features. Because a program is itself a list, it can readily be treated as data by another program. Hence LISP is an excellent language for writing interpreters and expert systems, and much pioneering work in artificial intelligence has been done in the language.

LISP is now a mature language and accordingly has a rich set of predefined tools (functions) and environments available for it. LISP compilers are available on most architectures, but these are not well standardised; so-called Common LISP is a widespread dialect with commercial backing. LISP is often run on specialised workstations with language-sensitive editors, interpreters and debuggers, and sometimes with dedicated hardware. Interfaces to C and Prolog, and to procedural components like graphics packages, are frequently provided.

The language was once heavily criticised for inefficiency and illegibility. List manipulation is inherently more costly than, say, array manipulation, but careful optimisation and good compilers have greatly reduced the overheads. The style of LISP programs is more verbose and less legible than that of modern functional languages such as ML. However, recent versions of LISP have again become more purely functional and have updated their syntax. The problem of understanding heavily bracketed expressions (lists of lists) has largely been solved with pretty-printers and automatic bracket checkers.

3.3.1.12 ML

ML (short for Meta-Language) is the most widely used of a family of functional (i.e. non-procedural) programming languages that include Hope and Miranda. Like Prolog, it is declarative and admirably free of side-effects. Functions are similar to those in Pascal, except that they may take any desired form and return essentially any type of result as a structure. ML is notably modular and polymorphic, permitting the creation of reusable components in a style analogous to that of Ada. ML lacks the quirkiness of some functional languages, and permits procedural constructs for efficiency.

3.3.1.13 Prolog

Prolog was the first of a family of logic programming languages. Full predicate logic is too complex to implement efficiently, so Prolog compromises by implementing a restricted but useful subset, namely Horn clause logic. This permits a single affirmative conclusion, but from any number of premises. If a conclusion can be reached in more than one way, more than one clause is provided for it. Thus groups of clauses act as the analogues of IF... THEN or CASE statements, while a single clause is effectively a procedure or function.

Because a conclusion may fail to be reached by one route, but the same, or another, conclusion may be reached by another route, Prolog permits backtracking. This provides a powerful search mechanism. The need to maintain an extensive stack means, however, that Prolog requires a lot of run-time storage.


The language is essentially declarative, not procedural, as the clauses relate one fact or conclusion to another: thus, if something is true then it is always true. Prolog is very well suited to systems embodying rules, deduction and knowledge (including databases), though it is capable of graphics, arithmetic and device control on a range of machines. It is not ideally suited to highly numerical applications. Prolog itself is (virtually) typeless and freely polymorphic, making it insecure; many of its daughter languages embody varieties of type checking. The language is inherently extensible. With its interactive environment, it is very suitable for prototyping; it has also been used for specification. Most Prolog compilers permit modules of C to be called, typically as if they were built-in predicates.

3.3.1.14 Summary

Table 3.3.1.14 lists the programming languages, their types, primary applications and standards.

Language    Type                Primary Applications                          Standard
FORTRAN     Procedural          Numerical analysis, real-time systems         ISO 1539
COBOL       Procedural          Transaction processing                        ISO 1989
Pascal      Procedural          Numerical analysis, real-time systems         ISO 7185
C           Procedural          Real-time systems, operating systems          ANSI
Modula-2    Procedural          Real-time systems, numerical analysis
Ada         Procedural          Real-time systems, simulation                 MIL STD 1815A-1983
Smalltalk   Object-oriented     Simulation, HCI toolkits                      Smalltalk-80
C++         Object-oriented     Real-time systems, simulation, HCI toolkits   AT&T
LISP        Functional          AI                                            Common LISP
ML          Functional          AI
Prolog      Logic programming   AI                                            Edinburgh syntax

Table 3.3.1.14: Summary of programming languages

3.3.2 Integration methods

A function-by-function integration method should be used that:
• establishes the infrastructure functions first;
• harmonises with the delivery plan.


It is necessary to establish the infrastructure functions first, to minimise the amount of test software needed. Examples of infrastructure functions are those that provide data input and output. Building the infrastructure saves the effort of providing drivers and stubs for the components that use it. The infrastructure provides the kernel system from which the rest of the system can grow.

In an incremental development, the delivery plan can constrain the integration method by forcing functions to be delivered in a user-specified order, not one that minimises the integration effort.

Each function will be provided by a group of components that may be integrated ‘top-down' or ‘bottom-up'. The idea of integrating all the components at one level before proceeding to the next is common to both methods. In top-down integration, the next level to integrate is always the next lower one, while in bottom-up integration the next higher level is integrated.

The integration methods are described in more detail below.

3.3.2.1 Function-by-function

The steps in the function-by-function method are to:
1) select the functions to be integrated;
2) identify the components that carry out the functions;
3) identify the component dependencies (i.e. input data flows or control flows);
4) order the components by the number of dependencies (i.e. fewest dependencies first);
5) when a component depends on another later in the order, create a driver to simulate the input of the component later in the order;
6) introduce the components with the fewest dependencies first.

This approach minimises the number of stubs and drivers required. Stubs and drivers simulate the input to a component, so if the ‘real' tested components provide the input, effort does not need to be expended on producing stubs and drivers.


Data flow diagrams are useful for representing component dependencies. Output flows that are not input to any other component should be omitted from the diagram.

When the incremental delivery life cycle approach is being used, the basic procedure above must be modified to:
a) define the functions;
b) define when the functions are required;
c) for each release, do steps 1) to 6) above.

Dependencies on components in a previous or a future release are not counted. If one component depends upon another in a previous release, the existing software can satisfy the dependency. If the component depends upon a component in a future release, then a driver to simulate the input must be provided.

3.3.2.2 Top-down integration

The top-down approach to integration is to use ‘stub' modules to represent lower-level modules. As modules are completed and tested, they replace the stubs. Stubs can be used to implement test cases.

3.3.2.3 Bottom-up integration

The bottom-up approach to integration is to use ‘driver' modules to represent higher-level modules. As modules are completed and tested, they replace the drivers. Drivers can be used to implement test cases.


CHAPTER 4 TOOLS FOR DETAILED DESIGN AND PRODUCTION

4.1 INTRODUCTION

This chapter discusses the tools for detailing the design and producing the software. Tools can be combined to suit the needs of a particular project.

4.2 DETAILED DESIGN

4.2.1 CASE tools

In all but the smallest projects, CASE tools should be used during the DD phase. Like many general purpose tools (e.g. word processors and drawing packages), CASE tools should provide:
• a windows, icons, menu and pointer (WIMP) style interface, for the easy creation and editing of diagrams;
• a ‘what you see is what you get' (WYSIWYG) style interface, ensuring that what is created on the display screen closely resembles what will appear in the document.

Method-specific CASE tools offer the following features not offered by general purpose tools:
• enforcement of the rules of the methods;
• consistency checking;
• easy modification;
• automatic traceability of components to software requirements;
• configuration management of the design information;
• support for abstraction and information hiding;
• support for simulation.

CASE tools should have an integrated data dictionary or another kind of ‘repository'. This is necessary for consistency checking. Developers should check that a tool supports the parts of the method that they intend to use. ESA PSS-05-04, ‘Guide to the Software Architectural Design Phase', contains a more detailed list of desirable tool capabilities. The STARTS Guide provides a good survey of tools for requirements analysis and configuration management [Ref. 3].

4.2.2 Configuration managers

Configuration management of the physical model is essential. The model should evolve from baseline to baseline as it develops in the DD phase, and enforcement of procedures for the identification, change control and status accounting of the model is necessary. In large projects, configuration management tools should be used for the management of the model database.

4.2.3 Precompilers

A precompiler generates code from PDL specifications. This is useful in design, but less so in later stages of development, unless software faults can be easily traced back to PDL statements.

4.3 PRODUCTION

A range of production tools are available to help programmers develop, debug, build and test software. Table 4.3 lists the tools in order of their appearance in the production process.

Tool                        Purpose
CASE tools                  generate module shells
code generators             translate formal relationships into source code
editors                     create and modify source code and documentation
language-sensitive editors  create syntactically correct source code
static analysers            examine source code
compilers                   translate source code to object code
linkers                     join object modules into executable programs
debuggers                   locate errors during execution
dynamic analysers           examine running programs
test tools                  test modules and programs
word processors             document production
documentation generators    derive user documentation from the source code
configuration managers      store and track versions of modules and files,
                            and record the constituents of each build

Table 4.3: Production tools


4.3.1 Modelling tools

Many modelling tools automatically generate constant, type and variable declarations for inclusion in the source code of every module. Some modelling tools can translate diagrams showing the calling tree of modules into fully commented function or procedure calls, though these may lack values for actual parameters.

If modelling tools are used, coding begins by completing the skeleton of the module. All the calls are completed; then iteration constructs (WHILE, REPEAT, LOOP, etc.) are entered; then alternation constructs (IF, CASE, etc.) are inserted. Finally, low-level details such as arithmetic, input/output and other system calls are filled in.
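As an illustration, a generated C skeleton might be completed in that order; the module and the commentary below are hypothetical, not the output of any particular tool:

    /* Skeleton generated from the design (hypothetical):
     * average_readings - called by telemetry_main
     */
    int average_readings(const int readings[], int count)
    {
        int sum = 0;

        /* iteration construct entered first */
        for (int i = 0; i < count; i++) {
            /* low-level detail (arithmetic) filled in last */
            sum += readings[i];
        }

        /* alternation construct inserted next, here guarding
           against division by zero */
        if (count == 0)
            return 0;
        return sum / count;
    }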

4.3.2 Code generators

Numerous code generator packages are now available, claiming to take the work out of design and coding. They can help reduce the workload in some areas, such as database management and human-computer interaction. These areas are characterised by repetitive code, and by the need to perform numerous trivial but essential operations in set sequences. Such tasks are best automated, for accuracy and efficiency.

As code generators become more closely integrated with design methods, it will be possible to code a larger proportion of the components of any given system automatically. Current design methods generally provide limited code generation, for example creating the data declarations and module skeletons; module bodies must then be coded by hand.

Even if parts of the system are to be coded manually, there are still advantages in using a code generator. Changes in the software requirements can result in automatic changes to the data and module declarations, preserving and checking consistency across the phases of the life cycle.

4.3.3 Editors

Largely replaced by the word processor for documentation, basic text editors are still widely used for source code creation and modification. Although language-sensitive editors (see Section 4.3.4) offer greater functionality, basic text editors provide all the facilities that many programmers require.


4.3.4 Language-sensitive editors

Language-sensitive editors are now available for many programming languages. They contain an interpreter that helps the user to write syntactically correct code. For example, if the user selects ‘open a file', a menu appears of the known files of the right type. Language-sensitive editors are not ideal for some programming languages, because of the richness of their syntax. Skilled programmers often find them restrictive.

Another approach provides the reserved words of a language from a keypad, which may be real (on a keyboard or tablet) or virtual (on a screen panel). The selection of a keyword like WHILE with a single action is a convenience; the editor may also insert associated syntactic items, such as brackets, and comments such as /* END WHILE */. This does not prevent errors, but it is at least a useful guide, and the freedom of the programmer is not sacrificed.

The simplest language-sensitive editors do little more than recognise brackets and provide indentation to match. For example, the Pascal reserved words BEGIN and END bracket blocks in IF... THEN... ELSE statements. Automatic indentation after these words have been typed helps to make programs readable and reduces mistakes.

It is also possible to provide templates for program construction, containing standard headers, the obligatory sections for constant and type declarations, and so on. These may be generated automatically by CASE tools. Editors which recognise these templates (and the general form of a legal module) can speed development and help prevent errors. There is considerable scope for further integration with CASE tools and code generators.

4.3.5 Static analysers

Static analysis is the process of scanning the text of a program to check for faults of construction, dangerous or undesirable features, and the application of software standards. Static analysers are especially useful for relatively unstructured languages such as assembler and FORTRAN. Static analysers may:
• check for variables that are not used, or are used before being assigned a value;
• check that the ranges of variables stay within bounds;
• provide a view of the structure of an application;
• provide a view of the internal logical structure of each module;
• measure the complexity of the code with respect to a metric, such as cyclomatic complexity [Ref. 11];
• translate the source code to an intermediate language for formal verification;
• symbolically execute the code by substituting algebraic symbols for program variables;
• measure simple attributes of the code, such as the number of lines of code and the maximum level of nesting.

Although they can be used to expose poorly structured code, static analysers should not be relied upon to improve poor programming. Structuring should be done in architectural and detailed design, not after implementation.

Most compilers provide some simple static analysis features, such as checking for variables that are not used (see Section 4.3.6). Dedicated static analysis tools usually provide advanced static analysis functions, such as analysis of code structure.

Static analysis tools are no substitute for code review; they just support the review process. Some bad programming practices, for example the choice of meaningless identifiers, evade all known static analysis tools.

4.3.6 Compilers

The choice of compiler on a given computer or operating system may be limited. However, compilers vary widely in speed, thoroughness of checking, ease of use, handling of standard syntax and of language extensions, quality of listings and code output, and programming support features. The choice of compiler is therefore crucial. Trade-offs can become complex, leading to selection by reputation and not by analysis. Users should assess their compilation needs and compare the available options on this basis.

Speed of compilation affects the ease and cost of developing, debugging and maintaining the product, whereas code quality affects the runtime performance of the product itself. Benchmark source code should therefore reflect the type of code likely to be found in the product to be developed. Compilers should be compared on such data by measuring compilation time, execution time and object code size. Runtime stack and heap sizes may also be critical where memory is scarce.

Full checking of compiler compliance with language standards is beyond the means of essentially all ESA software projects. Standards organisations such as ISO, ANSI and BSI certify compilers for a few languages, including FORTRAN, COBOL and Pascal. The US Department of Defense certifies Ada compilers. Manufacturers' documentation should be scanned for extensions to, or omissions from, the standards.

Compilers vary widely in programming support features. Good compilers offer much help to the developer, for example:
• full listings;
• cross-references;
• data on module sizes;
• diagnostics;
• thorough checking;
• switches (e.g. array bounds checking; strict language or extensions).

Some compilers now offer many of the features of development environments, including built-in editors and debuggers, trace tools, version control, linkers and incremental compilation (with a built-in interpreter). These can substantially speed development.

Where there are switchable checks, the project programming standards (DDD, Section 2.4) should state which options are applicable.

The most advanced compilers perform a range of optimisations for sequential and parallel machines, attempting to discover and eliminate source code inefficiencies. These may be switchable, for example by directives in the source code. Ada explicitly provides for such directives with the ‘pragma' statement. Users should investigate whether the optimisations they require are implemented in candidate compilers.

4.3.7 Linkers

The linker may be provided with the machine, operating system or compiler; or, as with Ada, it may be integrated with the compiler/interpreter and runtime system. The user therefore has little control over the choice of linker. When there is a choice, it should be considered carefully, as modern linkers vary considerably in performance, affecting especially the speed of debugging and maintenance.

It is convenient if the linker can automatically discover the appropriate libraries and directories to use, and which modules or components to link. Most linkers can be controlled by parameters, which can be created by a build or make utility; some are closely integrated with such utilities, or indeed with compilers.

In a large system, linking can take significantly longer than compilation, so users should compare linker performance on realistic benchmarks before selecting one for a project. Speed is not the only parameter to consider: some linkers may generate better quality executable code than others.

4.3.8 Debuggers

The use of interactive symbolic debuggers is strongly encouraged, especially for verification. A good debugger is integrated with an editor as well as a compiler/interpreter, and permits a range of investigative modes. Convenient debugging modes include step-by-step execution, breakpoint (spypoint) tracing, variable value reporting, watch condition setting (e.g. a variable beyond a limit value), and interaction.

For graphics, windows, menus, and other software involving cursor control, the debugger must be properly windowed to avoid confusion with the software being debugged. On some devices, illegal calls, e.g. graphics calls outside the frame area, cause processes to abort; debuggers should be able to trap such calls and permit interactive recovery action, or at least diagnosis.

The debugging of real-time software, where it is not possible to step through code, is a special challenge. The traditional diagnostic log is still useful here. The requirement is, as with all debugging, to view the software in a realistic environment. This may be possible with a simulator; if not, hardware-style techniques, in which bus or circuit signals are monitored in real time, may be necessary. Implementers of real-time projects should consider how they may debug and verify their software, and should allocate resources for this purpose.

4.3.9 Dynamic analysers

Dynamic analysis is the process of measuring how much machine resources (e.g. CPU time, i/o time, memory) each module, and line of code, consumes. In contrast to static analysis (see Section 4.3.5), dynamic analysis is performed on a running program.

Dynamic analysis tools are used for measuring test coverage, since lines of code that are not executed consume no resources. The verification of test coverage, i.e. that all statements have been executed during testing, is a requirement of ESA PSS-05-0. Test coverage is best measured automatically.

Dynamic analysis enables a variety of optimisations to be carried out to 'tune' the system. Optimisation can be a very inefficient process without detailed knowledge of the system performance. Dynamic analysis tools can locate the parts of the system that are causing performance problems. Source-level modifications can often yield the desired performance gains.

Where there are precise performance requirements, developers should use a dynamic analyser to verify that the system is satisfactorily tuned, and to direct optimisation effort. If there are no precise performance requirements, and a system runs satisfactorily, dynamic analysis can still help detect some types of coding errors (e.g. unnecessary initialisations).

If a dynamic analyser is not available, resource consumption can be measured by means of timing routines. An interactive debugger can be used to observe coverage.
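For example, a minimal timing harness needs only the standard C library (the routine under test is a placeholder):

  #include <stdio.h>
  #include <time.h>

  static void routine_under_test(void)
  {
      /* placeholder for the component being measured */
  }

  int main(void)
  {
      clock_t start, stop;

      start = clock();
      routine_under_test();
      stop = clock();
      printf("CPU time used: %f s\n",
             (double)(stop - start) / CLOCKS_PER_SEC);
      return 0;
  }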

4.3.10 Test tools

Test tools may support one or more of the following functions:
• test data generation;
• test driving and harnessing;
• automated results checking;
• fault diagnosis and debugging;
• test data management.

General purpose test tools can sometimes generate large quantities of input based on simple assumptions. System-specific simulators and modelling tools are needed if realistic test data is required.

Test drivers and harnesses normally have to be provided by the developers. The integration plan should identify what test drivers and stubs are required for testing, and these normally have to be specially developed.
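A driver and a stub can be very small. The sketch below, in C with invented names, drives one component and replaces a missing subordinate with a stub; a dummy component body is included only so that the sketch compiles:

  #include <stdio.h>

  /* stub standing in for a subordinate not yet integrated */
  static int write_record(int key)
  {
      printf("stub: write_record(%d)\n", key);
      return 0;                               /* always report success */
  }

  /* component under test (dummy body, for illustration only) */
  static int insert_record(int key)
  {
      return (key > 0) ? write_record(key) : -1;
  }

  int main(void)
  {
      int failures = 0;

      if (insert_record(1) != 0)  failures++;  /* nominal case */
      if (insert_record(-1) == 0) failures++;  /* error case   */
      printf("%d test(s) failed\n", failures);
      return failures;
  }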


Interpreter systems permit test drivers and harnesses to be created interactively. Short test drivers, consisting of little more than a call to the component under test, can be prepared and called as interpreted commands. Such test drivers do not need to be compiled into the final version. Of the languages mentioned in Section 3.3.1, Pascal, ML, Prolog, LISP and Smalltalk are known to be runnable interpretatively.

Tools that provide automated results checking can greatly increase the efficiency of regression testing, and thereby its depth and scope.
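In the absence of a dedicated tool, the core of automated results checking is a comparison of current output against reference output. A minimal sketch in C, taking the two file names from the command line:

  #include <stdio.h>

  /* return non-zero if the two files are not byte-for-byte identical */
  static int files_differ(FILE *a, FILE *b)
  {
      int ca, cb;

      do {
          ca = getc(a);
          cb = getc(b);
      } while (ca == cb && ca != EOF);
      return ca != cb;
  }

  int main(int argc, char *argv[])
  {
      FILE *cur, *ref;

      if (argc != 3) return 2;                 /* usage error      */
      cur = fopen(argv[1], "rb");
      ref = fopen(argv[2], "rb");
      if (cur == NULL || ref == NULL) return 2;
      if (files_differ(cur, ref)) {
          printf("REGRESSION: %s differs from %s\n", argv[1], argv[2]);
          return 1;                            /* test failed      */
      }
      return 0;                                /* results match    */
  }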

If the comparison reveals a discrepancy between past and present results, support for fault diagnosis, such as tracebacks of the execution and reports of the contents of variables, can ease the solution of the problem identified in the test.

Perhaps of all the functions, support for the management of test data is the most important. Management of the input and output test data is required if regression testing is to be meaningful.

4.3.11 Word processors

A word processor or text processor should be used. Tools for the creation of paragraphs, sections, headers, footers, tables of contents and indexes all ease the production of a document. A spelling checker is essential, and a grammar checker is desirable. An outliner may be found useful for creating sub-headings, for viewing the document at different levels of detail, and for rearranging the document. The ability to handle diagrams is very important.

Documents invariably go through many drafts as they are created, reviewed and modified. Revised drafts should include change bars. Document comparison programs, which can mark changed text automatically, are invaluable for easing the review process.

Tools for communal preparation of documents are now beginning to be available, allowing many authors to comment on and add to a single document.

4.3.12 Documentation generators

Documentation generators allow the automatic production of help and documentation from the information in the code. They help maintain consistency between code and documentation, and make the process of documentation truly concurrent with the coding.


Code generators (see Section 4.3.2) may include tools for automatically generating documentation about the screens, windows and reports that the programmer creates.

4.3.13 Configuration managers

Configuration management is covered in ESA PSS-05-09, 'Guide to Software Configuration Management'. Implementers should consider the use of an automated configuration manager in the circumstances of each project.

A variety of tools, often centred on a database, is available to assist in controlling the development of software when many modules may exist in many versions. Some tools allow the developer to specify a configuration (m modules in n versions), and then automatically compile, link, and archive it. The use of configuration management tools becomes essential when the number of modules or versions becomes large.


CHAPTER 5 THE DETAILED DESIGN DOCUMENT

5.1 INTRODUCTION

The purpose of a DDD is to describe the detailed solution to the problem stated in the SRD. The DDD must be an output of the DD phase (DD13). The DDD must be complete, accounting for all the software requirements in the SRD (DD15). The DDD should be sufficiently detailed to allow the code to be implemented and maintained. Components (especially interfaces) should be described in sufficient detail to be fully understood.

5.2 STYLE

The style of a DDD should be systematic and rigorous. The language and diagrams used in a DDD should be clear and constructed to a consistent plan. The document as a whole must be modifiable.

5.2.1 Clarity

A DDD is clear if it is easy to understand. The structure of the DDD must reflect the structure of the software design, in terms of the levels and components of the software (DD14). The natural language used in a DDD must be shared by all the development team.

The DDD should not introduce ambiguity. Terms should be used accurately.

A diagram is clear if it is constructed from consistently used symbols, icons, or labels, and is well arranged. Important visual principles are to:
• emphasise important information;
• align symbols regularly;
• allow diagrams to be read left-to-right or top-to-bottom;
• arrange similar items in a row, in the same style;
• exploit visual symmetry to express functional symmetry;
• avoid crossing lines and overlaps;
• avoid crowding.


Diagrams should have a brief title, and be referenced by the text which they illustrate.

Diagrams and text should complement one another and be as closely integrated as possible. The purpose of each diagram should be explained in the text, and each diagram should explain aspects that cannot be expressed in a few words. Diagrams can be used to structure the discussion in the text.

5.2.2 Consistency

The DDD must be consistent. There are several types of inconsistency:
• different terms used for the same thing;
• the same term used for different things;
• incompatible activities happening simultaneously;
• activities happening in the wrong order.

Where a term could have multiple meanings, a single meaning should be defined in a glossary, and only that meaning should be used in the DDD.

Duplication and overlap lead to inconsistency. A clue to inconsistency is a single functional requirement tracing to more than one component. Methods and tools help consistency to be achieved.

Consistency should be preserved both within diagrams and between diagrams in the same document. Diagrams of different kinds should be immediately distinguishable.

5.2.3 Modifiability

A DDD is modifiable if changes to the document can be made easily, completely, and consistently. Good tools make modification easier, although it is always necessary to check for unpredictable side-effects of changes. For example, a global string search-and-replace capability can be very useful, but developers should always guard against unintended changes.

Diagrams, tables, spreadsheets, charts and graphs are modifiable if they are held in a form which can readily be changed. Such items should be prepared either within the word processor, or by a tool compatible with the word processor. For example, diagrams may be imported automatically into a document: typically, the print process scans the document for symbolic markers indicating graphics and other files.

Where graphics or other data are prepared on the same hardware as the code, it may be necessary to import them by other means. For example, a screen capture utility may create bitmap files ready for printing. These may be numbered and included as an annex. Projects using methods of this kind should define conventions for the handling and configuration management of such data.

5.3 EVOLUTION

The DDD should be put under change control by the developer at the start of the Transfer Phase. New components may need to be added and old components modified or deleted. If the DDD is being developed by a team of people, control of the document may be started at the beginning of the DD phase.

The Software Configuration Management Plan defines a formal change process to identify, control, track and report projected changes as soon as they are first identified. Approved changes in components must be recorded in the DDD by inserting document change records and a document status sheet at the start of the DDD.

5.4 RESPONSIBILITY

Whoever writes the DDD, the responsibility for it lies with the developer. The developer should nominate people with proven design and implementation skills to write the DDD.

5.5 MEDIUM

The DDD is usually a paper document. The DDD may be distributed electronically when participants have access to the necessary equipment.

5.6 CONTENT

The DDD is the authoritative reference document on how the software works. Part 2 of the DDD must have the same structure and identification scheme as the code itself, with a 1:1 correspondence between sections of the documentation and the software components (DD14). The DDD must be complete, accounting for all the software requirements in the SRD (DD15).

The DDD should be compiled according to the table of contents provided in Appendix C of ESA PSS-05-0. This table of contents is derived from ANSI/IEEE Std 1016-1987, 'IEEE Recommended Practice for Software Design Descriptions' [Ref. 5]. This standard defines a Software Design Description as a 'representation or model of the software system to be created. The model should provide the precise design information needed for the planning, analysis and implementation of the software system'. The DDD should be such a Software Design Description.

The table of contents is reproduced below. Relevant material unsuitable for inclusion in the contents list should be inserted in additional appendices. If there is no material for a section, then the phrase 'Not Applicable' should be inserted and the section numbering preserved.

Service Information:
a - Abstract
b - Table of Contents
c - Document Status Sheet
d - Document Change Records made since last issue

Part 1 - General Description

1 Introduction
1.1 Purpose
1.2 Scope
1.3 Definitions, acronyms and abbreviations
1.4 References
1.5 Overview

2 Project Standards, Conventions and Procedures
2.1 Design standards
2.2 Documentation standards
2.3 Naming conventions
2.4 Programming standards
2.5 Software development tools


Part 2 - Component Design Specifications

n [Component identifier]

n.1 Type
n.2 Purpose
n.3 Function
n.4 Subordinates
n.5 Dependencies
n.6 Interfaces
n.7 Resources
n.8 References
n.9 Processing
n.10 Data

Appendix A Source code listings
Appendix B Software Requirements vs Components Traceability matrix

A component may belong to a class of components that share characteristics. To avoid repeatedly describing shared characteristics, a sensible approach is to reference the description of the class.

References should be given where appropriate, but a DDD should not refer to documents that follow it in the ESA PSS-05-0 life cycle.

Part 1 of the DDD must be completed before any coding is started. The component design specifications in Part 2 must be complete (i.e. no TBCs or TBDs) before coding is started.

5.6.1 DDD/Part 1 - General description

5.6.1.1 DDD/Part 1/1 Introduction

This section should describe the purpose and scope, and provide a glossary, a list of references and a document overview.

5.6.1.1.1 DDD/Part 1/1.1 Purpose (of the document)

This section should:

(1) describe the purpose of the particular DDD;

(2) specify the intended readership of the DDD.


5.6.1.1.2 DDD/Part 1/1.2 Scope (of the software)

This section should:

(1) identify the software products to be produced;

(2) explain what the proposed software will do (and will not do, if necessary);

(3) define relevant benefits, objectives, and goals as precisely as possible;

(4) be consistent with similar statements in higher-level specifications, if they exist.

5.6.1.1.3 DDD/Part 1/1.3 Definitions, acronyms and abbreviations

This section should define all terms, acronyms, and abbreviations used in the DDD, or refer to other documents where the definitions can be found.

5.6.1.1.4 DDD/Part 1/1.4 References

This section should list all the applicable and reference documents, identified by title, author and date. Each document should be marked as applicable or reference. If appropriate, the report number, journal name and publishing organisation should be included.

5.6.1.1.5 DDD/Part 1/1.5 Overview (of the document)

This section should:

(1) describe what the rest of the DDD contains;

(2) explain how the DDD is organised.

5.6.1.2 DDD/Part 1/2 Project standards, conventions and procedures

5.6.1.2.1 DDD/Part 1/2.1 Design standards

These should usually reference methods carried over from the AD phase and only describe DD-phase-specific methods.


The Detailed Design Standard might need to be different if more than one method or programming language is involved: for example, if some C language design and programming takes place in an Ada project.

5.6.1.2.2 DDD/Part 1/2.2 Documentation standards

This section should describe the format, style, and tools adopted by the project for DD and code documentation. Headers, footers, section formats and typefaces should be specified. They may be prepared as word processor template files, for automatic inclusion in all project documents. If the formats are new, they should be prototyped and reviewed (with users in the case of the SUM).

This section should contain the standard module header, with instructions for its completion (see Section 2.3.9).
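One possible layout for such a header, shown here as a C comment (the field set is an example, not a mandated list), is:

  /*
   *  Module    : READ_RECORD.C
   *  Purpose   : one-line statement of what the module does
   *  Traces to : SRD requirements implemented by this module
   *  Author    : name
   *  Created   : date
   *  History   : issue/revision, date, reference to change record
   */

The instructions in this section should state who fills in each field and when, for example that the history field is updated whenever a change is approved.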

5.6.1.2.3 DDD/Part 1/2.3 Naming conventions

This section should explain all naming conventions used, and draw attention to any points a maintenance programmer would not expect. A table of the file types and the permitted names or extensions for each is recommended for quick reference. See the examples in Table 5.6.1.2.3 and Section 5.6.2.1.

File Type              Name            Extension
document               <<mnemonic>>    .DOC
Ada source code        IDENTIFIER      .ADA
Fortran source code    IDENTIFIER      .FOR
diagram                <<mnemonic>>    .PIC

Table 5.6.1.2.3: Names and extensions

Conventions for naming files, programs, modules, and possibly other structures such as variables and messages, should all be documented here.

5.6.1.2.4 DDD/Part 1/2.4 Programming standards

This section should define the project programming standards. Whatever languages or standards are chosen, the aim should be to create a convenient and easily usable method for writing good-quality software. Note especially the guidelines in Section 2.3.

If the programming language is described in an ESA PSS-05 level 3 Guide, then the guidelines described in that document should be adopted.


Additional restrictions on the use of the language may be imposed if necessary: such additions should be justified here.

When programming in any other language, a standard for its use should be written to provide guidance for programmers. This standard may be referenced or included here.

In general, the programming standard should define a consistent and uniform programming style. Specific points to cover are:
• modularity and structuring;
• headers and commenting;
• indenting and layout;
• library routines to be used;
• language constructs to use;
• language constructs to avoid.
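A standard is easiest to follow when each rule is illustrated by a model fragment. The following complete C example (illustrative only, not a mandated style) shows one statement per line, indentation reflecting structure, and aligned comments:

  #include <stdio.h>

  int main(void)
  {
      int i, rejected = 0;

      for (i = 0; i < 10; i++) {
          if (i % 2 == 0) {
              printf("%d accepted\n", i);
          } else {
              rejected++;                /* count rejected items */
          }
      }
      printf("%d rejected\n", rejected);
      return 0;
  }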

5.6.1.2.5 DDD/Part 1/2.5 Software development tools

This section should list the tools chosen to assist software development. Normally the list will include:
• a CASE tool;
• a source code editor;
• a compiler;
• a debugger;
• a linker;
• a configuration manager/builder;
• a word processor for documentation;
• a tool for drawing diagrams.

Many projects will also use a configuration management system to store configuration items, such as documentation, code and test data.

Prototyping projects might make use of an interpretative tool, such as an incremental compiler/interpreter/debugger.

Other tools that may be helpful to many projects include:
• static analysers;
• dynamic analysers;
• network and data communication tools;
• graphics packages;
• statistics and mathematical packages.

Real-time and embedded systems may require special development tools, including:
• cross-compilers;
• diagnostic, logging, and probe tools.

5.6.2 DDD/Part 2 - Component design specifications

The descriptions of the components should be laid out hierarchically. There should be subsections dealing with the following aspects of each component:

n Component identifier

n.1 Type
n.2 Purpose
n.3 Function
n.4 Subordinates
n.5 Dependencies
n.6 Interfaces
n.7 Resources
n.8 References
n.9 Processing
n.10 Data

The number 'n' should relate to the place of the component in the hierarchy.

5.6.2.1 DDD/Part 2/5.n Component Identifier

Each component should have a unique identifier (SCM06) for effective configuration management. The component should be named according to the rules of the programming language or operating system to be used. Where possible, a hierarchical naming scheme should be used that identifies the parent of the component (e.g. ParentName_ChildName).


The identifier should reflect the purpose and function of the component and be brief yet meaningful. If abbreviation is necessary, abbreviations should be applied consistently and without ambiguity.

Abbreviations should be documented. Component identifiers should be mutually consistent (e.g. if there is a routine called READ_RECORD then one might expect a routine called WRITE_RECORD, not RECORD_WRITING_ROUTINE).

A naming style that clearly distinguishes objects of different classes is good programming practice. In Pascal, for instance, it is traditional to use upper case for user-defined types, mixed case for modules, and lower case for variables, giving the following appearance:
• procedure Count_Chars; {a module}
• type SMALL_INT = 1..255; {a type}
• var count: SMALL_INT; {a variable}

Other styles may be appropriate in other languages. The naming style should be consistent throughout a project. It is wise to avoid styles that might confuse maintenance programmers accustomed to standard industrial practices.

5.6.2.1.1 DDD/Part 2/5.n.1 Type

The component type should be defined by stating its logical and physical characteristics. The logical characteristics should be defined by stating the package, library or class that the component belongs to. The physical characteristics should be defined by stating the type of component, using the implementation terminology (e.g. task, subroutine, subprogram, package, file).

The contents of some component-description sections depend on the component type. This guide uses the categories 'executable' (i.e. contains computer instructions) and 'non-executable' (i.e. contains only data).

5.6.2.1.2 DDD/Part 2/5.n.2 Purpose

The purpose of a component should be defined by tracing it to the software requirements that it implements.

Backward traceability depends upon each component description explicitly referencing the requirements that justify its existence.


5.6.2.1.3 DDD/Part 2/5.n.3 Function

The function of a component must be defined in the DDD. This should be done by stating what the component does.

The function description depends upon the component type. It may therefore be a description of the:
• process;
• information stored or transmitted.

Process descriptions may use such techniques as Structured English, Precondition-Postcondition specifications and State-Transition Diagrams.

5.6.2.1.4 DDD/Part 2/5.n.4 Subordinates

The subordinates of a component should be defined by listing the immediate children. The subordinates of a program are the subroutines that are 'called by' it. The subordinates of a database could be the files that 'compose' it. The subordinates of an object are the objects that are 'used by' it.

5.6.2.1.5 DDD/Part 2/5.n.5 Dependencies

The dependencies of a component should be defined by listing the constraints placed upon its use by other components. For example:
• 'what operations must have taken place before this component is called?'
• 'what operations are excluded while this operation is taking place?'
• 'what operations have to be carried out after this one?'.

5.6.2.1.6 DDD/Part 2/5.n.6 Interfaces

Both the control flow and data flow aspects of each interface need to be specified in the DDD for each 'executable' component. Data aspects of 'non-executable' components should be defined in subsection n.10 (Data).

The control flow to and from a component should be defined in terms of how execution of the component is to be started (e.g. subroutine call) and how it is to be terminated (e.g. return). This may be implicit in the definition of the type of component, and a description may not be necessary. Control flows may also take place during execution (e.g. interrupt) and these should be defined, if they exist.

The data flow input to and output from each component must be detailed in the DDD. Data structures should be identified that:
• are associated with the control flow (e.g. call argument list);
• interface components through common data areas and files.

One component's input may be another's output, and to avoid duplication of interface definitions, specific data components should be defined and described separately (e.g. files, messages). The interface definition should only identify the data component and not define its contents.

The interfaces of a component should be defined by explaining 'how' the component interacts with the components that use it. This can be done by describing the mechanisms for:
• invoking or interrupting the component's execution;
• communicating through parameters, common data areas or messages.

If a component interfaces to components in the same system, the interface description should be defined in the DDD (if not already in the ADD). If a component interfaces to components in other systems, the interface description should be defined in an Interface Control Document (ICD).
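For example, the interface of an 'executable' component might be summarised in its DDD section along the following lines (C notation; all names are invented):

  struct record {
      int  key;
      char name[32];
  };

  /* Control flow: started by subroutine call; terminated by return.
   * Data flow   : key is input; *out is output.
   * Returns     : 0 on success, -1 if no record matches key.
   */
  int read_record(int key, struct record *out);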

5.6.2.1.7 DDD/Part 2/5.n.7 Resources

The resources a component requires should be defined by itemising what the component needs from its environment to perform its function. Items that are part of the component interface are excluded. Examples of resources that might be needed by a component are displays, printers and buffers.

5.6.2.1.8 DDD/Part 2/5.n.8 References

Explicit references should be inserted where a component description uses or implies material from another document.


5.6.2.1.9 DDD/Part 2/5.n.9 Processing

The DDD should describe in detail how processing is carried out. Algorithms that are fully described elsewhere may be summarised briefly, provided their sources are properly referenced.

The processing a component needs to do should be defined by defining the control and data flow within it. For some kinds of component (e.g. files) there is no such flow. In practice it is often difficult to separate the description of function from the description of processing. A detailed description of function can therefore compensate for a lack of detail in the specification of the processing. Techniques of process specification more oriented towards software design are Program Design Language, pseudo-code and flow charts.

Software constraints may specify that the processing be performed using a particular algorithm (which should be stated or referenced).

5.6.2.1.10 DDD/Part 2/5.n.10 Data

The data internal to a component should be defined. The amount of detail required depends strongly on the type of component. The logical and physical data structures of files that interface components should have been defined in the DDD (files and data structures that interface major components will have been defined in the ADD). The data structures internal to a program or subroutine should also be specified (contrast the ADD, where this is omitted).

Data structure definitions must include the:
• description of each element (e.g. name, type, dimension);
• relationships between the elements (i.e. the structure);
• range of possible values of each element;
• initial values of each element.
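A minimal sketch of such a definition, in C and with invented names and values, covering the four items above in comments:

  /* One entry of the internal record store.
   * Structure: 'store' is a fixed-size array of struct record,
   * ordered by ascending key.
   */
  struct record {
      int  key;        /* integer; range 1..9999; initial value 0 */
      char name[32];   /* character string; initially empty ("")  */
  };

  static struct record store[100];   /* initial values: all zeros */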

5.6.3 DDD/Appendix A: Source code listings

This section must contain either:
• listings of the source code, or
• a configuration item list identifying where the source code can be found (DD12).


5.6.4 DDD/Appendix B: Software requirements traceability matrix

This section should contain a table that summarises how each software requirement has been met in the DDD (DD16). The tabular format permits one-to-one and one-to-many relationships to be shown. A template is provided in Appendix D.


CHAPTER 6 THE SOFTWARE USER MANUAL

6.1 INTRODUCTION

The purpose of the Software User Manual (SUM) is to describe how to use the software. It should both educate users about what the software does, and instruct them in how to do it. The SUM should combine tutorials and reference information to meet the needs of both novices and experts. A Software User Manual (SUM) must be an output of the DD phase (DD17).

The rules for the style and content of the Software User Manual are based on ANSI/IEEE Std 1063-1987, 'Software User Documentation'.

6.2 STYLE

The author of the SUM needs to be aware of the basic rules of clear writing:
• keep sentences short (e.g. 20 words or less);
• avoid using long words (e.g. 10 letters or more);
• keep paragraphs short (e.g. 10 sentences or less);
• avoid using the passive voice;
• use correct grammar;
• make each paragraph express one point;
• tell the reader what you are going to say, say it, and then tell the reader what you have said.

A grammar checker can help the author obey the first five rules.

The concepts of clarity, consistency and modifiability apply to the SUM just as much as to the DDD (see Section 5.2). The rest of this section presents some other considerations that should influence the style of the SUM.

The SUM should reflect the characteristics of the users. Different types of users, e.g. end users and operators, will have different needs. Further, the user's view may change from that of a trainee to that of an expert. The authors of the SUM should study the characteristics of the users. The URD normally contains a summary.

The SUM should reflect the characteristics of the interface between the system and the user. Embedded software for controlling an instrument on board a satellite might have no interface to humans. At the opposite extreme, a data reduction package might require a large amount of user interaction, and the SUM might be absolutely critical to the success of the package.

6.2.1 Matching the style to the user characteristics

Different viewpoints that may need to be addressed are:
• the end user's view;
• the operator's view;
• the trainee's view.

An individual user may hold more than one view.

6.2.1.1 The end user's view

The end user's view takes in every aspect of the use of the system. The SUM should describe the purpose of the system and should reference the URD.

The SUM should contain an overview of the process to be supported, perhaps making reference to technical information, such as the underlying physics and algorithms employed. Information should be presented from the end user's view, not the developer's. Algorithms should be represented in mathematical form, not in a programming language, for example. Descriptions should not stray into the workings of the system; they are in the SUM only to help the end user understand the intention of the system.

The end user's view is an external view, unconcerned with details of the implementation. Accordingly, the SUM should give an external view of the system, with examples of the input required from end users, and the results that would occur (e.g. output). The SUM should place the system in its working context.


6.2.1.2 The operator's view

The operator's view focuses on successfully controlling the software, whether in normal conditions, or when recovering from errors.

From the point of view of the operator, the SUM should contain clear instructions about each task that has to be performed. In particular:
• instructions should be easy to find;
• instructions should be easy to understand.

Ease of location and understanding are important in emergencies. Instructions will be easy to find and understand if a regular structure is adopted for the description of each task.

Writers should ensure that wrong actions are not accidentally selected, and that correct or undo actions are possible if the operator makes a mistake. The SUM should document such corrective actions with examples.

Any irreversible actions must be clearly distinguished in the SUM (and in the system). Typically, the operator will be asked to confirm that an irreversible action is to be selected. All confirmation procedures must be explained with examples in the SUM.

6.2.1.3 The trainee's view

A particularly important view is that of the trainee. There are always users who are not familiar with a system, and they are especially likely to need the support of a well-written guide or tutorial. Since the first experience with a system is often decisive, it is just as vital to write a helpful tutorial for the trainee.

Every SUM should contain a tutorial section, which should provide a welcoming introduction to the software (e.g. 'Getting Started').

6.2.2 Matching the style to the HCI characteristics

The SUM should reflect the Human Computer Interaction (HCI) characteristics of the software. This section presents some guidelines for:
• command-driven systems;
• menu-driven systems;
• GUI, WIMP and WYSIWYG systems;
• natural language and speech interface systems;
• embedded systems.

6.2.2.1 Command-driven systems

Command-driven systems require the user to type in commands. The keyboard input is echoed on a screen. Command-driven systems are simple to implement, and are often preferred by expert users.

Command-driven systems can take some time to learn, and the SUM has a critical role to play in educating the users of the software. A well thought-out SUM can compensate for a difficult user interface.

The alphanumeric nature of the interaction can be directly represented in the SUM. For example:

Commentary and instructions,

> user input

system output

should be clearly distinguished from each other.

6.2.2.2 Menu-driven systems

Traditional menu-driven systems present the user with a fixed hierarchy of menus. The user starts at the top of the tree and moves up and down the tree. The structure of the menus normally follows the natural sequence of operations (e.g. OPEN the file first, then EDIT it). Often the left-most or the top-most item on the menu is what is usually done first. Sometimes this temporal logic is abandoned in favour of ergonomic considerations (e.g. putting the most frequently used command first).

The SUM should describe the standard paths for traversing the menus. The structure of the SUM should follow the natural sequence of operations, which is normally the menu structure. An index should be provided to give alphabetical access.

6.2.2.3 Graphical User Interface systems

Graphical User Interfaces (GUI) include Windows, Icons, Mouse and Pointer (WIMP) and What You See Is What You Get (WYSIWYG) style interfaces. An aim of their inventors was to make the operation of a system so simple and intuitive that the reading of a user manual is unnecessary. Unfortunately, making information about the system available only through the system can be very restrictive, and may not support the mode of inquiry of the user. SUMs separate from the software are always required, no matter how sophisticated the interface.

SUMs for a GUI should not assume basic GUI skills. They should instruct the user how to operate the interface and should describe what the user looks at and 'feels'. Pictures and diagrams of the behaviour should be given, so that the reader is left in no doubt about what the intended behaviour is. This is especially important when the user does not have access to the system while reading the tutorial.

6.2.2.4 Natural-language and speech interface systems

Natural-language and speech interfaces are coming into use. Unlike menu-driven, WIMP, WYSIWYG and GUI systems, the options are not normally defined on a screen. There may be many options.

For natural language and speech interface systems, the SUM should provide:
• a full list of facilities, explaining their purposes and use;
• examples showing how the facilities relate to each other;
• a list of useful command verbs and auxiliary words for each facility;
• a clear statement of the types of sentence that are NOT recognised.

6.2.2.5 Embedded systems

Software that is embedded within a system may not require any human interaction at the software level. Nevertheless a SUM must be supplied with the software, providing at least:
• an overview;
• error messages;
• recovery procedures.

6.3 EVOLUTION

The development of the SUM should start as early as possible. Establishing the potential readership for the SUM should be the first step.


This information is critical for establishing the style of the document. Useful information may be found in the section 'User Characteristics' of the URD.

The Software Configuration Management Plan defines a formal change process to identify, control, track and report projected changes when they are first identified. Approved changes to the SUM must be recorded by inserting document change records and a document status sheet at the start of the document.

The SUM is an integral part of the software. The SUM and the rest of the software must evolve in step. The completeness and accuracy of the SUM should be reviewed whenever a new release of the software is made.

6.4 RESPONSIBILITY

Whoever writes the SUM, the responsibility for it must be the developer's. The developer should nominate people with proven authorship skills to write the SUM.

6.5 MEDIUM

Traditionally, manuals have been printed as books. There are substantial advantages to be gained from issuing manuals in machine-readable form. Implementers should consider whether their system would benefit from such treatment. Two possibilities are:
• online help;
• hypertext.

There should be specific requirements in the URD and SRD for online help and hypertext, since their implementation can absorb both computer resources (e.g. storage) and development effort.

6.5.1 Online help

Information in the SUM may be used to construct an 'online help' system. Online help has the advantage of always being available with the system, whereas paper manuals may be mislaid. Three possibilities, in order of increasing desirability, are:
• a standalone help system, not accessible from inside the application;
• an integrated help system, accessible from within the application;
• an integrated, context-sensitive help system, that gives the help information about the task being carried out.

In the last option, the user should be able to continue the search for help outside the current context (e.g. to get background information).

6.5.2 Hypertext

A hypertext is a structure of pages of text or graphics (and sometimes of other media). Pages are linked by keywords or phrases. For example, the word 'file' might give access to a page containing the words 'store', 'search', and 'retrieve'; in turn, each of these might give access to a page explaining a command within the documented system. Keywords may be displayed as text (usually highlighted), or may be indicated by icons or regions of a diagram which respond to a pointing device (such as a mouse).

Hypertext readily accommodates the hierarchical structure of most programs, but it can equally represent a network of topics linked by arbitrary connections.

This gives hypertext the advantage of a richer pattern of access than printed text, which is inherently linear (despite hypertext-like aids, such as indexes).

Hypertext is also good for guided tours, which simulate the operation of the program while pointing out interesting features and setting exercises.

The quality of graphics within a hypertext must enable tables, charts, graphs and diagrams to be read quickly and without strain. Older terminals and personal computers may not be suitable for this purpose. Modern bit-mapped workstations, and personal computers with higher-resolution graphics cards, rarely cause legibility problems.

Stand-alone hypertext has the advantage of being simple and safe; it does not increase the complexity or risk of the system software, because it is not connected directly to it.

A disadvantage of stand-alone hypertext is that it is not automatically context-sensitive. On a multitasking operating system, it can be made context-sensitive by issuing a message from the (documented) system to the hypertext mechanism, naming the entry point (for example, 'file names' or 'data display').

Reliable proprietary hypertext packages are available on many processors; alternatively, simple hypertext mechanisms can be implemented by means of a database, a specialised tool, or if necessary a programming language.

6.6 CONTENT

The recommended table of contents for a SUM is given below. If it is felt necessary to depart from this structure, implementers should justify any differences briefly, and if possible preserve the standard section numbering for easy reference.

Service Information:
a - Abstract
b - Table of Contents
c - Document Status Sheet
d - Document Change Records made since last issue.

1 INTRODUCTION
1.1 Intended readership
1.2 Applicability statement
1.3 Purpose
1.4 How to use this document
1.5 Related documents (including applicable documents)
1.6 Conventions
1.7 Problem reporting instructions

2 [OVERVIEW SECTION]

(The section ought to give the user a general understanding of what parts of the software provide the capabilities needed)

3 [INSTRUCTION SECTION]

(For each operation, provide:

(a) Functional description
(b) Cautions and warnings
(c) Procedures, including:
    - Set-up and initialisation
    - Input operations
    - What results to expect
(d) Probable errors and possible causes)

4 [REFERENCE SECTION]

(Describe each operation, including:

(a) Functional description
(b) Cautions and warnings
(c) Formal description, including as appropriate:
    - required parameters
    - optional parameters
    - default options
    - order and syntax
(d) Examples
(e) Possible error messages and causes
(f) Cross references to other operations)

Appendix A Error messages and recovery procedures
Appendix B Glossary
Appendix C Index (for manuals of 40 pages or more)

In the instruction section of the SUM, material is ordered according to the learning path, with the simplest, most necessary operations appearing first and more advanced, complicated operations appearing later. The size of this section depends on the intended readership. Some users may understand the software after a few examples (and can switch to using the reference section) while other users may require many worked examples.

The reference section of the SUM presents the basic operations, ordered for easy reference (e.g. alphabetically). Reference documentation should be more formal, rigorous and exhaustive than the instructional section. For example, a command may be described in the instruction section in concrete terms, with a specific worked example. The description in the reference section should describe all the parameters, qualifiers and keywords, with several examples.

The overview, instruction, and reference sections should be given appropriate names by the SUM authors. For example, an orbit modelling system might have sections separately bound and titled:
• Orbit Modelling System Overview;
• Orbit Modelling System Tutorial;
• Orbit Modelling System Reference Manual.

At lower levels within the SUM, authors are free to define a structure suited to the readership and subject matter. Particular attention should be paid to:
• ordering subjects (e.g. commands, procedures);
• providing visual aids.

'Alphabetical' ordering of subjects acts as a built-in index and speeds access. It has the disadvantage of separating related items, such as 'INSERT' and 'RETRIEVE'. Implementers should consider which method of structuring is most appropriate to an individual document. Where a body of information can be accessed in several ways, it should be indexed and cross-referenced to facilitate the major access methods.

Another possible method is 'procedural' or 'step-by-step' ordering, in which subjects are described in the order in which the user will execute them. This style is appropriate for the instruction section.

Visual aids in the form of diagrams, graphs, charts, tables, screen dumps and photographs should be given wherever they materially assist the user. These may illustrate the software or hardware, procedures to be followed by the user, data, or the structure of the SUM itself.

Implementers may also include illustrations for other purposes. For example, instructional material may contain cartoons to facilitate understanding or to maintain interest.

Icons may be used to mark differing types of section, such as definitions and procedures. All icons should be explained at the start of the SUM.

6.6.1 SUM\Table of Contents

A table of contents is an important help and should be provided in every SUM. Short manuals may have a simple list of sections. Manuals over 40 pages should provide a fuller list, down to at least the third level of subsections (1.2.3, etc.). Where the SUM is issued in several volumes, the first volume should contain a simple table of contents for the entire SUM, and each volume should contain its own table of contents.

There is no fixed style for tables of contents, but they should reference page as well as section numbers. Section headings should be quoted exactly as they appear in the body of the manual. Subsections may be indented to group them visually under their containing section.

A convenient style is to link the title to the page number with a leader string of dots:

1.2.3 Section Title ...................................................................................... 29

Lists of figures and tables should be provided in manuals with many of them (e.g. more than twenty).

6.6.2 SUM\1 Introduction

6.6.2.1 SUM\1.1 Intended readership

This section should define the categories of user (e.g. end user and operator) and, for each category:
• define the level of experience assumed;
• state which sections of the SUM are most relevant to their needs.

6.6.2.2 SUM\1.2 Applicability statement

This section should define the software releases that the issue of the SUM applies to.

6.6.2.3 SUM\1.3 Purpose

This section should define both the purpose of the SUM and the purpose of the software. It should name the process to be supported by the software and the role of the SUM in supporting that process.

6.6.2.4 SUM\1.4 How to use this document

This section should describe what each section of the document contains, its intended use, and the relationship between sections.

6.6.2.5 SUM\1.5 Related documents

This section should list related documents and define the relationship of each document to the others. All document trees that the SUM belongs to should be defined in this section. If the SUM is a multivolume set, each member of the set should be separately identified.


6.6.2.6 SUM\1.6 Conventions

This section should summarise the symbols, stylistic conventions, and command syntax conventions used in the document.

Examples of stylistic conventions are boldface and Courier font to distinguish user input. Examples of syntax conventions are the rules for combining commands, keywords and parameters, or how to operate a WIMP interface.

6.6.2.7 SUM\1.7 Problem Reporting Instructions

This section should summarise the procedures for reporting software problems. ESA PSS-05-0 specifies the problem reporting procedure [Ref. 1, part 2, section 3.2.3.2.2]. The SUM should not refer users to ESA PSS-05-0.

6.6.3 SUM\2 Overview Section

This section should give an overview of the software to all users and summarise:
• the process to be supported by the software;
• the fundamental principles of the process;
• what the software does to support the process;
• what the user needs to supply to the software.

The description of the process to be supported, and of its fundamental principles, may be derived from the URD. It should not use software terminology.

The software should be described from an external 'black box' point of view. The discussion should be limited to the functions, inputs and outputs that the user sees.

The overview can often become much clearer if a good metaphor is used for the system. Some GUIs, for example, use the 'office system' metaphor of desk, filing cabinets, folders, indexes etc. Often a metaphor will have been defined very early in the system development, to help capture the requirements. The same metaphor can be very useful in explaining the system to new users.


6.6.4 SUM\3 Instruction Section

This section should aim to teach new users how to operate the software. The viewpoint is that of the trainee.

The section should begin by taking the user through short and simple sessions. Each session should have a single purpose, such as 'to enable the user to edit a data description record'. The sessions should be designed to accompany actual use of the system, and should contain explicit references to the user's actions and the system's behaviour. The trainee should derive a sense of achievement by controlling a practical session that achieves a result.

Diagrams, plans, tables, drawings, and other illustrations should be used to show what the software is doing, and to enable the novice to form a clear and accurate mental model of the system.

The tutorial may be structured in any convenient style. It is wise to divide it into sessions lasting 30-40 minutes. Longer sessions are difficult to absorb.

A session could have the goal of introducing the user to a set of 'advanced' facilities in a system. It may not be practical or desirable to present a whole session. It may be more appropriate to give a 'tutorial tour'. Even so, it is still helpful to fit all the examples into a single framework (e.g. how to store, search and retrieve).

For each session, this section should provide:

(a) Functional description

A description of what the session is supposed to achieve, in the user's terms.

(b) Cautions and warnings

A list of precautions that may need to be taken; the user is not concerned with all possibilities at this stage, but with gaining a working understanding of some aspect of the system.

(c) Procedures

- set-up and initialisation operations

A description of how to prepare for and start the task;


- input operations

A step-by-step description of what the user must do and a description of the screens or windows that the system shows in response;

- what results to expect

A description of the final results that are expected.

(d) Likely errors and possible causes

An informal description of the major errors that are possible in this task, and how to avoid them. The aim is to give the trainee user the confidence to carry on when problems arise. This section should not simply provide a list of errors, but should set them in their context.

6.6.5 SUM\4 Reference Section

This section should give comprehensive information about all the software capabilities. The viewpoint is that of the operator or expert user.

The section should contain a list of operations ordered for easy access. The order may be alphabetical (e.g. for command-driven systems) or correspond directly to the structure of the user interface (e.g. for menu-driven systems).

In contrast to the informal style of the instruction section, the reference section should be formal, rigorous and exhaustive.

For each operation, this section should provide:

(a) Functional description

A concise description of what the operation achieves.

(b) Cautions and warnings

A list of cautions and warnings that apply to the operation.

(c) Formal description, including as appropriate:
    - required parameters
    - optional parameters
    - defaults
    - syntax & semantics


Describe precisely what the operation does and how it is used. The means of doing this should be decided in advance. Syntax may be defined formally using the Backus-Naur Form (BNF). Semantics may be described by means of tables, diagrams, equations, or formal language.
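For example, the syntax of an invented INSERT command might be defined in a BNF-style notation (square brackets marking an optional part, braces a repeated part):

  <insert-command> ::= INSERT <file-name> [ /KEY = <integer> ]
  <file-name>      ::= <letter> { <letter> | <digit> }

with the semantics, i.e. what the command does to the named file, described in accompanying text or tables.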

(d) Examples

Give one or more worked examples to enable operators to understand the format at a glance. Commands should be illustrated with several short examples, giving the exact form of the most common permutations of parameters and qualifiers in full, and stating what they achieve.

(e) Possible errors and their causes

List all the errors that are possible for this operation and state what causes each one. If error numbers are used, explain what each one means.

(f) References to related operations

Give references to other operations which the operator may need to complete a task, and to logically related operations (e.g. refer to RETRIEVE and DELETE when describing the INSERT operation).

The operations should be described in a convenient order for quick reference: for example, alphabetically, or in functionally related groups. If the section is separately bound, then it should contain its own tables of error messages, glossary, and index; otherwise these should be provided as appendices to the body of the SUM. These appendices are described below.

6.6.6 SUM\Appendix A - Error messages and recovery procedures

This section should list all the error messages. It should not simply repeat the error message: referral to this section means that the user requires help. For each error message, the section should give a diagnosis and suggest recovery procedures. For example:

file ‘TEST.DOC' does not exist.

There is no file called 'TEST.DOC' in the current directory and drive. Check that you are in the correct directory to access this file.

If recovery action is likely to involve loss of data, remind the user about possible backup or archiving procedures.


6.6.7 SUM\Appendix B - Glossary

A glossary should be provided if the manual contains terms that operators are unlikely to know, or that are ambiguous.

Care should be taken to avoid redefining terms that operators usually employ in a different sense in a similar context. References may be provided to fuller definitions of terms, either in the SUM itself or in other sources.

Pairs of terms with opposite meanings, or groups of terms with associated meanings, should be cross-referenced within the glossary. Cross-references may be indicated with a highlight, such as italic type.

List any words used in other than their plain dictionary sense, and define their specialised meanings.

All the acronyms used in the SUM should be listed with brief explanations of what they mean.

6.6.8 SUM\Appendix C - Index

An index helps to make manuals easier to use. Manuals over 40 pages should have an index, containing a systematic list of topics from the user's viewpoint.

Indexes should contain major synonyms and variants, especially if these are well known to users but are not employed in the SUM for technical reasons. Such entries may point to primary index entries.

Index entries should point to topics in the body of the manual by:
• page number;
• section number;
• illustration number;
• primary index entry (one level of reference only).

Index entries can usefully contain auxiliary information, especially cross-references to contrasting or related terms. For example, the entry for INSERT could say 'see also DELETE'.

Indexes can provide the most help to users if attention is drawn primarily to important keywords, and to important locations in the body of the manual. This may be achieved by highlighting such entries, and by grouping minor entries under major headings. Indexes should not contain more than two levels of entry.

If a single index points to different kinds of location, such as pages and illustration numbers, these should be unambiguously distinguished, e.g.
• page 35
• figure 7

as the use of highlighting (35, 7) is not sufficient to prevent confusion in this case.


CHAPTER 7 LIFE CYCLE MANAGEMENT ACTIVITIES

7.1 INTRODUCTION

DD phase activities must be carried out according to the plans defined in the AD phase (DD01). These are:
• Software Project Management Plan for the DD phase (SPMP/DD);
• Software Configuration Management Plan for the DD phase (SCMP/DD);
• Software Verification and Validation Plan for the DD phase (SVVP/DD);
• Software Quality Assurance Plan for the DD phase (SQAP/DD).

Progress against plans should be continuously monitored by project management and documented at regular intervals in progress reports.

Plans for TR phase activities must be drawn up in the DD phase. These plans should cover project management, configuration management, quality assurance and acceptance tests.

7.2 PROJECT MANAGEMENT PLAN FOR THE TR PHASE

By the end of the DD review, the TR phase section of the SPMP (SPMP/TR) must be produced (SPM13). The SPMP/TR describes, in detail, the project activities to be carried out in the TR phase.

Guidance on writing the SPMP/TR is provided in ESA PSS-05-08, Guide to Software Project Management.

7.3 CONFIGURATION MANAGEMENT PLAN FOR THE TR PHASE

During the DD phase, the TR phase section of the SCMP (SCMP/TR) must be produced (SCM48). The SCMP/TR must cover the configuration management procedures for deliverables in the operational environment (SCM49).

Guidance on writing the SCMP/TR is provided in ESA PSS-05-09, Guide to Software Configuration Management.


7.4 QUALITY ASSURANCE PLAN FOR THE TR PHASE

During the DD phase, the TR phase section of the SQAP (SQAP/TR) must be produced (SQA10). The SQAP/TR must describe, in detail, the quality assurance activities to be carried out in the TR phase until final acceptance in the OM phase (SQA11).

SQA activities include monitoring the following:
• management;
• documentation;
• standards, practices, conventions, and metrics;
• reviews and audits;
• testing activities;
• problem reporting and corrective action;
• tools, techniques and methods;
• code and media control;
• supplier control;
• record collection, maintenance and retention;
• training;
• risk management.

Guidance on writing the SQAP/TR is provided in ESA PSS-05-11, Guide to Software Quality Assurance.

The SQAP/TR should take account of all the software requirements related to quality, in particular:
• quality requirements;
• reliability requirements;
• maintainability requirements;
• safety requirements;
• verification requirements;
• acceptance-testing requirements.


The level of monitoring planned for the TR phase should be appropriate to the requirements and the criticality of the software. Risk analysis should be used to target areas for detailed scrutiny.

7.5 ACCEPTANCE TEST SPECIFICATION

The developer must construct an acceptance test specification in the DD phase and document it in the SVVP (SVV17). The specification should be based on the acceptance test plan produced in the UR phase. This specification should define the acceptance test:
• designs (SVV19);
• cases (SVV20);
• procedures (SVV21).

Guidance on writing the SVVP/AT is provided in ESA PSS-05-10, Guide to Software Verification and Validation.


APPENDIX A
GLOSSARY

Terms used in this document are consistent with ESA PSS-05-0 [Ref. 1] and ANSI/IEEE Std 610.12-1990 [Ref. 2].

A.1 LIST OF ACRONYMS

AD Architectural Design
ADD Architectural Design Document
AD/R Architectural Design Review
ANSI American National Standards Institute
AT Acceptance Test
BSI British Standards Institute
BSSC Board for Software Standardisation and Control
CASE Computer Aided Software Engineering
DD Detailed Design and production
DDD Detailed Design Document
DD/R Detailed Design and production Review
ESA European Space Agency
HCI Human-Computer Interaction
IEEE Institute of Electrical and Electronics Engineers
ISO International Standards Organisation
IT Integration Test
ICD Interface Control Document
JSD Jackson System Development
JSP Jackson Structured Programming
OOA Object-Oriented Analysis
OOD Object-Oriented Design
PA Product Assurance
PDL Program Design Language
PERT Program Evaluation and Review Technique
PSS Procedures, Specifications and Standards
RID Review Item Discrepancy
SADT Structured Analysis and Design Technique
SCM Software Configuration Management
SCMP Software Configuration Management Plan
SCR Software Change Request
SPM Software Project Management
SPMP Software Project Management Plan
SPR Software Problem Report


SQA Software Quality Assurance
SQAP Software Quality Assurance Plan
SRD Software Requirements Document
ST System Test
SUM Software User Manual
SVVP Software Verification and Validation Plan
TBC To Be Confirmed
TBD To Be Defined
TR Transfer
URD User Requirements Document
UT Unit Test


APPENDIX B
REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0 Issue 2, February 1991.

2. IEEE Standard Glossary of Software Engineering Terminology, ANSI/IEEE Std 610.12-1990.

3. The STARTs Guide - a guide to methods and software tools for the construction of large real-time systems, NCC Publications, 1987.

4. IEEE Standard for Software Reviews and Audits, IEEE Std 1028-1988.

5. IEEE Recommended Practice for Software Design Descriptions, ANSI/IEEE Std 1016-1987.

6. Principles of Program Design, M. Jackson, Academic Press, 1975.

7. Software Engineering with Ada, second edition, G. Booch, Benjamin/Cummings Publishing Company Inc, 1986.

8. Petri Nets, J.L. Peterson, in ACM Computing Surveys, 9(3), September 1977.

9. IEEE Recommended Practice for Ada as a Program Design Language, ANSI/IEEE Std 990-1987.

10. Structured Programming, O.-J. Dahl, E.W. Dijkstra and C.A.R. Hoare, Academic Press, 1972.

11. A Complexity Measure, T.J. McCabe, IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, December 1976.

12. Structured Development for Real-Time Systems, P.T. Ward and S.J. Mellor, Yourdon Press, 1985 (three volumes).

13. Computer Languages, N.S. Baron, Penguin Books, 1986.

14. Software Reliability, G.J. Myers, Wiley, 1976.

15. Object-Oriented Design, P. Coad and E. Yourdon, Prentice-Hall, 1991.


APPENDIX C
MANDATORY PRACTICES

This appendix is repeated from ESA PSS-05-0 appendix D.5.

DD01 DD phase activities shall be carried out according to the plans defined in the AD phase.

The detailed design and production of software shall be based on the following three principles:
DD02 • top-down decomposition;
DD03 • structured programming;
DD04 • concurrent production and documentation.

DD05 The integration process shall be controlled by the software configuration management procedures defined in the SCMP.
DD06 Before a module can be accepted, every statement in a module shall be executed successfully at least once.
DD07 Integration testing shall check that all the data exchanged across an interface agrees with the data structure specifications in the ADD.
DD08 Integration testing shall confirm that the control flows defined in the ADD have been implemented.
DD09 System testing shall verify compliance with system objectives, as stated in the SRD.
DD10 When the design of a major component is finished, a critical design review shall be convened to certify its readiness for implementation.
DD11 After production, the DD Review (DD/R) shall consider the results of the verification activities and decide whether to transfer the software.
DD12 All deliverable code shall be identified in a configuration item list.
DD13 The DDD shall be an output of the DD phase.
DD14 Part 2 of the DDD shall have the same structure and identification scheme as the code itself, with a 1:1 correspondence between sections of the documentation and the software components.
DD15 The DDD shall be complete, accounting for all the software requirements in the SRD.
DD16 A table cross-referencing software requirements to the detailed design components shall be placed in the DDD.

DD17 A Software User Manual (SUM) shall be an output of the DD phase.


APPENDIX D REQUIREMENTS TRACEABILITY MATRIX

REQUIREMENTS TRACEABILITY MATRIX: DDD TRACED TO SRD
DATE: <YY-MM-DD>
PAGE 1 OF <nn>
PROJECT: <TITLE OF PROJECT>

SRD IDENTIFIER | DDD IDENTIFIER | SOFTWARE REQUIREMENT


APPENDIX E
INDEX

acceptance tests, 21; Ada, 9, 37; ANSI/IEEE Std 1016-1987, 56; ANSI/IEEE Std 1063-1987, 67; assertions, 18; audits, 86
backtracking, 33, 39; backward chaining, 33; baseline, 44; black box testing, 19; block structuring, 31; bottom-up approach, 42; bottom-up testing, 21
C, 36; C++, 38; CASE tools, 43; change control, 44, 55; COBOL, 35; complexity, 19; configuration management, 43; cyclomatic complexity, 47
data dictionary, 43; DD/R, 1; DD01, 4, 85; DD02, 5; DD03, 14; DD04, 14; DD05, 21; DD06, 20; DD07, 21; DD08, 21; DD09, 21; DD10, 12; DD11, 23; DD12, 21, 65; DD13, 53; DD14, 53, 56; DD15, 53, 56; DD16, 66; DD17, 67; DDD, 3; debuggers, 20; declarative structuring, 32; defects, 22; diagnostic code, 20; driver modules, 42; dynamic analysers, 20; dynamic binding, 32
embedded systems, 22
flowchart, 26; formal review, 23; FORTRAN, 34; forward chaining, 33
header, 16; Horn Clause logic, 39; hypertext, 73
ICD, 64; inheritance, 32; integration testing, 21
Jackson Structured Programming, 29
LISP, 38
media control, 86; messages, 32; methods, 25, 86; metrics, 86; ML, 39; Modula-2, 37; module, 20
online help, 72
Pascal, 36; polymorphism, 32; precompiler, 44; problem reporting, 86; Program Design Language, 28; progress reports, 4, 85; Prolog, 39; prototyping, 11; pseudo-code, 7, 12, 14, 26, 29
quality assurance, 85
recursion, 31; repository, 43; reviews, 86; RID, 23; risk analysis, 87; risk management, 86
SCM06, 61; SCM15, 15; SCM16, 15; SCM17, 15; SCM18, 15; SCM48, 85; SCM49, 85; SCMP/DD, 85; SCMP/TR, 3, 22, 85; simulators, 22; Smalltalk, 38; SPM13, 85; SPMP/DD, 14, 85; SPMP/TR, 3, 22, 85; SQA, 86; SQA10, 86; SQA11, 86; SQAP/DD, 85; SQAP/TR, 3, 22; stepwise refinement, 26; strong typing, 31; stub modules, 42; SUM, 3, 67; SVV17, 87; SVV19, 14, 87; SVV20, 14, 87; SVV21, 14, 87; SVVP/AT, 3; SVVP/DD, 85; SVVP/IT, 21; SVVP/ST, 22; SVVP/UT, 20; System testing, 21
techniques, 86; test coverage, 50; testing activities, 86; tools, 86; top-down approach, 42; top-down testing, 21; TR phase, 85; traceability, 43; training, 86
unit test, 19
white box testing, 19; word processor, 51


ESA PSS-05-06 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to the software transfer phase

Prepared by:
ESA Board for Software Standardisation and Control (BSSC)

Approved by:
The Inspector General, ESA

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: ESA PSS-05-06 Guide to the Software Transfer Phase

2. ISSUE 3. REVISION 4. DATE 5. REASON FOR CHANGE

1 0 1994 First issue

1 1 1995 Minor updates for publication

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved, 15th June 1995
Telematics Supervisory Board

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division, ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E1
ISSN 0379-4059

Copyright © 1995 by European Space Agency


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION 1
  1.1 PURPOSE 1
  1.2 OVERVIEW 1

CHAPTER 2 THE TRANSFER PHASE 3
  2.1 INTRODUCTION 3
  2.2 BUILD AND INSTALL SOFTWARE 5
  2.3 RUN ACCEPTANCE TESTS 6
    2.3.1 The Acceptance Test Specification 7
    2.3.2 Participation in Acceptance Tests 8
    2.3.3 Acceptance Test Tools 8
    2.3.4 Acceptance Test Pass and Fail Criteria 9
    2.3.5 Acceptance Test Suspension and Resumption Criteria 9
  2.4 WRITE SOFTWARE TRANSFER DOCUMENT 9
  2.5 PROVISIONAL ACCEPTANCE 10

CHAPTER 3 THE SOFTWARE TRANSFER DOCUMENT 13
  3.1 INTRODUCTION 13
  3.2 STYLE 13
  3.3 EVOLUTION 13
  3.4 RESPONSIBILITY 14
  3.5 MEDIUM 14
  3.6 CONTENT 14

APPENDIX A GLOSSARY A-1
APPENDIX B REFERENCES B-1
APPENDIX C MANDATORY PRACTICES C-1
APPENDIX D INDEX D-1


PREFACE

This document is one of a series of guides to software engineering produced by the Board for Software Standardisation and Control (BSSC), of the European Space Agency. The guides contain advisory material for software developers conforming to ESA's Software Engineering Standards, ESA PSS-05-0. They have been compiled from discussions with software engineers, research of the software engineering literature, and experience gained from the application of the Software Engineering Standards in projects.

Levels one and two of the document tree at the time of writing are shown in Figure 1. This guide, identified by the shaded box, provides guidance about implementing the mandatory requirements for the software Transfer Phase described in the top level document ESA PSS-05-0.

[Figure 1: ESA PSS-05-0 document tree. Level 1: ESA PSS-05-0, ESA Software Engineering Standards. Level 2: PSS-05-01 Guide to the Software Engineering Standards; PSS-05-02 UR Guide (Guide to the User Requirements Definition Phase); PSS-05-03 SR Guide; PSS-05-04 AD Guide; PSS-05-05 DD Guide; PSS-05-06 TR Guide (this guide, shown shaded); PSS-05-07 OM Guide; PSS-05-08 SPM Guide (Guide to Software Project Management); PSS-05-09 SCM Guide; PSS-05-10 SVV Guide; PSS-05-11 SQA Guide.]

The Guide to the Software Engineering Standards, ESA PSS-05-01, contains further information about the document tree. The interested reader should consult this guide for current information about the ESA PSS-05-0 standards and guides.

The following past and present BSSC members have contributed to the production of this guide: Carlo Mazza (chairman), Gianfranco Alvisi, Michael Jones, Bryan Melton, Daniel de Pablo and Adriaan Scheffer.


The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and Guides, and all those software engineers in ESA and Industry who have made contributions.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat              BSSC/ESTEC Secretariat
Attention of Mr M Jones            Attention of Mr U Mortensen
ESOC                               ESTEC
Robert Bosch Strasse 5             Postbus 299
D-64293 Darmstadt                  NL-2200 AG Noordwijk
Germany                            The Netherlands


CHAPTER 1
INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA) [Ref 1].

ESA PSS-05-0 defines the third phase of the software development life cycle as the 'Detailed Design and Production Phase' (DD phase). The outputs of this phase are the code, the Detailed Design Document (DDD) and the Software User Manual (SUM). The fourth and last phase of the software development life cycle is the 'Transfer' (TR) phase.

The Transfer Phase can be called the 'handover phase' of the life cycle because the developers release the software to the users. The software is installed on the target computer system and acceptance tests are run to validate it. Activities and products are reviewed by the Software Review Board (SRB) at the end of the phase.

This document provides guidance on how to transfer the software and write the Software Transfer Document (STD). This document should be read by everyone who participates in the Transfer Phase, such as the software project manager, software engineers, software quality assurance staff, maintenance staff and users.

1.2 OVERVIEW

Chapter 2 discusses the Transfer Phase. Chapter 3 describes how to write the Software Transfer Document.

All the mandatory practices in ESA PSS-05-0 relevant to the software Transfer Phase are repeated in this document. The identifier of the practice is added in parentheses to mark a repetition. This document contains no new mandatory practices.


CHAPTER 2
THE TRANSFER PHASE

2.1 INTRODUCTION

Software transfer is the process of handing over the software to the users. This process requires the participation of both developers and users. It should be controlled and monitored by a test manager, who represents the users.

The Transfer Phase starts when the DD Phase Review Board has decided that the software is ready for transfer. The phase ends when the software is provisionally accepted.

The software should have been system tested in an environment representative of the target environment before the DD phase review. These system tests should include rehearsals of the acceptance tests. Such rehearsals are often referred to as 'Factory Acceptance Tests' (FATs).

The primary inputs to the phase are the code, Detailed Design Document (DDD) and the Software User Manual (SUM). Four plans are also required:
• Software Project Management Plan for the Transfer Phase (SPMP/TR);
• Software Configuration Management Plan for the Transfer Phase (SCMP/TR);
• Software Verification and Validation Plan, Acceptance Test section (SVVP/AT);
• Software Quality Assurance Plan for the Transfer Phase (SQAP/TR).

The Transfer Phase must be carried out according to these plans (TR03). Figure 2.1 overleaf shows the information flows into and out of the phase, and between each activity.

The first activity of the Transfer Phase is to build and install the software in the target environment. The build procedures should be described in the DDD and the installation procedures should be described in the SUM. Tool support should be provided for build and installation.

The users then run the acceptance tests, as defined by the SVVP/AT. Some members of the development team should observe the tests. Test reports are completed for each execution of a test procedure. Software Problem Reports (SPRs) are completed for each problem that occurs. Tools may be employed for running the acceptance tests.

[Figure 2.1: Transfer Phase activities. The figure shows the flow from the inputs (code, DDD, SUM, SPMP/TR, SCMP/TR, SVVP/AT, SQAP/TR) through the activities 'build and install software' (supported by installation and build tools, producing the install and build report and checked CI list), 'run acceptance tests' (supported by test tools, producing test reports and SPRs), 'control changes' (see SCM guide section 2.4.3; producing SCRs, SMRs, and changed and controlled configuration items) and 'write STD', to the SRB's provisional acceptance decision and the statement of provisional acceptance.]

Problems that arise during the Transfer Phase may have to be solved before the software can be provisionally accepted. This requires the use of the change control process defined in the SCMP/TR. Software Change Requests (SCRs), Software Modification Reports (SMRs), and new versions of configuration items are produced by this process. Changed configuration items are included in a rebuild of the software. The software is then reinstalled and the acceptance tests are rerun.

When problems occur, the sequence of activities of build and install software, run acceptance tests and control changes must be repeated. Software project managers should reserve enough resources to ensure that the cost of repeating these activities can be contained within the project budget.

When the acceptance tests have been completed the developer writes the Software Transfer Document (STD). This document should summarise the build and installation, acceptance test and change control activities.


When the STD is ready, the Software Review Board (SRB) meets to review the installation and acceptance test reports and any outstanding problems with the software. If the SRB is satisfied with the results, and sure that no outstanding problems are severe enough to prevent safe operation, it recommends that the initiator provisionally accepts the software. The initiator then issues a statement of provisional acceptance.

The following sections discuss in more detail the activities of software build and installation, acceptance testing, writing the STD and provisional acceptance. The change control process is discussed in Sections 2.3.3 and 2.4.3 of ESA PSS-05-09, 'Guide to Software Configuration Management' [Ref 3].

2.2 BUILD AND INSTALL SOFTWARE

Building and installing the software consists of checking the deliverables, compiling all the source files, linking the object files, and copying the resulting executable files into the target environment for acceptance tests.

The deliverables should be listed in a 'configuration item list'. A configuration item may be any type of software or hardware element, such as a document, a source file, an object file, an executable file, a disk, or a tape. The first action upon delivery is to compare the deliverables with the configuration item list. This is done by:
• checking that all the necessary media have been supplied;
• checking that the media contain the necessary files.
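Such a comparison is easily automated. The following is a minimal sketch in Python, assuming a hypothetical configuration item list held as a plain text file naming one expected file per line (all file and directory names here are invented for the example):

   import os
   import sys

   def check_delivery(ci_list_path, delivery_dir):
       # Read the expected configuration items, one name per line.
       with open(ci_list_path) as f:
           expected = [line.strip() for line in f if line.strip()]
       # Report every item named in the list but absent from the delivery.
       missing = [name for name in expected
                  if not os.path.exists(os.path.join(delivery_dir, name))]
       for name in missing:
           print('MISSING:', name)
       return len(missing) == 0

   if __name__ == '__main__':
       # Usage: python check_delivery.py ci_list.txt /delivery/media
       sys.exit(0 if check_delivery(sys.argv[1], sys.argv[2]) else 1)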

After the configuration check, the software is copied to where it is to be built for the target environment.

Building the software consists of compiling the source files and linking the resulting object files into executable programs. Two kinds of builds are possible:
• a complete build, in which all sources are compiled;
• a partial build, in which only selected sources are compiled (e.g. those that have changed since the last build).

Partial builds often rely on the use of procedures (e.g. 'make' files) that contain knowledge of the dependencies between files.
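For illustration, a minimal sketch of such a 'make' file for a two-module program (the file and module names are invented for the example). Because each rule records the files its target depends on, the tool recompiles only the targets whose dependencies have changed since the last build:

   # Link the executable from the object files.
   main: main.o parser.o
           cc -o main main.o parser.o

   # Recompile a module only when its source or a shared header changes.
   main.o: main.c defs.h
           cc -c main.c

   parser.o: parser.c defs.h
           cc -c parser.c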


ESA PSS-05-0 requires that the capability of building the system from the components that are directly modifiable by the maintenance team be established (TR04). This means that the Transfer Phase must begin with a complete build of the software for the target environment. A complete build checks that all sources in the system are present, and is part of the process of checking that all the necessary configuration items are present.

The procedures for building the software should be contained in the DDD (1). Tools should be provided for building the software, and the build procedures should describe how to operate them.

The build tools should allow both complete builds and partial builds. The ability to rapidly modify the software during the TR and OM phases may depend upon the ability to perform partial builds.

Installation consists of copying the 'as-built' software system to the target environment and configuring the target environment so that the software can be run as described in the Software User Manual.

The installation procedures should be described in the Software User Manual, or an annex to it.

Installation should be supported by tools that automate the process as much as possible. Instructions issued by the tools should be simple and clear. Complicated installation procedures requiring extensive manual input should be avoided.

Any changes to the existing environment should be minimal and clearly documented. Installation software should prompt the user for permission before making any changes to the system configuration (e.g. modifying the CONFIG.SYS file of a DOS-based PC).
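As an illustration, a minimal sketch of such a prompt, in Python, for a hypothetical installer that needs to append a device driver line to CONFIG.SYS (the file and driver names are invented for the example):

   def confirm(question):
       # Ask for explicit permission; treat anything but 'y' as refusal.
       return input(question + ' [y/n] ').strip().lower() == 'y'

   if confirm('The installer needs to add a line to C:\\CONFIG.SYS. Proceed?'):
       with open('C:\\CONFIG.SYS', 'a') as f:
           f.write('DEVICE=C:\\MYAPP\\DRIVER.SYS\n')
   else:
       print('System configuration left unchanged.')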

Procedures should be provided for deinstallation, so that rollback of the system to the state before installation is possible.

2.3 RUN ACCEPTANCE TESTS

The purpose of the acceptance tests is to show that the software meets the user requirements specified in the URD. This is called 'validation'.

(1) ESA PSS-05-0 Issue 2 omitted this section from part one of the DDD. This section will be added to the DDD template in the next issue of the ESA Software Engineering Standards.


The acceptance tests are performed on the software system installed in the previous stage of the Transfer Phase.

This section addresses the following questions that have to be considered when validating the software:
• where are the acceptance tests specified?
• who should participate?
• what tools are suitable?
• what criteria should be used for passing the tests?
• what criteria should be used for suspending the tests?

2.3.1 The Acceptance Test Specification

The acceptance tests are specified in the acceptance test section of the Software Verification and Validation Plan (SVVP/AT). The SVVP/AT contains five subsections, the first four of which are prepared before the Transfer Phase:
• test plan;
• test designs;
• test cases;
• test procedures;
• test reports.

The test plan is prepared as soon as the User Requirements Document (URD) is completed at the end of the UR phase (SVV11). The test plan outlines the scope, approach, resources and schedule of the acceptance tests.

In the DD phase, tests are designed for the user requirements (SVV19), although some acceptance test design may be done earlier. Test cases and procedures are defined in the DD phase when the detailed design of the software has sufficiently stabilised (SVV20, SVV21).

The acceptance tests necessary for provisional acceptance (see Section 2.5) must be indicated in the SVVP (TR05). Some of the tests (e.g. of reliability) often require an extended period of operations, and can only be completed by final acceptance. Such tests should be clearly identified.


Personnel who carry out the tests should be familiar with the SUM and the operational environment. Instructions in the SUM need not be repeated in the SVVP.

Acceptance test reports are prepared in the Transfer Phase as each acceptance test is performed. They should be added to the SVVP/AT (SVV22). Possible formats include:
• acceptance test report forms recording the date and outcome of the test cases executed by the procedure;
• an execution log file.

Guidance on the preparation of the SVVP/AT is contained in 'Guide to Software Verification and Validation', ESA PSS-05-10 [Ref 4]. These guidelines will be of interest to developers, who have to prepare the SVVP/AT. The following sections deal with acceptance test issues relevant to both users and developers.

2.3.2 Participation in Acceptance Tests

Users may be end users (i.e. a person who utilises the products or services of the system) or operators (i.e. a person who controls and monitors the hardware and software of a system). Both end users and operators must participate in the acceptance tests (TR01). A test manager should be appointed to control and monitor the acceptance tests and represent the users.

Members of the development team and an SQA representative should participate in the acceptance tests.

Acceptance testing is not a substitute for a training programme, which should be scheduled separately.

2.3.3 Acceptance Test Tools

Acceptance test procedures should be automated as far as possible. Tools that may be useful for acceptance testing are:
• test harnesses for running test procedures;
• performance analysers for measuring resource usage;
• comparators for comparing actual results with expected results;
• test management tools for organising the test data and scripts.


Simulators should be used when other parts of the system are not available or would be put at risk.

2.3.4 Acceptance Test Pass and Fail Criteria

The acceptance test plan (SVVP/AT/Plan) defines the test pass and fail criteria. These should be as objective as possible. The criterion that is normally used is 'the software will be judged to have passed an acceptance test if no critical software problems that relate to the test occur during the Transfer Phase'. Projects should define criteria for criticality, for example:
• critical: crash, incorrect result, unsafe behaviour;
• non-critical: typing error, failure to meet a non-essential requirement.

2.3.5 Acceptance Test Suspension and Resumption Criteria

The acceptance test plan defines the test suspension and resumption criteria. The suspension criteria should define the events that cause the abandonment of an acceptance test case, an acceptance test procedure and the acceptance test run. Criteria may, for example, be based upon test duration, percentage of tests failed, or failure of specific tests.

Flexibility in the application of the suspension criteria is usually necessary. For example a single critical software problem might cause a test procedure or even a whole test run to be abandoned. In other cases it may be possible to continue after experiencing problems, although it may not be an efficient use of resources to do so. In some situations, it may be possible to complete the acceptance tests even if there are several problems, because the problems lie within a subsystem that can be tested independently of the rest of the system.

When the problems that have caused the abandonment of an acceptance test procedure have been corrected, the entire test procedure should be repeated. All the acceptance test procedures should normally be repeated if acceptance tests are resumed after the abandonment of a test run. The decision about which acceptance tests are rerun lies with the users, in consultation with the developers.

2.4 WRITE SOFTWARE TRANSFER DOCUMENT

The purpose of the Software Transfer Document (STD) is to present the results of the Transfer Phase in a concise form, so that the Software Review Board can readily decide upon the acceptability of the software. Further information about the STD is contained in Chapter 3.

2.5 PROVISIONAL ACCEPTANCE

When the acceptance tests have been completed, the software is reviewed by the SRB. Inputs to the review are the:
• Software Transfer Document (STD);
• acceptance test results;
• Software Problem Reports made during the Transfer Phase;
• Software Change Requests made during the Transfer Phase;
• Software Modification Reports made during the Transfer Phase.

The Software Review Board (SRB) reviews these inputs and recommends, to the initiator, whether the software can be provisionally accepted or not (TR02).

The responsibilities and constitution of the SRB are defined in Section 2.2.1.3 of ESA PSS-05-09, 'Guide to Software Configuration Management' [Ref 3]. The software review should be a formal technical review. The procedures for a technical review are described in Section 2.3.1 of ESA PSS-05-10, 'Guide to Software Verification and Validation' [Ref 4].

The SRB should confirm that the deliverables are complete and up to date by reviewing the checked configuration item list, which is part of the Software Transfer Document. The SRB should then evaluate the degree of compliance of the software with the user requirements by considering the number and nature of:
• failed test cases;
• test cases not attempted or not completed;
• critical software problems reported;
• critical software problems not solved.

Solution of a problem means that a change request has been approved, the modification has been made, and all tests repeated and passed.

The number of test cases, critical software problems and non-critical software problems should be evaluated for the whole system and for each subsystem. The SRB might, for example, decide that some subsystems are acceptable and others are not.

When the number of problems encountered in acceptance tests is low, the SRB should check the thoroughness of the acceptance tests. Experience has indicated that the existence of faults in delivered software is the rule rather than the exception (typically 1 to 4 defects per 1000 lines of code [Ref 5]).

Where appropriate, the SRB should examine the trends in the number of critical software problems reported and solved. There should be a downward trend in the number of critical problems reported. An upward trend is a cause for concern.

If the SRB decides that the degree of compliance of the software with the user requirements is acceptable, it should recommend to the initiator that the software be provisionally accepted.

The statement of provisional acceptance is produced by the initiator, on behalf of the users, and sent to the developer (TR06). The provisionally accepted software system consists of the outputs of all previous phases and the modifications found necessary in the Transfer Phase (TR07). Before it can be finally accepted, the software must enter the Operations and Maintenance phase and undergo a period of operations to prove its reliability.


CHAPTER 3
THE SOFTWARE TRANSFER DOCUMENT

3.1 INTRODUCTION

ESA PSS-05-0 states that the Software Transfer Document (STD) shall be an output of the Transfer Phase (TR08). The purpose of the STD is to summarise:
• the software that has been delivered;
• the build of the software;
• the installation of the software;
• the acceptance test results;
• the problems that occurred during the Transfer Phase;
• the changes requested during the Transfer Phase;
• the modifications done during the Transfer Phase.

3.2 STYLE

The STD should be concise, clear and consistent.

3.3 EVOLUTION

The developer should prepare the STD after the acceptance tests and before the software review.

If the software is provisionally accepted, the completed STD must be handed over by the developer to the maintenance organisation at the end of the Transfer Phase (TR09).

After the Transfer Phase the STD should not be modified. New releases of the software in the Operations and Maintenance Phase are documented in Software Release Notes (SRNs), rather than in new issues of the STD.


3.4 RESPONSIBILITY

The developer is responsible for the production of the STD. The software librarian is a suitable person to write it.

3.5 MEDIUM

The STD is normally a paper document.

3.6 CONTENT

ESA PSS-05-0 recommends the following table of contents for the Software Transfer Document:

1 Introduction
  1.1 Purpose
  1.2 Scope
  1.3 Definitions, acronyms and abbreviations
  1.4 References
2 Build Procedures (2)
3 Installation Procedures
4 Configuration Item List
5 Acceptance Test Report Summary
6 Software Problem Reports
7 Software Change Requests
8 Software Modification Reports

(2) ESA PSS-05-0 Issue 2 states that Build Procedures must be Section 3 and Installation Procedures must be Section 2. This order will be reversed in the next issue of the ESA Software Engineering Standards.


3.6.1 STD/1 INTRODUCTION

3.6.1.1 STD/1.1 Purpose

This section should define the purpose of the STD, for example: 'This Software Transfer Document (STD) reports in summary form the Transfer Phase activities for the [insert the name of the project here]'.

3.6.1.2 STD/1.2 Scope

This section should summarise the software that is being transferred.

Projects that adopt incremental delivery or evolutionary life cycle approaches will have multiple Transfer Phases. Each Transfer Phase must produce an STD. Each STD should define the scope of the software being transferred in that particular Transfer Phase.

3.6.1.3 STD/1.3 Definitions, acronyms and abbreviations

This section should define all the terms, acronyms, and abbreviations used in the STD, or refer to other documents where the definitions can be found.

3.6.1.4 STD/1.4 References

This section should provide a complete list of all the applicable or reference documents, identified by title, author and date. Each document should be marked as applicable or reference. If appropriate, report number, journal name and publishing organisation should be included.


3.6.2 STD/2 BUILD PROCEDURES

This section should report what happened when the software system was built. For example this section may report:
• when the software was built;
• where the software was built;
• the hardware and software environment in which the software was built;
• which version of the software was built;
• any problems encountered during the build;
• the CPU time and elapsed time required for building the system.

This section should reference the build procedures, which should have been defined in the Detailed Design Document (3).

3.6.3 STD/3 INSTALLATION PROCEDURES

This section should report what happened when the software was installed. For example this section may report:
• when the software was installed;
• where the software was installed;
• the hardware and software environment in which the software was installed;
• which version of the software was installed;
• any problems encountered during the installation;
• the elapsed time required for installing the system;
• the disk space used;
• the system parameters modified.

The organisation of the software files in the target environment should be described (e.g. root directory, subdirectories etc.).

(3) ESA PSS-05-0 Issue 2 omitted this section from part one of the DDD. This section will be added to the DDD template in the next issue of the ESA Software Engineering Standards.


This section should reference the installation procedures, which should have been defined in the Software User Manual, or an annex to it.

3.6.4 STD/4 CONFIGURATION ITEM LIST

This section should contain the list of the configuration items transferred. There should be evidence that the transferral of each configuration item has been verified (e.g. a completed checklist).

3.6.5 STD/5 ACCEPTANCE TEST REPORT SUMMARY

This section must contain a summary of the acceptance test reports (TR10). An overall statement of pass or fail for each subsystem should be provided. Specific test failures should be identified and discussed.

This section should identify all the user requirements that the software does not comply with. The list should include the value of the need attribute of the requirement (e.g. essential, desirable).

This section should present data for the following metrics:
• number of acceptance test cases passed;
• number of acceptance test cases failed;
• number of acceptance test cases not attempted or not completed.

Data should be presented for the whole system and individually for each subsystem.
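For example, the metrics might be tabulated as follows (the subsystem breakdown and figures are purely illustrative):

                   Passed   Failed   Not attempted/completed
   Whole system      94        4               2
   Subsystem A       50        1               0
   Subsystem B       44        3               2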

3.6.6 STD/6 SOFTWARE PROBLEM REPORTS

This section should list the Software Problem Reports (SPRs) raised during the Transfer Phase and their status at STD issue. Information about software problems should be present in the project configuration status accounts.

This section should present data for the following metrics:
• number of software problems reported;
• number of software problems closed;
• number of software problems open (i.e. status is action or update).

Data should be presented for all the SPRs, critical SPRs and non-critical SPRs.


3.6.7 STD/7 SOFTWARE CHANGE REQUESTS

This section should list the Software Change Requests (SCRs) raised during the Transfer Phase and their status at STD issue. Information about software changes should be present in the project configuration status accounts.

This section should present data for the following metrics:
• number of software change requests closed;
• number of software change requests open (i.e. status is action or update).

Data should be presented for all the SCRs, critical SCRs and non-critical SCRs.

3.6.8 STD/8 SOFTWARE MODIFICATION REPORTS

This section must list the Software Modification Reports (SMRs) completed during the Transfer Phase (TR10). Information about software modifications should be present in the project configuration status accounts.

This section should present data for the following metrics:
• number of software modifications closed;
• number of software modifications in progress.

Data should be presented for all the SMRs, critical SMRs and non-critical SMRs. An SMR is critical if it relates to a critical SCR.


APPENDIX A
GLOSSARY

A.1 LIST OF TERMS

Terms used in this document are consistent with ESA PSS-05-0 [Ref 1] and ANSI/IEEE Std 610.12 [Ref 2]. Additional terms not defined in these standards are listed below.

complete build

The process of compiling all the source files written by the developers and linking the object files produced with each other, and external object files.

development environment

The environment of computer hardware, operating software and test software in which the software system is developed.

end user

A person who utilises the products or services of a system.

installation

The process of copying a software system into the target environment and configuring the target environment to make the software system usable.

operational environment

The target environment, external software and users.

operator

A person who controls and monitors the hardware and software of a system.

partial build

The process of compiling selected source files (e.g. those that have changed since the last build) and linking the object files produced with each other, and pre-existing object files.


target environment

The environment of computer hardware and operating software in which the software system is used.

user

A person who utilises the products or services of a system, or a person who controls and monitors the hardware and software of a system (i.e. an end user, an operator, or both).


A.2 LIST OF ACRONYMS

ANSI American National Standards Institute
AT Acceptance Test
BSSC Board for Software Standardisation and Control
CASE Computer Aided Software Engineering
DD Detailed Design and production
FAT Factory Acceptance Test
SCMP Software Configuration Management Plan
SCR Software Change Request
SMR Software Modification Report
SPMP Software Project Management Plan
SPR Software Problem Report
SQAP Software Quality Assurance Plan
SR Software Requirements definition
SRD Software Requirements Document
SRN Software Release Note
STD Software Transfer Document
SVVP Software Verification and Validation Plan
TR Transfer
UR User Requirements definition
URD User Requirements Document


APPENDIX B
REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0 Issue 2, February 1991.

2. IEEE Standard Glossary of Software Engineering Terminology, ANSI/IEEE Std 610.12-1990.

3. Guide to Software Configuration Management, ESA PSS-05-09, Issue 1, November 1992.

4. Guide to Software Verification and Validation, ESA PSS-05-10, Issue 1, December 1993.

5. Managing Computer Projects, R. Gibson, Prentice-Hall, 1992.


APPENDIX C
MANDATORY PRACTICES

This appendix is repeated from ESA PSS-05-0 appendix D.6.

TR01 Representatives of users and operations personnel shall participate in acceptance tests.
TR02 The Software Review Board (SRB) shall review the software's performance in the acceptance tests and recommend, to the initiator, whether the software can be provisionally accepted or not.
TR03 Transfer Phase activities shall be carried out according to the plans defined in the DD phase.
TR04 The capability of building the system from the components that are directly modifiable by the maintenance team shall be established.
TR05 Acceptance tests necessary for provisional acceptance shall be indicated in the SVVP.
TR06 The statement of provisional acceptance shall be produced by the initiator, on behalf of the users, and sent to the developer.
TR07 The provisionally accepted software system shall consist of the outputs of all previous phases and modifications found necessary in the Transfer Phase.
TR08 An output of the Transfer Phase shall be the STD.
TR09 The STD shall be handed over from the developer to the maintenance organisation at provisional acceptance.
TR10 The STD shall contain the summary of the acceptance test reports, and all documentation about software changes performed during the Transfer Phase.


APPENDIX D
INDEX

acceptance test, 6
build, 6
code, 3; configuration item list, 5
DDD, 3
ESA PSS-05-09, 10; ESA PSS-05-10, 8, 10
Factory Acceptance Test, 3
log file, 8
Operations and Maintenance phase, 11
reliability, 7, 11
SCMP/TR, 3; SCR, 10; SMR, 10; software change request, 10; software modification report, 10; software problem report, 10; software release note, 13; software review board, 10; Software Transfer Document, 9, 13; SPMP/TR, 3; SPR, 10; SQAP/TR, 3; SRB, 10; SRN, 13; statement of provisional acceptance, 11; STD, 13; SUM, 3; SVV11, 7; SVV18, 7; SVV19, 7; SVV20, 7; SVV21, 7; SVV22, 8; SVVP/AT, 7
target environment, 3, 5, 6; test design, 7; test pass/fail criterion, 9; test plan, 7; test report, 8; test suspension criteria, 9; TR01, 8; TR02, 10; TR03, 3; TR04, 6; TR05, 7; TR06, 11; TR07, 11; TR08, 13; TR09, 13; TR10, 17, 18
URD, 7
validation, 6


ESA PSS-05-07 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to the software operations and maintenance phase

Prepared by:
ESA Board for Software Standardisation and Control (BSSC)

Approved by:
The Inspector General, ESA

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: ESA PSS-05-07 Guide to the Software Operations and Maintenance Phase

2. ISSUE 3. REVISION 4. DATE 5. REASON FOR CHANGE

1 0 1994 First issue

1 1 1995 Minor updates for publication

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved, 15th June 1995
Telematics Supervisory Board

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division, ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E1
ISSN 0379-4059

Copyright © 1995 by European Space Agency


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION 1
  1.1 PURPOSE 1
  1.2 OVERVIEW 1

CHAPTER 2 THE OPERATIONS AND MAINTENANCE PHASE 3
  2.1 INTRODUCTION 3
  2.2 OPERATE SOFTWARE 6
    2.2.1 User support 6
    2.2.2 Problem reporting 7
  2.3 MAINTAIN SOFTWARE 9
    2.3.1 Change Software 10
      2.3.1.1 Diagnose Problems 11
      2.3.1.2 Review Changes 14
      2.3.1.3 Modify Software 17
        2.3.1.3.1 Evaluating the effects of a change 17
        2.3.1.3.2 Keeping documentation up to date 21
      2.3.1.4 Verify software modifications 22
    2.3.2 Release Software 23
      2.3.2.1 Define release 23
      2.3.2.2 Document release 25
        2.3.2.2.1 Release number 25
        2.3.2.2.2 Changes in the release 26
        2.3.2.2.3 List of configuration items included in the release 26
        2.3.2.2.4 Installation instructions 26
      2.3.2.3 Audit release 26
      2.3.2.4 Deliver release 27
    2.3.3 Install Release 27
    2.3.4 Validate release 27
  2.4 UPDATE PROJECT HISTORY DOCUMENT 28
  2.5 FINAL ACCEPTANCE 28

CHAPTER 3 TOOLS FOR SOFTWARE MAINTENANCE 31
  3.1 INTRODUCTION 31
  3.2 NAVIGATION TOOLS 31
  3.3 CODE IMPROVEMENT TOOLS 32
  3.4 REVERSE ENGINEERING TOOLS 33

CHAPTER 4 THE PROJECT HISTORY DOCUMENT 35
  4.1 INTRODUCTION 35
  4.2 STYLE 35
  4.3 EVOLUTION 35
  4.4 RESPONSIBILITY 35
  4.5 MEDIUM 35
  4.6 CONTENT 35
    4.6.1 PHD/1 DESCRIPTION OF THE PROJECT 36
    4.6.2 PHD/2 MANAGEMENT OF THE PROJECT 37
      4.6.2.1 PHD/2.1 Contractual approach 37
      4.6.2.2 PHD/2.2 Project organisation 37
      4.6.2.3 PHD/2.3 Methods and tools 37
      4.6.2.4 PHD/2.4 Planning 38
    4.6.3 PHD/3 SOFTWARE PRODUCTION 38
      4.6.3.1 PHD/3.1 Product size 38
      4.6.3.2 PHD/3.2 Documentation 39
      4.6.3.3 PHD/3.3 Effort 39
      4.6.3.4 PHD/3.4 Computer resources 39
      4.6.3.5 PHD/3.5 Productivity 39
    4.6.4 PHD/4 QUALITY ASSURANCE REVIEW 40
    4.6.5 PHD/5 FINANCIAL REVIEW 41
    4.6.6 PHD/6 CONCLUSIONS 41
    4.6.7 PHD/7 PERFORMANCE OF THE SYSTEM IN THE OM PHASE 41

CHAPTER 5 LIFE CYCLE MANAGEMENT ACTIVITIES 43
  5.1 INTRODUCTION 43
  5.2 SOFTWARE PROJECT MANAGEMENT 43
    5.2.1 Organisation 44
    5.2.2 Work packages 44
    5.2.3 Resources 45
      5.2.3.1 Lehman's Laws 46
      5.2.3.2 Code size maintenance cost estimating method 47
    5.2.4 Activity schedule 47
  5.3 SOFTWARE CONFIGURATION MANAGEMENT 47
  5.4 SOFTWARE VERIFICATION AND VALIDATION 47
  5.5 SOFTWARE QUALITY ASSURANCE 49

APPENDIX A GLOSSARY A-1
APPENDIX B REFERENCES B-1
APPENDIX C MANDATORY PRACTICES C-1
APPENDIX D INDEX D-1

PREFACE

This document is one of a series of guides to software engineering produced by the Board for Software Standardisation and Control (BSSC), of the European Space Agency. The guides contain advisory material for software developers conforming to ESA's Software Engineering Standards, ESA PSS-05-0. They have been compiled from discussions with software engineers, research of the software engineering literature, and experience gained from the application of the Software Engineering Standards in projects.

Levels one and two of the document tree at the time of writing are shown in Figure 1. This guide, identified by the shaded box, provides guidance about implementing the mandatory requirements for the software Operations and Maintenance Phase described in the top level document ESA PSS-05-0.

(Figure: the document tree. Level 1: ESA Software Engineering Standards, PSS-05-0. Level 2: PSS-05-01 Guide to the Software Engineering Standards; PSS-05-02 UR Guide; PSS-05-03 SR Guide; PSS-05-04 AD Guide; PSS-05-05 DD Guide; PSS-05-06 TR Guide; PSS-05-07 OM Guide; PSS-05-08 SPM Guide; PSS-05-09 SCM Guide; PSS-05-10 SVV Guide; PSS-05-11 SQA Guide.)

Figure 1: ESA PSS-05-0 document tree

The Guide to the Software Engineering Standards, ESA PSS-05-01, contains further information about the document tree. The interested reader should consult this guide for current information about the ESA PSS-05-0 standards and guides.

The following past and present BSSC members have contributed to the production of this guide: Carlo Mazza (chairman), Gianfranco Alvisi, Michael Jones, Bryan Melton, Daniel de Pablo and Adriaan Scheffer.

The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and Guides, and all those software engineers in ESA and Industry who have made contributions.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat
Attention of Mr M Jones
ESOC
Robert Bosch Strasse 5
D-64293 Darmstadt
Germany

BSSC/ESTEC Secretariat
Attention of Mr U Mortensen
ESTEC
Postbus 299
NL-2200 AG Noordwijk
The Netherlands

CHAPTER 1 INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA), either in house or by industry [Ref 1].

ESA PSS-05-0 defines the fourth phase of the software development life cycle as the 'Transfer Phase' (TR phase). The outputs of this phase are the provisionally accepted software system, the statement of provisional acceptance, and the Software Transfer Document (STD). The software enters practical use in the next and last phase of the software life cycle, the 'Operations and Maintenance' (OM) phase.

The OM Phase is the 'operational' phase of the life cycle in which users operate the software and utilise the end products and services it provides. The developers provide maintenance and user support until the software is finally accepted, after which a maintenance organisation becomes responsible for it.

This document describes how to maintain the software, how to support operations, and how to produce the 'Project History Document' (PHD). It should be read by everyone involved with software maintenance or software operations support. The software project manager of the development organisation should read the chapter on the Project History Document.

1.2 OVERVIEW

Chapter 2 discusses the OM phase. Chapter 3 discusses tools for software maintenance. Chapter 4 describes how to write the PHD. Chapter 5 discusses life cycle management activities.

All the mandatory practices in ESA PSS-05-0 relevant to the software operations and maintenance phase are repeated in this document. The identifier of the practice is added in parentheses to mark a repetition. This document contains no new mandatory practices.

CHAPTER 2 THE OPERATIONS AND MAINTENANCE PHASE

2.1 INTRODUCTION

Operations provide a product or a service to end users. This guide discusses operations from the point of view of their interactions with software maintenance activities and the support activities needed to use the software efficiently and effectively.

Software maintenance is 'the process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or adapt to a changed environment' [Ref 7]. Maintenance is always necessary to keep software usable and useful. Often there are very tight constraints on changing software, and optimum solutions can be difficult to find. Such constraints make maintenance a challenge; contrast the development phase, when designers have considerably more freedom in the type of solution they can adopt.

Software maintenance activities can be classified as:
• corrective;
• perfective;
• adaptive.

Corrective maintenance removes software faults. Corrective maintenance should be the overriding priority of the software maintenance team.

Perfective maintenance improves the system without changing its functionality. The objective of perfective maintenance should be to prevent failures and optimise the software. This might be done, for example, by modifying the components that have the highest failure rate, or components whose performance can be cost-effectively improved [Ref 11].

Adaptive maintenance modifies the software to keep it up to date with its environment. Users, hardware platforms and other systems all make up the environment of a software system. Adaptive maintenance may be needed because of changes in the user requirements, changes in the target platform, or changes in external interfaces.

Minor adaptive changes (e.g. addition of a new command parameter) may be handled by the normal maintenance process. Major adaptive changes (e.g. addition of costly new user requirements, or porting the software to a new platform) should be carried out as a separate development project (see Chapter 5).

The operations and maintenance phase starts when the initiator provisionally accepts the software. The phase ends when the software is taken out of use. The phase is divided into two periods by the final acceptance milestone (OM03). All the acceptance tests must have been successfully completed before final acceptance (OM02).

There must be a maintenance organisation for every software product in operational use (OM04). The developers are responsible for software maintenance and user support until final acceptance. Responsibility for these activities passes to a maintenance team upon final acceptance. The software project manager leads the developers. Similarly, the 'software maintenance manager' leads the maintenance team. Software project managers and software maintenance managers are collectively called 'software managers'.

The mandatory inputs to the operations and maintenance phase are the provisionally accepted software system, the statement of provisional acceptance and the Software Transfer Document (STD). Some of the plans from the development phases may also be input.

The developer writes the Project History Document (PHD) during the warranty period. This gives an overview of the whole project and a summary account of the problems and performance of the software during the warranty period. This document should be input to the Software Review Board (SRB) that makes the recommendation about final acceptance. The PHD is delivered to the initiator at final acceptance (OM10).

Before final acceptance, the activities of the development team are controlled by the SPMP/TR (OM01). The development team will normally continue to use the SCMP, SVVP and SQAP from the earlier phases. After final acceptance, the maintenance team works according to its own plans. The new plans may reuse plans that the development team made, if appropriate. The maintenance team may have a different configuration management system, for example, but continue to employ the test tools and test software used by the developers.

(Figure: the activities Operate Software, Maintain Software and Update PHD, followed by Final Acceptance. Inputs and outputs include the software system, the STD, SPRs, SCRs, SMRs, CIs, the SPMP, SCMP, SVVP and SQAP, the PHD and the statement of final acceptance. The activities are performed by the users, the developers, the project manager and the SRB.)

Figure 2.1A: OM phase activities before final acceptance

(Figure: the same Operate Software, Maintain Software and Update PHD activities after final acceptance, now performed by the users, the maintenance team and the software manager, with the software system, SPRs, SCRs, SMRs, CIs, the SPMP, SCMP, SVVP and SQAP, and the updated PHD as inputs and outputs.)

Figure 2.1B: OM phase activities after final acceptance

Figures 2.1A and 2.1B show the activities, inputs and outputs of the phase, and the information flows between them. The arrows into the bottom of each box indicate the group responsible for each activity. The following sections discuss in more detail the activities of operate software, maintain software, update Project History Document and final acceptance.

2.2 OPERATE SOFTWARE

The way software is operated varies from system to system and therefore cannot be discussed in this guide. However there are two activities that occur during most software operations:
• user support;
• problem reporting.

These activities are discussed in the following sections.

2.2.1 User support

There are two types of user:
• end user;
• operator.

An 'end user' utilises the products or services of a system. An 'operator' controls and monitors the hardware and software of a system. A user may be an end user, an operator, or both.

User support activities include:
• training users to operate the software and understand the products and services;
• providing direct assistance during operations;
• set-up;
• data management.

Users should receive training. The amount of training depends upon the experience of the users and the complexity or novelty of the software. The training may range from allowing the users time to read the SUM and familiarise themselves with the software, to a course provided by experts, possibly from the development or maintenance teams.

Software User Manuals and training are often insufficient to enable users to operate the software in all situations and deal with problems. Users may require:
• direct assistance from experts in the development or maintenance teams;
• help desks.

Full-time direct assistance from experts is normally required to support critical activities. Part-time direct assistance from experts is often sufficient for software that is not critical and has few users.

When the number of users becomes too large for the experts to combine user support activities with their software maintenance activities, a help desk is normally established to:
• provide users with advice, news and other information about the software;
• receive problem reports and decide how they should be handled.

While help desk staff do not need expert knowledge of the design and code, they should have detailed knowledge of how to operate the software and how to solve simple problems.

The set-up of a software system may be a user support activity when the software set-up is complex, shared by multiple users, or requires experts to change it to minimise the risk of error.

Data management is a common user support activity, and may include:
• configuring data files for users;
• managing disk storage resources;
• backup and archiving.

2.2.2 Problem reporting

Users should document problems in Software Problem Reports (SPRs). These should be genuine problems that the user believes lie in the software, not problems arising from unfamiliarity with it.

Each SPR should report one and only one problem and contain:
• software configuration item title or name;
• software configuration item version or release number;
• priority of the problem with respect to other problems;
• a description of the problem;
• operating environment;
• recommended solution (if possible).

The priority of a problem has two dimensions:
• criticality (critical/non-critical);
• urgency (urgent/routine).

The person reporting the problem should decide whether it is critical or non-critical. A problem is critical if the software or an essential feature of the software is unavailable. The person should also decide whether the solution is required as soon as possible (urgent) or when the Software Review Board decides is best (routine).

Users should attempt to describe the problem and the operating environment as accurately as possible to help the software engineers to reproduce the problem. Printouts and log files may be attached to the Software Problem Report to assist problem diagnosis.
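The SPR content listed above can be pictured as a simple record. The Python sketch below is an illustration only; the field names and example values are hypothetical, not a format mandated by the Standards.

    from dataclasses import dataclass, field

    @dataclass
    class SoftwareProblemReport:
        ci_title: str            # software configuration item title or name
        ci_version: str          # version or release number
        criticality: str         # 'critical' or 'non-critical', set by the reporter
        urgency: str             # 'urgent' or 'routine', set by the reporter
        description: str         # the problem, described as accurately as possible
        environment: str         # operating environment when the problem occurred
        recommended_solution: str = ""                    # optional
        attachments: list = field(default_factory=list)   # printouts, log files

    spr = SoftwareProblemReport(
        ci_title="Telemetry Processor",                   # hypothetical CI
        ci_version="2.1",
        criticality="critical",
        urgency="urgent",
        description="Processing halts when a frame gap occurs",
        environment="Operational host, nominal load",
        attachments=["traceback.log"])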

Problems may occur because the software does not have specific capabilities or comply with some constraints. Users should document such omissions in SPRs, not in a modification to the User Requirements Document. The URD may be modified at a later stage when and if the Software Review Board (SRB) approves the new user requirement.

Figure 2.2 shows the life cycle of a software problem report. An SPR is prepared and submitted to the maintenance team for diagnosis. The maintenance team prepares a Software Change Request (SCR) if they decide that a software change is required to solve the problem. The SPR and related SCR are then considered at a Software Review Board meeting.

The Software Review Board decides upon one of four possible outcomes for the SPR:
• reject the SPR;
• update the software based upon the related SCR;
• action someone to carry out further diagnosis;
• close the SPR because the update has been completed.

(Figure: an SPR passes through Prepare, Diagnose and Review; the possible review outcomes are Update, Reject, Action and Close.)

Figure 2.2: The life cycle of an SPR
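Read as a state machine, the life cycle in Figure 2.2 allows only a few transitions. The following sketch encodes them; the state names are illustrative, not terminology mandated by the Standards.

    # Allowed SPR state transitions implied by Figure 2.2.
    TRANSITIONS = {
        "prepared":      {"diagnosed"},
        "diagnosed":     {"in_review"},
        "in_review":     {"rejected", "update_agreed", "actioned", "closed"},
        "actioned":      {"in_review"},   # further diagnosis feeds back into review
        "update_agreed": {"in_review"},   # reviewed again once the update is done
    }

    def advance(state: str, new_state: str) -> str:
        """Move an SPR to a new state, enforcing the life cycle."""
        if new_state not in TRANSITIONS.get(state, set()):
            raise ValueError(f"illegal transition: {state} -> {new_state}")
        return new_state

    state = advance("prepared", "diagnosed")   # OK
    state = advance(state, "in_review")        # OK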

2.3 MAINTAIN SOFTWARE

Software maintenance should be a controlled process that ensures that the software continues to meet the needs of the end user. This process consists of the following activities:
• change software;
• release software;
• install release;
• validate release.

(Figure: the activities Change Software, Release Software, Install Release and Validate Release. Change Software takes SPRs, the STD, the SPMP, SCMP, SVVP and SQAP, and CIs, and produces SCRs, SMRs and changed CIs; Release Software produces the SRN and released CIs; Install Release produces installed CIs; Validate Release exercises the installed software system.)

Figure 2.3: The software maintenance process

Figure 2.3 shows the inputs and outputs of each activity, which are discussed in more detail in the following sections.

2.3.1 Change Software

Software change should be governed by a change control process defined in the Software Configuration Management Plan. All change control processes should be based upon the code change control process described in ESA PSS-05-09, Guide to Software Configuration Management [Ref 4], shown in Figure 2.3.1. Configuration management tools should be used to control software change. These are described in the Guide to Software Configuration Management.

Software change is a four-stage process:
• diagnose problems;
• review change requests;
• modify software;
• verify software.

The following sections describe each stage.

(Figure: an expert diagnoses problems reported in SPRs against the controlled CIs and produces an SCR; the review board reviews the changes and approves the SCR; the developers or maintainers modify the software, producing changed CIs and an SMR; the developers or maintainers then verify the changed CIs.)

Figure 2.3.1: The basic change control process

2.3.1.1 Diagnose Problems

Software Problem Reports (SPRs) arising from software operations are collected and assigned to an expert software engineer from the maintenance team, who examines software configuration items and diagnoses the cause of the problem. The software engineer may recommend software changes in a Software Change Request (SCR).

The steps in problem diagnosis are:
• examine the SPR;
• reproduce the problem, if possible;
• examine the code and/or documentation;
• identify the fault;
• identify the cause;
• write a Software Change Request, if necessary.

Examination of code and documentation may require backtracking from the code and Software User Manual through the Detailed Design Document, the Architectural Design Document, the Software Requirements Document and ultimately the User Requirements Document.

Software engineers often diagnose a problem by building a variant configuration item that:
• is able to work with debugging tools;
• contains diagnostic code.

The software engineers run the variant and attempt to reproduce the problem and understand why it happened. When they have made a diagnosis they may insert prototype code to solve the problem. They then test that the problem does not recur. If it does not, the person responsible for modifying the software may use the prototype code as a starting point. The maintenance team should be wary of the first solution that works.

When the cause of a problem has been found, the software engineer should request a change to the software to prevent the problem recurring. Every Software Change Request (SCR) should contain the following information:
• software configuration item title or name;
• software configuration item version or release number;
• changes required;
• priority of the request;
• responsible staff;
• estimated start date, end date and manpower effort.

The software engineer provides the first three pieces of information. The software manager should define the last three.

The specification of the changes should be detailed enough to allow management to decide upon their necessity and practicality. The SCR does not need to contain the detailed design of the change. Any changes identified by the person who diagnosed the problem, such as marked-up listings of source code and pages of documentation, should be attached to the SCR.

Like software problems, the priority of a software change request has two dimensions:
• criticality (critical/non-critical);
• urgency (urgent/routine).

The software manager should decide whether the change request is critical or non-critical. A change is critical if it has a major impact on either the software behaviour or the maintenance budget. The criticality of a software change request is evaluated differently from the criticality of a software problem. For example an SPR may be classified as critical because an essential feature of the software is not available. The corresponding SCR may be non-critical because a single line of code is diagnosed as causing the problem, and this can be easily modified.

The software manager should decide whether the change is urgent or routine. A change is urgent if it has to be implemented and released to the user as soon as possible. Routine changes are released in convenient groups according to the release schedule in the software project management plan. The software manager normally gives the same urgency rating as the user, although he or she has the right to award a different rating.

The SCR is normally presented as a form, but it may be a document. A change request document should contain sections on:
• user requirements document changes;
• software requirements document changes;
• architectural design document changes;
• detailed design document changes;
• software project management plan update;
• software verification and validation plan for the change;
• software quality assurance plan for the change.

Each section should include or reference the corresponding development documents as appropriate. For example a change to the code to remedy a detailed design fault might:
• put 'no change' in the URD, SRD and ADD change sections;
• contain a new version of the DDD component description;
• contain a listing of the code with a prototype modification suggested by the expert who made the change request;
• contain a unit test design, unit test case and unit test procedure for verifying the code change;
• identify the integration, system and acceptance test cases that should be rerun;
• contain a work package description for the change;
• identify the software release that will contain the change.

Investigation of a software problem may reveal the need to implement a temporary 'workaround' solution pending the implementation of a complete solution. The workaround and the complete solution should each have a separate SCR.

2.3.1.2 Review Changes

The Software Review Board (SRB) must authorise all changes to the software (OM08). The SRB should consist of people who have sufficient authority to resolve any problems with the software. The software manager and software quality assurance engineer should always be members. Users are normally members unless the software is part of a larger system. In this case the system manager is a member, and represents the interests of the whole system and the users.

The SRB should delegate responsibility for filtering SPRs and SCRs to the software manager. The order of filtering is:
• criticality (critical/non-critical);
• urgency (urgent/routine).

The decision tree shown in Figure 2.3.1.2 illustrates the four possible combinations of urgency and criticality, and defines what action should be taken.

(Figure: a decision tree over criticality and urgency. Critical and urgent: the Software Manager decides and the SRB confirms. Critical and routine: the SRB decides. Non-critical, urgent or routine: the Software Manager decides.)

Figure 2.3.1.2: SPR and SCR filtering

The three possible actions are:
1. critical and urgent: the software manager decides upon the implementation of the change and passes the SCR and associated SPRs to the SRB for confirmation;
2. critical and routine: the software manager passes the SCR and associated SPRs to the SRB for review and decision;
3. non-critical: the software manager decides upon the implementation of the change and passes the SCR and associated SPRs to the SRB for information.
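The decision tree reduces to a few lines of code. This sketch merely restates the three actions above; the returned strings are illustrative.

    def filtering_action(critical: bool, urgent: bool) -> str:
        """Map the criticality/urgency of an SPR or SCR to its handling."""
        if critical and urgent:
            return "software manager decides; SRB confirms"
        if critical:
            return "SRB reviews and decides"
        return "software manager decides; SRB informed"

    assert filtering_action(critical=True, urgent=False) == "SRB reviews and decides"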

Software Review Board meetings should use the technical review process described in ESA PSS-05-10, Guide to Software Verification and Validation [Ref 5], with the following modifications:
• the objective of an SRB review is to decide what software changes will be implemented;
• the inputs to the SRB review are the SCRs, SPRs and attachments such as part of a document, a source code listing, a program traceback or a log file;
• the preparation activity is to examine the SPRs and SCRs, not RIDs;
• SRB review meetings are concerned with the SPRs and SCRs, not RIDs, and follow the typical agenda described below.

A typical SRB review meeting agenda consists of:
1. introduction;
2. review actions from the previous meeting;
3. review of the SPR and SCR criticality and urgency classification;
4. review of the critical SPRs;
5. review of the critical SCRs;
6. decision on the SPRs and SCRs;
7. conclusion.

Actions from the previous meeting may have included software change requests. The SRB should close SPRs that are associated with satisfied change requests. Every closed SPR must have one or more associated Software Modification Reports (SMRs) that describe what has been done to solve the problem.

The criticality and urgency of SPRs and SCRs should be reviewed. Members may request that they be reclassified.

Critical SPRs that have not been closed are then discussed. The discussion should confine itself to describing the seriousness and extent of the problem. The Software Review Board decision should be one of 'update', 'action' or 'reject' (see Figure 2.2). The status should become 'update' when the SRB is satisfied that the problem exists and requires a change to the software. The status should become 'action' when there is no satisfactory diagnosis. The status should become 'reject' if the SRB decides that the problem does not exist or that no update or action is necessary.

Critical SCRs are then discussed. The discussion should confine itself to reviewing the effects of the requested changes and the risks involved. Detailed design issues should be avoided. The SRB may discuss the recommendations of the software manager as to:
• who should be the responsible staff;
• how much effort should be allocated;
• when the changes should be implemented and released.

Any part of the SCR may be modified by the SRB. After discussion, the SRB should decide whether to approve the change request.

2.3.1.3 Modify Software

When the software change request has been approved, the maintenance team implement the change. The remaining tasks are to:
• modify the documents and code;
• review the modified documents and code;
• test the modified code.

The outputs of these tasks are a Software Modification Report (SMR) and modified configuration items. The SMR defines the:
• names of configuration items that have been modified;
• version or release numbers of the modified configuration items;
• changes that have been implemented;
• actual start date, end date and manpower effort.

Attachments to the SMR should include unit, integration and system test results, as appropriate.

Software systems can be ruined by poor maintenance, and people responsible for it should:
• evaluate the effects of every change;
• verify all software modifications thoroughly;
• keep documentation up to date.

2.3.1.3.1 Evaluating the effects of a change

Software engineers should evaluate the effect of a modification on:
a. performance;
b. resource consumption;
c. cohesion;
d. coupling;
e. complexity;
f. consistency;
g. portability;
h. reliability;
i. maintainability;
j. safety;
k. security.

The effect of changes may be evaluated with the aid of reverse engineering tools and librarian tools. Reverse engineering tools can identify the modules affected by a change at program design level (e.g. identify each module that uses a particular global variable). Librarian tools with cross-reference facilities can track dependencies at the source file level (e.g. identify every file that includes a specific file).

There is often more than one way of changing the software to solve a problem, and software engineers should examine the options, compare their effects on the software, and select the best solution. The following sections provide guidance on the evaluation of the software in terms of the attributes listed above.

a. Performance

Performance requirements specify the capacity and speed of the operations the software has to perform. Design specifications may specify how fast a component has to execute.

The effects of a change on software performance should be predicted and later measured in tests. Performance analysis tools may assist measurement. Prototyping may be useful.

b. Resource Consumption

Resource requirements specify the maximum amount of computer resources the software can use. Design specifications may specify the maximum resources available.

The effects of a change on resource consumption should be predicted when it is designed and later measured in tests. Performance analysis tools and system monitoring tools should be used to measure resource consumption. Again, prototyping may be useful.

c. Cohesion

Cohesion measures the degree to which the activities within a component relate to one another. The cohesion of a software component should not be reduced by a change. Cohesion effects should be evaluated when changes are designed.

ESA PSS-05-04, Guide to the Software Architectural Design Phase, identifies seven types of cohesion ranging from functional (good) to coincidental (bad) [Ref 2]. This scale can be used to evaluate the effect of a change on the cohesion of a software component.

d. Coupling

Coupling measures the interdependence of two or more components. Changes that increase coupling should be avoided, as they reduce information hiding. Coupling effects should be evaluated when changes are designed.

ESA PSS-05-04, Guide to the Software Architectural Design Phase, identifies five types of coupling ranging from 'data coupling' (good) to 'content coupling' (bad) [Ref 2]. This scale can be used to evaluate the effect of a change on the coupling of a software component.

e. Complexity

During the operations and maintenance phase the complexity of software naturally tends to grow because its control structures have to be extended to meet new requirements [Ref 12]. Software engineers should contain this natural growth of complexity because more complex software requires more testing, and more testing requires more effort. Eventually changes become infeasible because they require more effort than is available to implement them. Reduced complexity means greater reliability and maintainability.

Software complexity can be measured by several metrics, the best known being cyclomatic complexity [Ref 8]. Procedures for measuring cyclomatic complexity and other important metrics are contained in ESA PSS-05-04, Guide to the Software Architectural Design Phase, and ESA PSS-05-10, Guide to Software Verification and Validation [Ref 2, 5]. McCabe has proposed an 'essential complexity metric' for measuring the distortion of the control structure of a module caused by a software change [Ref 13].
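For structured code, cyclomatic complexity is the number of decision points plus one. The sketch below computes a crude estimate for Python source; it is an illustration of the idea only, not the measurement procedure defined in the guides cited above.

    import ast

    def cyclomatic_estimate(source: str) -> int:
        """Crude cyclomatic complexity estimate: decision points + 1."""
        decisions = 0
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
                decisions += 1
            elif isinstance(node, ast.BoolOp):
                decisions += len(node.values) - 1   # each and/or adds a path
        return decisions + 1

    print(cyclomatic_estimate(
        "def f(x):\n"
        "    if x > 0 and x < 10:\n"
        "        return x\n"
        "    for i in range(x):\n"
        "        print(i)\n"))   # prints 4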

f. Consistency

Coding standards define the style in which a programming language should be used. They may enforce or ban the use of language features and define rules for layout and presentation. Changes to software should conform to the coding standards and should be seamless.

Polymorphism means that a function will perform the same operation on a variety of data types. Programmers extending the range of data types can easily create inconsistent variations of the same function. This can cause a polymorphic function to behave in unexpected ways. Programmers modifying a polymorphic function should fully understand what the variations of the function do.
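A small, hypothetical Python example shows how easily an inconsistent variation creeps in: the original numeric version of scale() returns a new value, while a variant added during maintenance mutates its argument and returns nothing.

    from functools import singledispatch

    @singledispatch
    def scale(value, factor):
        """Scale a value by a factor."""
        return value * factor          # numbers: returns a new, scaled value

    @scale.register
    def _(value: list, factor: int):
        # Inconsistent variant: mutates in place and returns None,
        # unlike the numeric version. Callers relying on the return
        # value will break in unexpected ways.
        for i, v in enumerate(value):
            value[i] = v * factor

    print(scale(2.0, 3))       # 6.0
    print(scale([1, 2], 3))    # None - surprising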

g. Portability

Portability is measured by the ease of moving software from one environment to another. Portability can be achieved by adhering to language and coding standards. A common way to make software portable is to encapsulate platform-specific code in 'interface modules'. Only the interface modules need to be modified when the software is ported.

Software engineers should evaluate the effect of a modification on the portability of the software when it is designed. Changes that reduce portability should be avoided.

h. Reliability

Reliability is most commonly measured by the Mean Time Between Failures (MTBF). Changes that reduce reliability should be avoided.

Changes that introduce defects into the software make it less reliable. The software verification process aims to detect and remove defects (see Section 2.3.1). Walkthroughs and inspections should be used to verify that defects are not introduced when the change is designed. Tests should be used to verify that no defects have been introduced by the change after it has been implemented.

The effect of a modification on software reliability can be estimated indirectly by measuring its effect on the complexity of the software. This effect can be measured when the change is designed.

Reliability can be reduced by reusing components that have not been developed to the same standards as the rest of the software. Software engineers should find out how reliable a software component is before reusing it.

i. Maintainability

Maintainability measures the ease with which software can be maintained. The most common maintainability metric is 'Mean Time To Repair'. Changes that make software less maintainable should be avoided.

Examples of changes that make software less maintainable are those that:
• violate coding standards;
• reduce cohesion;
• increase coupling;
• increase essential complexity [Ref 13].

The costs and benefits of changes that make software more maintainable should be evaluated before such changes are made. All modified code must be tested, and the cost of retesting the new code may outweigh the reduced effort required to make future changes.

j. Safety

Changes to the software should not endanger people or property during operations or following a failure. The effect on the safety of the software should first be evaluated when the change is designed and later during the verification process. The behaviour of the software after a failure should be analysed from the safety point of view.

k. Security

Changes to the software should not expose the system to threats to its confidentiality, integrity and availability. The effect of a change on the security of the software should first be evaluated when the change is designed and later during the verification process.

2.3.1.3.2 Keeping documentation up to date

Consistency between code and documentation must be maintained (OM06). This is achieved by:
• thorough analysis of the impact of every change before it is made, to ensure that no inconsistencies are introduced;
• concurrent update of code and documentation;
• verification of the changes by peer review or independent review.

Tools for deriving detailed design information from source code are invaluable for keeping documentation up to date. The detailed design component specification is placed in the module header. When making a change, programmers:
• modify the detailed design specification in the header;
• modify the module code;
• compile the module;
• review the module modifications;
• unit test the module;
• run a tool to derive the corresponding detailed design document section.

Programmers should use the traceability matrices to check for consistency between the code, DDD, ADD, SRD and URD. Traceability matrices are important navigational aids and should be kept up to date. Tools that support traceability are very useful in the maintenance phase.

2.3.1.4 Verify software modifications

Software modifications should be verified by:
a. review of the detailed design and code;
b. testing.

a. Detailed design and code reviews

The detailed design and code of all changes should be reviewed before they are tested. These reviews should be distinguished from the earlier reviews done by the software manager or Software Review Board (SRB) that examine and approve the change request. The detailed design and code are normally not available when the change request is approved.

The detailed design and code of all changes should be examined by someone other than the software engineer who implemented them. The walkthrough or inspection process should be used [Ref 5].

b. Tests

Modified software must be retested before release (SCM39). This is normally done by:
• unit, integration and system testing each change;
• running regression tests at unit, integration and system level to verify that there are no side-effects.

Changes that do not alter the control flow should be tested by rerunning the white box tests that execute the part of the software that changed. Changes that do alter the control flow should be tested by designing white box tests to execute every new branch in the control flow that has been created. Black box tests should be designed to verify any new functionality.

Ideally system tests should be run after each change. This may be a very costly exercise for a large system, and an alternative, less costly approach often adopted for large systems is to accumulate changes and then run the system tests (including regression tests) just before release. The disadvantage of this approach is that it may be difficult to identify which change, if any, is the cause of a problem in the system tests.

2.3.2 Release Software

Changed configuration items are made available to users through the software release process. This consists of:
• defining the release;
• documenting the release;
• auditing the release;
• delivering the release.

2.3.2.1 Define release

Software managers should define the content and timing of a software release according to the needs of users. This means that:
• solutions to urgent problems are released as soon as possible only to the people experiencing the problem, or who are likely to experience the problem;
• other changes are released when the users are ready to accommodate them.

Software managers, in consultation with the Software Review Board, should allocate changes to one of three types of release:
• major release;
• minor release;
• emergency release (also called a 'patch').

Table 2.3.2.1 shows how they differ according to whether:
• adaptive changes have been made;
• perfective changes have been made;
• corrective changes have been made;
• all or selected software configuration items are included in the release;
• all or selected users will receive the release.

                     Adaptive   Perfective   Corrective   CIs        Users
                     changes    changes      changes
  Major release      Yes        Yes          Yes          All        All
  Minor release      Small      Yes          Yes          All        All
  Emergency release  No         No           Yes          Selected   Selected

Table 2.3.2.1: Major, minor and emergency releases

The purpose of a major release of a software system is to provide new capabilities. These require adaptive changes. Major releases also correct outstanding faults and perfect the existing software. Operations may have to be interrupted for some time after a major release has been installed because training is required. Major releases should therefore not be made too frequently because of the disruption they can cause. A typical time interval between major releases is one year. Projects using evolutionary and incremental delivery life cycle approaches would normally make a major release in each transfer phase.

The purpose of a minor release is to provide corrections to a group of problems. Some low-risk perfective maintenance changes may be included. A minor release of a software system may also provide small extensions to existing capabilities. Such changes can be easily assimilated by users without training.

The frequency of minor releases depends upon:
• the rate at which software problems are reported;
• the urgency of solving the software problems.

The purpose of an emergency release is to get a modification to the users who need it as fast as possible. Only the configuration items directly affected by the fault are released. Changes are nearly always corrective.

2.3.2.2 Document release

Every software release must be accompanied by a Software Release Note (SRN) (SCM14). Software Release Notes should describe the:

• software item title/name;

• software item version/release number;

• changes in the release;

• list of configuration items included in the release;

• installation instructions.

Forms are used for simple releases and documents for complex releases.

2.3.2.2.1 Release number

The SRN should define the version number of the release. The structure of the version number should reflect the number of different types of releases used. A common structure is two or three integers separated by a full stop:

major release number.minor release number[.emergency release number]

The square brackets indicate that the emergency release number is optional: it is only included when it is not zero. When any release number is incremented, the succeeding numbers are set to zero.
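A short sketch of this numbering rule (an illustration, not a mandated algorithm):

    def bump(version: str, kind: str) -> str:
        """Increment a 'major.minor[.emergency]' release number.

        Succeeding numbers are reset to zero, and a zero emergency
        number is omitted, as described above."""
        parts = [int(p) for p in version.split(".")] + [0, 0]
        major, minor, emergency = parts[:3]
        if kind == "major":
            major, minor, emergency = major + 1, 0, 0
        elif kind == "minor":
            minor, emergency = minor + 1, 0
        elif kind == "emergency":
            emergency += 1
        else:
            raise ValueError(f"unknown release type: {kind!r}")
        return f"{major}.{minor}" + (f".{emergency}" if emergency else "")

    assert bump("2.3", "minor") == "2.4"
    assert bump("2.3", "emergency") == "2.3.1"
    assert bump("2.3.1", "major") == "3.0"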

2.3.2.2.2 Changes in the release

This section of the SRN should list all the changes in the release. For each change:
• give a paragraph summarising the effects that the users will see resulting from the change;
• enumerate the related SPRs and SCRs (SCM36).

2.3.2.2.3 List of configuration items included in the release

This section of the SRN should list the configuration identifiers of all the configuration items in the release.

2.3.2.2.4 Installation instructions

This section of the SRN should describe how to install the release. This is normally done by referencing the installation instructions in the Software User Manual*, or providing updated instructions, or both.

2.3.2.3 Audit release

Functional and physical audits shall be performed before the release of the software (SVV03). The purpose of the audits is to verify that all the necessary software configuration items are present, consistent and correct.

A functional audit verifies that the development of a configuration item has been completed satisfactorily, and that the item has achieved the performance and functional characteristics specified in the software requirements and design documents [Ref 7]. This is normally done by checking the test reports for the software release. A functional audit also verifies that the operational and support documents are complete and satisfactory.

A physical audit verifies that an as-built configuration conforms to its documentation. This is done by checking that:
• all the configuration items listed in the SRN are actually present;
• documentation and code in a software release are consistent (SCM37).

* In ESA PSS-05-0, installation instructions are placed in the STD. The SUM is normally a more suitable place to put the installation instructions.

2.3.2.4 Deliver release

The software can be delivered when the audits have been done. The maintenance team is responsible for copying the software onto the release media, packaging the media with the software documentation and delivering the package.

The maintenance team must archive every release (SCM38). This is best done by keeping a copy of the delivered package. Although users may wish to retain old releases for reference, the responsibility for retaining a copy of every release lies with the maintenance team.

2.3.3 Install Release

Upon delivery, the contents of the release are checked against the configuration item list in the Software Release Note (SRN) and then the software is installed. The installation procedures are also described or identified in the SRN.

Installation should be supported by tools that:
• automate the installation process as much as possible;
• issue simple and clear instructions;
• reuse configuration information from the existing system and do not require users to reenter it;
• make minimal changes to the system configuration (e.g. modifying the CONFIG.SYS and AUTOEXEC.BAT files of a PC);
• always get permission before making any changes to the system configuration.

The installation process should be reversible, so that rollback of the system to the state before the installation was started is always possible. One way to do this is to make a backup copy of the existing system before starting the installation.
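A minimal sketch of the backup-then-install idea, assuming the system lives in a single directory tree (the function and path names are hypothetical):

    import shutil
    import tempfile
    from pathlib import Path

    def install_release(release_dir: Path, target_dir: Path) -> None:
        """Install a release reversibly: back up the existing system first
        and restore it if anything goes wrong during installation."""
        backup = Path(tempfile.mkdtemp()) / "backup"
        shutil.copytree(target_dir, backup)       # snapshot the current system
        try:
            shutil.rmtree(target_dir)             # remove obsolete items
            shutil.copytree(release_dir, target_dir)
        except Exception:
            shutil.rmtree(target_dir, ignore_errors=True)
            shutil.copytree(backup, target_dir)   # roll back to the old system
            raise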

The installation software should remove obsolete configuration items from the directories where the new version of the system is installed.

2.3.4 Validate release

After installation, users should run some or all of the acceptance tests to validate the software. The acceptance test specification should have been updated to include tests of any new user requirements that have been implemented.

2.4 UPDATE PROJECT HISTORY DOCUMENT

The purpose of the Project History Document (PHD) is to provide a summary critical account of the project. The PHD should:
• describe the objectives of the project;
• summarise how the project was managed;
• state the cost of the project and compare it with predictions;
• discuss how the standards were applied;
• describe the performance of the system in the OM phase;
• describe any lessons learned.

The benefits of the PHD are:
• the maintenance team is informed about what the development team did, so they can avoid repeating mistakes or trying out inappropriate solutions;
• the maintenance team are told how well the system performs, as they may have to make good any shortfall;
• managers of future projects will know how much a similar project is likely to cost, and the problems and pitfalls they are likely to experience.

The software manager should write the PHD. Information should be collected throughout the project. Work on the PHD should start early in the project, with updates at every major milestone. Detailed guidance on writing the PHD is provided in Chapter 4.

2.5 FINAL ACCEPTANCE

A review of the software should be held at the end of the warranty period in order to decide whether the software is ready for final acceptance. All the acceptance tests must have been completed before the software can be finally accepted (OM02).

The review team should consist of the Software Review Board (SRB) members. The responsibilities and constitution of the Software Review Board are defined in ESA PSS-05-09, 'Guide to Software Configuration Management', Section 2.2.1.3 [Ref 4].

The software review should be a formal technical review. The procedures for a technical review are described in Section 2.3.1 of ESA PSS-05-10, 'Guide to Software Verification and Validation' [Ref 5].

The final acceptance review meeting may coincide with an ordinary SRB meeting. The decision upon final acceptance is added to the agenda described in Section 2.3.1.2.

Inputs to the review are the:
• Project History Document (PHD);
• results of acceptance tests held over to the OM phase;
• Software Problem Reports made during the OM phase;
• Software Change Requests made during the OM phase;
• Software Modification Reports made during the OM phase.

The Software Review Board reviews these inputs and recommends, to the initiator, whether the software can be finally accepted.

The SRB should evaluate the degree of compliance of the software with the user requirements by considering the number and nature of:
• acceptance test cases failed;
• acceptance test cases not attempted or completed;
• critical software problems reported;
• critical software problems not solved.

Solution of a problem means that a change request has been approved, the modification has been made, and all tests repeated and passed.

The number of test cases, critical software problems and non-critical software problems should be evaluated for the whole system and for each subsystem. The SRB might, for example, decide that some subsystems are acceptable and others are not.

The SRB should study the trends in the number of critical software problems reported and solved. Although there are likely to be 'spikes' in the trend charts associated with software releases, there should be a downward trend in the number of critical problems during the OM phase. An upward trend should be cause for concern.

Sufficient data should have accumulated to evaluate the Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR). The MTBF may be estimated by dividing the total time spent operating the software during the phase by the total number of critical software problems raised during the phase. The MTTR may be estimated by averaging the difference between the start and end dates of the modifications completed.
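As a purely illustrative example (the figures are hypothetical): a system operated for 1000 hours during the phase, with five critical problems raised, has an estimated MTBF of 1000/5 = 200 hours; if the five completed modifications took 2, 4, 6, 8 and 10 days from start to end, the estimated MTTR is 6 days.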

If the SRB decides that the degree of compliance of the software with the user requirements is acceptable, it should recommend to the initiator that the software be finally accepted.

The statement of final acceptance is produced by the initiator, on behalf of the users, and sent to the developer (OM09). The finally accepted software system consists of one or more sets of documentation, source, object and executable code corresponding to the current versions and releases of the product.

CHAPTER 3 TOOLS FOR SOFTWARE MAINTENANCE

3.1 INTRODUCTION

Tools used during software development will continue to be used during the operations and maintenance phase. New versions of the tools may become available, or a tool may become unsupported and require replacement. Tools may be acquired to support new activities, or to support activities that previously took place without them.

The reader should refer to the appropriate guides for the tools that support:
• user requirements definition;
• software requirements definition;
• architectural design;
• detailed design and production;
• transfer;
• software project management;
• software configuration management;
• software verification and validation;
• software quality assurance.

This chapter is concerned with the tools that are normally used for the first time in the life cycle during the operations and maintenance phase. These tools are:
• navigation tools;
• code improvement tools;
• reverse engineering tools.

3.2 NAVIGATION TOOLS

Navigation tools enable software engineers to find quickly and easily the parts of the software that they are interested in. Typical capabilities are:
• identification of where variables are used;
• identification of the modules that use a module;
• display of the call tree;
• display of data structures.

Knowledge of where variables and modules are used is critical to understanding the effect of a change. Display of the call tree and data structures supports understanding of the control and data flow.

In addition, navigation tools for object-oriented software need to be able to:
• distinguish which meaning of an overloaded symbol is implied;
• support the location of inherited attributes and functions.

The maintenance of object-oriented programs is an active research area [Ref 14, 15]. In particular, the behaviour of a polymorphic function can only be known at runtime, when argument types are known. This makes it difficult to understand what the code will do by means of static analysis and inspection. Dynamic analysis of the running program may be the only way to understand what it is doing.

3.3 CODE IMPROVEMENT TOOLS

Code improvement tools may:
• reformat source code;
• restructure source code.

Code reformatters, also known as 'pretty printers', read source code and generate output with improved layout and presentation. They can be very useful for converting old code to the style of a new coding standard.

Code restructuring tools read source code and make it more structured, reducing the control flow constructs as far as possible to only sequence, selection and iteration.

3.4 REVERSE ENGINEERING TOOLS

Reverse engineering tools process code to produce another type of software item. They may, for example:
• generate source code from object code;
• recover designs from source code.

Decompilers translate object code back to source code. Some debuggers allow software engineers to view the source code alongside the object code. Decompilation capabilities are sometimes useful for diagnosing compiler faults, e.g. erroneous optimisations.

Tools that recover designs from source code examine module dependencies and represent them in terms of a design method such as Yourdon. Tools are available that can generate structure charts from C code, for example. These reverse engineering tools may be very useful when documentation of code is non-existent or out of date.
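A first step in design recovery can be sketched as follows: scan the source files of a system and rebuild the module dependency structure from their import statements. The Python sketch below is illustrative only; the directory name src is an assumption, and a real tool would go on to draw the structure chart.

import ast
from pathlib import Path

def module_dependencies(source_dir):
    """Map each module in source_dir to the modules it imports."""
    deps = {}
    for path in Path(source_dir).glob("*.py"):
        tree = ast.parse(path.read_text())
        imports = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imports.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imports.add(node.module)
        deps[path.stem] = sorted(imports)
    return deps

for module, imports in module_dependencies("src").items():
    print(module, "->", ", ".join(imports) or "(no dependencies)")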


CHAPTER 4 THE PROJECT HISTORY DOCUMENT

4.1 INTRODUCTION

The Project History Document (PHD) summarises the main events and the outcome of the project. The software manager should collect appropriate information, summarise it, and insert it in the PHD phase by phase as the project proceeds. Much of the information will already exist in earlier plans and reports. When final acceptance is near, the software manager should update the document taking into account what has happened since the start of the operations and maintenance phase.

4.2 STYLE

The Project History Document should be plain, concise, clear and consistent.

4.3 EVOLUTION

After delivery, the section of the Project History Document on the performance of the software in the OM phase should be updated by the maintenance team at regular intervals (e.g. annually).

4.4 RESPONSIBILITY

The software project manager is responsible for the production of the Project History Document. The software maintenance manager is responsible for the production of subsequent issues.

4.5 MEDIUM

The Project History Document is normally a paper document.

4.6 CONTENT

ESA PSS-05-0 recommends the following table of contents for the Project History Document.


1 Description of the project

2 Management of the project
2.1 Contractual approach
2.2 Project organisation
2.3 Methods and tools [2]
2.4 Planning

3 Software Production
3.1 Product size [3]
3.2 Documentation
3.3 Effort [4]
3.4 Computer resources
3.5 Productivity [5]

4 Quality Assurance Review

5 Financial Review

6 Conclusions

7 Performance of the system in OM phase

4.6.1 PHD/1 DESCRIPTION OF THE PROJECT

This section should:
• describe the objectives of the project;
• identify the initiator, developer and users;
• identify the primary deliverables;
• state the size of the software and the development effort;
• describe the life cycle approach;
• state the actual dates of all major milestones.

Information that appears in later parts of the document may be summarised in this section.

[2] In ESA PSS-05-0 this section is called “Methods used”.
[3] In ESA PSS-05-0 this section is called “Estimated vs. actual amount of code produced”.
[4] In ESA PSS-05-0 this section is called “Estimated vs. actual effort”.
[5] In ESA PSS-05-0 this section is called “Analysis of productivity factors”.


Critical decisions, for example changes in objectives, should be clearly identified and explained.

4.6.2 PHD/2 MANAGEMENT OF THE PROJECT

4.6.2.1 PHD/2.1 Contractual approach

This section should reference the contract made (if any) between the initiator's organisation and the development organisation.

This section should state the type of contract (e.g. fixed price, time and materials).

4.6.2.2 PHD/2.2 Project organisation

This section should describe the:
• internal organisation of the project;
• external interfaces of the project.

The description of the organisation should define for each role:
• major responsibilities;
• number of staff.

If the organisation and interfaces changed from phase to phase, this section describes the organisation and interfaces in each phase. If the number of staff varied within a phase, the staffing profile should be described.

4.6.2.3 PHD/2.3 Methods and tools

This section should identify the methods used in the project, phase by phase. The methods should be referenced. This section should not describe the rules and procedures of the methods, but critically discuss them from the point of view of the project. Aspects to consider are:
• training requirements;
• applicability.

Any tools used to support the methods should be identified and the quality of the tools discussed, in particular:
• degree of support for the method;
• training requirements;
• reliability;
• ease of integration with other tools;
• whether the benefits of the tools outweighed the costs.

4.6.2.4 PHD/2.4 Planning

This section should summarise the project plan by producing for each phase the:
• initial work breakdown structure (but not the work package descriptions);
• list of work packages added or deleted;
• Gantt chart showing the predicted start and end dates of each activity;
• Gantt chart showing the actual start and end dates of each activity;
• milestone trend charts showing the movement (if any) of major milestones during the phase.

4.6.3 PHD/3 SOFTWARE PRODUCTION

4.6.3.1 PHD/3.1 Product size

This section should state the number of user requirements and software requirements.

This section should state the number of subsystems, tasks or programs in the architectural design.

This section should state the amount of code, both for the whole system and for each subsystem:
• predicted at the end of the AD phase;
• produced by the end of the TR phase;
• produced by final acceptance.

The actual amount of code produced should be specified in terms of the number of:
• lines of code;
• modules.

This section should make clear the rules used to define a line of code. Comment lines are not usually counted as lines of code. Historically, continuation lines have been counted as separate lines of code, but it may be more meaningful to ignore them and count each complete statement as one line of code. Non-executable statements (e.g. data declarations) are also normally counted as lines of code.
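The counting rules chosen should be mechanisable. The Python sketch below, an illustration and not a prescribed method, applies the rules suggested above to a hypothetical source file: blank and comment lines are excluded, and lines continued with a trailing backslash are folded into a single statement.

def count_lines_of_code(path):
    loc = 0
    continuing = False
    for raw in open(path):
        line = raw.strip()
        if not line or line.startswith("#"):   # blank or comment line
            continue
        if continuing:                         # continuation line: fold it
            continuing = line.endswith("\\")   # into the current statement
            continue
        loc += 1                               # first line of a statement
        continuing = line.endswith("\\")
    return loc

print(count_lines_of_code("telemetry.py"))     # hypothetical file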

4.6.3.2 PHD/3.2 Documentation

This section should identify each document produced and state for each document the number of pages and words.

These values should be summed to define the total amount of documentation produced.

4.6.3.3 PHD/3.3 Effort

This section should state the estimated and actual effort required for each work package. The unit should be man-hours, man-days or man-months. Significant differences between the estimated and actual effort should be explained.

Values should be summed to give the total amount of effort required for all activities in:
• each of the SR, AD, DD and TR phases;
• the whole development (i.e. the sum of the SR, AD, DD and TR phases);
• the OM phase.

4.6.3.4 PHD/3.4 Computer resources

This section should state the estimated and actual hardware, operating software and ancillary software required to develop and operate the software. Significant differences between actual and estimated resources should be explained.

4.6.3.5 PHD/3.5 Productivity

This section should state the actual productivity in terms of:
• total number of lines of code produced divided by the total number of man-days in the SR, AD, DD and TR phases;
• total number of lines of code in each subsystem divided by the total number of man-days expended on that subsystem in the DD phase.


The first value gives the global 'productivity' value. The second set of values gives the productivity of each subsystem.

Productivity estimates used should also be stated. Significant differences between the estimated and actual productivity values should be explained.

4.6.4 PHD/4 QUALITY ASSURANCE REVIEW

This section should review the actions taken to achieve quality, particularly reliability, availability, maintainability and safety.

This section should summarise and discuss the effort expended upon activities of:
• software verification and validation;
• software quality assurance.

This section should analyse the quality of all the deliverables by presenting and discussing the number of:
• RIDs per document;
• SPRs and SCRs per month during the DD, TR and OM phases;
• SPRs and SCRs per subsystem per month during the DD, TR and OM phases.

Measurements of Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR) should be made in the OM phase. SPR, SCR, SMR and operations log data may be useful for the calculation. Measurements should be made at regular intervals (e.g. monthly) and trends monitored.

Average availability may be calculated from the formula MTBF/(MTBF+MTTR). In addition, the durations of all periods when the software was unavailable should be plotted in a histogram to give a picture of the number and frequency of periods of unavailability.
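A worked sketch of the calculation, using invented figures for one monthly measurement interval, is given below. In practice MTBF and MTTR would be derived from the SPR and operations log data mentioned above.

uptime_hours = 720.0                 # hours in the measurement interval
failures = 3                         # failures recorded in the interval
repair_hours = [4.0, 2.5, 5.5]       # downtime per failure, from the log

mtbf = uptime_hours / failures                 # 240.0 hours
mttr = sum(repair_hours) / len(repair_hours)   # 4.0 hours
availability = mtbf / (mtbf + mttr)            # approximately 0.984

print("MTBF %.1f h, MTTR %.1f h, availability %.3f"
      % (mtbf, mttr, availability))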

All 'safety' incidents where the software caused a hazard to people or property should be reported. Actions taken to prevent future incidents should be described.


4.6.5 PHD/5 FINANCIAL REVIEW

This section is optional. If included, this section should state the estimated and actual cost of the project. Costs should be divided into labour costs and non-labour costs.

4.6.6 PHD/6 CONCLUSIONS

This section should summarise the lessons learned from the project.

4.6.7 PHD/7 PERFORMANCE OF THE SYSTEM IN THE OM PHASE

This section should summarise in both quantitative and qualitative terms whether the software performance fell below, achieved or exceeded the user requirements, the software requirements and the expectations of the designers.

Performance requirements in the software requirements document may be stated in terms of:
• worst case values;
• nominal values;
• best case values.

These values, if specified, should be used as benchmarks of performance.

The results of any acceptance tests required for final acceptance should be summarised here.


CHAPTER 5 LIFE CYCLE MANAGEMENT ACTIVITIES

5.1 INTRODUCTION

Software maintenance is a major activity, and needs to be well managed to be effective. ESA PSS-05-0 identifies four software management functions:
• software project management;
• software configuration management;
• software verification and validation;
• software quality assurance.

This chapter discusses the planning of these activities in the operations and maintenance phase.

The plans formulated by the developer at the end of the detailed design and production phase should be applied, with updates as necessary, by the development organisation throughout the transfer phase and operations and maintenance phase until final acceptance.

ESA PSS-05-0 does not mandate the production of any plans after final acceptance. However the maintenance organisation will need to have plans to be effective, and is strongly recommended to:
• produce its own SPMP and SQAP;
• reuse and improve the SCMP and SVVP of the development organisation.

5.2 SOFTWARE PROJECT MANAGEMENT

Until final acceptance, OM phase activities that involve the developer must be carried out according to the plans defined in the SPMP/TR (OM01). After final acceptance, the software maintenance team takes over responsibility for the software. It should produce its own SPMP. Guidelines for producing an SPMP are contained in ESA PSS-05-08, Guide to Software Project Management [Ref 3].


In the maintenance phase, both plans should define the:
• organisation of the staff responsible for software maintenance;
• work packages;
• resources;
• activity schedule.

5.2.1 Organisation

A maintenance organisation must be designated for every software product in operational use (OM04). A software manager should be appointed for every maintenance organisation. The software manager should define the roles and responsibilities of the maintenance staff in the software project management plan. Individuals should be identified with overall responsibility for:
• each subsystem;
• software configuration management;
• software verification and validation;
• software quality assurance.

Major adaptations of the software for new requirements or environmental changes should be handled by means of the evolutionary life cycle, in which a development project runs in parallel with operations and maintenance. Sometimes an evolutionary approach is decided upon at the start of the project. More commonly the need for such an approach only becomes apparent near the time the software is first released. Whichever way the evolutionary idea arises, software managers should organise their staff accordingly, for example by having separate teams for maintenance and development. Ideally, software engineers should not work on both teams at the same time, although the teams may share the same software manager, software librarian and software quality assurance engineer.

5.2.2 Work packages

Work packages should be defined when software change requests are approved. One work package should produce only one software modification report, but may cover several change requests and problem reports.


5.2.3 Resources

The operations and maintenance phase of software normally consumes more resources than all the other phases added together. Studies have shown that large organisations spend about 50% to 70% of their available effort maintaining existing software [Ref 9, 10]. The high relative cost of maintenance is due to the influence of several factors such as the:
• duration of the operations and maintenance phase being much longer than all the other phases added together;
• occurrence of new requirements that could not have been foreseen when the software was first specified;
• presence of a large number of faults in most delivered software.

Resources must be assigned to the maintenance of a product until it is retired (OM07). Software managers must estimate the resources required. The estimates have two components:
• predicted non-labour costs;
• predicted labour costs (i.e. effort).

The costs of the computer equipment and consumables are the major non-labour costs. The cost of travel to user sites and the cost of consumables may have to be considered.

Labour costs can be assessed from:
• the level of effort required from the development organisation to support the software in the transfer phase;
• the number of user requirements outstanding;
• past maintenance projects carried out by the organisation;
• reliability and maintainability data;
• size of the system;
• number of subsystems;
• type of system;
• availability requirements.

Software managers should critically analyse this information. Lehman's laws and a code size method are outlined below to assist the analysis.


5.2.3.1 Lehman's Laws

Lehman and Belady have examined the growth and evolution of several large software systems and formulated five laws to summarise their data [Ref 12]. Three of the laws are directly relevant to estimating software maintenance effort. Simply stated they say:
• the characteristics of a program are fixed from the time when it is first designed and coded (the 'law of large program evolution');
• the rate at which a program develops is approximately constant and (largely) independent of the resources devoted to its development (the 'law of organisational stability');
• the incremental change in each release of a system is approximately constant (the 'law of conservation of familiarity').

The inherent characteristics of a program, introduced when it is first designed and coded, will have a fundamental effect on the amount of maintenance a program needs. Poorly designed software will incur higher maintenance costs. Removing the inherent design defects amounts to rewriting the program from scratch.

The law of diminishing returns applies at a very early stage in software maintenance. No matter how much effort is applied to maintaining software, the need to implement and verify modifications one at a time limits how fast changes can be made. If the mean time to repair the software does not change when the number of maintenance staff is increased, the apparent inefficiency is not the fault of the maintenance staff, but is due to the sequential nature of the work.

Software maintenance projects settle into a cycle of change. Modifications are requested, implemented and released to users at regular intervals. The release rate is constrained by the amount of change users can cope with.

In summary, software managers must be careful not to waste effort by:
• maintaining software that is unmaintainable;
• staffing the maintenance team above a saturation threshold;
• releasing changes too fast for the users to cope with.


5.2.3.2 Code size maintenance cost estimating method

The effort required to produce L lines of code when the productivity is P is L divided by P. For example, the addition or modification of 500 lines of code in a 20000 line of code system that took 1000 man-days to develop would require 500/(20000/1000) = 25 man-days. This example assumes that the productivity observed in maintenance is the same as that in development. However, productivity in maintenance often falls below that in development because of the need to assess the cause of a problem and perform regression tests.
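The method is easily restated as a small function, sketched below in Python. The derating factor, which reduces the assumed maintenance productivity relative to development, is an illustrative assumption and not a figure from the Standards.

def maintenance_effort(changed_loc, system_loc, dev_man_days,
                       maintenance_factor=1.0):
    """Effort (man-days) to change 'changed_loc' lines, assuming the
    development productivity scaled by 'maintenance_factor'."""
    productivity = system_loc / dev_man_days     # lines of code per man-day
    return changed_loc / (productivity * maintenance_factor)

print(maintenance_effort(500, 20000, 1000))      # 25.0 man-days, as above
print(maintenance_effort(500, 20000, 1000, 0.7)) # approximately 35.7 man-days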

5.2.4 Activity schedule

The activity schedule should show the dates of the next releases, and the work packages that must be completed for each release. This should be presented in the form of a Gantt chart.

5.3 SOFTWARE CONFIGURATION MANAGEMENT

A good configuration management system is essential for effective software maintenance, not only for the software but also for the tools and ancillary software items. Software configuration management is discussed in ESA PSS-05-09, Guide to Software Configuration Management [Ref 4].

The maintenance team should define and describe its software configuration management system in a Software Configuration Management Plan (SCMP). The team may reuse the plan and system made by the development team, or produce its own. The SCMP must define the procedures for software modification (OM05). The procedures should be based upon the change process discussed in Section 2.3.1. The SCMP should define, step by step, how to modify the software, omitting only the details specific to individual changes.

5.4 SOFTWARE VERIFICATION AND VALIDATION

Software verification and validation consumes a significant proportion of effort during the operations and maintenance phase because of the need to check not only that changes work correctly, but also that they have no adverse side effects. Software verification and validation are discussed in ESA PSS-05-10, Guide to Software Verification and Validation [Ref 5].


The maintenance team should reuse the Software Verification and Validation Plan (SVVP) produced during the development phases. It should be updated and extended as necessary.

The Software Review Board (SRB) review procedure should have been defined in the SVVP/DD. This procedure may need to be updated, or even added to the SVVP, either in the TR or OM phase. The SRB should use the technical review procedure described in ESA PSS-05-10, Guide to Software Verification and Validation, modified as indicated in Section 2.3.1.2.

Walkthroughs and inspections of the design and code are not only useful for training new staff; they are also essential for maintaining and improving quality. Even if software inspection procedures were not used during development, consideration should be given to introducing them in the maintenance phase, especially for subsystems that have serious quality problems. All new procedures should be defined in updates to the SVVP.

Every change to the software should be examined from the point of view of:
• what should be done to verify it;
• whether the SVVP defines the verification procedure.

The SVVP will have to be extended to include new test designs, test cases and test procedures for:
• tests of new or modified software components;
• regression tests of the software.

Regression tests may be:
• random selections of existing tests;
• targeted selections of existing tests;
• new tests designed to expose adverse side effects.

Targeted tests and new tests depend upon knowledge of the actual change made. All such regression tests imply a new test design; the first two kinds, however, reuse existing test cases and test procedures.
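The first two strategies can be sketched as follows. The mapping from test procedures to the modules they exercise is invented for the illustration, and would in practice come from the traceability information in the SVVP.

import random

tests = {                        # test procedure -> modules it exercises
    "test_login":  {"auth"},
    "test_decode": {"telemetry", "checksum"},
    "test_report": {"report", "telemetry"},
    "test_backup": {"archive"},
}

def random_selection(tests, fraction=0.5):
    """Random selection of a fraction of the existing tests."""
    names = sorted(tests)
    return random.sample(names, max(1, int(len(names) * fraction)))

def targeted_selection(tests, changed_modules):
    """Targeted selection of the tests that exercise changed modules."""
    return sorted(name for name, modules in tests.items()
                  if modules & changed_modules)

print(random_selection(tests))                   # e.g. ['test_backup', 'test_login']
print(targeted_selection(tests, {"telemetry"}))  # ['test_decode', 'test_report']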

Some of the acceptance tests may require a long period of time to complete, and are therefore not required for provisional acceptance. The reports of these tests should be compiled in the operations and maintenance phase. All the tests must be completed before final acceptance (OM02).


5.5 SOFTWARE QUALITY ASSURANCE

Software quality assurance activities must be continued during the operations and maintenance phase. ESA PSS-05-0 specifies ten mandatory practices for the OM phase. ESA PSS-05-11, Guide to Software Quality Assurance, contains guidelines on how to check that these practices are carried out [Ref 6]. In particular, staff responsible for SQA should:
• be members of the Software Review Board;
• perform functional and physical audits before each release of the software;
• monitor quality, reliability, availability, maintainability and safety.

The SQAP/TR should define the SQA activities that the development team should carry out until final acceptance of the software. The maintenance organisation that takes over after final acceptance should examine that plan, consider the current status of the software and then produce its own plan.


APPENDIX A GLOSSARY

A.1 LIST OF TERMS

Terms used in this document are consistent with ESA PSS-05-0 [Ref 1] and ANSI/IEEE Std 610.12 [Ref 2]. Additional terms not defined in these standards are listed below.

building

The process of compiling and linking a software system.

development environment

The environment of computer hardware, operating software and external software in which the software system is developed.

end user

A person who utilises the products or services of a system.

installation

The process of copying a software system into the target environment and configuring the target environment to make the software system usable.

operator

A person who controls and monitors the hardware and software of a system.

operational environment

The target environment, external software and users.

target environment

The environment of computer hardware, operating software and external software in which the software system is used.


user

A person who utilises the products or services of a system, or a person who controls and monitors the hardware and software of a system (i.e. an end user, an operator, or both).

A.2 LIST OF ACRONYMS

AD Architectural Design
ANSI American National Standards Institute
AT Acceptance Test
BSSC Board for Software Standardisation and Control
CASE Computer Aided Software Engineering
CI Configuration Item
DD Detailed Design and production
OM Operations and Maintenance
PHD Project History Document
SCM Software Configuration Management
SCMP Software Configuration Management Plan
SCR Software Change Request
SMR Software Modification Report
SPM Software Project Management
SPMP Software Project Management Plan
SPR Software Problem Report
SQA Software Quality Assurance
SQAP Software Quality Assurance Plan
SR Software Requirements definition
SRD Software Requirements Document
SRN Software Release Note
STD Software Transfer Document
SVV Software Verification and Validation
SVVP Software Verification and Validation Plan
TR Transfer
UR User Requirements definition
URD User Requirements Document


APPENDIX B REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0, Issue 2, February 1991.

2. Guide to the Software Architectural Design Phase, ESA PSS-05-04, January 1992.

3. Guide to Software Project Management, ESA PSS-05-08, Issue 1 Draft, July 1994.

4. Guide to Software Configuration Management, ESA PSS-05-09, Issue 1, November 1992.

5. Guide to Software Verification and Validation, ESA PSS-05-10, Issue 1, February 1994.

6. Guide to Software Quality Assurance, ESA PSS-05-11, Issue 1, July 1993.

7. IEEE Standard Glossary of Software Engineering Terminology, ANSI/IEEE Std 610.12-1990.

8. Standard Dictionary of Measures to Produce Reliable Software, ANSI/IEEE Std 982.1-1988.

9. Software Engineering, I. Sommerville, Addison Wesley, Fourth Edition, 1992.

10. Software Maintenance Management, B.P. Lientz and E.B. Swanson, Addison Wesley, 1980.

11. Software Evolution, R.J. Arthur, Wiley, 1988.

12. Program Evolution: Processes of Software Change, M.M. Lehman and L. Belady, Academic Press, 1985.

13. Structured Testing: A Software Testing Methodology Using the Cyclomatic Complexity Metric, T.J. McCabe, National Bureau of Standards Special Publication 500-99, 1982.

14. Maintenance support for object-oriented programs, N. Wilde and R. Huitt, IEEE Transactions on Software Engineering, Vol 18 Number 12, IEEE Computer Society, December 1992.

15. Support for maintaining object-oriented programs, M. Lejter, S. Meyers and S.P. Reiss, IEEE Transactions on Software Engineering, Vol 18 Number 12, IEEE Computer Society, December 1992.


APPENDIX C MANDATORY PRACTICES

This appendix is repeated from ESA PSS-05-0, appendix D.7.

OM01 Until final acceptance, OM phase activities that involve the developer shall be carried out according to the plans defined in the SPMP/TR.

OM02 All the acceptance tests shall have been successfully completed before the software is finally accepted.

OM03 Even when no contractor is involved, there shall be a final acceptance milestone to arrange the formal hand-over from software development to maintenance.

OM04 A maintenance organisation shall be designated for every software product in operational use.

OM05 Procedures for software modification shall be defined.

OM06 Consistency between code and documentation shall be maintained.

OM07 Resources shall be assigned to a product's maintenance until it is retired.

OM08 The SRB ... shall authorise all modifications to the software.

OM09 The statement of final acceptance shall be produced by the initiator, on behalf of the users, and sent to the developer.

OM10 The PHD shall be delivered to the initiator after final acceptance.


APPENDIX D INDEX

adaptive maintenance, 3
availability, 40
black box test, 23
change control, 10
code reformatter, 32
code restructuring tool, 32
coding standards, 19
cohesion, 18
complexity, 19
configuration item list, 27
consistency, 19
corrective maintenance, 3
coupling, 19
criticality, 8, 13
cyclomatic complexity, 19
detailed design and code review, 22
emergency release, 25
ESA PSS-05-04, 18, 19
ESA PSS-05-08, 43
ESA PSS-05-09, 10, 29, 47
ESA PSS-05-10, 19, 29, 47
ESA PSS-05-11, 49
evolutionary life cycle, 44
Gantt chart, 38
inherit, 32
install, 27
labour cost, 45
Lehman's Laws, 46
line of code, 38
maintainability, 20
maintenance, 3
major release, 24
McCabe, 19
mean time between failures, 20, 40
mean time to repair, 20, 40
milestone trend chart, 38
minor release, 24
navigation tool, 32
non-labour cost, 45
object-oriented software, 32
OM01, 4, 43
OM02, 4, 28, 49
OM03, 4
OM04, 4, 44
OM05, 47
OM06, 21
OM07, 45
OM08, 14
OM09, 30
OM10, 4
organisation, 44
overload, 32
patch, 24
perfective maintenance, 3
performance, 18
PHD, 28
polymorphism, 19
portability, 20
pretty printer, 32
productivity, 39
Project History Document, 28, 35
regression test, 48
release, 23
release number, 25
reliability, 20
resource, 45
resource consumption, 18
reverse engineering tool, 33
safety, 21
SCM14, 25
SCM38, 27
SCM39, 22
security, 21
SMR, 17
software maintenance manager, 4
software manager, 4
software modification report, 17
software problem report, 7
software release note, 25
software review board, 14, 28
Software Transfer Document, 4
SPR, 7
SRB, 29
SRN, 25
statement of provisional acceptance, 30
SVV03, 26
test, 22
TR02, 29
traceability matrix, 22
urgency, 8, 13
white box test, 23
work package, 44
workaround, 14


ESA PSS-05-08 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to software project management

Prepared by:
ESA Board for Software Standardisation and Control (BSSC)

Approved by:
The Inspector General, ESA

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: ESA PSS-05-08 Guide to Software Project Management

2. ISSUE 3. REVISION 4. DATE 5. REASON FOR CHANGE

1 0 1994 First issue

1 1 1995 Minor updates for publication

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved, 15th June 1995
Telematics Supervisory Board

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division,
ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E1
ISSN 0379-4059

Copyright © 1995 by European Space Agency


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION
1.1 PURPOSE
1.2 OVERVIEW

CHAPTER 2 SOFTWARE PROJECT MANAGEMENT
2.1 INTRODUCTION
2.2 THE PROJECT MANAGER'S ROLE AND RESPONSIBILITIES
2.3 PROJECT INTERFACES
2.4 PROJECT PLANNING
2.4.1 Define products
2.4.2 Define activities
2.4.2.1 Process model
2.4.2.1.1 Selection of life cycle approach
2.4.2.1.2 Tailoring of the phase process models
2.4.2.1.3 Selection of methods and tools
2.4.2.2 Work package definition
2.4.3 Estimate resources and duration
2.4.3.1 Defining human resources
2.4.3.2 Estimating effort
2.4.3.3 Estimating non-labour costs
2.4.3.4 Estimating duration
2.4.4 Define activity network
2.4.5 Define schedule and total cost
2.5 LEADING THE PROJECT
2.6 TECHNICAL MANAGEMENT OF PROJECTS
2.7 MANAGEMENT OF PROJECT RISKS
2.7.1 Experience factors
2.7.1.1 Experience and qualifications of the project manager
2.7.1.2 Experience and qualifications of staff
2.7.1.3 Maturity of suppliers
2.7.2 Planning factors
2.7.2.1 Accuracy of estimates
2.7.2.2 Short timescales
2.7.2.3 Long timescales
2.7.2.4 Single-point failures
2.7.2.5 Location of staff
2.7.2.6 Definition of responsibilities
2.7.2.7 Staffing profile evolution
2.7.3 Technological factors
2.7.3.1 Technical novelty
2.7.3.2 Maturity and suitability of methods
2.7.3.3 Maturity and efficiency of tools
2.7.3.4 Quality of commercial software
2.7.4 External factors
2.7.4.1 Quality and stability of user requirements
2.7.4.2 Definition and stability of external interfaces
2.7.4.3 Quality and availability of external systems
2.8 MEASURING PROJECT PROCESSES AND PRODUCTS
2.8.1 Assess environment and define primary goal
2.8.2 Analyse goals and define metrics
2.8.2.1 Process metrics
2.8.2.2 Product metrics
2.8.3 Collect metric data and measure performance
2.8.4 Improving performance
2.9 PROJECT REPORTING
2.9.1 Progress reports
2.9.2 Work package completion reports
2.9.3 Timesheets

CHAPTER 3 SOFTWARE PROJECT MANAGEMENT METHODS
3.1 INTRODUCTION
3.2 PROJECT PLANNING METHODS
3.2.1 Process modelling methods
3.2.2 Estimating methods
3.2.2.1 Historical comparison
3.2.2.2 COCOMO
3.2.2.3 Function Point Analysis
3.2.2.4 Activity distribution analysis
3.2.2.5 Delphi method
3.2.2.6 Integration and System Test effort estimation
3.2.2.7 Documentation effort estimation
3.2.3 Activity network methods
3.2.4 Scheduling presentation methods
3.3 PROJECT RISK MANAGEMENT METHODS
3.3.1 Risk table
3.3.2 Risk matrix
3.4 PROJECT REPORTING METHODS
3.4.1 Progress tables and charts
3.4.2 Milestone trend charts

CHAPTER 4 SOFTWARE PROJECT MANAGEMENT TOOLS
4.1 INTRODUCTION
4.2 PROJECT PLANNING TOOLS
4.2.1 General purpose project planning tools
4.2.2 Process modelling tools
4.2.3 Estimating Tools
4.3 PROJECT RISK ANALYSIS TOOLS
4.4 PROJECT REPORTING TOOLS
4.5 PROCESS SUPPORT TOOLS

CHAPTER 5 THE SOFTWARE PROJECT MANAGEMENT PLAN
5.1 INTRODUCTION
5.2 STYLE
5.3 RESPONSIBILITY
5.4 MEDIUM
5.5 SERVICE INFORMATION
5.6 CONTENTS
5.7 EVOLUTION

CHAPTER 6 THE SOFTWARE PROJECT PROGRESS REPORT
6.1 INTRODUCTION
6.2 STYLE
6.3 RESPONSIBILITY
6.4 MEDIUM
6.5 SERVICE INFORMATION
6.6 CONTENTS

APPENDIX A GLOSSARY
APPENDIX B REFERENCES
APPENDIX C MANDATORY PRACTICES
APPENDIX D PROJECT MANAGEMENT FORMS
APPENDIX E INDEX


PREFACE

This document is one of a series of guides to software engineering produced by the Board for Software Standardisation and Control (BSSC), of the European Space Agency. The guides contain advisory material for software developers conforming to ESA's Software Engineering Standards, ESA PSS-05-0. They have been compiled from discussions with software engineers, research of the software engineering literature, and experience gained from the application of the Software Engineering Standards in projects.

Levels one and two of the document tree at the time of writing are shown in Figure 1. This guide, identified by the shaded box, provides guidance about implementing the mandatory requirements for software project management described in the top level document ESA PSS-05-0.

[Figure 1: ESA PSS-05-0 document tree. Level 1: ESA PSS-05-0 Software Engineering Standards. Level 2: PSS-05-01 Guide to the Software Engineering Standards; PSS-05-02 UR Guide; PSS-05-03 SR Guide; PSS-05-04 AD Guide; PSS-05-05 DD Guide; PSS-05-06 TR Guide; PSS-05-07 OM Guide; PSS-05-08 SPM Guide (this guide); PSS-05-09 SCM Guide; PSS-05-10 SVV Guide; PSS-05-11 SQA Guide.]

The Guide to the Software Engineering Standards, ESA PSS-05-01, contains further information about the document tree. The interested reader should consult this guide for current information about the ESA PSS-05-0 standards and guides.

The following past and present BSSC members have contributed to the production of this guide: Carlo Mazza (chairman), Gianfranco Alvisi, Michael Jones, Bryan Melton, Daniel de Pablo and Adriaan Scheffer.


The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and Guides, and all those software engineers in ESA and Industry who have made contributions.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat
Attention of Mr M Jones
ESOC
Robert Bosch Strasse 5
D-64293 Darmstadt
Germany

BSSC/ESTEC Secretariat
Attention of Mr U Mortensen
ESTEC
Postbus 299
NL-2200 AG Noordwijk
The Netherlands


CHAPTER 1 INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA) [Ref 1].

ESA PSS-05-0 requires that every software project be planned, organised, staffed, led, monitored and controlled. These activities are called 'Software Project Management' (SPM). Each project must define its Software Project Management activities in a Software Project Management Plan (SPMP).

This guide defines and explains what software project management is, provides guidelines on how to do it, and defines in detail what a Software Project Management Plan should contain.

This guide should be read by software project managers, team leaders, software quality assurance managers, senior managers and initiators of software projects.

1.2 OVERVIEW

Chapter 2 contains a general discussion of the principles of software project management, expanding upon ESA PSS-05-0. Chapter 3 discusses methods for software project management that can be used to support the activities described in Chapter 2. Chapter 4 discusses tools for software project management. Chapter 5 describes how to write the SPMP. Chapter 6 discusses progress reporting.

All the mandatory practices in ESA PSS-05-0 relevant to software project management are repeated in this document. The identifier of the practice is added in parentheses to mark a repetition. This document contains no new mandatory practices.


CHAPTER 2 SOFTWARE PROJECT MANAGEMENT

2.1 INTRODUCTION

Software project management is 'the process of planning, organising, staffing, monitoring, controlling and leading a software project' [Ref 3]. Every software project must have a manager who leads the development team and is the interface with the initiators, suppliers and senior management. The project manager:
• produces the Software Project Management Plan (SPMP);
• defines the organisational roles and allocates staff to them;
• controls the project by informing staff of their part in the plan;
• leads the project by making the major decisions and by motivating staff to perform well;
• monitors the project by measuring progress;
• reports progress to initiators and senior managers.

[Figure 2.1: Management control loop. Standards and user requirements feed both the planning process, which makes the SPMP, SCMP, SVVP and SQAP, and the production process, which makes the products; the plans control production, and reports from production are fed back to planning for monitoring.]

Figure 2.1 shows the control and monitoring loop required in every software project. Standards and user requirements are the primary input to both the planning and production processes. Plans are made for project management, configuration management, verification and validation and quality assurance. These plans control production. Reports, such as progress reports, timesheets, work package completion reports, quality assurance reports and test reports, provide feedback to the planning process. Plans may be updated to take account of these reports.

The project manager is the driving force in the management control loop. The following sections define the role of the project manager and discuss project management activities.

2.2 THE PROJECT MANAGER'S ROLE AND RESPONSIBILITIES

When they are appointed, project managers should be given terms of reference that define their:
• objectives;
• responsibilities;
• limits of authority.

The objective of every project manager is to deliver the product on time, within budget and with the required quality. Although the precise responsibilities of a project manager will vary from company to company and from project to project, they should always include planning and forecasting. Three additional areas of management responsibility defined by Mintzberg [Ref 17] are:
• interpersonal responsibilities, which include:
  - leading the project team;
  - liaising with initiators, senior management and suppliers;
  - being the 'figurehead', i.e. setting the example to the project team and representing the project on formal occasions.
• informational responsibilities, which include:
  - monitoring the performance of staff and the implementation of the project plan;
  - disseminating information about tasks to the project team;
  - disseminating information about project status to initiators and senior management;
  - acting as the spokesman for the project team.
• decisional responsibilities, which include:
  - allocating resources according to the project plan, and adjusting those allocations when circumstances dictate (i.e. the project manager has responsibility for the budget);
  - negotiating with the initiator about the optimum interpretation of contractual obligations, with the company management for resources, and with project staff about their tasks;
  - handling disturbances to the smooth progress of the project such as equipment failures and personnel problems.

2.3 PROJECT INTERFACES

Project managers must identify the people or groups the project deals with, both within the parent organisation and outside. A project may have interfaces to:
• initiators;
• end users;
• suppliers;
• subcontractors;
• the prime contractor;
• other subsystem developers.

When defining external project interfaces, the project manager should:
• ensure that a single, named point of contact exists both within the project team and each external group;
• channel all communications between the project and external groups through as few people as possible;
• ensure that no person in the project has to liaise with more than seven external groups (the 'rule of seven' principle).


2.4 PROJECT PLANNING

Whatever the size of the project, good planning is essential if it is to succeed. The software project planning process is shown in Figure 2.4. The five major planning activities are:
• define products;
• define activities;
• estimate resources and duration;
• define activity network;
• define schedule and total cost.

This process can be applied to a whole project or to a phase of a project. Each activity may be repeated several times to make a feasible plan. In principle, every activity in Figure 2.4 can be linked to the other activities by 'feedback' loops, in which information gained at a later stage in planning is used to revise earlier planning decisions. These loops have been omitted from the figure for simplicity. Iteration is essential for optimising the plan.

[Figure 2.4: Planning process. The five activities (define products, define activities, estimate resources and duration, define activity network, define schedule) transform the standards, the URD, SRD or ADD, historical and supplier cost data, risk and environmental considerations, and time and resource constraints into product definitions, a process model, work packages (with inputs, activities, outputs, resources and duration), a PERT chart with work package floats and the total project duration, a Gantt chart with work package start and end dates, resource requirements, the project organisation and the total cost.]


The inputs to software project planning are:
• User Requirements Document (URD), Software Requirements Document (SRD) or Architectural Design Document (ADD), according to the project phase;
• software standards for products and procedures;
• historical data for estimating resources and duration;
• supplier cost data;
• risks to be considered;
• environmental factors, such as new technology;
• time constraints, such as delivery dates;
• resource constraints, such as the availability of staff.

The primary output of project planning is the Software Project Management Plan. This should contain:
• definitions of the deliverable products;
• a process model defining the life cycle approach and the methods and tools to be used;
• the work breakdown structure, i.e. a hierarchically structured set of work packages defining the work;
• the project organisation, which defines the roles and reporting relationships in the project;
• an activity network defining the dependencies between work packages, the total time for completion of the project, and the float times of each work package (see the sketch after this list);
• a schedule of the project, defining the start and end times of each work package;
• a list of the resources required to implement the project;
• a total cost estimate.
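The sketch below illustrates how an activity network yields the total project duration and the float of a work package. The activities, durations (in days) and dependencies are invented; a forward pass computes the earliest finish time of each activity, and the longest path is the critical path.

durations = {"SR": 30, "AD": 40, "DD-sub1": 60, "DD-sub2": 45, "TR": 15}
depends_on = {"AD": ["SR"], "DD-sub1": ["AD"], "DD-sub2": ["AD"],
              "TR": ["DD-sub1", "DD-sub2"]}

earliest_finish = {}

def finish(activity):
    """Earliest finish time of an activity (forward pass, recursive)."""
    if activity not in earliest_finish:
        start = max((finish(dep) for dep in depends_on.get(activity, [])),
                    default=0)
        earliest_finish[activity] = start + durations[activity]
    return earliest_finish[activity]

total = max(finish(a) for a in durations)
print("Total project duration:", total, "days")                           # 145 days
print("Float of DD-sub2:", finish("DD-sub1") - finish("DD-sub2"), "days")  # 15 days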

The following subsections describe the five major activities.


2.4.1 Define products

The first activity in project planning is to define the products to be delivered. Table 2.4.1 summarises the software products of each phase as required by ESA PSS-05-0.

Phase | Input Product  | Output Product
SR    | URD            | SRD
AD    | SRD            | ADD
DD    | ADD            | DDD, code, SUM
TR    | DDD, code, SUM | STD

Table 2.4.1: Phase input and output products

While standards define the contents of documents, some consideration should be given as to how each section of the document will be prepared. Care should be taken not to overconstrain the analysis and design process in defining the products. Some consideration should also be given to the physical characteristics of the products, such as the medium (e.g. paper or computer file), language (e.g. English, French) or programming language (in the case of code).

2.4.2 Define activities

Once the products are specified, the next step is to define a process model and a set of work packages.

2.4.2.1 Process model

A software process model should define the:
• activities in a software development process;
• inputs and outputs of each activity;
• roles played in each activity.

The software development process must be based upon the ESA PSS-05-0 life cycle model (SLC03). The key decisions in software process modelling are the:
• selection of the life cycle approach, i.e. the pattern of life cycle phases;
• modifications, if any, to the phase process models (see below);
• selection of methods and tools to support the activities in the phase process models.
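A process model of this kind can be recorded quite directly for planning purposes. The sketch below abbreviates a few activities of the SR phase process model (shown later in Figure 2.4.2.1.2A) in one possible representation; it is an illustration, not a prescribed format.

from dataclasses import dataclass, field

@dataclass
class Activity:
    """One activity of a phase process model: inputs, outputs, roles."""
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    roles: list = field(default_factory=list)

sr_phase = [
    Activity("Examine URD", ["Approved URD"], ["Examined URD"], ["analyst"]),
    Activity("Construct model", ["Examined URD"], ["Logical model"], ["analyst"]),
    Activity("Specify requirements", ["Logical model"], ["Draft SRD"], ["analyst"]),
    Activity("Write test plan", ["Draft SRD"], ["Draft SVVP/ST"], ["tester"]),
]

for a in sr_phase:
    print("%s: %s -> %s" % (a.name, ", ".join(a.inputs), ", ".join(a.outputs)))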


2.4.2.1.1 Selection of life cycle approach

ESA PSS-05-0 defines three life cycle approaches: waterfall, incremental and evolutionary.

[Figure 2.4.2.1.1A: Waterfall approach — the UR, SR, AD, DD, TR and OM phases are executed in sequence.]

The waterfall approach shown in Figure 2.4.2.1.1A executes each phase of the life cycle in sequence. Revisiting earlier phases is permitted to correct errors. The simple waterfall approach is suitable when:
• a set of high quality, stable user requirements exists;
• the length of the project is short (i.e. two years or less);
• the users require the complete system all at once.


[Figure 2.4.2.1.1B: Incremental approach — a single pass through the UR, SR and AD phases is followed by multiple, numbered DD, TR and OM phases, one per increment.]

The incremental approach to development has multiple DD, TR and OM phases as shown in Figure 2.4.2.1.1B. This approach can be useful if:
• delivery of the software has to be according to the priorities set on the user requirements;
• it is necessary to improve the efficiency of integration of the software with other parts of the system (e.g. the telecommanding and telemetry subsystems in a spacecraft control system may be required early for system tests);
• early evidence that products will be acceptable is required.


[Figure: a sequence of developments DEV1, DEV2 and DEV3, each followed by an operations and maintenance phase (OM1, OM2, OM3) that overlaps the next development]

Figure 2.4.2.1.1C: Evolutionary approach

The evolutionary approach to development shown in Figure 2.4.2.1.1C consists of multiple waterfall life cycles with overlap between operations, maintenance and development. This approach may be used if, for example:
• some user experience is required to refine and complete the requirements (shown by the dashed line within the OM boxes);
• some parts of the implementation may depend on the availability of future technology;
• some new user requirements are anticipated but not yet known;
• some requirements may be significantly more difficult to meet than others, and it is decided not to allow them to delay a usable delivery.

2.4.2.1.2 Tailoring of the phase process models

The process models for the SR, AD and DD phases are shown in Figures 2.4.2.1.2A, B and C.


[Figure: the approved URD is examined, a logical model is constructed, requirements are specified and the system test plan is written, producing the draft SRD and draft SVVP/ST; after the SR/R and accepted RIDs, the AD phase plans (SPMP/AD, SCMP/AD, SQAP/AD, SVVP/AD) are written and the SRD is approved; the activities are supported by CASE tools, methods and prototyping]

Figure 2.4.2.1.2A: SR phase process model

[Figure: the approved SRD is examined, a physical model is constructed, the design is specified and the integration test plan is written, producing the draft ADD and draft SVVP/IT; after the AD/R and accepted RIDs, the DD phase plans (SPMP/DD, SCMP/DD, SQAP/DD, SVVP/DD) are written and the ADD is approved; the activities are supported by CASE tools, methods and prototyping]

Figure 2.4.2.1.2B: AD phase process model


[Figure: the approved ADD is examined, the detailed design is produced, test specifications are written, the code is written and tested, and the subsystems are integrated and tested, producing the draft DDD, SUM and draft SVVP test specifications (SVVP/AT, SVVP/ST, SVVP/IT, SVVP/UT); after the DD/R, accepted RIDs and accepted SPRs, the TR phase plans (SPMP/TR, SCMP/TR, SQAP/TR, SVVP/AT) are written and the DDD, SUM and code are approved; the activities are supported by CASE tools, methods, prototyping, test tools, static and dynamic analysers, debuggers, CM tools, walkthroughs and inspections]

Figure 2.4.2.1.2C: DD phase process model

Project managers should tailor these process models to the needs of their projects. For example, the specification of unit tests often has to wait until the code has been produced, because after the detailed design stage there is insufficient information to define the test cases and procedures.

Further decomposition of the activities in the phase process models may be necessary. The granularity of the process model should be such that:
• managers have adequate visibility of activities at all times;
• every individual knows what to do at any time.

Detailed procedures, which should be part of the development organisation's quality system, should be referenced in the SPMP. They need not be repeated in the SPMP.

2.4.2.1.3 Selection of methods and tools

The project manager should, in consultation with the development team, select the methods and tools used for each activity in the phase process models. The ESA PSS-05 series of guides contains extensive information about methods and tools that may be used for software development [Ref 19].


2.4.2.2 Work package definition

The next step in planning is to decompose the work into 'work packages'. Activities in the process model are instantiated as work package tasks. The relationship between activities and tasks may be one-to-one (as in the case of constructing the logical model), or one-to-many (e.g. the code and unit test activity will be performed in many work packages).

Dividing the work into simple practical work packages requires a good understanding of the needs of the project and the logical relationships embedded in the process model. Some criteria for work package design are:
• coherence: tasks within a work package should have the same goal;
• coupling: work package dependencies should be minimised, so that team members can work independently;
• continuity: production work packages should be full-time to maximise efficiency;
• cost: bottom-level work packages should require between one man-week and one man-month of effort.

The level of detail in the work breakdown should be driven by the level of accuracy needed for estimating resources and duration in the next stage of planning. This may be too detailed for other purposes, such as progress reporting to senior management. In such cases the higher level work packages are used to describe the work.

The work packages should be hierarchically organised so that related activities, such as software project management, are grouped together. The hierarchy of work packages is called the 'Work Breakdown Structure' (WBS). Table 2.4.2.2 shows an example of a WBS for a medium-size project. In this example, each work package has a four-digit identifier allowing up to four levels in the WBS.


1000 Software Project Management
   1100 UR phase
      1110 SPMP/SR production
   1200 SR phase
      1210 SPM reports
      1220 SPMP/AD production
   1300 AD phase
      1310 SPM reports
      1320 SPMP/DD production
   1400 DD phase
      1410 SPM reports
      1420 SPMP/TR production
      1430 SPMP/DD updates
   1500 TR phase
      1510 SPM reports
2000 Software Production
   2100 UR phase
      2110 Requirements engineering
   2200 SR phase
      2210 Logical model
      2220 Prototyping
      2230 SRD draft
      2240 SRD final
   2300 AD phase
      2310 Physical model
      2320 Prototyping
      2330 ADD draft
      2340 ADD final
   2400 DD phase
      2410 Unit 1 production
         2411 Unit 1 DDD
         2412 Unit 1 code
         2413 Unit 1 test
      2420 Unit 2 production
         2421 Unit 2 DDD
         2422 Unit 2 code
         2423 Unit 2 test
      2430 Unit 3 production
         2431 Unit 3 DDD
         2432 Unit 3 code
         2433 Unit 3 test
      2440 Unit 4 production
         2441 Unit 4 DDD
         2442 Unit 4 code
         2443 Unit 4 test
      2450 Unit 5 production
         2451 Unit 5 DDD
         2452 Unit 5 code
         2453 Unit 5 test

Table 2.4.2.2: Example Work Breakdown Structure


      2460 Integration and test
         2461 Integration stage 1
         2462 Integration stage 2
         2463 Integration stage 3
         2464 Integration stage 4
      2470 SUM production
   2500 TR phase
      2510 Software installation
      2520 STD production
3000 Software Configuration Management
   3100 UR phase
      3110 SCMP/SR production
   3200 SR phase
      3210 Library maintenance
      3220 Status accounting
      3230 SCMP/AD production
   3300 AD phase
      3310 Library maintenance
      3320 Status accounting
      3330 SCMP/DD production
   3400 DD phase
      3410 Library maintenance
      3420 Status accounting
      3430 SCMP/TR production
   3500 TR phase
      3510 Library maintenance
      3520 Status accounting
4000 Software Verification and Validation
   4100 UR phase
      4110 SVVP/SR production
      4120 SVVP/AT plan production
      4130 UR review
   4200 SR phase
      4210 Walkthroughs
      4220 Tracing
      4230 SR review
      4240 SVVP/ST plan production
   4300 AD phase
      4310 Walkthroughs
      4320 Tracing
      4330 AD review
      4340 SVVP/IT plan production
   4400 DD phase
      4410 Unit review
         4411 Unit 1 review
         4412 Unit 2 review
         4413 Unit 3 review
         4414 Unit 4 review
         4415 Unit 5 review

Table 2.4.2.2: Example Work Breakdown Structure (continued)


      4420 SVVP/UT production
      4430 SVVP/IT production
      4440 SVVP/ST production
      4450 SVVP/AT production
      4460 System tests
      4470 DD review
   4500 TR phase
      4510 Acceptance tests
      4520 Provisional Acceptance review
5000 Software Quality Assurance
   5100 UR phase
      5110 SQAP/SR production
   5200 SR phase
      5210 SQA reports
      5220 SQAP/AD production
   5300 AD phase
      5310 SQA reports
      5320 SQAP/DD production
   5400 DD phase
      5410 SQA reports
      5420 SQAP/TR production
   5500 TR phase
      5510 SQA reports

Table 2.4.2.2: Example Work Breakdown Structure (continued)

The number of work packages in a project normally increases withproject size. Small projects (less than two man years) may have less thanten bottom-level work packages, and large projects (more than twenty manyears) more than a hundred. Accordingly, small projects will use one or twolevels, medium size projects two to four, and large projects four or five.Medium and large projects may use alphabetic as well as numericalidentifiers.
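The four-digit identifier scheme makes the position of a work package in the hierarchy computable from the identifier alone. The following is a minimal sketch in Python, assuming the numbering convention of Table 2.4.2.2, where trailing zeros mark unused levels:

    def wbs_level(wp_id):
        """Level of a work package in a four-digit WBS: '2000' -> 1, '2412' -> 4."""
        significant = wp_id.rstrip("0")
        return len(significant) if significant else 1

    def wbs_parent(wp_id):
        """Parent identifier: '2412' -> '2410', '2400' -> '2000', '2000' -> None."""
        level = wbs_level(wp_id)
        if level == 1:
            return None
        # zero out the digit at the current level
        return wp_id[:level - 1] + "0" * (len(wp_id) - level + 1)

    assert wbs_level("2412") == 4
    assert wbs_parent("2412") == "2410"
    assert wbs_parent("2400") == "2000"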

New work packages should be defined for new tasks that are identified during a project. The tasks should not be attached to existing, unrelated, work packages. Some projects find it useful to create an 'unplanned work' package specifically for miscellaneous activities that arise in every project. However, such 'unplanned work' packages should never be allocated more than a few per cent of the project resources. New work packages should be defined for tasks that involve significant expenditure.

Work packages are documented in the Work Packages, Schedule and Budget section of the SPMP (see Chapter 5). Work packages may reference standards, guidelines and manuals that define how to carry out the work. An example of a Work Package Description form is given in Appendix D. A form should be completed for every work package. Repetition of information should be minimised.

2.4.3 Estimate resources and duration

The next steps in planning are:
• defining human resources;
• estimating effort;
• estimating non-labour costs;
• estimating duration.

2.4.3.1 Defining human resources

Project managers should analyse the work packages and define the 'human resource requirements'. These define the roles required in the project. Examples of software project roles are:
• project manager;
• team leader;
• programmer;
• test engineer;
• software librarian;
• software quality assurance engineer.

The project manager must also define the relationships between the roles to enable the effective coordination and control of the project. The following rules should be applied when defining organisational structures:
• ensure that each member of the team reports to one and only one person (the 'unity of command' principle);
• ensure that each person has no more than seven people reporting directly to him or her (the 'rule of seven' principle).


[Figure: an organigram showing company management (Senior Manager, Company QA) above the project team (Project Manager, Team Leader, Software Librarian, SQA Engineer, Software Engineers), with a dotted line between the project SQA Engineer and Company QA]

Figure 2.4.3.1.1: Example organigram for a medium-size project

The project team structure should be graphically displayed in an organigram, as shown in Figure 2.4.3.1.1. The 'dotted line' relationship between the project SQA engineer and the Company QA is a violation of the 'unity of command' principle that is justified by the need to provide senior management with an alternative, independent view on quality and safety issues.

The senior manager normally defines the activities of the project manager by means of terms of reference (see Section 2.2). The project manager defines the activities of the project team by means of work package descriptions.

2.4.3.2 Estimating effort

The project manager, with the assistance of his team and, perhaps, external experts, should make a detailed analysis of the work packages and provide an estimate of the effort (e.g. number of man-hours, days or months) that will be required.

Where possible these estimates should be compared with historical data. The comparison method can work quite well when good historical data exist. Historical data should be adjusted to take account of factors such as staff experience, project risks, new methods and technology, and more efficient work practices.

Formula approaches such as Boehm's Constructive Cost Model (COCOMO) and Function Point Analysis (FPA) should not be used for making estimates at this stage of planning. These methods may be used for verifying the total cost estimate when the plan has been made. These methods are further discussed in Chapter 3.

2.4.3.3 Estimating non-labour costs

Non-labour costs are normally estimated from supplier data. They may include:
• commercial products that form part of the end product;
• commercial products that are used to make the end product but do not form part of it, e.g. tools;
• materials (i.e. consumables not included in overheads);
• internal facilities (e.g. computer and test facilities);
• external services (e.g. reproduction);
• travel and subsistence;
• packing and shipping;
• insurance.

2.4.3.4 Estimating duration

The duration of a work package may be calculated from effort estimates and historical productivity data, or from other practical considerations (e.g. the duration of an activity such as a review meeting might be fixed at one day). The duration of each work package is needed for building the activity network and calculating the total project duration in the next stage of planning.

Productivity estimates should describe the average productivity, not the maximum productivity. Studies show that miscellaneous functions can absorb up to about 50% of staff time [Ref 10], reducing the average productivity to much less than the peak. Furthermore, industrial productivity figures (such as the often quoted productivity range of 10 to 20 lines of code per day) are normally averaged over the whole development, and not just the coding stage.


2.4.4 Define activity network

An activity network represents the work packages in the project as a set of nodes with arrows linking them. A sequence of arrows defines a path, or part of a path, through the project. Circular paths are not allowed in an activity network. The primary objective of building an activity network is to design a feasible pattern of activities that takes account of all dependencies. Two important by-products of constructing the activity network are the:
• critical path;
• work package float.

The critical path is the longest path through the network in terms of the total duration of activities. The time to complete the critical path is the time to complete the project.

The float time of a work package is the difference between its earliest and latest start (or end) times, and is the amount of time the activity can be moved without affecting the total time to complete the project. Activities on the critical path therefore have zero float time.
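Both by-products can be computed from the durations and dependencies with a forward and a backward pass over the network. The following is a minimal sketch in Python, assuming a small illustrative network; the work package names and durations are invented for the example:

    # Work package durations (days) and dependencies, listed in an order
    # where every work package appears after the ones it depends on.
    duration = {"A": 10, "B": 20, "C": 5, "D": 15}
    depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    # Forward pass: earliest start and finish times.
    early_start, early_finish = {}, {}
    for wp in duration:
        early_start[wp] = max((early_finish[d] for d in depends_on[wp]), default=0)
        early_finish[wp] = early_start[wp] + duration[wp]

    project_end = max(early_finish.values())   # time to complete the project

    # Backward pass: latest finish and start times.
    successors = {wp: [s for s in duration if wp in depends_on[s]] for wp in duration}
    late_finish, late_start = {}, {}
    for wp in reversed(list(duration)):
        late_finish[wp] = min((late_start[s] for s in successors[wp]), default=project_end)
        late_start[wp] = late_finish[wp] - duration[wp]

    # Float = latest start - earliest start; the critical path has zero float.
    floats = {wp: late_start[wp] - early_start[wp] for wp in duration}
    critical_path = [wp for wp in duration if floats[wp] == 0]   # ['A', 'B', 'D']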

In general, activity networks should only include work packages that depend upon other work packages for input, or produce output for other work packages to use.

Activities that are just connected to the start and end nodes should be excluded from the network. This simplifies it without affecting the critical path. Further, the activity network of a complex project should be broken down into subnetworks (e.g. one per phase). Such a modular approach makes the project easier to manage, understand and evaluate.

2.4.5 Define schedule and total cost

The last stage of planning defines the schedule and resource requirements, and finally the total cost.

The activity network constrains the schedule but does not define it. The project manager has to decide the actual schedule by setting the start and end times of work packages so as to:
• comply with time and resource constraints;
• minimise the total cost;
• minimise the fragmentation of resource allocations;
• allow for any risks that might delay the project.

The start and end dates of work packages affected by time constraints (e.g. delivery dates) and resource constraints (e.g. staff and equipment availability) should be set first, as they reduce the number of possibilities to consider. If the total time to complete the project violates the time constraints, project managers should return to the define activities stage, redefine work packages, re-estimate resources and duration, and then modify the activity network.

The minimum total cost of the project is the sum of the costs of all the work packages. However, labour costs should be calculated from the total amount of time spent by each person on the project. This ensures that the time spent waiting for work packages to start and end is included in the cost estimate. Simply summing the individual costs of each bottom-level work package excludes this overhead. Project managers adjust the schedule to bring the actual cost of the project as close as possible to the minimum.

Activities with high risk factors should be scheduled to start at their earliest possible start times, so that the activity float can be used as contingency.

Minimising the fragmentation of a resource allocation is often called 'resource smoothing'. Project managers should adjust the start times of activities using the same resource so that it is used continuously, rather than at discrete intervals. This is because:
• interruptions mean that people have to refamiliarise themselves with the current situation, or relearn procedures that have been forgotten;
• equipment may be charged to the project even when it is not in use.

2.5 LEADING THE PROJECT

Leadership provides direction and guidance to the project team, and is an essential management function. There are numerous studies on leadership in the management literature, and the interested reader should consult them directly.


2.6 TECHNICAL MANAGEMENT OF PROJECTS

Besides the managerial issues, a software project manager must also understand the project technically. He or she is responsible for the major decisions concerning:
• the methods and tools;
• the design and coding standards;
• the logical model;
• the software requirements;
• the physical model;
• the architectural design;
• the detailed design;
• configuration management;
• verification and validation;
• quality assurance.

In medium and large projects the project manager often delegates many of the routine technical management responsibilities to team leaders.

2.7 MANAGEMENT OF PROJECT RISKS

All projects have risks. The point of risk management is not to run away from risks, but to reduce their ability to threaten the success of a project. Project managers should manage risk by:
• continuously looking out for likely threats to the success of the project;
• adjusting the plan to minimise the probability of the threats being realised;
• defining contingency plans for when the threats are realised;
• implementing the contingency plans if necessary.

Risk management never starts from the optimistic premise that 'all will go well'. Instead, project managers should always ask themselves 'what is most likely to go wrong?' This is realism, not pessimism.

Project plans should identify the risks to a project and show how the plan takes account of them.


The following sections discuss the common types of risk to software projects, with possible actions that can be taken to counteract them. There are four groups:
• experience factors;
• planning factors;
• technology factors;
• external factors.

2.7.1 Experience factors

Experience factors that can be a source of risk are:
• experience and qualifications of the project manager;
• experience and qualifications of staff;
• maturity of suppliers.

2.7.1.1 Experience and qualifications of the project manager

The project manager is the member of staff whose performance has a critical effect on the outcome of a project. Inexperienced project managers are a significant risk, and organisations making appointments should match the difficulty of the project to the experience of the project manager. Part-time project managers are also a risk, because part-time management reduces the capability of a project to respond to problems quickly.

Project management is a discipline like any other. Untrained project managers are a risk because they may be unaware of what is involved in management. Staff moving into management should receive training for their new role.

Project management should be the responsibility of a single person and not be divided. This ensures unity of command and direction.

2.7.1.2 Experience and qualifications of staff

Staff will be a risk to a project if they have insufficient experience, skills and qualifications for the tasks that they are assigned.

Project managers should avoid such risks by:
• assessing staff before they join the project;
• allocating staff tasks that match their experience, skills and qualifications;
• retaining staff within the project who have the appropriate skills, experience and qualifications to cope with future tasks.

2.7.1.3 Maturity of suppliers

The experience and record of suppliers are key factors in assessing the risks to a project. Indicators of maturity are the:
• successful development of similar systems;
• use of software engineering standards;
• possession of an ISO 9000 certificate;
• existence of a software process improvement programme.

Experience of developing similar systems is an essential qualification for a project team. Lack of experience results in poor estimates, avoidable errors, and higher costs (because extra work is required to correct the errors).

Standards not only include system development standards such as ESA PSS-05-0, but also coding standards and administrative standards. Standards may exist within an organisation, but ignorance of how best to apply them may prevent their effective use. Project managers should ensure that standards are understood, accepted and applied. Project managers may require additional support from software quality assurance staff to achieve this.

A software process improvement programme is a good sign of maturity. Such a programme should be led by a software process group that is staffed with experienced software engineers. The group should collect data about the current process, analyse it, and decide about improvements to the process.

2.7.2 Planning factors

Planning factors that can be a source of risk are:
• accuracy of estimates;
• short timescales;
• long timescales;
• single-point failures;
• location of staff;
• definition of responsibilities;
• staffing profile evolution.

2.7.2.1 Accuracy of estimates

Accurate estimates of resource requirements are needed to complete a project successfully. Underestimates are especially dangerous if there is no scope for awarding additional resources later. Overestimation can result in waste and can prevent resources from being deployed elsewhere.

Some of the activities that are often poorly estimated are:
• testing;
• integration, especially of external systems;
• the transfer phase;
• reviews, including rework.

Some contingency should always be added to the estimates to cater for errors. The amount of contingency should also be related to the other risks to the project. Estimating the effort required to do something new is particularly difficult.

2.7.2.2 Short timescales

Short timescales increase the amount of parallel working required, resulting in a larger team. Progressive reduction in the timescale increases the parallelism to a limit where the project becomes unmanageable [Ref 9]. Project managers should not accept unrealistic timescales.

Project managers should avoid artificially contracting timescales by attempting to deliver software before it is required. They should use the time available to optimise the allocation of resources such as staff.

When faced with accumulating delays and rapidly approaching deadlines, project managers should remember Brooks' Law: 'adding manpower to a late software project makes it later' [Ref 10].


2.7.2.3 Long timescales

Projects with long timescales run the risk of:
• changes in requirements;
• staff turnover;
• being overtaken by technological change.

Long timescales can result from starting too early. Project managers should determine the optimal time to start the project by careful planning.

Long projects should consider using an incremental delivery or evolutionary development life cycle approach. Both approaches aim to deliver some useful functionality within a timescale in which the requirements should be stable. If this cannot be done then the viability of the project should be questioned, as the product may be obsolete or unwanted by the time it appears.

Ambitious objectives, when combined with constraints imposed by annual budgets, often cause projects to have long timescales. As management costs are incurred at a fixed rate, management consumes a larger proportion of the project cost as the timescale lengthens. Furthermore, change, such as technical and economic change, can make the objectives obsolete, resulting in the abandonment of the project before anything is achieved!

2.7.2.4 Single-point failures

A single-point failure occurs in a project when a resource vital to an activity fails and there is no backup. Project managers should look for single-point failures by examining each activity and considering:
• the reliability of the resources;
• whether backup is available.

Project managers should devise contingency plans to reallocate resources when failures occur.

2.7.2.5 Location of staff

Dispersing staff to different locations can result in poor communication. Project managers should co-locate staff in contiguous accommodation wherever possible. This improves communication and allows for more flexible staff allocation.

2.7.2.6 Definition of responsibilities

Poor definition of responsibilities is a major threat to the success of a project. Vital jobs may not be done simply because no one was assigned to do them. Tasks may be repeated unnecessarily because of ignorance about responsibilities.

Projects should be well organised by defining project roles and allocating tasks to the roles. Responsibilities should be clearly defined in work package descriptions.

2.7.2.7 Staffing profile evolution

Rapid increases in the number of staff can be a problem because new staff need time to familiarise themselves with a project. After the start of the project much of the project knowledge comes from the existing staff. These 'teaching resources' limit the ability of a project to grow quickly.

Rapid decreases in the number of staff can be a problem because the loss of expertise before the project is finished can drastically slow the rate at which problems are solved. When experienced staff leave a project, time should be set aside for them to transfer their knowledge to other staff.

The number of staff on a software project should grow smoothly from a small team in the software requirements definition phase to a peak during coding and unit testing, and fall again to a small team for the transfer phase. Project managers should avoid sudden changes in the manpower profile and ensure that the project can absorb the inflow of staff without disruption.

2.7.3 Technological factors

Technological factors that can be a source of risk are:
• technical novelty;
• maturity and suitability of methods;
• maturity and efficiency of tools;
• quality of commercial software.


2.7.3.1 Technical novelty

Technical novelty is an obvious risk. Project managers should assess the technical novelty of every part of the design. Any radically new component can cause problems because:
• its feasibility has not been demonstrated;
• the amount of effort required to build it will be difficult to estimate accurately.

Project managers should estimate the costs and benefits of technically novel components. The estimates should include the costs of prototyping the component.

Prototyping should be used to reduce the risk of technical novelty. The cost of prototyping should be significantly less than the cost of developing the rest of the system, so that feasibility is demonstrated before major expenditure is incurred. More accurate cost estimates are an added benefit of prototyping.

2.7.3.2 Maturity and suitability of methods

New methods are often immature, as practical experience with a method is necessary to refine it. Furthermore, new methods normally lack tool support.

Choosing an unsuitable method results in unnecessary work and, possibly, unsuitable software. Analysis and design methods should match the application being built. Verification and validation methods should match the criticality of the software.

Project managers should evaluate the strengths (e.g. high suitability) and the weaknesses (e.g. lack of maturity) of a method before making a choice. Project managers should always check whether the method has been used for similar applications.

2.7.3.3 Maturity and efficiency of tools

Tools can prove to be a hindrance instead of a benefit if:
• staff have not been trained in their use;
• they are unsuitable for the methods selected for the project;
• they are changed during the project;
• they have more overheads than manual techniques.

The aim of using tools is to improve productivity and quality. Project managers should carefully assess the costs and benefits of tools before deciding to use them on a project.

2.7.3.4 Quality of commercial software

The use of proven commercial software with well-known characteristics is normally a low-risk decision. However, including new commercial software, or using commercial software for the first time in a new application area, can be a risk because:
• it may not work as advertised, or as expected;
• it may not have been designed to meet the quality, reliability, maintainability and safety requirements of the project.

New commercial software should be comprehensively evaluated as early as possible in the project so that surprises are avoided later. Besides the technical requirements, other aspects to evaluate when considering the acquisition of commercial software are:
• supplier size, capability and experience;
• availability of support;
• future development plans.

2.7.4 External factors

External factors that can be a source of risk are the:
• quality and stability of user requirements;
• definition and stability of external interfaces;
• quality and availability of external systems.

2.7.4.1 Quality and stability of user requirements

A project is unlikely to succeed without a high quality User Requirements Document. A good URD is a well-defined baseline against which to measure success. Project managers should, through the User Requirements Review process, ensure that a coherent set of requirements is available. This may require resources very early in the project to help users define their requirements (e.g. prototyping).


2.7.4.2 Definition and stability of external interfaces

External interfaces can be a risk when their definitions are unstable, poorly defined or non-standard.

Interfaces with external systems should be defined in Interface Control Documents (ICDs). These are needed as early as possible because they constrain the design. Once agreed, changes should be minimised, as a change in an interface implies rework by multiple teams.

Standard interfaces have the advantage of being well-defined and stable. For these reasons a standard interface is always to be preferred, even if the system design has to be modified.

2.7.4.3 Quality and availability of external systems

External systems can be a problem if their quality is poor, or if they are not available when required, owing either to late delivery or to unreliability.

Project managers should initiate discussions with the suppliers of external systems when the project starts. This ensures that suppliers are made aware of project requirements as early as possible. Project managers should allocate adequate resources to the integration of external systems.

The effects of late delivery of an external system can be mitigated by:
• adding the capability of temporarily 'bypassing' the external system;
• developing software to simulate the external system.

The cost of the simulator must be traded off against the costs incurred by late delivery.

2.8 MEASURING PROJECT PROCESSES AND PRODUCTS

Project managers should, as far as possible, adopt a quantitative approach to measuring software processes and products. This is essential for effective project control, planning future projects and improving the software development process in the organisation.


The Application of Metrics in Industry (AMI) guide [Ref 18] describes the following measurement steps:
1. assess the project environment and define the primary goals (e.g. financial targets, quality, reliability etc);
2. analyse the primary goals into subgoals that can be quantitatively measured, and define a metric (e.g. man-months of effort) for each subgoal;
3. collect the raw metric data and measure the performance relative to the project goals;
4. improve the performance of the project by updating plans to correct deviations from the project goals;
5. improve the performance of the organisation by passing the metric data and measurements to the software process improvement group, who are responsible for the standards and procedures the project uses.

Steps one to four are discussed in the following sections.

2.8.1 Assess environment and define primary goal

The purpose of assessing the environment is to understand the context of the project. An audit aimed at measuring conformance to ESA PSS-05-0 is one means of assessment. Other methods for assessing software development organisations and projects include the Software Engineering Institute's Capability Maturity Model [Ref 4] and the European Software Institute's 'BOOTSTRAP' model, which uses the ESA Software Engineering Standards as a definition of basic practices. Whatever the approach taken, the assessment step should define what is needed and what is practical.

The primary software project goal is invariably to deliver the product on time, within budget, with the required quality.

2.8.2 Analyse goals and define metrics

Common subgoals related to the primary goal are:
• do not exceed the planned effort on each activity;
• do not exceed the planned time on each activity;
• ensure that the product is complete;
• ensure that the product is reliable.


A metric is a 'quantitative measure of the degree to which a system, component or process possesses a given attribute' [Ref 2]. Project managers should define one or more metrics related to the subgoals defined for the project. Metric definitions should as far as possible follow organisational standards or industry standards, so that comparisons with other projects are possible [Ref 18].

2.8.2.1 Process metrics

Possible metrics for measuring project development effort are the:
• amount of resources used;
• amount of resources left.

Possible metrics for measuring project development time are the:
• actual duration of activities in days, weeks or months;
• slippage of activities (actual start - planned start).

Possible metrics for measuring project progress are:
• number of work packages completed;
• number of software problems solved.
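These measurements are simple to automate once the planned and actual dates are recorded. The following is a minimal sketch in Python of the slippage metric, assuming hypothetical work package data; the identifiers and dates are invented for the example:

    from datetime import date

    # Hypothetical planned and actual start dates of two work packages.
    planned_start = {"2210": date(1995, 1, 2), "2220": date(1995, 1, 16)}
    actual_start  = {"2210": date(1995, 1, 2), "2220": date(1995, 1, 30)}

    # Slippage metric: actual start minus planned start, in days.
    for wp, planned in planned_start.items():
        slippage = (actual_start[wp] - planned).days
        print(f"WP {wp}: slippage {slippage} days")   # 0 days, then 14 days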

2.8.2.2 Product metrics

The amount of product produced can be measured in terms of:
• number of lines of source code produced, excluding comments;
• number of modules coded and unit tested;
• number of function points implemented [Ref 11, 15];
• number of pages of documentation written.

Actual values should be compared with target values to measure product completeness.

Possible metrics for measuring product reliability are:
• test coverage;
• cyclomatic complexity of source modules;
• integration complexity of programs;
• number of critical Software Problem Reports;
• number of non-critical Software Problem Reports;
• number of changes to products after first issue or release.


2.8.3 Collect metric data and measure performance

Project managers should ensure that metric data collection is a natural activity within the project. For example, the actual effort expended should always be assessed before a Work Package is signed off as completed. Counts of Software Problem Reports should be accumulated in the Configuration Status Accounting process. Cyclomatic complexity can be a routine measurement made before code inspection or unit test design.

2.8.4 Improving performance

Metric data should be made available for project management reporting, planning future projects, and software process improvement studies [Ref 4].

The project manager should use the metric data to improve processes and products. For example:
• comparisons of predicted and actual effort early in a project can quickly identify deficiencies in the initial estimates, and prompt major replanning of the project to improve progress;
• analysis of the trends in software problem occurrence during integration and system testing can show whether further improvement in quality and reliability is necessary before the software is ready for transfer.

2.9 PROJECT REPORTING

Accurate and timely reporting is essential for the control of a project. Project managers report to initiators and senior managers by means of progress reports. Initiators and senior managers should analyse progress reports and arrange regular meetings with the project manager to review the project. The progress report is an essential input to these management reviews.

Project managers should have frequent discussions with team members so that they are always up to date with project status. Team members should formally report progress to project managers by means of work package completion reports and timesheets.

2.9.1 Progress reports

Project managers should submit routine (e.g. monthly) progress reports that describe:
• technical status;
• resource status;
• schedule status;
• problems;
• financial status.

Progress reports are discussed in detail in Chapter 6.

2.9.2 Work package completion reports

As each work package is completed, the responsible team member (called the 'work package manager') should notify the project manager by means of a 'work package completion report'. Project managers accept work package deliverables by issuing 'work package completion certificates'. One way to document this process is to add completion and acceptance fields to the work package description form.

2.9.3 Timesheets

Organisations should have a timesheet system that project managers can use for tracking the time spent by staff on a project. A timesheet describes what each employee has done on a daily or hourly basis during the reporting period. The project manager needs this data to compile progress reports.

A good timesheet system should allow staff to record the work package associated with the expenditure of effort, rather than just the project. This level of granularity in the record keeping eases the collection of data about project development effort, and provides a more precise database for estimating future projects.


CHAPTER 3
SOFTWARE PROJECT MANAGEMENT METHODS

3.1 INTRODUCTION

Various methods and techniques are available for software project management. This chapter discusses some methods for the activities described in Chapter 2. The reader should consult the references for detailed descriptions. Methods are discussed for:
• project planning;
• project risk management;
• project reporting.

3.2 PROJECT PLANNING METHODS

Methods are available for supporting the following project planning activities:
• process modelling;
• estimating resources and duration;
• defining activity networks;
• defining schedules.

The following sections discuss these methods.

3.2.1 Process modelling methods

The objective of a process modelling method is to construct a model of the software process roles, responsibilities, activities, inputs and outputs. This is often referred to as the 'workflow'.

Process models have traditionally been defined using text or simple diagrams. The Software Life Cycle Model defined in Part 1, Figure 1.2 of ESA PSS-05-0 is an example. Process models have also been defined using notations derived from systems analysis. SADT diagrams can be used for illustrating the flow of products and plans between activities [Ref 6].

Specialised process modelling methods based upon programming languages, predicate logic and Petri Nets are recent developments in software engineering [Ref 7]. These methods allow dynamic modelling of the software process. Dynamic models are required for workflow support.

3.2.2 Estimating methods

Chapter 2 recommends that the labour costs of each work package be estimated mainly by expert opinion. The total labour cost is calculated from the total time spent on the project by each team member. The non-labour costs are then added to get the cost to completion of the project. The labour cost estimates should be verified by using another method. This section discusses the alternative methods of:
• historical comparison;
• COCOMO;
• function point analysis;
• activity distribution analysis;
• Delphi methods;
• integration and system test effort estimation;
• documentation effort estimation.

Some of these methods use formulae. Anyone using formula approaches to software cost estimation should take note of the following statement from Albrecht, the originator of Function Point Analysis [Ref 11]:

"It is important to distinguish between two types of work-effort estimates, a primary or 'task analysis' estimate and a 'formula' estimate. The primary work-effort estimate should always be based on an analysis of the tasks, thus providing the project team with an estimate and a work plan... It is recommended that formula estimates be used only to validate and provide perspective on primary estimates."

Boehm gives similar advice, which is all too often ignored by users of his COCOMO method [Ref 9].

3.2.2.1 Historical comparison

The historical comparison method of cost estimation is simply to compare the current project with previous projects. This method can work quite well when:
• the costs of the previous projects have been accurately recorded;
• the previous projects have similarities to the current project.


If the projects are all identical, then a simple average of the costs can be used. More commonly, minor differences between the projects will lead to a range of actual costs. Estimates can be made by identifying the most similar project, or work package.

Historical comparisons can be very useful for an organisation which develops a series of similar systems, especially when accounts are kept of the effort expended upon each work package. There is no substitute for experience. However, possible disadvantages of the approach are that:
• new methods and tools cannot be accounted for in the estimates;
• inefficient work practices are institutionalised, resulting in waste.

To avoid these problems, it is very important that special features of projects that have influenced the cost are documented, enabling software project managers to adjust their estimates accordingly.

Part of the purpose of the Project History Document (PHD) is to pass on the cost data and the background information needed to understand it.

3.2.2.2 COCOMO

The Constructive Cost Model (COCOMO) is a formula-based method for estimating the total cost of developing software [Ref 9]. The fundamental input to COCOMO is the estimated number of lines of source code. This is its major weakness because:
• the number of lines of code is only accurately predictable at the end of the architectural design phase of a project, and this is too late;
• what constitutes a 'line of code' can vary amongst programming languages and conventions;
• the concept of a line of code does not apply to some modern programming techniques, e.g. visual programming.

COCOMO offers basic, intermediate and detailed methods. The basic method uses simple formulae to derive the total number of man-months of effort, and the total elapsed time of the project, from the estimated number of lines of code.

The intermediate method refines the basic effort estimate by multiplying it by an 'effort adjustment factor' derived from fifteen 'effort multipliers'. Wildly inaccurate estimates can result from a poor choice of effort multiplier values. COCOMO supplies objective criteria to help the estimator make a sensible choice. The detailed COCOMO method uses a separate set of effort multipliers for every phase.

When estimating by means of COCOMO, the intermediate method should be used, because the detailed method does not appear to perform more accurately than the intermediate method. The basic method provides a level of accuracy that is only adequate for preliminary estimates.
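The shape of the calculation is easy to show in code. The following is a minimal sketch in Python of the intermediate COCOMO effort and schedule formulae, assuming the nominal coefficients Boehm published for 'organic' mode projects; the multiplier values passed in are placeholders, not calibrated data:

    def cocomo_intermediate(kloc, effort_multipliers):
        """Return (effort in man-months, elapsed time in months) for an
        organic-mode project, per Boehm's intermediate COCOMO [Ref 9]."""
        eaf = 1.0
        for m in effort_multipliers:       # product of the fifteen effort multipliers
            eaf *= m
        effort = 3.2 * kloc ** 1.05 * eaf  # man-months
        tdev = 2.5 * effort ** 0.38        # elapsed months
        return effort, tdev

    # e.g. 20 000 lines of code with all multipliers nominal (1.0):
    effort, months = cocomo_intermediate(20.0, [1.0] * 15)
    # effort is roughly 75 man-months over roughly 13 months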

3.2.2.3 Function Point Analysis

Function Point Analysis (FPA) estimates the cost of a project from estimates of the delivered functionality [Ref 11, 13]. FPA can therefore be applied at the end of the software requirements definition phase to make cost estimates. The method combines quite well with methods such as structured analysis, and has been used for estimating the total effort required to develop an information system.

The FPA approach to cost estimation is based upon counting the numbers of system inputs, outputs and data stores. These counts are weighted, combined and then multiplied by cost drivers, resulting in a 'Function Point' count. The original version of FPA converts the number of function points to lines of code using a language-dependent multiplier. The number of lines of code is then fed into COCOMO (see Section 3.2.2.2) to calculate the effort and schedule.

Mark Two Function Point Analysis simplifies the original approach, is more easily applicable to modern systems, and is better calibrated [Ref 15]. Furthermore, it does not depend upon the use of language-dependent multipliers to produce an effort estimate. When estimating by means of FPA, the Mark Two version should be used.
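The counting step is mechanical once the inputs, outputs and data stores have been identified. The following is a minimal sketch in Python of an unadjusted count in the style of the original Albrecht method, assuming the commonly quoted average-complexity weights; a real count also distinguishes low, average and high complexity per item:

    # Assumed average-complexity weights of the original Albrecht method.
    WEIGHTS = {"external input": 4, "external output": 5, "inquiry": 4,
               "internal file": 10, "external interface": 7}

    def unadjusted_function_points(counts):
        """Weighted sum of the system inputs, outputs, inquiries and data stores."""
        return sum(WEIGHTS[kind] * n for kind, n in counts.items())

    fp = unadjusted_function_points({"external input": 12, "external output": 8,
                                     "inquiry": 4, "internal file": 6,
                                     "external interface": 2})
    # fp = 12*4 + 8*5 + 4*4 + 6*10 + 2*7 = 178 unadjusted function points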

3.2.2.4 Activity distribution analysis

Activity distribution analysis uses measurements of the proportion of effort expended on activities in previous projects to:
• derive the effort required for each activity from the total effort;
• verify that the proportion of effort expended upon each activity is realistic;
• verify that the effort required for future activities is in proportion to the effort expended upon completed activities.

Project phases are the top-level activities in a project. The activity distribution technique is called the 'phase per cent method' when it is used to estimate the amount of effort required for project phases.

The activity distribution method requires data about completed projects to be analysed and reduced to derive the relative effort values. As with all methods based upon historical data, the estimates need to be adjusted to take account of factors such as staff experience, project risks, new methods and technology, and more efficient work practices. For example, the use of modern analysis and design methods and CASE tools requires a higher proportion of effort to be expended in the SR and AD phases.
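Applying the phase per cent method is then a simple proportional calculation. The following is a minimal sketch in Python; the percentages are invented for the example and must in practice come from the organisation's own historical data:

    # Hypothetical phase effort distribution derived from completed projects.
    PHASE_PERCENT = {"SR": 15, "AD": 20, "DD": 50, "TR": 15}

    def phase_effort(total_effort_man_days):
        """Distribute a total effort estimate over the life cycle phases."""
        return {phase: total_effort_man_days * pct / 100.0
                for phase, pct in PHASE_PERCENT.items()}

    print(phase_effort(400))   # {'SR': 60.0, 'AD': 80.0, 'DD': 200.0, 'TR': 60.0}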

3.2.2.5 Delphi method

The 'Delphi' method is a useful approach for arriving at software cost estimates [Ref 9]. The assumption of the Delphi method is that experts will independently converge on a good estimate.

The method requires several cost estimation experts and a coordinator to direct the process. The basic steps are:
1. the coordinator presents each expert with a specification and a form upon which to record estimates;
2. the experts fill out the forms anonymously, explaining their estimates (they may put questions to the coordinator, but should not discuss the problem with each other);
3. if consensus has not been achieved, the coordinator prepares a summary of all the estimates, distributes it to the experts, and the process repeats from step 1.

The summary should just give the results, and not the reasoning behind the results.

A common variation is to hold a review meeting after the first pass to discuss the independent estimates and arrive at a common estimate.

3.2.2.6 Integration and System Test effort estimation

Integration and system test effort depends substantially upon the:
• number and criticality of defects in the software after unit testing;
• number and criticality of defects acceptable upon delivery;
• number and duration of integration and system tests;
• average time to repair a defect.

The effort required for integration and system testing can therefore be estimated from a model of the integration and testing process, assumptions about the number of defects present after unit tests, and an estimate of the mean repair effort per defect [Ref 14].

A typical integration and test process model contains the following cycle:
1. the integration and test team adds components, discovers defects, and reports them to the software modification team;
2. the software modification team repairs the defects;
3. the integration and test team retests the software.

Reference 14 states that twenty to fifty defects per 1000 lines of code typically occur during the life cycle of a software product; of these, five to fifteen defects per 1000 lines of code remain when integration starts, and one to four defects per 1000 lines of code still exist after system tests.

Repair effort values will vary very widely, but a mean value of half a man-day for a code defect is typical.
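These figures are enough for a first-order estimate of the repair component of integration and system test effort. The following is a minimal sketch in Python, using default values taken from the ranges quoted above; they are illustrative, not calibrated, and the model covers only defect repair, not test execution itself:

    def repair_effort_man_days(kloc,
                               defects_per_kloc_at_start=10.0,  # within 5-15 [Ref 14]
                               defects_per_kloc_at_end=2.0,     # within 1-4 [Ref 14]
                               repair_man_days_per_defect=0.5):
        """Estimated repair effort during integration and system testing."""
        defects_repaired = (defects_per_kloc_at_start - defects_per_kloc_at_end) * kloc
        return defects_repaired * repair_man_days_per_defect

    # e.g. 50 000 lines of code: (10 - 2) * 50 = 400 defects, about 200 man-days
    print(repair_effort_man_days(50.0))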

3.2.2.7 Documentation effort estimation

Software consists of documentation and code. Documentation is a critical output of software development, not an overhead. When estimating the volume of software that a project should produce, project managers should define not only the amount of code that will be produced, but also the amount of documentation.

Boehm observes that documentation rates vary between two and four man-hours per page [Ref 9]. Boehm also observes that the ratio between code volume and documentation volume ranges from about 10 to about 150 pages of documentation per 1000 lines of code, with a median value of about 50. These figures apply to all the life cycle documents.

The average productivity figure of 20 lines of code per day includes the effort required for documentation. This productivity figure, when combined with the documentation rates given above, implies that software engineers spend half their time on documentation (i.e. the median of 50 pages of documentation per 1000 lines of code implies one page of documentation per 20 lines of code; one page of documentation requires four hours, i.e. half a day, of effort).
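The arithmetic can be checked directly. The following is a minimal sketch in Python using the figures quoted above; the 10 000-line project size is invented for the example:

    LOC = 10_000
    pages = LOC * 50 / 1000        # median of 50 pages per 1000 lines -> 500 pages
    doc_days = pages * 4 / 8       # 4 man-hours per page, 8-hour days -> 250 man-days
    total_days = LOC / 20          # overall productivity of 20 lines/day -> 500 man-days
    print(doc_days / total_days)   # 0.5: documentation takes half the effort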

3.2.3 Activity network methods

The Program Evaluation and Review Technique (PERT) is the usual method for constructing an activity network [Ref 9]. Most planning tools construct a PERT chart automatically as the planner defines activities and their dependencies.

[Figure: a Gantt chart listing the DD phase work packages of the example project (detailed design, review, coding and testing of Units 1 to 5; integration stages 1 to 4; SVVP, DDD, SUM and TR plan production; system test; DD review) with their durations, scheduled start dates and milestones M1 to M6, M6 being the DD phase end on 28/08]

Figure 3.2.4: Example Gantt chart for the DD phase of a project

3.2.4 Scheduling presentation methods

The project schedule defines the start times and duration of each work package and the dates of each milestone. The 'Gantt chart' is the usual method for presenting it. Work packages are marked on a vertical axis and time along the corresponding horizontal axis. Work package activities are shown as horizontal bars (giving rise to the synonym 'bar chart'). Time may be expressed absolutely or relative to the start of the phase.

Figure 3.2.4 shows the schedule for the DD phase of the example project described in Chapter 2. Planning tools normally construct the Gantt chart automatically as tasks are defined.

3.3 PROJECT RISK MANAGEMENT METHODS

Two simple and effective methods for risk management are:
• risk table;
• risk matrix.

3.3.1 Risk table

The steps of the risk table method are:
1. list the risks in the project;
2. define the probability of each risk;
3. define the risk reduction action or alternative approach;
4. define the decision date;
5. define the possible impact.

An example is provided in Table 3.3.1.

Risk  Description        Prob.   Action              Decision Date  Impact
1     New user           High    Change to           1/Jun/1997     High
      requirements               evolutionary
                                 development
2     Installation of    Medium  Relocate staff      1/April/1998   Medium
      air conditioning           while work is done

Table 3.3.1: Example of a Risk Table

3.3.2 Risk matrix

The steps of the risk matrix method are:
1. list the risks in the project;
2. define the probability of each risk;
3. define the possible impact;
4. plot the risks according to their probability and impact.

An example is provided in Figure 3.3.2, using the data in the risk table in Table 3.3.1. Risks with high probability and high impact cluster in the top right-hand corner of the matrix. These are the risks that should receive the most attention.

                          impact
                  low     medium    high

  probability
    high                             1
    medium                 2
    low

Figure 3.3.2: Example of a Risk Matrix


3.4 PROJECT REPORTING METHODS

Some important methods for progress reporting are:
• progress tables;
• progress charts;
• milestone trend charts.

3.4.1 Progress tables and charts

Progress tables and charts are used to report how much has been spent on each work package and how much remains to be spent. For each work package in the WBS an initial estimate of the resources is made. The initial estimates should be in the SPMP. At the end of every reporting period, the following parameters are collected:
1. previous estimate;
2. expenditure for this period;
3. cumulative expenditure up to the end of the reporting period;
4. estimate to completion.

The sum of items 3 and 4 gives the new estimate, i.e. item 1 for the following month. The estimate to completion should be re-evaluated each reporting period and not obtained by subtracting the cumulative expenditure from the previous estimate.
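For illustration (a sketch, not part of this guide), the January update for WP 2210 from Table 3.4.1 below works out as follows:

  # Sketch of the monthly progress-table update, using the WP 2210
  # figures from Table 3.4.1 (all values in man-days).
  previous_estimate = 20                      # item 1, carried into January
  spent_this_period = 20                      # item 2, January expenditure
  cumulative        = 0 + spent_this_period   # item 3
  to_go             = 10                      # item 4: re-evaluated, *not*
                                              # previous minus cumulative
  new_estimate = cumulative + to_go           # item 1 for February
  overrun = new_estimate - previous_estimate
  print(f"new estimate {new_estimate}, overrun detected {overrun}")  # 30, 10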

Work package expenditure should be summarised in progress tables as shown in Table 3.4.1. Work package identifiers and names are listed in columns one and two. The estimate for the work package in the current SPMP is placed in column three. Subsequent columns contain, for each month, the values of the previous estimate (PrEst), expenditure for the period (ExpP), cumulative expenditure (Cum) and estimate to completion (ToGo). The total in the last column states the current estimated effort for the work package.

Table 3.4.1 shows that the effort required for work packages 2210 and 2220 was underestimated. This underestimate of 10 man-days for WP 2210 was detected and corrected in January. The estimate for WP 2220 had to be revised upwards by 5 man-days in January and increased again by another 5 man-days in February.


WPid  Name        Plan   January       February      March         Total
                         PrEst  ToGo   PrEst  ToGo   PrEst  ToGo
                         ExpP   Cum    ExpP   Cum    ExpP   Cum

2210  Logical      20     20     10     30     0      30     0      30
      Model               20     20     10     30     0      30

2220  Prototype    30     30     15     35     5      40     0      40
                          20     20     15     35     5      40

2230  SRD          20     20     20     20     10     20     0      20
      draft               0      0      10     10     10     20

2240  SRD           5      5      5      5      5      5     0       5
      final               0      0      0      0      5      5

2200  SR phase     75     75     50     90     20     95     0      95
      total               40     40     35     75     20     95

Table 3.4.1: Example Progress Table for March

Progress charts plot the estimates against time. They are best used for summarising the trends in the costs of high-level work packages. Figure 3.4.1 is an example of a plot showing the trends in the estimates for WP 2200.

[Figure 3.4.1, originally presented here, plots the total estimate for WP 2200, on a vertical scale of 25 to 100 man-days, against the months January to April.]

Figure 3.4.1: Example of a Progress Chart


The construction of progress tables and charts every reporting period enables:
• the accuracy of estimates to be measured;
• the degree of completion of each work package to be assessed.

The process of assessing every month what has been accomplished and re-estimating what remains is essential for keeping good control of project resources. Overspend trends are revealed and pinpointed early enough for timely corrective action to be taken.

3.4.2 Milestone trend charts

Milestone trend charts are used to report when milestones have been, or will be, achieved. First, an initial estimate of each milestone date is made. This estimate is put in the schedule section of the SPMP. Then, at the end of every reporting period, the following data are collected:
1. previous estimates of milestone achievement dates;
2. new estimate of milestone achievement dates.

The dates should be plotted in a milestone trend chart to illustrate the changes, if any, of the dates. A sample form is provided in Appendix D.
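A sketch of the underlying bookkeeping (invented milestone data, not from this guide) records one row of estimates per reporting period and flags a milestone that keeps slipping:

  # Sketch: record milestone date estimates at each monthly update.
  history = {
      # update month: {milestone: estimated achievement month}
      1: {"M1": 3, "M2": 4, "M3": 5},
      2: {"M1": 3, "M2": 4, "M3": 6},
      3: {"M1": 3, "M2": 4, "M3": 7},
  }

  months = sorted(history)
  for milestone in history[months[0]]:
      estimates = [history[m][milestone] for m in months]
      slips = sum(1 for a, b in zip(estimates, estimates[1:]) if a != b)
      status = "steady" if slips == 0 else f"slipped {slips} time(s)"
      print(milestone, estimates, "->", status)

Here M3 slips at every update, the pattern the text below identifies as a serious problem.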

A milestone trend chart for the example project described in Figure 3.2.4 is shown in Figure 3.4.2. The vertical scale shows that the reporting period is one month. Figure 3.4.2 shows the status after five months. Milestones M1 and M2 have been achieved on schedule. Milestones M3, M4 and M5 have slipped one month because 'Integration stage 2' took one month longer than expected. Milestone M6, the end of the phase, slipped two months when it was announced that the simulator required for system testing was going to be supplied two months late. Complaints to the supplier resulted in the delivery date being brought forward a month. M6 then moved forward one month.

The example shows how vertical lines indicate that the project is progressing according to schedule. Sloping lines indicate changes. Milestone trend charts are a powerful tool for understanding what is happening in a project. A milestone that slips every reporting period indicates that a serious problem exists.

Milestone trend charts should show milestone dates relative to the current approved plan, not obsolete plans. They should be reinitialised when the SPMP is updated.


[Figure 3.4.2, originally presented here, plots, for each monthly update from month 1 to month 12, the estimated achievement dates of milestones M1 to M6 against the plan. The key gives the milestones and their planned dates: M1 'DDD complete' 17/03, M2 'Integration stage 1 end' 28/04, M3 'Integration stage 2 end' 26/05, M4 'Integration stage 3 end' 23/06, M5 'Integration stage 4 end' 21/07 and M6 'DD phase end' 28/08.]

Figure 3.4.2: Milestone trend chart for the DD phase of a project


CHAPTER 4
SOFTWARE PROJECT MANAGEMENT TOOLS

4.1 INTRODUCTION

This chapter discusses tool support for the methods described in Chapters 2 and 3. Tools are available for project planning, project risk analysis, project reporting and process support.

4.2 PROJECT PLANNING TOOLS

As well as the general purpose project planning tools that support activity definition, PERT and scheduling, specialised project planning tools are available for constructing process models and estimating software project costs.

4.2.1 General purpose project planning tools

General purpose project planning tools normally support:
• defining work packages;
• defining resources;
• defining resource availability;
• allocating resources to work packages;
• defining the duration of work packages;
• constructing activity networks using the PERT method;
• defining the critical path;
• defining the schedule in a Gantt chart.

Managers enter the project activities and define the resources that they require. They then mark the dependencies between the activities. The tool constructs the activity network and Gantt chart. The tools should also include the ability to:
• compute the total amount of resources required;
• provide resource utilisation profiles;
• fix the duration of an activity;
• divide a project into subprojects;
• highlight resource conflicts, or over-utilisation;
• integrate easily with word processors.


Common deficiencies of general purpose project planning tools are:
• fixed rate scheduling, so that work packages have to be split when someone takes a holiday;
• inability to handle fractional allocations of resources (e.g. resource conflicts occur when staff have to share their time between concurrent work packages);
• identifiers that change every time a new work package is inserted;
• too much line crossing in activity networks.

Advanced project planning tools permit variable rate scheduling. The project manager only needs to define:
• the total effort involved in the work package;
• the resources to be allocated to the work package;
• the availability of the allocated resources.

The project planning tool then decides what the resource utilisation profile should be.
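A minimal sketch of such an allocation (invented figures, not a real tool's algorithm) spreads the work-package effort over the periods in which the allocated resource is actually available, so the work package need not be split around a holiday:

  # Sketch: derive a resource utilisation profile from total effort and
  # per-week availability (variable rate scheduling).
  total_effort = 30                          # man-days for the work package
  availability = [5, 5, 0, 3, 5, 5, 5, 5]    # man-days per week; week 3 is
                                             # a holiday, week 4 part-time
  profile = []
  remaining = total_effort
  for avail in availability:
      booked = min(avail, remaining)         # book what the resource offers
      profile.append(booked)
      remaining -= booked

  assert remaining == 0, "allocated resources cannot complete the work"
  print(profile)                             # [5, 5, 0, 3, 5, 5, 5, 2]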

4.2.2 Process modelling tools

Process modelling methods are relatively new, and the tools that support them are consequently immature. Process modelling tools should:
• support the definition of procedures;
• contain a library of 'process templates' that can be tailored to each project;
• make the process model available to a process support tool (see Section 4.5).

There are few dedicated software process modelling tools. Analysis tools that support the structured analysis techniques of data flow diagrams and entity relationship diagrams can be effective substitutes.

4.2.3 Estimating Tools

Project costs should be stored in a spreadsheet or database for access during software cost estimating. Project managers estimate by comparing their project with those in the database and extracting the cost data for the most similar project.

Some software cost estimation tools use measurements of software size (e.g. number of lines of code, number of function points) to produce the initial estimate of effort. They then accept estimates of cost drivers (e.g. required reliability, programmer experience) and use them to adjust the initial estimate.
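For illustration, a sketch of this two-step scheme in the style of the COCOMO model [Ref 9]; the coefficients and multipliers below are illustrative values, not prescribed by this guide:

  # Sketch of a size-plus-cost-drivers estimate, COCOMO style:
  # a nominal effort from size, adjusted by cost-driver multipliers.
  def estimate_effort(kloc, drivers):
      a, b = 3.2, 1.05                      # illustrative 'organic' mode
      nominal = a * kloc ** b               # person-months from size alone
      eaf = 1.0                             # effort adjustment factor
      for multiplier in drivers.values():
          eaf *= multiplier
      return nominal * eaf

  drivers = {
      "required reliability (high)":   1.15,   # illustrative multipliers
      "programmer experience (high)":  0.90,
  }
  print(f"{estimate_effort(10.0, drivers):.1f} person-months")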

In addition, software cost estimation tools may:
• permit 'what-if' type exploration of costs and timescales;
• produce estimates of the optimum timescale;
• allow input of a range of values instead of a single value, thereby enabling the accuracy to be estimated;
• provide explanations of how estimates were arrived at;
• provide estimates even when some information is absent;
• allow predicted values to be replaced by actual values, thereby allowing progressive refinement of an estimate as the project proceeds;
• permit backtracking to an earlier point in the cost estimation process and subsequent input of new data.

4.3 PROJECT RISK ANALYSIS TOOLS

Risk analysis tools have been applied in other fields, but difficulties in accurately quantifying software risks limit their usefulness.

One approach to risk analysis is to use heuristics or 'rules of thumb' derived from experience. Tools are available that have been programmed with the rules related to the common risks to a project. The tools ask a series of questions about the project and then report the most likely risks. Tools of this kind have rarely been used in software development.
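A toy sketch of such a question-driven tool (the questions and rules are invented for illustration):

  # Sketch of a rule-of-thumb risk advisor: answers to yes/no questions
  # trigger canned risk warnings.
  rules = [
      ("Are the user requirements still changing?",
       "High risk of rework: consider an evolutionary life cycle."),
      ("Does the project depend on externally supplied equipment?",
       "Schedule risk: plan a fallback for late delivery."),
  ]

  def assess(answers):
      """Return the warnings whose questions were answered 'yes'."""
      return [warning
              for (question, warning), yes in zip(rules, answers) if yes]

  for warning in assess([True, False]):
      print(warning)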

4.4 PROJECT REPORTING TOOLS

Project planning tools (see Section 4.2) that are capable of recording the actual resources used, and the dates that events occurred, can be useful for progress reporting. Such tools should use the information to:
• mark up the Gantt chart to indicate schedule progress;
• constrain replanning.

Spreadsheets are useful for constructing progress tables and charts.

4.5 PROCESS SUPPORT TOOLS

Process support tools help guide, control and automate project activities [Ref 8]. They need a formally defined process model in a suitable machine-readable format. The absence of complete software process models has limited the application of process support tools.

Process support tools may be integrated into the software engineering environment, coordinating the use of other software tools and guiding and controlling developers through the user interface [Ref 5]. Outside software engineering, process support tools have been used for controlling the work flow in a business process.

Process support tools should provide the following capabilities:
• instantiation of the process model;
• enactment of each process model instance (i.e. the process support tool should contain a 'process engine' that drives activities);
• viewing of the process model instance status.

In addition, process support tools should provide:
• interfaces between the process model instance and the actors (i.e. the people playing the project roles);
• interfaces to project planning tools so that coverage of the plan, and the resource expenditure, can be tracked.


CHAPTER 5
THE SOFTWARE PROJECT MANAGEMENT PLAN

5.1 INTRODUCTION

All software project management activities must be documented in the Software Project Management Plan (SPMP) (SPM01). The SPMP is the controlling document for managing a software project.

The SPMP contains four sections, one dedicated to each development phase. These sections are called:
• Software Project Management Plan for the SR phase (SPMP/SR);
• Software Project Management Plan for the AD phase (SPMP/AD);
• Software Project Management Plan for the DD phase (SPMP/DD);
• Software Project Management Plan for the TR phase (SPMP/TR).

Each section of the SPMP must:
• define the project organisation (SPM01);
• define the managerial process (SPM01);
• outline the technical approach, in particular the methods, tools and techniques (SPM01);
• define the work-breakdown structure (SPM01, SPM10);
• contain estimates of the effort required to complete the project (SPM01, SPM04, SPM06, SPM07, SPM09);
• contain a planning network describing when each work package will be started and finished (SPM01, SPM11);
• allow for the risks to the project (SPM01).

The table of contents for each section of the SPMP is described in Section 5.6. This table of contents is derived from the IEEE Standard for Software Project Management Plans (ANSI/IEEE Std 1058.1-1987).

5.2 STYLE

The SPMP should be plain, concise, clear, consistent and modifiable.


5.3 RESPONSIBILITY

The developer is normally responsible for the production of the SPMP. The software project manager should write the SPMP.

5.4 MEDIUM

It is usually assumed that the SPMP is a paper document. There is no reason why the SPMP should not be distributed electronically to people with the necessary equipment.

5.5 SERVICE INFORMATION

The SR, AD, DD and TR sections of the SPMP are produced at different times in a software project. Each section should be kept separately under configuration control and contain the following service information:

a - Abstract
b - Table of Contents
c - Document Status Sheet
d - Document Change records made since last issue

5.6 CONTENTS

ESA PSS-05-0 recommends the following table of contents for each phase section of the SPMP:

1 Introduction
  1.1 Project overview
  1.2 Project deliverables
  1.3 Evolution of the SPMP
  1.4 Reference materials
  1.5 Definitions and acronyms

2 Project Organisation
  2.1 Process model
  2.2 Organisational structure
  2.3 Organisational boundaries and interfaces
  2.4 Project responsibilities

3 Managerial Process
  3.1 Management objectives and priorities
  3.2 Assumptions, dependencies and constraints
  3.3 Risk management
  3.4 Monitoring and controlling mechanisms
  3.5 Staffing plan

4 Technical Process
  4.1 Methods, tools and techniques
  4.2 Software documentation
  4.3 Project support functions

5 Work Packages, Schedule, and Budget
  5.1 Work packages
  5.2 Dependencies
  5.3 Resource requirements
  5.4 Budget and resource allocation
  5.5 Schedule

Material unsuitable for the above contents list should be inserted in additional appendices. If there is no material for a section then the phrase 'Not Applicable' should be inserted and the section numbering preserved.

5.6.1 SPMP/1 Introduction

5.6.1.1 SPMP/1.1 Project overview

This section of the SPMP should provide a summary of the project:
• objectives;
• deliverables;
• life cycle approach;
• major activities;
• milestones;
• resource requirements;
• schedule;
• budget.

This section outlines the plan for the whole project and must be provided in the SPMP/SR (SPM01). The overview may be updated in subsequent phases.


5.6.1.2 SPMP/1.2 Project deliverables

This section should list the deliverables of the phase. All ESA PSS-05-0 documents, plans and software releases that will be delivered should be listed. Any other deliverable items such as prototypes, demonstrators and tools should be included.

5.6.1.3 SPMP/1.3 Evolution of the SPMP

This section should summarise the history of the SPMP in this and previous phases of the project.

This section should describe the plan for updating the SPMP in this and subsequent phases of the project.

5.6.1.4 SPMP/1.4 Reference materials

This section should provide a complete list of all the applicable and reference documents, identified by title, author and date. Each document should be marked as applicable or reference. If appropriate, report number, journal name and publishing organisation should be included.

5.6.1.5 SPMP/1.5 Definitions and acronyms

This section should provide the definitions of all terms, acronyms, and abbreviations used in the plan, or refer to other documents where the definitions can be found.

5.6.2 SPMP/2 Project Organisation

5.6.2.1 SPMP/2.1 Process model

This section should define the activities in the phase and their inputs and outputs. The definition should include the major project functions (i.e. activities that span the entire duration of the project, such as project management, configuration management, verification and validation, and quality assurance) and the major production activities needed to achieve the objectives of the phase. The definition of the process model may be textual or graphic.

5.6.2.2 SPMP/2.2 Organisational structure

This section should describe the internal management structure of the project in the phase. Graphical devices such as organigrams should be used to show the lines of reporting, control and communication.


Roles that often appear in the internal management structure are:
• project manager;
• team leader;
• programmers;
• software librarian;
• software quality assurance engineer.

5.6.2.3 SPMP/2.3 Organisational boundaries and interfaces

This section should describe the relationship between the project and external groups during the phase, such as:
• parent organisation;
• client organisation;
• end users;
• subcontractors;
• suppliers;
• independent verification and validation organisation;
• independent quality assurance organisations.

The procedures and responsibilities for the control of each external interface should be summarised. For example:
• name of Interface Control Document (ICD);
• those responsible for the agreement of the ICD;
• those responsible for authorising the ICD.

5.6.2.4 SPMP/2.4 Project responsibilities

This section should define the roles identified in the organisational structure and its boundaries. Each role definition should briefly describe the purpose of the role and list the responsibilities.

5.6.3 SPMP/3 Managerial process

5.6.3.1 SPMP/3.1 Management objectives and priorities

This section should define the management objectives of this project phase and their relative priorities. Any trade-offs between the objectives should be discussed.


5.6.3.2 SPMP/3.2 Assumptions, dependencies and constraints

This section should state, for the phase, the:
• assumptions on which the plan is based;
• external events the project is dependent upon;
• constraints on the project.

Technical issues should only be mentioned if they have an effect on the plan.

Assumptions, dependencies and constraints are often difficult to distinguish. The best approach is not to categorise them but to list them. For example:
• limitations on the budget;
• schedule constraints (e.g. launch dates);
• constraints on the location of staff (e.g. they must work at the developer's premises);
• commercial hardware or software that will be used by the system;
• availability of simulators and other test devices;
• availability of external systems with which the system must interface.

5.6.3.3 SPMP/3.3 Risk management

This section of the plan should identify and assess the risks to the project, and describe the actions that will be taken in this phase to manage them. A risk table (see Section 3.3.1) may be used.

5.6.3.4 SPMP/3.4 Monitoring and controlling mechanisms

This section of the plan should define the monitoring and controlling mechanisms for managing the work. Possible monitoring and controlling mechanisms are:
• work package descriptions;
• work package completion reports;
• progress reports;
• reviews;
• audits.


This section should define or reference the formats for all documents and forms used for monitoring and controlling the project.

This section should specify the:
• frequency of progress meetings with initiators and management;
• frequency of submission of progress reports;
• modifications (if any) to the progress report template provided in Chapter 6;
• general policy regarding reviews and audits (the details will be in the SVVP).

5.6.3.5 SPMP/3.5 Staffing plan

This section of the plan should specify the names, roles and grades of staff that will be involved in the phase. The utilisation of the staff should be given for each reporting period. A staff profile giving the total number of staff on the project each month may also be given.

5.6.4 SPMP/4 Technical Process

5.6.4.1 SPMP/4.1 Methods, tools and techniques

This section should specify the methods, tools and techniques to be used to produce the phase deliverables.

5.6.4.2 SPMP/4.2 Software documentation

This section should define or reference the documentation plan for the phase. For each document to be produced, the documentation plan should specify:
• document name;
• review requirements;
• approval requirements.

Some documents will be deliverable items. Others will be internal documents, such as technical notes, or plans, such as the SCMP. All documents should be listed.

The documentation plan may contain or reference a 'style' guide that describes the format and layout of documents.


5.6.4.3 SPMP/4.3 Project support functions

This section should contain an overview of the plans for the project support functions of:
• software configuration management;
• software verification and validation;
• software quality assurance.

This section should reference the SCMP, SVVP and SQAP.

5.6.5 SPMP/5 Work Packages, Schedule, and Budget

5.6.5.1 SPMP/5.1 Work packages

This section should describe the breakdown of the phase activities into work packages.

This section may begin with a Work Breakdown Structure (WBS) diagram to describe the hierarchical relationships between the work packages. Each box should show the title and identifier of the work package. Alternatively, the work breakdown may be described by listing the work package titles and identifiers as shown in Section 2.4.2.2.

The full Work Package Descriptions may be contained in this section or put in an appendix. Each Work Package Description should define the:
• work package title;
• work package reference number;
• responsible organisation;
• major constituent activity;
• work package manager;
• start event;
• end event;
• inputs;
• activities;
• outputs.


Work packages should be defined for all activities, including project functions such as project management, configuration management, verification and validation and quality assurance.

5.6.5.2 SPMP/5.2 Dependencies

This section should define the ordering relations between the work packages. This may be done by using a planning network technique, such as the Program Evaluation and Review Technique (PERT), to order the execution of work packages according to their dependencies (see Section 3.2.3). Dependency analysis should define the critical path to completion of the project and derive the 'float' for activities off the critical path.

5.6.5.3 SPMP/5.3 Resource requirements

This section should describe, for each work package:
• the total resource requirements;
• the resource requirements as a function of time.

Labour resources should be evaluated in man-hours, man-days or man-months. Other resources should be identified (e.g. equipment).

5.6.5.4 SPMP/5.4 Budget and resource allocation

The project budget allocation should be specified by showing how the available financial resources will be deployed on the project. This is normally done by providing a table listing the amount of money budgeted for each major work package. The manner in which this section is completed depends upon the contractual relationship.

5.6.5.5 SPMP/5.5 Schedule

This section should define when each work package starts and ends. This is normally done by drawing a Gantt chart (see Section 3.2.4).

This section should describe the milestones in the project, providing for each milestone:
• an identifier;
• a description (e.g. a list of deliverables);
• the planned date of achievement;
• the actual date of achievement (for plan updates).

Milestones should be marked on or alongside the Gantt chart.


5.7 EVOLUTION

Project planning is a continuous process. Project managers should review their plan when:
• making progress reports;
• new risks are identified;
• problems occur.

It is good practice to draft sections of the plan as soon as the necessary information is available, and not wait to begin planning until just before the start of the phase that the SPMP section applies to. Some parts of the SPMP/DD need to be drafted at the start of the project to arrive at a total cost estimate, for example.

The evolution of the plan is closely connected with progress reporting (see Chapter 6). Reports are produced and reviewed. Reports may propose changes to the plan. Alternatively, changes to the plan may be defined during the progress meeting.

5.7.1 UR phase

By the end of the UR review, the SR phase section of the SPMP must be produced (SPMP/SR) (SPM02). The SPMP/SR describes, in detail, the project activities to be carried out in the SR phase.

As part of its introduction, the SPMP/SR must outline a plan for the whole project (SPM03), and provide a rough estimate of the total cost. The project manager will need to draft the work breakdown, schedule and budget sections of the SPMP/AD, SPMP/DD and SPMP/TR at the start of the project to provide the overview and estimate the total cost of the project.

A precise estimate of the effort involved in the SR phase must be included in the SPMP/SR (SPM04). Specific factors affecting estimates for the work required in the SR phase are the:
• number of user requirements;
• level of user requirements;
• stability of user requirements;
• level of definition of external interfaces;
• quality of the URD.


An estimate based simply on the number of user requirements might be very misleading - a large number of detailed low-level user requirements might be more useful, and save more time in the SR phase, than a few high-level user requirements. A poor quality URD with few requirements might imply that a lot of requirements analysis is required in the SR phase.

5.7.2 SR phase

During the SR phase, the AD phase section of the SPMP must be produced (SPMP/AD) (SPM05). The SPMP/AD describes, in detail, the project activities to be carried out in the AD phase.

As part of its introduction, the SPMP/AD must include an outline plan for the rest of the project, and provide an estimate of the total cost of the whole project accurate to at least 30% (SPM06). The project manager will need to draft the work breakdown, schedule and budget sections of the SPMP/DD and SPMP/TR at the end of the SR phase to provide the overview and estimate the total cost of the project.

A precise estimate of the effort involved in the AD phase must be included in the SPMP/AD (SPM07). Specific factors that affect estimates for the work required in the AD phase are the:
• number of software requirements;
• level of detail of software requirements;
• stability of software requirements;
• level of definition of external interfaces;
• quality of the SRD.

If an evolutionary development life cycle approach is to be used, then this should be stated in the SPMP/AD.

5.7.3 AD phase

During the AD phase, the DD phase section of the SPMP must be produced (SPMP/DD) (SPM08). The SPMP/DD describes, in detail, the project activities to be carried out in the DD phase.

An estimate of the total project cost must be included in the SPMP/DD (SPM09). An accuracy of 10% should be aimed at.


The SPMP/DD must contain a WBS that is directly related to the decomposition of the software into components (SPM10).

The SPMP/DD must contain a planning network (i.e. activity network) showing the relationships between the coding, integration and testing activities (SPM11).

5.7.4 DD phase

As the detailed design work proceeds to lower levels, the WBS and job schedule need to be refined to reflect this. To achieve the necessary level of visibility, software production work packages in the SPMP/DD must not last longer than one man-month (SPM12).

During the DD phase, the TR phase section of the SPMP must be produced (SPMP/TR) (SPM13). The SPMP/TR describes, in detail, project activities until final acceptance, in the OM phase.


CHAPTER 6
THE SOFTWARE PROJECT PROGRESS REPORT

6.1 INTRODUCTION

Accurate and timely reporting is essential for the control of a project. Software project managers should produce progress reports at regular intervals or when events occur that meet agreed criteria. The progress report is an essential tool for controlling a project because:
• preparing the progress report forces re-evaluation of the resources required to complete the project;
• the progress report provides initiators and senior management with visibility of the status of the project.

Progress reports describe the project status in relation to the applicable Software Project Management Plan. They should cover technical status, resource status, schedule status, problems and financial status (which may be provided separately).

Progress reports should be distributed to senior management and the initiator. The progress report sent to senior management should contain the information described in Section 6.6. Although the progress report sent to the initiator should have the same structure, the level of detail provided may not be the same because of the contractual situation. For example, in a fixed price contract it is acceptable to omit detailed information about resource status and financial status.

6.2 STYLE

The progress report should be plain, concise, clear, complete and consistent. Brevity, honesty and relevance are the watchwords of the good progress report.

6.3 RESPONSIBILITY

The software project manager is normally responsible for the production of the progress report.


6.4 MEDIUM

It is usually assumed that the progress report is a paper document. There is no reason why it should not be distributed electronically to people with the necessary equipment.

6.5 SERVICE INFORMATION

Progress reports should contain the following service information:

a - Abstract
b - Table of Contents

6.6 CONTENTS

The following table of contents is recommended for the progress report:

1 Introduction
  1.1 Purpose
  1.2 Summary
  1.3 References
  1.4 Definitions and acronyms

2 Technical status
  2.1 Work package technical status
  2.2 Configuration status
  2.3 Forecast for next reporting period

3 Resource status
  3.1 Staff utilisation
  3.2 Work package resource status
  3.3 Resource summary

4 Schedule status
  4.1 Milestone trends
  4.2 Schedule summary

5 Problems

6 Financial status report
  6.1 Costs for the reporting period
  6.2 Cost to completion
  6.3 Limit of liability
  6.4 Payments

6.6.1 Progress Report/1 Introduction

6.6.1.1 Progress Report/1.1 Purpose

This section should describe the purpose of the report. This section should include:
• the name of the project;
• a reference to the applicable SPMP;
• a reference to the applicable contract (if any);
• a statement of the reporting period.

6.6.1.2 Progress Report/1.2 Summary

This section should summarise the main activities and achievements during the reporting period.

Any change in the total cost to completion should be stated in this section. Changes to work package resource estimates should be summarised.

6.6.1.3 Progress Report/1.3 References

This section should list any documents referenced in the report or applicable to it (e.g. this guide, the SPMP, etc.).

6.6.1.4 Progress Report/1.4 Definitions and acronyms

This section should list any special terms, acronyms or abbreviations used in the report and explain their meaning.


6.6.2 Progress Report/2 Technical status

6.6.2.1 Progress Report/2.1 Work package technical status

This section should list the work packages completed during the reporting period. There should be some indication of whether each work package was completed successfully or not.

This section should list the work packages continued or started during the reporting period. The outstanding tasks for each work package should be identified.

6.6.2.2 Progress Report/2.2 Configuration status

This section should summarise the changes to the configuration status in the reporting period, e.g.:
• RID statistics;
• issues and revisions of documents;
• SCR, SPR and SMR statistics;
• releases of software.

6.6.2.3 Progress Report/2.3 Forecast for next reporting period

This section should forecast what technical progress is expected on each work package in the next reporting period.

Any events that are expected to take place in the reporting period should be described (e.g. milestones, deliveries).

6.6.3 Progress Report/3 Resource status

6.6.3.1 Progress Report/3.1 Staff utilisation

This section should comprise:
• a table of hours booked by each member of the team;
• the hours staff have booked for a number of past reporting periods (e.g. from the beginning of the current phase, or beginning of the year, or beginning of the project, as appropriate);
• a forecast for the man-hours staff will book for future reporting periods (e.g. up to the end of the current phase, or the end of the project, or other milestone).


6.6.3.2 Progress Report/3.2 Work package resource status

This section should contain a work package progress table (see Section 3.4.1) summarising for each work package:
• previous estimate of the effort required for completion;
• effort expended in this period;
• cumulative effort expenditure up to the end of the reporting period;
• estimate of remaining effort required for completion.

6.6.3.3 Progress Report/3.3 Resource status summary

This section should present the aggregated effort expenditure for the reporting period for:
• the project;
• subsystems (for medium and large-size projects).

6.6.4 Progress Report/4 Schedule status

6.6.4.1 Progress Report/4.1 Milestone trend charts

This section should provide an updated milestone trend chart (see Section 3.4.2) for all the milestones described in the SPMP section to which this progress report applies. All changes to milestone dates made in this reporting period should be explained and justified.

6.6.4.2 Progress Report/4.2 Schedule summary

This section should provide an updated bar chart (i.e. Gantt chart) marked to show work packages completed and milestones achieved. Progress on partially completed work packages may also be shown.

6.6.5 Progress Report/5 Problems

This section should identify any problems which are affecting or could affect progress. These may include technical problems with the software under development, environmental problems (e.g. computers, communications, and accommodation) and resource problems (e.g. staffing problems). This list is not exhaustive.


6.6.6 Progress Report/6 Financial status report

The Financial Status Report (sometimes called the Cost Report) provides the total amounts, in currency units, to be billed during the reporting period. The Financial Status Report may be provided separately.

6.6.6.1 Progress Report/6.1 Costs for the reporting period

This section should provide a table listing the hours worked, rates and costs (hours worked multiplied by rate) for each team member. Total labour costs should be stated.

For ESA contracts, labour costs may have to be calculated in base rates and price variation rates, determined by applying a contractually specified price-variation formula to the base rate.

Non-labour costs should be listed and a total stated.

6.6.6.2 Progress Report/6.2 Cost to completion

This section should state the accumulated cost so far and the planned cost to completion of the project and project phase.

A progress chart plotting the cost to completion values reported since the start of the project or project phase may be provided (see Section 3.4.1).

6.6.6.3 Progress Report/6.3 Limit of liability

This section states:
• the Limit Of Liability (LOL);
• how much has been spent;
• how much there is to go.

6.6.6.4 Progress Report/6.4 Payments

This section should state what payments are due for the reporting period (e.g. contractual stage payments on achievement of a given milestone).


APPENDIX A
GLOSSARY

Terms used in this document are consistent with ESA PSS-05-0 [Ref 1] and ANSI/IEEE Std 610.12-1990 [Ref 2].

A.1 LIST OF ACRONYMS

AD      Architectural Design
AD/R    Architectural Design Review
ADD     Architectural Design Document
AMI     Application of Metrics in Industry
ANSI    American National Standards Institute
AT      Acceptance Test
BSSC    Board for Software Standardisation and Control
CASE    Computer Aided Software Engineering
COCOMO  Constructive Cost Model
DCR     Document Change Record
DD      Detailed Design and production
DD/R    Detailed Design and production Review
DDD     Detailed Design and production Document
ESA     European Space Agency
FPA     Function Point Analysis
ICD     Interface Control Document
IEEE    Institute of Electrical and Electronics Engineers
IT      Integration Test
LOL     Limit Of Liability
PERT    Program Evaluation and Review Technique
PSS     Procedures, Specifications and Standards
QA      Quality Assurance
RID     Review Item Discrepancy
SCMP    Software Configuration Management Plan
SCR     Software Change Request
SMR     Software Modification Report
SPR     Software Problem Report
SR      Software Requirements
SR/R    Software Requirements Review
SRD     Software Requirements Document
ST      System Test
SUT     Software Under Test
SVV     Software Verification and Validation
SVVP    Software Verification and Validation Plan
UR      User Requirements
UR/R    User Requirements Review
URD     User Requirements Document
UT      Unit Test

Page 659: ESA Software Engineering Standards

ESA PSS-05-08 Issue 1 Revision 1 (March 1995) B-1REFERENCES

APPENDIX B
REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0 Issue 2, February 1991.

2. IEEE Standard Glossary of Software Engineering Terminology, ANSI/IEEE Std 610.12-1990.

3. IEEE Standard for Software Project Management Plans, ANSI/IEEE Std 1058.1-1987.

4. Managing the Software Process, Watts S. Humphrey, SEI Series in Software Engineering, Addison-Wesley, August 1990.

5. Reference Model for Frameworks of Software Engineering Environments, European Computer Manufacturers Association, TR/55, 1991.

6. Evolution of the ESA Software Engineering Standards, J. Fairclough and C. Mazza, in Proceedings of the IEEE Fourth Software Engineering Standards Application Workshop, 1991.

7. Software Process Themes and Issues, M. Dowson, in Proceedings of the 2nd International Conference on the Software Process, IEEE Computer Society Press, 1993.

8. Tool Support for Software Process Definition and Enactment, C. Fernström, in Proceedings of the ESA/ESTEC Workshop on European Space Software Development Environment, ESA, 1992.

9. Software Engineering Economics, B. Boehm, Prentice-Hall, 1981.

10. The Mythical Man-Month, F. Brooks, Addison-Wesley, 1975.

11. Software Function, Source Lines of Code and Development Effort Prediction: a software science validation, A. Albrecht and J. Gaffney Jr, IEEE Transactions on Software Engineering, SE-9 (6), 1983.

12. SOCRAT, Software Cost Resources Assessment Tool, ESA Study Contract Report, J. Fairclough, R. Blake and I. Alexander, 1992.

13. Function Point Analysis: Difficulties and Improvements, IEEE Transactions on Software Engineering, Vol 14, 1, 1988.

14. Managing Computer Projects, R. Gibson, Prentice-Hall, 1992.

15. Software Sizing and Estimating, C.R. Symons, Wiley, 1991.

16. Project Control Requirements and Procedures for Medium-Size Projects, ESA PSS-38, Issue 1, December 1977.

17. The Nature of Managerial Work, H. Mintzberg, Prentice-Hall, 1980.

18. Applications of Metrics in Industry Handbook: a quantitative approach to software management, Centre for Systems and Software Engineering, South Bank University, London, 1992.

19. Guide to the Software Engineering Standards, ESA PSS-05-01, Issue 1, October 1991.


APPENDIX C
MANDATORY PRACTICES

This appendix is repeated from ESA PSS-05-0, appendix D.8.

SPM01 All software project management activities shall be documented in the Software Project Management Plan (SPMP).

SPM02 By the end of the UR review, the SR phase section of the SPMP shall be produced (SPMP/SR).

SPM03 The SPMP/SR shall outline a plan for the whole project.

SPM04 A precise estimate of the effort involved in the SR phase shall be included in the SPMP/SR.

SPM05 During the SR phase, the AD phase section of the SPMP shall be produced (SPMP/AD).

SPM06 An estimate of the total project cost shall be included in the SPMP/AD.

SPM07 A precise estimate of the effort involved in the AD phase shall be included in the SPMP/AD.

SPM08 During the AD phase, the DD phase section of the SPMP shall be produced (SPMP/DD).

SPM09 An estimate of the total project cost shall be included in the SPMP/DD.

SPM10 The SPMP/DD shall contain a WBS that is directly related to the decomposition of the software into components.

SPM11 The SPMP/DD shall contain a planning network showing relationships of coding, integration and testing activities.

SPM12 No software production work packages in the SPMP/DD shall last longer than 1 man-month.

SPM13 During the DD phase, the TR phase section of the SPMP shall be produced (SPMP/TR).


APPENDIX D
PROJECT MANAGEMENT FORMS


D.1 WORK PACKAGE DESCRIPTION

PROJECT:                    PHASE:              W.P. REF:

W.P. TITLE:

CONTRACTOR:                                     SHEET 1 OF 1

MAJOR CONSTITUENT:

START EVENT:                PLANNED DATE:       ISSUE REF:

END EVENT:                  PLANNED DATE:       ISSUE DATE:

W.P. MANAGER:

Inputs:

Activities:

Outputs:


D.2 MILESTONE TREND CHART

[The milestone trend chart form provides a vertical list of milestones M1 to M7, a horizontal 'Update' timescale, and space for a key to milestones and dates.]


APPENDIX E
INDEX

activity distribution, 40; activity network, 21; AMI, 32; ANSI/IEEE Std 1058.1-1987, 55; ANSI/IEEE Std 610.12-1990, 1; Boehm, 20, 42; Brooks' Law, 26; COCOMO, 39; commercial software, 30; contingency plans, 27; cost report, 72; cyclomatic complexity, 33; deliverables, 7, 58; Delphi method, 41; documentation effort, 42; duration, 20; dynamic models, 38; enaction, 54; estimating tool, 52; evolutionary approach, 11; external interfaces, 31; financial status report, 72; FPA, 40; Function Point Analysis, 40; growth curve, 28; heuristics, 53; human resources, 18; incremental approach, 10; informational responsibilities, 4; integration complexity, 33; Interface Control Documents, 31; leadership, 22; limit of liability, 72; location of staff, 27; management control loop, 3; management responsibility, 4; management structure, 59; Mark Two Function Point Analysis, 40; maturity, 25; methods, 29; metric, 33; metric data analysis, 34; metric data collection, 34; milestone trend charts, 48; Mintzberg, 4; non-labour costs, 20; organigram, 19; organigrams, 58; PERT, 43; Petri Nets, 37; phase per cent method, 41; planning network, 66; process improvement, 25; process model, 8, 58; process modelling method, 37; process modelling tool, 52; process support tool, 54; progress charts, 47; progress reports, 35; progress tables, 46; project development effort, 33; project development time, 33; project interfaces, 5; project organisation, 58; project planning, 6; project planning tool, 51; quality system, 13; reliability, 33; resource smoothing, 22; risk analysis tool, 53; risk management, 23; risk table, 44; rule of seven, 5, 18; single point failure, 27; SLC03, 8; software project management, 3; software size measures, 53; SPM01, 55, 57; SPM02, 64; SPM03, 64; SPM04, 55, 64; SPM05, 65; SPM06, 55, 65; SPM07, 55, 65; SPM08, 65; SPM09, 55, 65; SPM10, 55, 66; SPM11, 55, 66; SPM12, 66; SPM13, 66; SPMP, 7; staff, 24; staffing profile, 28; standard interfaces, 31; structured analysis, 52; team leader, 23; technical management, 23; technical novelty, 29; terms of reference, 4; timesheets, 35; tools, 29; unity of command principle, 18; user requirements, 30; waterfall approach, 9; WBS, 14; Work Breakdown Structure, 14; work package completion reports, 35; work package definition, 14


ESA PSS-05-09 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to software configuration management

Prepared by:
ESA Board for Software Standardisation and Control (BSSC)


DOCUMENT STATUS SHEET


1. DOCUMENT TITLE: ESA PSS-05-09 Guide to Software Configuration Management

2. ISSUE   3. REVISION   4. DATE   5. REASON FOR CHANGE
1          0             1992      First issue
1          1             1995      Minor updates for publication

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved, 9th December 1992
Telematics Supervisory Board

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division,
ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E1
ISSN 0379-4059

Copyright © 1995 by European Space Agency


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION .........................................................................1
  1.1 PURPOSE ..............................................................................................1
  1.2 OVERVIEW ............................................................................................1

CHAPTER 2 SOFTWARE CONFIGURATION MANAGEMENT ......................3
  2.1 INTRODUCTION ....................................................................................3
  2.2 PRINCIPLES OF SOFTWARE CONFIGURATION MANAGEMENT ......4
    2.2.1 Configuration identification ..............................................................7
      2.2.1.1 What to identify as configuration items ....................................8
        2.2.1.1.1 Baselines .............................................................................8
        2.2.1.1.2 Variants ................................................................................9
      2.2.1.2 How to identify configuration items ..........................................9
      2.2.1.3 Configuration item control authority .......................................11
        2.2.1.3.1 Software author .................................................................11
        2.2.1.3.2 Software project manager ................................................11
        2.2.1.3.3 Review board ....................................................................11
      2.2.1.4 Configuration item history ......................................................12
    2.2.2 Configuration item storage ............................................................12
      2.2.2.1 Software libraries ....................................................................13
      2.2.2.2 Media control ..........................................................................15
    2.2.3 Configuration change control ........................................................15
    2.2.4 Configuration status accounting ...................................................17
    2.2.5 Release ..........................................................................................19
  2.3 DOCUMENTATION CONFIGURATION MANAGEMENT ....................19
    2.3.1 Document configuration identification ..........................................19
    2.3.2 Document configuration item storage ..........................................20
    2.3.3 Document configuration change control ......................................20
    2.3.4 Document configuration status accounting .................................23
    2.3.5 Document release .........................................................................23
  2.4 CODE CONFIGURATION MANAGEMENT .........................................23
    2.4.1 Code configuration identification ..................................................24
    2.4.2 Code configuration item storage ..................................................25
    2.4.3 Code configuration change control ..............................................25
    2.4.4 Code configuration status accounting .........................................27
    2.4.5 Code release .................................................................................28

CHAPTER 3 SOFTWARE CONFIGURATION MANAGEMENT TOOLS .......29
  3.1 INTRODUCTION ..................................................................................29
  3.2 BASIC TOOLSET REQUIREMENTS ...................................................29
  3.3 ADVANCED TOOLSET REQUIREMENTS ..........................................31
  3.4 ONLINE TOOLSET REQUIREMENTS .................................................32
  3.5 INTEGRATED TOOLSET REQUIREMENTS ........................................32

CHAPTER 4 THE SOFTWARE CONFIGURATION MANAGEMENT PLAN ..33
  4.1 INTRODUCTION ..................................................................................33
  4.2 STYLE ...................................................................................................34
  4.3 RESPONSIBILITY .................................................................................34
  4.4 MEDIUM ...............................................................................................34
  4.5 CONTENT ............................................................................................34
  4.6 EVOLUTION .........................................................................................42
    4.6.1 UR phase .......................................................................................42
    4.6.2 SR phase .......................................................................................43
    4.6.3 AD phase .......................................................................................43
    4.6.4 DD phase ......................................................................................43

APPENDIX A GLOSSARY ............................................................................A-1
APPENDIX B REFERENCES ........................................................................B-1
APPENDIX C MANDATORY PRACTICES ...................................................C-1
APPENDIX D FORM TEMPLATES ...............................................................D-1
APPENDIX E INDEX ......................................................................................E-1


PREFACE

This document is one of a series of guides to software engineering produced by the Board for Software Standardisation and Control (BSSC), of the European Space Agency. The guides contain advisory material for software developers conforming to ESA's Software Engineering Standards, ESA PSS-05-0. They have been compiled from discussions with software engineers, research of the software engineering literature, and experience gained from the application of the Software Engineering Standards in projects.

Levels one and two of the document tree at the time of writing are shown in Figure 1. This guide, identified by the shaded box, provides guidance about implementing the mandatory requirements for software configuration management described in the top level document ESA PSS-05-0.

[Figure 1, originally presented here, shows the document tree. Level 1 is the ESA Software Engineering Standards, ESA PSS-05-0. Level 2 comprises the guides: PSS-05-01 Guide to the Software Engineering Standards, PSS-05-02 UR Guide (Guide to the User Requirements Definition Phase), PSS-05-03 SR Guide, PSS-05-04 AD Guide, PSS-05-05 DD Guide, PSS-05-06 TR Guide, PSS-05-07 OM Guide, PSS-05-08 SPM Guide (Guide to Software Project Management), PSS-05-09 SCM Guide (this document), PSS-05-10 SVV Guide and PSS-05-11 SQA Guide.]

Figure 1: ESA PSS-05-0 document tree

The Guide to the Software Engineering Standards, ESA PSS-05-01, contains further information about the document tree. The interested reader should consult this guide for current information about the ESA PSS-05-0 standards and guides.

The following past and present BSSC members have contributed to the production of this guide: Carlo Mazza (chairman), Gianfranco Alvisi, Michael Jones, Bryan Melton, Daniel de Pablo and Adriaan Scheffer.


The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and Guides, and all those software engineers in ESA and Industry who have made contributions.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat
Attention of Mr C Mazza
ESOC
Robert Bosch Strasse 5
D-64293 Darmstadt
Germany

BSSC/ESTEC Secretariat
Attention of Mr B Melton
ESTEC
Postbus 299
NL-2200 AG Noordwijk
The Netherlands


CHAPTER 1 INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA), either in house or by industry [Ref. 1].

ESA PSS-05-0 requires that the configuration of the code and documentation be managed throughout the software development life cycle. This activity is called ‘Software Configuration Management' (SCM). Each project must define its Software Configuration Management activities in a Software Configuration Management Plan (SCMP).

This guide defines and explains what software configuration management is, provides guidelines on how to do it, and defines in detail what a Software Configuration Management Plan should contain.

This guide should be read by everyone concerned with developing, installing and changing software, i.e. software project managers, software librarians and software engineers. Some sections describing procedures for handling software problems and changing documents will be of interest to users.

1.2 OVERVIEW

Chapter 2 discusses the principles of software configuration management, expanding upon ESA PSS-05-0. Chapter 3 discusses tools for software configuration management. Chapter 4 describes how to write the SCMP.

All the mandatory practices in ESA PSS-05-0 relevant to software configuration management are repeated in this document. The identifier of the practice is added in parentheses to mark a repetition. This document contains no new mandatory practices.


CHAPTER 2 SOFTWARE CONFIGURATION MANAGEMENT

2.1 INTRODUCTION

The purpose of software configuration management is to plan, organise, control and coordinate the identification, storage and change of software through development, integration and transfer (SCM02). Every project must establish a software configuration management system. All software items, for example documentation, source code, executable code, files, tools, test software and data, must be subjected to SCM (SCM01). Software configuration management must ensure that:
• software components can be identified;
• software is built from a consistent set of components;
• software components are available and accessible;
• software components never get lost (e.g. after media failure or operator error);
• every change to the software is approved and documented;
• changes do not get lost (e.g. through simultaneous updates);
• it is always possible to go back to a previous version;
• a history of changes is kept, so that it is always possible to discover who did what and when.

Project management is responsible for organising software configuration management activities, defining software configuration management roles (e.g. software librarian), and allocating staff to those roles. In the TR and OM phases, responsibility for software configuration management lies with the Software Review Board (SRB).

Each group has its own characteristic requirements for software configuration management [Ref. 5]. Project management requires accurate identification of all items, and their status, to control and monitor progress. Development personnel need to share items safely and efficiently. Quality assurance personnel need to be able to trace the derivation of each item and establish the completeness and correctness of each configuration. The configuration management system provides visibility of the product to everyone.


In large developments, spread across multiple hardware platforms, configuration management procedures may differ in physical details. However, a common set of configuration management procedures must be used (SCM03).

No matter how large the project may be, software configuration management has a critical effect on quality. Good software configuration management is essential for efficient development and maintenance, and for ensuring that the integrity of the software is never compromised. Bad software configuration management can paralyse a project. Sensible change requests may fail to be approved because of fears that they cannot be implemented correctly and will degrade the system.

This chapter summarises the principles of software configuration management described in ESA PSS-05-0 and then discusses the application of these principles first to documents and then to code.

2.2 PRINCIPLES OF SOFTWARE CONFIGURATION MANAGEMENT

The subject matter of all software configuration management activities is the ‘configuration item' (CI). A CI is an aggregation of hardware, software or both that is designated as a unit for configuration management, and treated as a single entity in the configuration management process [Ref. 2]. A CI can be an aggregation of other CIs, organised in a hierarchy. Any member of this hierarchy can exist in several versions, each being a separate CI. For code, the CI hierarchy will normally be based on the design hierarchy, but may be modified for more efficient configuration management. Figure 2.2.A shows an example CI hierarchy.

Examples of configuration items are:
• software component, such as a version of a source module, object module, executable program or data file;
• support software item, such as a version of a compiler or a linker;
• baseline, such as a software system under development;
• release, such as a software system in operational use;
• document, such as an ADD.


[Figure not reproduced. It shows a CI hierarchy with top-level CI A; second-level CIs AA, AB and AC; and third-level CIs AAA, AAB, ACA and ACB, one of which exists in Versions 1 to 4.]

Figure 2.2.A: Example Configuration Item hierarchy

Software configuration management is formally defined in ANSI/IEEE Std 610.12-1990 [Ref. 2] as the discipline of applying technical and administrative direction and surveillance to:
• identify and document the functional and physical characteristics of a configuration item;
• control changes to those characteristics;
• record and report change processing and implementation status;
• verify compliance with specified requirements.

All configuration items, from modules to entire software releases, must be defined as early as possible and systematically labelled when they are created. Software development can only proceed smoothly if the development team are able to work on correct and consistent configuration items. Responsibilities for the creation and modification of configuration items must be allocated to individuals and respected by others. Although this allocation is defined in the Work Breakdown Structure (WBS) of the Software Project Management Plan (SPMP), the Software Configuration Management Plan (SCMP) defines the procedures that coordinate the efforts of the developers.


Software develops from baseline to baseline until it can be released. Changes to baselines must be controlled throughout the life cycle by documenting the:
• problems, so that they can be understood;
• changes that are proposed to solve the problem;
• modifications that result.

Problems that initiate the change process can result from bugs in code, faults in design, errors in the requirements or omissions in the requirements.

The change process must be monitored. Accurate records and reports of configuration item status are needed to provide visibility of controlled change.

Verification of the completeness and correctness of configuration items (by reviews and audits) is classified by ESA PSS-05-0 as a software verification and validation activity, and is discussed in ESA PSS-05-10, ‘Guide to Software Verification and Validation'.

ESA PSS-05-0 identifies five primary configuration management activities:
• configuration identification;
• configuration item storage;
• configuration change control;
• configuration status accounting;
• release.

Figure 2.2B shows the relationship between the software configuration management activities (shaded) and component development, system integration and build activities (unshaded). The diagram applies to all CIs, both documents and code, and shows the change control loop, a primary feature of software configuration management.

The following subsections discuss the principles of configuration management in terms of the five software configuration management activities. Software configuration management terms are collected in Appendix A for reference.


[Figure not reproduced. It shows the software configuration management activities (shaded in the original): Identify CIs, Store CIs, Change CIs, Record CI Status and Release CIs, linked with the unshaded activities Develop CIs and Integrate CIs. The SCMP, SPMP, ADD and DDD feed the identification activity, which outputs CI lists and naming conventions; developed CIs are stored as controlled CIs (supported by librarian tools); controlled CIs are integrated into baselines (supported by build tools); problem descriptions drive the change activity, which outputs changed CIs and change descriptions; CI lists and change descriptions feed the status accounts; and baselines are released together with a release description.]

Figure 2.2B: Software Configuration Management Activities

2.2.1 Configuration identification

The first step in configuration management is to define the CIs and the conventions for their identification.

The inputs to the configuration identification activity are the SPMP (which lists the project documents), the ADD and the DDD (which list the software components). These inputs are used to define the hierarchy of CIs. The output is a configuration item list, defining what will be built using the naming conventions described in the SCMP. The configuration item list physically defines the components to be developed and the baselines to be integrated. For example an SPMP will say that an SRD will be written, but the configuration item list will define its file specification.

Configuration identification conventions should state how to:
• name a CI;
• decide who is the control authority of a CI;
• describe the history of a CI.


2.2.1.1 What to identify as configuration items

At the top level, the whole system is a CI. The system CI is composed of lower level CIs, which are derived from the design and planning documentation. Several factors may be relevant in deciding what to identify as the lowest level CI, such as the:
• software design defined in the ADD and DDD (the lowest level of the software design sets the lowest possible level of configuration management);
• capabilities of the software development tools (e.g. the units that the compiler or linkers input and output);
• bottom-level work packages defined in the SPMP.

Configurations must be practical from the physical point of view (i.e. each CI is easy to create and modify) and practical from the logical point of view (i.e. the purpose of each CI is easy to understand). A configuration should have sufficient ‘granularity' (i.e. the lowest level CIs are small enough) to ensure that precise control is possible.

CIs should be easy to manage as discrete physical entities (e.g. files), and so the choice of what to make CIs can depend very much upon the tools available. A basic configuration management toolset (see Chapter 3) might rely on the file management facilities of the operating system. It is no accident therefore that CIs are often discrete files.

2.2.1.1.1 Baselines

A ‘baseline' is a CI that has been formally reviewed and agreed upon (i.e. approved), and declared a basis for further development. Although any approved CI in a system can be referred to as a baseline, the term is normally used for the complete system.

Software development proceeds from baseline to baseline, accumulating new or revised CIs in the process. Early baselines contain only documents specifying the software to be built; later baselines contain the code as well.

The CIs in a baseline must be consistent with each other. Examples of consistency are:
• code CIs are compatible with the design document CIs;
• design document CIs are compatible with the requirements document CIs;
• user manual CIs describe the operation of the code CIs.

Baselines should be established when sharing software between developers becomes necessary. As soon as more than one person is using a piece of software, development of that software must be controlled.

Key baselines are tied to the major milestones in the life cycle (i.e. UR/R, SR/R, AD/R, DD/R, provisional acceptance and final acceptance). Baselines are also tied to milestones during integration. Unit-tested software components are assembled and integration-tested to form the initial baseline. New baselines are generated as other software components are integrated. The baselines provide a development and test environment for the new software components undergoing development and integration.

2.2.1.1.2 Variants

The term ‘variant' is often used to identify CIs that, although offering almost identical functionality, differ in aspects such as:
• hardware environment;
• communications protocol;
• user language (e.g. English, French).

Variants may also be made to help diagnose software problems. The usefulness of such variants is often temporary, and they should be withdrawn when the problem has been solved.

The need to develop and maintain several variants can greatly complicate configuration management, so the number of variants should be minimised. One way to do this is to include conditional code in modules, so that one module can handle every variation. However modules can become very much more complicated, and developers need to trade off code complexity against configuration complexity.

2.2.1.2 How to identify configuration items

Each configuration item must possess an identifier (SCM06). The identifier contains the name, type and version attributes of the configuration item (SCM07, SCM08, SCM09). The configuration management system should automatically ensure that all identifiers are unique.


Good names help developers quickly locate parts they need to work on, and ease traceability. Documents should have standard names (note that ESA PSS-05-0 predefines the names of documents) and code items should be named after the software components they contain.

The type field of a CI identifier should identify the processing the CI is intended for. There are three main types of CI:
• source CIs;
• derived CIs;
• tools to generate derived CIs from source CIs.

Source CIs (e.g. documents, source code) are created by software engineers. Corrections and modifications to CIs should always be done at source level.

Derived CIs (e.g. object files, executable files) are generated by processing source CIs with tools (e.g. compilers and linkers). The type field of a CI identifier should allow easy distinction between source CIs, derived CIs, and the tools used to make derived CIs.

Version numbers allow changes to CIs to be distinguished. Whenever a CI is changed, the version number changes. In most configuration management systems the version number is an integer.

Some configuration management systems distinguish between ‘versions' and ‘revisions'. Revision numbers track minor changes that do not change the functionality, such as bug fixes. When this approach is used, the version number actually consists of two numbers. The first marks major changes and the second, the revision number, marks minor changes. For example a software release might be referred to as ‘Version 5.4'. For documents, the first number is called the ‘issue' number. A version of a document might be referred to as ‘Issue 2 Revision 1', for example.

The configuration identification method must accommodate new CIs, without requiring the modification of the identifiers of any existing CIs (SCM11). This is the ‘extensibility' requirement. A method that sets a limit on the number of CIs does not meet the extensibility requirement, for example. When the limit has been reached, existing CIs have to be merged or deleted to make room for new CIs.
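
As an illustration only (ESA PSS-05-0 mandates the identifier attributes, not any implementation), the name, type and version attributes and the extensibility requirement might be modelled as in the following sketch; the class and field names are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConfigurationItemId:
        """CI identifier: name, type and version attributes (SCM07-SCM09)."""
        name: str      # purpose-related name, e.g. 'CALC_MEAN'
        ci_type: str   # intended processing, e.g. 'FOR' for FORTRAN source
        issue: int     # major changes ('issue' number for documents)
        revision: int  # minor changes, e.g. bug fixes

        def __str__(self):
            return f"{self.name}.{self.ci_type}.{self.issue}.{self.revision}"

        def next_revision(self):
            # A new identifier is created; existing identifiers are never
            # modified, so the scheme is extensible (SCM11).
            return ConfigurationItemId(self.name, self.ci_type,
                                       self.issue, self.revision + 1)

    ci = ConfigurationItemId("CALC_MEAN", "FOR", 2, 1)
    print(ci)                  # CALC_MEAN.FOR.2.1
    print(ci.next_revision())  # CALC_MEAN.FOR.2.2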


2.2.1.3 Configuration item control authority

Every CI should have a unique control authority that decides what changes will be made to a CI. The control authority may be an individual or a group of people. Three levels of control authority are normally distinguished in a software project:
• software author;
• software project manager;
• review board.

Large projects may have more levels. As a CI passes through the stages of verification (formal review meeting in the case of documents; unit, integration, system and acceptance tests in the case of code), so the control authority changes to match the higher verification status of the CI.

2.2.1.3.1 Software author

Software authors create low-level CIs, such as documents and code. Document writers are usually the control authority for draft documents. Programmers are normally the control authority for code until unit testing is complete.

2.2.1.3.2 Software project manager

Software project managers are responsible for the assembly of high-level CIs (i.e. baselines and releases) from the low-level CIs provided by software authors. Software project managers are the control authorities for all these CIs. Low-level CIs are maintained in software libraries (see Section 2.2.2.1). On most projects the software project manager is supported by a software librarian.

2.2.1.3.3 Review board

The review board approves baselines and changes to them. During development phases, formal review of baselines is done by the UR/R, SR/R, AD/R and DD/R boards. During the TR and OM phases, baselines are controlled by the Software Review Board (SRB).

The review board is the control authority for all baselines that have been approved or are under review. The review board may decide that a baseline should be changed, and authorises the development or maintenance team to implement the change. Control authority rights are not returned. Those responsible for implementing changes must resist the temptation to make any changes that have not been authorised. New problems found should be reported in the normal way (see Sections 2.3.3 and 2.4.3).

Review boards should be composed of members with sufficient authority and expertise to resolve any problems with the software. The nature of the review board depends upon the project. On a small project it might just consist of two people, such as the project manager and a user representative. On a large project the membership might include the project manager, the technical manager, the software librarian, the team leaders, user representatives and quality assurance personnel. Review board procedures are discussed in ESA PSS-05-10, ‘Guide to Software Verification and Validation'.

2.2.1.4 Configuration item history

The development history of each CI must be recorded from the moment it is first submitted for review or integration. For documents, this history is stored in Document Change Records (DCRs) and Document Status Sheets (DSS). For source code, the development history is stored in the module header (SCM18). Change records in module headers should reference the Software Change Request (SCR) that actioned them. For derived code, the development history is stored in the librarian's database. The development history should include records of the tests that have been performed. The Software Modification Report (SMR) should summarise the changes to CIs that have been revised in response to an SCR.

Development histories may be stored as tables of ‘derivation records'. Each derivation record records one event in the life of the CI (e.g. source code change, compilation, testing etc.).
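
A minimal sketch of such a derivation-record table follows; the field names are illustrative and not mandated by the Standards.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DerivationRecord:
        """One event in the life of a CI (e.g. change, compilation, test)."""
        ci_id: str      # configuration item identifier
        event: str      # e.g. 'source change', 'compilation', 'unit test'
        author: str
        when: date
        reference: str  # e.g. the SCR that actioned a change (cf. SCM18)

    history = [
        DerivationRecord("CALC_MEAN.FOR.2", "source change", "A. Author",
                         date(1995, 3, 1), "SCR-12"),
        DerivationRecord("CALC_MEAN.FOR.2", "unit test", "A. Author",
                         date(1995, 3, 2), "SVVP unit test 4.1"),
    ]

    # The development history of one CI is a simple filter over the table:
    for r in (r for r in history if r.ci_id == "CALC_MEAN.FOR.2"):
        print(r.when, r.event, r.reference)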

2.2.2 Configuration item storage

All CIs must be stored securely so that they never get lost. Software projects can accumulate thousands of low-level CIs, and just as books are stored in libraries, it is necessary to store low-level CIs in software libraries. The software libraries are themselves CIs.

Software CIs reside on hardware media (e.g. paper, magnetic disk). Storage media must be maintained securely and safely so that software is never lost or its integrity compromised.


2.2.2.1 Software libraries

A software library is a controlled collection of configuration items (e.g. source module, object module and document). Low-level CIs are developed and stored in libraries (e.g. source and object libraries), and then extracted to build high-level CIs (e.g. executable images).

As a minimum, every project must use the following software libraries to store CIs:
• development (or dynamic) libraries (SCM23);
• master (or controlled) libraries (SCM24);
• archive (or static) libraries (SCM25).

Development libraries store CIs that are being written or unit tested. Master libraries store CIs in the current baselines. Archive libraries store CIs in releases or retired baselines. CIs in archive libraries must not be modified (SCM26).

Development libraries are created and maintained by the software authors in their own work space. Master and archive libraries are created and maintained by the software librarian, who is responsible for:
• establishing new master libraries;
• updating master libraries;
• backup of master libraries to archive libraries;
• control of access to master libraries and archive libraries.

To make Figure 2.2B simple, the only storage activity shown is that of the software librarian storing new and changed CIs in master libraries. CIs are also stored in development libraries during development and change activities. CIs are stored in archive libraries when the software is released or routinely saved. In Figure 2.2B the outputs of the storage activity are the controlled CIs which are input to the integration process, or to the software change process.

Software libraries are fundamental to a software configuration management system. Whatever the library, the software configuration management system should permit CIs in libraries to be read, inserted, replaced and deleted.
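
A file-based sketch of this minimal library interface is given below; the class and method names are hypothetical, and a real project would normally rely on a librarian tool (see Chapter 3).

    import shutil
    from pathlib import Path

    class SoftwareLibrary:
        """A controlled collection of CIs, stored one file per CI."""
        def __init__(self, root):
            self.root = Path(root)
            self.root.mkdir(parents=True, exist_ok=True)

        def read(self, ci, destination):
            shutil.copy(self.root / ci, destination)  # examine or copy a CI

        def insert(self, ci, source):
            target = self.root / ci
            if target.exists():               # identifiers must be unique
                raise FileExistsError(f"{ci} is already in the library")
            shutil.copy(source, target)

        def replace(self, ci, source):
            self.delete(ci)                   # remove one version...
            self.insert(ci, source)           # ...and insert the next

        def delete(self, ci):
            (self.root / ci).unlink()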


Access to software libraries should be controlled so that it is impossible:
• to access CIs without the correct authorisation;
• for two people to simultaneously update the same CI.

Tables 2.2.2.1A and B show the software library access rights that should be granted to development staff and software librarians. Read access allows a CI to be examined or copied. The insert access right allows a new CI to be added to a software library. The delete access right allows a CI to be removed from a library. The replace right allows a CI to be removed from a library and another version inserted.

library \ access right    read    insert    replace    delete

development libraries      y        y          y          y
master libraries           y        n          n          n
archive libraries          y        n          n          n

Table 2.2.2.1A: Development staff access rights

library \ access right    read    insert    replace    delete

development libraries      y        y          y          y
master libraries           y        y          y          y
archive libraries          y        y          n          n

Table 2.2.2.1B: Software librarian access rights

Table 2.2.2.1A does not mean that a development library should be accessible to all developers. On the contrary, apart from the software librarian, only the person directly responsible for developing or maintaining the CIs in a development library should have access to them. For sharing software, master libraries should be used, not development libraries.

Simultaneous update occurs when two or more developers take a copy of a CI and make changes to it. When a developer returns a modified CI to the master library, modifications made by developers who have returned their CI earlier are lost. A check-in/check-out or ‘locking' mechanism is required to prevent simultaneous update.
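
A minimal, self-contained sketch of such a check-in/check-out mechanism follows; the class and its behaviour are illustrative, and configuration management tools provide this capability (see Chapter 3).

    class LockingLibrary:
        """Master library with locking: one writer per CI at a time."""
        def __init__(self):
            self.contents = {}  # CI identifier -> current text
            self.locks = {}     # CI identifier -> user holding the lock

        def check_out(self, ci, user):
            holder = self.locks.get(ci)
            if holder is not None:
                raise PermissionError(f"{ci} is checked out by {holder}")
            self.locks[ci] = user      # lock prevents simultaneous update
            return self.contents[ci]   # working copy for the developer

        def check_in(self, ci, user, text):
            if self.locks.get(ci) != user:  # only the holder may update
                raise PermissionError(f"{ci} is not locked by {user}")
            self.contents[ci] = text
            del self.locks[ci]              # release the lock

    lib = LockingLibrary()
    lib.contents["CALC_MEAN.FOR.2"] = "      MEAN = S / N\n"
    copy = lib.check_out("CALC_MEAN.FOR.2", "ann")
    # lib.check_out("CALC_MEAN.FOR.2", "bob") would now raise an error,
    # so bob's changes cannot silently overwrite ann's.
    lib.check_in("CALC_MEAN.FOR.2", "ann", copy)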

2.2.2.2 Media control

All CIs should be stored on controlled media (e.g. tapes, disks).Media should be labelled, both visibly (e.g. sticky label) and electronically(e.g. tape header). The label should have a standard format, and mustcontain the:• project name (SCM19);• configuration item identifier (name, type, version) (SCM20);• date of storage (SCM21);• content description (SCM22).
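
The sketch below shows one possible electronic label carrying the four mandatory fields; the layout itself is an assumption, not mandated.

    from datetime import date

    def media_label(project, ci_id, stored, description):
        """Format the four mandatory label fields (SCM19-SCM22)."""
        return (f"PROJECT:     {project}\n"
                f"CI:          {ci_id}\n"
                f"DATE STORED: {stored.isoformat()}\n"
                f"DESCRIPTION: {description}\n")

    print(media_label("XXXX", "XXXX/SRD/DOC/Issue 1/Revision 3",
                      date(1995, 3, 1), "Software Requirements Document"))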

Procedures for the regular backup of development libraries must be established (SCM28). Up-to-date security copies of master and archive libraries must always be available (SCM27). An aim of good media control is to be able to recover from media loss quickly.

2.2.3 Configuration change control

Changes are normal in the evolution of a system. Proper change control is the essence of good configuration management.

Software configuration control evaluates proposed changes to configuration items and coordinates the implementation of approved changes. The change control process (see Figure 2.2B) analyses problem descriptions (such as RIDs and SPRs), decides which (if any) controlled CIs are to be changed, retrieves them from the master library for change, and returns changed CIs to the master library for storage. The process also outputs descriptions of the changes.

The change process is a miniature software development itself. Problems must be identified, analysed and a solution designed and implemented. Each stage in the process must be documented. ESA PSS-05-0 describes procedures for changing documents and code that are part of a baseline. These procedures are discussed in detail in Sections 2.3.3 and 2.4.3.

Changes must be verified before the next changes are implemented. If changes are done in a group, each change must be separately verifiable. For documents, this just means that the Document Change Record (DCR) at the beginning of a document should describe each change precisely, or that a revised document is issued with change bars. For code, groups of changes must be carefully analysed to eliminate any confusion about the source of a fault during regression testing. If time permits, changes are best done one at a time, testing each change before implementing the next. This ‘one-at-a-time' principle applies at all levels: code modification and unit testing; integration and integration testing; and system testing.

If an evolutionary or incremental delivery life cycle approach is used, changes may be propagated backwards to releases in use, as well as forwards to releases under development. Consider two releases of software, 1 and 2. Release 1 is in operational use and release 2 is undergoing development. As release 2 is developed, release 1 may evolve through several baselines (e.g. release 1.0, 1.1, 1.2 etc.) as problems discovered in operation are fixed.

[Figure not reproduced. It shows release 1 being operated, problems in release 1 being reviewed (SPRs leading to SCRs) and release 1 being maintained, while release 2 is developed and its problems are reviewed in the same way. SPRs raised in release 1 are propagated forwards to release 2, and SPRs raised in release 2 are propagated backwards to release 1.]

Figure 2.2.3: Software problem report propagation

Reports of problems found in release 1 must be fed forwards to the developers of release 2. Reports of problems in release 2 must also be fed backwards to the maintainers of release 1, so that they can decide if the problems need to be fixed in release 1. Figure 2.2.3 shows how this can be done. The maintainers of release 1 should consider whether the problems pose a hazard to the users of release 1, and update it if they do.


2.2.4 Configuration status accounting

Software configuration status accounting is ‘the recording and reporting of information needed to manage a configuration effectively, including a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of the proposed changes' [Ref. 2]. The status of all configuration items must be recorded (SCM31). Configuration status accounting continues throughout the life cycle.

In Figure 2.2B, the Record CI Status process examines the configuration item lists output by the Identify CIs process, and the change descriptions output by the Change CIs process, to produce the status accounts.

The software librarian is normally responsible for maintaining the configuration status accounts, and for checking that the ‘as-built' configuration status complies with the ‘as-designed' status defined in the SCMP.

Configuration status accounts are composed of:
• records that describe the contents, dates and version/issue of each baseline (SCM32 and SCM35);
• records that describe the status of problem reports and their solutions (SCM33 and SCM34).

Configuration status accounts can be presented as tables of these two types of records.

Table 2.2.4 is an example of a table of baseline records. Each column of the table lists CIs in a baseline. The table shows the issue and revision number of each document, and the version number of the code in seven baselines. Six of the baselines are associated with the six major milestones of the software project. An extra interim baseline is created during integration. Table 2.2.4 assumes that revisions to each document are required between each baseline to show how the version numbers change. This is an extreme case.

Each management document (i.e. SPMP, SCMP, SVVP, SQAP) is reissued at each major milestone. This corresponds to the inclusion of new sections describing the plans for the next phase.


Baseline           1       2       3       4         5       6        7
CI \ Milestone    URD     SRD     ADD     Interim   DDD     Prov.    Final
                 appr.   appr.   appr.   baseline  appr.   accept.  accept.
---------------------------------------------------------------------------
SPMP              1.0     2.0     3.0     4.0       4.0     4.1      4.2
SCMP              1.0     2.0     3.0     4.0       4.0     4.0      4.0
SVVP              1.0     2.0     3.0     4.0       4.1     4.2      4.3
SQAP              1.0     2.0     3.0     4.0       4.0     4.0      4.0
URD issue         1.0     1.1     1.2     1.3       1.4     1.5      1.6
SRD issue                 1.0     1.1     1.2       1.3     1.4      1.5
ADD issue                         1.0     1.1       1.2     1.3      1.4
DDD issue                                 1.0       1.1     1.2      1.3
SUM issue                                 1.0       1.1     1.2      1.3
Program A                                 1.0       1.1     1.2      1.3
Program B                                 1.0       1.1     1.2      1.3
Compiler                                  5.2       5.2     5.2      5.2
Linker                                    3.1       3.1     3.1      3.1
Oper. Sys                                 6.1       6.1     6.1      6.1
Program C                                           1.0     1.1      1.2
STD issue                                                   1.0      1.0
SRN issue                                                            1.0

Table 2.2.4: Example of baseline records

Records should be kept of the progress of Software Problem Reports (SPR) and Review Item Discrepancies (RID). A record should be created for each RID and SPR when it is received.

Each record should contain fields for noting:
• whether the SPR or RID has been approved;
• the corresponding SCR, SMR or RID.

The records do not actually describe the changes - this is what the CI history does (see Section 2.2.1.4).

Configuration status accounts are important instruments of project control, and should be made available to all project members for up-to-date information on baseline status. When a program works one day and not the next, the first question is ‘what has changed?'. The configuration status accounts should answer such questions quickly and accurately. It should be possible to reproduce the original baseline, verify correct operation, and individually reintroduce the changes until the problem recurs [Ref. 6].

Values of quality metrics (e.g. reliability) might be derived by means of data in the configuration status accounts, and these metrics might be used to decide the acceptability of the software. Sometimes quite simple pieces of information can be highly significant. A module that requires frequent ‘bug fixes' may have more serious structural problems, and redesign may be necessary.

Sections 2.3.4 and 2.4.4 discuss the details of the configuration status accounting of documents and code.

2.2.5 Release

When a CI is distributed outside the project it is said to be ‘released'. In Figure 2.2B, inputs to the release process are the baselines that have been successfully system tested, and the status accounts (for deciding the readiness for release). Outputs of the release process are the release itself and its ‘release description'.

The Software Transfer Document (STD) describes the first release. The configuration item list in the first release must be included in the STD (SCM12). A Software Release Note (SRN) describes subsequent releases of the software (SCM14). A list of changed configuration items must be included in each SRN (SCM13).

2.3 DOCUMENTATION CONFIGURATION MANAGEMENT

ESA PSS-05-0 requires several documents to be produced in the software life cycle. The documents may specify the product (e.g. Software Requirements Document, Architectural Design Document) or some part of the process (e.g. Software Verification and Validation Plan, Software Configuration Management Plan). Whatever the document, it must be subject to configuration management procedures (SCM01).

The following subsections discuss the configuration management functions from the point of view of documentation.

2.3.1 Document configuration identification

A document CI may be:
• an entire document;
• a part of a document (e.g. a diagram).

The general principle established in ESA PSS-05-0 is that any configuration identification method must accommodate new CIs without modifying any existing CIs (SCM11). This principle applies to documents and parts of documents just as much as to code.

For documentation, the CI identifier must include:
• a number or name related to the purpose of the CI (SCM07);
• an indication of the type of processing for which the CI is intended (SCM08);
• an issue number and a revision number (SCM10).

As an example, the identifier of Issue 1, Revision 3 of the Software Requirements Document for a project called XXXX, produced with a word processor that attaches the filetype DOC, might be:

XXXX/SRD/DOC/Issue 1/Revision 3

The DOC field relates to SCM08, the Issue and Revision fields contain the version information required by SCM10, and the remaining fields name the item, as required by SCM07.

The title page(s) of a document should include:
• project name (SCM19);
• configuration item identifier (name, type, version) (SCM20);
• date (SCM21);
• content description (SCM22) (i.e. abstract and table of contents).

2.3.2 Document configuration item storage

ESA PSS-05-0 requires that all CIs be stored in software libraries. Paper versions of documents should be systematically filed in the project library. Electronic documents should be managed with the aid of software configuration management tools and should be stored in electronic libraries.

2.3.3 Document configuration change control

The procedure for controlling changes to a document is shown in Figure 2.3.3. This diagram shows the decomposition of the ‘Change CIs' process in Figure 2.2B. The process is used when a document that is part of a major baseline is modified, and is a mandatory requirement of ESA PSS-05-0 (SCM29).


[Figure not reproduced. It shows reviewers examining the controlled CI (the draft or issued document), guided by problem descriptions (SPRs), and raising RIDs; the author commenting on the RIDs; the UR/R, SR/R, AD/R, DD/R or SRB review approving RIDs (with DCR and DSS records); and the author revising the document to produce the changed CI (the revised document) and a change description.]

Figure 2.3.3: Document configuration change control activities

Examination is the first step in the document release process. Copies of the documents are circulated to everyone concerned (who should be identified in the Software Verification and Validation Plan). In the DD, TR and OM phases, SPRs may be input to the examination stage, to guide reviewers looking for errors in the URD, SRD, ADD, DDD and SUM.

Reviewers describe the defects that they find in the document on Review Item Discrepancy (RID) forms. RIDs permit reviewers to describe the:
• CI identifier of the document;
• problem location (in terms of a lower level CI identifier, if appropriate);
• problem;
• possible solution recommended by the reviewer.

RIDs are returned to the document authors for comment. Authors need to study problems identified in RIDs prior to the review meeting and mark their response on the RIDs, recording their views on the problems and the recommended solutions. Their responses may include preliminary assessments of the resources required to make the recommended changes, and the risks involved.

The commented RIDs and draft document are then input to the formal review meeting. Procedures for formal review meetings are discussed in ESA PSS-05-10, Guide to Software Verification and Validation. The review meeting may be a UR/R, SR/R, AD/R, DD/R or SRB meeting, depending upon the phase of the project.


The review meeting should discuss the RIDs and decide whether to:
• close the review item because the update has been completed;
• update (i.e. change) the document on the basis of the review item comments;
• reject the review item;
• define actions needed before a decision to close, update or reject can be made.

Each meeting should consider the level of authority required to approve each update. Agreements with other organisations may be necessary before decisions on some RIDs can be made.

The review meeting ends with a decision on whether to approve the document. A document may be approved although some changes have been agreed. Until the document has been updated, the baseline consists of the examined document and the RIDs describing the updates. Such RIDs are normally put in an appendix to the document.

A new version of the document incorporating the updates is then prepared. All updates should be recorded in Document Change Records (DCRs) and on the Document Status Sheet (DSS). Because the updates have the authority of the review board, authors must implement update instructions in the RIDs exactly.

Examples of RIDs, DCRs and DSSs are contained in Appendix D.

Sometimes a project finds that so many changes to a document are required that the RID method of recording problems is too clumsy. In this situation reviewers may prefer to:
• mark up a copy of the document;
• describe the problems in a note.

Review by these methods is acceptable only when the first issue of a document is being prepared, because there is no baseline for comparison until the first issue appears. After the first issue, the RID/DCR/DSS method must be used, because it allows the accurate tracking of changes to baselines, which the other methods do not.


2.3.4 Document configuration status accounting

Configuration status accounting keeps track of the contents of baselines. The configuration status accounts should describe the issue and revision number of each document in every baseline (see Section 2.2.4).

The configuration status accounts must record the status of every RID outstanding against every document in the baseline (SCM33). Table 2.3.4 describes the configuration status accounts of the RIDs in baseline 3 in Table 2.2.4.

RID      CI        Date        Date       Status     DCR      CI with
number             submitted   reviewed              number   DCR
----------------------------------------------------------------------
1        URD 1.1   1/6/92      1/7/92     closed     1        URD 1.2
2        URD 1.1   5/6/92      1/7/92     rejected   -        -
3        URD 1.1   6/6/92      1/7/92     action
4        SRD 1.0   7/6/92      2/7/92     update

Table 2.3.4: Records of the status of RIDs in baseline 3

It is also important to track the RIDs outstanding against documents. Configuration status accounts need to be organised in a variety of ways (e.g. CI identifier order) and a DBMS is useful for managing them.
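
As an illustration of holding the accounts in a DBMS, the sketch below keeps the RID records of Table 2.3.4 in an SQL table and retrieves them by CI; the table and column names are hypothetical.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE rid_status (
        rid INTEGER, ci TEXT, submitted TEXT, reviewed TEXT,
        status TEXT, dcr INTEGER, ci_with_dcr TEXT)""")
    db.executemany("INSERT INTO rid_status VALUES (?,?,?,?,?,?,?)", [
        (1, "URD 1.1", "1/6/92", "1/7/92", "closed",   1,    "URD 1.2"),
        (2, "URD 1.1", "5/6/92", "1/7/92", "rejected", None, None),
        (3, "URD 1.1", "6/6/92", "1/7/92", "action",   None, None),
        (4, "SRD 1.0", "7/6/92", "2/7/92", "update",   None, None),
    ])

    # Accounts can be organised in a variety of ways, e.g. by CI identifier:
    for row in db.execute("SELECT rid, status FROM rid_status "
                          "WHERE ci = ? ORDER BY rid", ("URD 1.1",)):
        print(row)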

2.3.5 Document release

A document is released when it is formally ‘issued'. The Document Status Sheet (DSS) is updated to mark this event, and any DCRs (remaining from previous revisions) are omitted. There should be no outstanding RIDs when a document is issued.

A formal issue must carry the approval of the responsible review body. Only formally issued documents can be made applicable. Approvals should be marked on the title page.

2.4 CODE CONFIGURATION MANAGEMENT

Source code, object code, executable code, data files, tools, test software and data must be subjected to configuration management procedures (SCM01). Procedures for the configuration management of code need to be defined before any code is written. This may be as early as the UR phase if prototypes are needed. DD phase code configuration management procedures need to be defined at the end of the AD phase in the SCMP/AD. These procedures may differ very much from those used in previous phases.

2.4.1 Code configuration identification

A code CI may be:
• a data file;
• an executable program;
• an object module;
• a source module.

The CI name must represent its function or specific purpose (SCM07). Since the name used for a software component in the ADD or DDD must also summarise the function of the component, the software component name should be used in the configuration identifier. A source module that calculates an average might be called ‘CALC_MEAN', for example. This configuration identification method allows easy traceability between the design and the CIs that implement it.

The CI type must summarise the processing for which the item is intended. If CALC_MEAN is a FORTRAN source file, its type would be written as ‘FOR'. The configuration identifier of the first version of the file is then ‘CALC_MEAN.FOR.1'.

The identifier of a CI must distinguish it from other items with different:
• requirements, especially functionality and interfaces (e.g. CALC_ARITHMETIC_MEAN and CALC_GEOMETRIC_MEAN) (SCM04);
• implementation (e.g. CALC_MEAN coded in FORTRAN and CALC_MEAN coded in C) (SCM05).

Source modules must have a header that includes:
• configuration item identifier (name, type, version) (SCM15);
• author (SCM16);
• creation date (SCM17);
• change history (version/date/author/description) (SCM18).


Projects should define a standard module header in the DDD. The format of the header must follow the rules of the selected programming language. Configuration management tools and documentation tools may be built to handle header information.
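
An illustrative standard header is shown below as Python comments; the layout, names and dates are assumptions, and each project defines its own layout in the DDD.

    # -------------------------------------------------------------------
    # CONFIGURATION ITEM: CALC_MEAN.PY.3   (name, type, version -- SCM15)
    # AUTHOR:             A. Author                             (SCM16)
    # CREATED:            1995-03-01                            (SCM17)
    # CHANGE HISTORY (version/date/author/description -- SCM18):
    #   2  1995-03-10  A. Author  corrected rounding      (SCR-12)
    #   3  1995-04-02  B. Author  reject empty input      (SCR-17)
    # -------------------------------------------------------------------

    def calc_mean(values):
        """Return the arithmetic mean of a non-empty sequence."""
        return sum(values) / len(values)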

Code configuration identification conventions need to be established before any design work is done. They should be strictly observed throughout the design, coding, integration and testing.

2.4.2 Code configuration item storage

Librarians are perhaps the most widely used software development tools after editors, compilers and linkers. Librarian tools are used to:
• store source and object modules;
• keep records of library transactions;
• cross-reference source and object modules.

Once a source code module has been created and successfully compiled, source and object code are inserted into the development libraries. These libraries are owned by the programmer.

Programmers must observe configuration management procedures during coding and unit testing. The CI identifiers for the modules stored in their development libraries should be the same as the identifiers for the corresponding modules in master libraries. This makes CI status easy to track. Programmers should not, for example, write a module called CM.FOR and then rename it CALC_MEAN.FOR when it is submitted for promotion to the master library.

Code should be portable. Absolute file references should not be hardwired in source modules. Information relating names in software to names in the environment should be stored in tables that can be easily accessed.
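
A minimal sketch of such a table, read from a plain configuration file, follows; the file name and format are assumptions for illustration.

    def load_name_table(path):
        """Read 'LOGICAL_NAME = environment name' pairs from a text file."""
        table = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    logical, _, actual = line.partition("=")
                    table[logical.strip()] = actual.strip()
        return table

    # A file names.cfg might contain, for instance:
    #   CALIBRATION_DATA = /project/xxxx/data/calibration.dat
    # Source modules then refer to 'CALIBRATION_DATA' and never hardwire
    # the absolute path:
    #   data_file = load_name_table("names.cfg")["CALIBRATION_DATA"]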

2.4.3 Code configuration change control

The procedure for controlling changes to released code is shown in Figure 2.4.3. This diagram is a decomposition of the ‘Change CIs' process in Figure 2.2B. The process is used when code that is part of a baseline is modified, and is a mandatory requirement of ESA PSS-05-0 (SCM30).


[Figure not reproduced. It shows an expert diagnosing problems reported in SPRs against the controlled CI and raising SCRs; the review board reviewing the changes and approving SCRs; and the developers or maintainers modifying the code to produce the changed CI (the changed software components) and a change description (SMR).]

Figure 2.4.3: Code configuration change control activities

Problems may be identified during integration, system testing, acceptance testing or operations by users, development staff and quality assurance personnel. Software Problem Reports (SPRs) contain:
• CI title or name;
• CI version or release number;
• priority of the problem with respect to other problems;
• a description of the problem;
• operating environment;
• recommended solution (if possible).

The SPR is then assigned to an expert for diagnosis. Some debugging of code is often required, and the expert will need to have access to all the software in the baseline. The expert must prepare a Software Change Request (SCR) if modifications to the software are required. The SCR describes the:
• CI title or name;
• CI version or release number;
• priority of the problem with respect to other problems;
• changes required;
• responsible staff;
• estimated start date, end date and manpower effort.


The review board (i.e. the DD/R, SRB or a delegated individual) then reviews each SPR and associated SCRs and decides whether to:
• close the SPR because the update has been completed;
• update (i.e. change) the software on the basis of the recommendations in the associated SCR;
• reject the SPR;
• define an action that needs to be taken before a decision to close, update or reject can be made.

Approved SCRs are passed to the developers or maintainers for implementation, who produce a Software Modification Report (SMR) for each SCR. An SMR must describe the:
• CI title or name;
• CI version or release number;
• changes implemented;
• actual start date, end date and manpower effort.

Attachments may accompany the SPR, SCR and SMR to detail the problem, the change and the modification.

The code to be changed is taken from the master library used to build the baseline that exhibited the problem. Regression testing is necessary when changes are made (SCM39).

2.4.4 Code configuration status accounting

Configuration status accounting keeps track of the contents of baselines, allowing software evolution to be followed, and old baselines to be reconstructed.

Records of the status of each CI should be kept. These should describe whether the CI has passed project milestones, such as unit testing and integration testing. The table of these records provides a snapshot of baseline status, and provides management with visibility of product development.

The configuration status accounts must record the status of every SPR, SCR and SMR related to every CI in the baseline (SCM34). Table 2.4.4 shows how such records might be kept.


CI name       SPR   Date        Review   SPR        Related   Related   Completion
and version         submitted   date     decision   SCR       SMR       date
-----------------------------------------------------------------------------------
LOADER v1.1    2    5/8/92      1/9/92   update        2         2      1/10/92
LOADER v1.1    3    7/8/92      1/9/92   update        3         2      1/10/92
EDITOR v1.2    4    9/8/92      1/9/92   rejected      -         -      -
MERGER v1.0    1    1/8/92      1/9/92   update        1         1      3/10/92

Table 2.4.4: Configuration status accounts example

2.4.5 Code release

The first release of the code must be documented in the STD. Subsequent releases must be accompanied by a Software Release Note (SRN). STDs and SRNs provide an overview of a release. They include a configuration item list that catalogues the CIs in the release. They must summarise any faults that have been repaired and the new requirements that have been incorporated (SCM36). One way to do this is to list the SPRs that have been dealt with.

For each release, documentation and code must be consistent (SCM37). Furthermore, old releases must be retained, for reference (SCM38). Where possible, the previous release should be kept online during a change-over period, to allow comparisons, and as a fallback. The number of releases in operational use should be minimised.

Releases should be self-identifying (e.g. label, display or printed output) so that users know what the software is. Some form of software protection may be desirable to avoid unauthorised use of a release.

Modified software must be retested before release (SCM29). Tests should be selected from the SVVP to demonstrate its operational capability.


CHAPTER 3 SOFTWARE CONFIGURATION MANAGEMENT TOOLS

3.1 INTRODUCTION

Software configuration management is often described as simple in concept but complex in detail. Tools that support software configuration management are widely available and their use is strongly recommended.

This chapter does not describe particular tools. Instead, this chapter discusses software configuration management tools in terms of their capabilities. No single tool can be expected to provide all of them, and developers need to define their own software configuration management tool requirements and assemble a ‘toolset' that meets them.

ANSI/IEEE Std 1042-1987, IEEE Guide to Software Configuration Management, recognises four types of toolset:
• basic;
• advanced;
• online;
• integrated.

This classification is used to organise software configuration management tools. Each toolset includes the capabilities of preceding toolsets.

3.2 BASIC TOOLSET REQUIREMENTS

Basic toolsets use standard operating system utilities such as the editor and librarian system, perhaps supplemented by a database management system. They rely heavily on personal discipline. The capability requirements for a basic toolset are listed below.
1. A basic toolset should permit the structured definition, identification and storage of configuration items.
   The file systems of operating systems allow storage of CIs in files. A file system should object if the user tries to create a file with the same name as an existing file, i.e. it should ensure that names are unique.
2. A basic toolset should permit the structured storage of configuration items.
   The file systems of operating systems should allow the creation of directory trees for storing files.

3. A basic toolset should provide a librarian system that supports:
   - insertion of modules;
   - extraction of modules;
   - replacement of modules by updated modules;
   - deletion of modules;
   - production of cross-reference listings to show which modules refer to which;
   - storage of the transaction history;
   - all the above features for both source (i.e. text) and object (i.e. binary) files.
4. A basic toolset should provide facilities for building executable software from designated sources.
   This implies the capability to create a command file of build instructions with a text editor (an illustrative build file is sketched after this list).

5. A basic toolset should provide security features so that access to CIs can be restricted to authorised personnel.
   This means that the operating system should allow the owner of a file to control access to it.
6. A basic toolset should provide facilities for comparing source modules so that changes can be identified.
   Most operating systems provide a tool for differencing ASCII files.
7. A basic toolset should include tools for the systematic backup of CIs. Specifically:
   - automatic electronic labelling of media;
   - complete system backup;
   - incremental backup;
   - restore;
   - production of backup logs.
   Most operating systems have commands for backing-up and restoring files.


3.3 ADVANCED TOOLSET REQUIREMENTS

Advanced toolsets supplement the basic toolset with standalone tools such as source code control systems and module management systems. The extra capability requirements for an advanced toolset are listed below.
1. An advanced toolset should provide a locking system with the library to ensure that only one person can work on a module at a time.
   Most operating system librarians do not provide this feature; it is a characteristic of dedicated software configuration management librarian tools.
2. An advanced toolset should minimise the storage space needed.
   One way to do this is to store the latest version and the changes, usually called ‘deltas', required to generate earlier versions (a sketch of delta storage follows this list).
3. An advanced librarian tool should allow long names to be used in identifiers.
   Some operating systems force the use of shorter names than the compiler or programming language standard allows, and this can make it impossible to make the configuration identifiers contain the names used in design.
4. An advanced toolset should permit rollback, so that software configuration management operations can be undone.
   A simple but highly desirable type of backtracking operation is the ‘undelete'. Some tools can reverse deletion operations.
5. An advanced toolset should provide facilities for rebuilding executable software from up-to-date versions of designated sources.
   This capability requires the build tool to examine the dependencies between the modules in the build and recompile those modules that have been changed since the last build.
6. An advanced toolset should record the version of each module that was used to build a product baseline, so that product baselines can always be reproduced.
7. An advanced toolset should provide facilities for the handling of configuration status accounts, specifically:
   - insertion of new RID, SPR, SCR and SMR status records;
   - modification of RID, SPR, SCR and SMR status records;
   - deletion of RID, SPR, SCR and SMR status records;
   - search of RID, SPR, SCR and SMR records on any field;
   - differencing of configuration status accounts, so that changes can be identified;
   - report generation.
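
A minimal sketch of the delta storage described in item 2, using Python's difflib to hold the latest version in full and regenerate earlier versions from stored differences; the class is illustrative, not a real tool.

    import difflib

    class DeltaStore:
        """Store the latest version in full; earlier versions as deltas."""
        def __init__(self, first_version):
            self.latest = first_version.splitlines(keepends=True)
            self.deltas = []  # deltas[i] can regenerate version i+1

        def add_version(self, text):
            new = text.splitlines(keepends=True)
            # Record the differences from which the old version can be
            # restored, then keep only the new version in full.
            self.deltas.append(list(difflib.ndiff(self.latest, new)))
            self.latest = new

        def get(self, version):
            """Versions are numbered from 1; the newest is held in full."""
            if version == len(self.deltas) + 1:
                return "".join(self.latest)
            return "".join(difflib.restore(self.deltas[version - 1], 1))

    store = DeltaStore("line 1\nline 2\n")
    store.add_version("line 1\nline 2 fixed\n")
    assert store.get(1) == "line 1\nline 2\n"        # rebuilt from a delta
    assert store.get(2) == "line 1\nline 2 fixed\n"  # latest, held in full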

3.4 ONLINE TOOLSET REQUIREMENTS

Online toolsets supplement the capabilities of advanced toolsets by providing facilities for interactive entry of change control information. The extra capability requirements for an online toolset are listed below.
1. An online toolset should provide facilities for the direct entry of RID, SPR, SCR and SMR information into the database.
2. An online toolset should provide an authentication system that permits online approval of changes.

3.5 INTEGRATED TOOLSET REQUIREMENTS

Integrated toolsets supplement the capabilities of online toolsets by extending the change control system to documentation. Integrated toolsets form part of so-called 'Integrated Project Support Environments' (IPSEs), 'Project Support Environments' (PSEs) and 'Software Development Environments' (SDEs). The extra capabilities of an integrated toolset are listed below.

1. An integrated toolset should recognise all the dependencies between CIs, so that consistency can be controlled.
This capability requires a super-library called a 'repository'. The online toolset has to be integrated with the CASE tools used for design. This facility permits the system to be automatically rebuilt from a design change, not just a code change.

2. An integrated toolset should have standard interfaces to other tools (e.g. project management tools).
Standards for interfacing software tools have been proposed, such as the Portable Common Tools Environment (PCTE) [Ref. 7] and the Common APSE Interface Set (CAIS) [Ref. 8].


CHAPTER 4
THE SOFTWARE CONFIGURATION MANAGEMENT PLAN

4.1 INTRODUCTION

All software configuration management activities shall be documented in the Software Configuration Management Plan (SCM40). A new section of the SCMP must be produced for each development phase (SCM42, SCM44, SCM46, SCM48). Each SCMP section must document all software configuration management activities (SCM40), specifically the:
• organisation of configuration management;
• procedures for configuration identification;
• procedures for change control;
• procedures for configuration status accounting;
• tools, techniques and methods for software configuration management;
• procedures for supplier control;
• procedures for the collection and retention of records.

The size and content of the SCMP should reflect the complexity of the project. ANSI/IEEE Std 1042-1987, IEEE Guide to Software Configuration Management, contains examples of plans for:
• a complex, critical computer system;
• a small software development project;
• maintaining programs developed by other activities or organisations;
• developing and maintaining embedded software.

Configuration management procedures must be in place before software production (code and documentation) starts (SCM41). Software configuration management procedures should be easy to follow and efficient. Wherever possible, procedures should be reusable in later phases. Instability in software configuration management procedures can impede progress in a software project.


4.2 STYLE

The SCMP should be plain and concise. The document should be clear, consistent and modifiable.

The author of the SCMP should assume familiarity with the purpose of the software, and not repeat information that is explained in other documents.

4.3 RESPONSIBILITY

The developer is responsible for the production of the SCMP.

4.4 MEDIUM

It is usually assumed that the SCMP is a paper document. There is no reason why the SCMP should not be distributed electronically to participants with the necessary equipment.

4.5 CONTENT

The SCMP is divided into four sections, one for each development phase. These sections are called:
• Software Configuration Management Plan for the SR phase (SCMP/SR);
• Software Configuration Management Plan for the AD phase (SCMP/AD);
• Software Configuration Management Plan for the DD phase (SCMP/DD);
• Software Configuration Management Plan for the TR phase (SCMP/TR).

ESA PSS-05-0 recommends the following table of contents for each section of the SCMP, which has been derived from the IEEE Standard for Software Configuration Management Plans (ANSI/IEEE Std 828-1990).

Service Information:

a - Abstract
b - Table of Contents
c - Document Status Sheet
d - Document Change records made since last issue


1 INTRODUCTION
  1.1 Purpose
  1.2 Scope
  1.3 Glossary
  1.4 References

2 MANAGEMENT

3 CONFIGURATION IDENTIFICATION

4 CONFIGURATION CONTROL
  4.1 Code (and document) control
  4.2 Media control
  4.3 Change control

5 CONFIGURATION STATUS ACCOUNTING

6 TOOLS, TECHNIQUES AND METHODS FOR SCM

7 SUPPLIER CONTROL

8 RECORDS COLLECTION AND RETENTION

Material unsuitable for the above contents list should be inserted in additional appendices. If there is no material for a section then the phrase 'Not Applicable' should be inserted and the section numbering preserved.

4.5.1 SCMP/1 INTRODUCTION

The following subsections should provide an introduction to the plan.

4.5.1.1 SCMP/1.1 Purpose

This section should:
(1) briefly define the purpose of the particular SCMP;
(2) specify the intended readership of the SCMP.

4.5.1.2 SCMP/1.2 Scope

This section should identify the:
(1) configuration items to be managed;
(2) configuration management activities in this plan;
(3) organisations the plan applies to;
(4) phase of the life cycle the plan applies to.

4.5.1.3 SCMP/1.3 Glossary

This section should define all terms, acronyms, and abbreviations used in the plan, or refer to other documents where the definitions can be found.

4.5.1.4 SCMP/1.4 References

This section should provide a complete list of all the applicable and reference documents, identified by title, author and date. Each document should be marked as applicable or reference. If appropriate, report number, journal name and publishing organisation should be included.

4.5.2 SCMP/2 MANAGEMENT

This section should describe the organisation of configuration management, and the associated responsibilities. It should define the roles to be carried out. ANSI/IEEE Std 828-1990, 'Standard for Software Configuration Management Plans' [Ref. 3] recommends that the following structure be used for this section.

4.5.2.1 SCMP/2.1 Organisation

This section should:
• identify the organisational roles that influence the software configuration management function (e.g. project managers, programmers, quality assurance personnel and review boards);
• describe the relationships between the organisational roles;
• describe the interface with the user organisation.

Relationships between the organisational roles may be shown by means of an organigram. This section may reference the SPMP.

4.5.2.2 SCMP/2.2 SCM responsibilities

This section should identify the:
• software configuration management functions each organisational role is responsible for (e.g. identification, storage, change control, status accounting);
• responsibilities of each organisational role in the review, audit and approval process;
• responsibilities of the users in the review, audit and approval process.

4.5.2.3 SCMP/2.3 Interface management

This section should define the procedures for the management of external hardware and software interfaces. In particular it should identify the:
• external organisations responsible for the systems or subsystems with which the software interfaces;
• points of contact in the external organisations for jointly managing the interface;
• groups responsible for the management of each interface.

4.5.2.4 SCMP/2.4 SCMP implementation

This section should establish the key events in the implementation of the SCMP, for example the:
• readiness of the configuration management system for use;
• establishment of the Software Review Board;
• establishment of baselines;
• release of products.

The scheduling of the software configuration management resources should be shown (e.g. availability of software librarian, software configuration management tools and SRB). This section may cross-reference the SPMP.

4.5.2.5 SCMP/2.5 Applicable policies, directives and procedures

This section should:
• identify all applicable software configuration management policies, directives or procedures to be implemented as part of this plan (corporate software configuration management documents may be referenced here, with notes describing the parts of the documents that apply);
• describe any software configuration management policies, directives or procedures specific to this project, for example:
  - project-specific interpretations of corporate software configuration management documents;
  - level of authority required for each level of control;
  - level of review, testing or assurance required for promotion.

4.5.3 SCMP/3 CONFIGURATION IDENTIFICATION

This section should describe the conventions for identifying the software items and then define the baselines used to control their evolution.

4.5.3.1 SCMP/3.1 Conventions

This section should:
• define project CI naming conventions;
• define or reference CI labelling conventions.
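For example, a naming convention can be made checkable by tools. The pattern below is a hypothetical project convention, shown only to make the idea concrete; it is not a convention mandated by ESA PSS-05-0.

    import re

    # Hypothetical convention: <name>_<type>_<version>, e.g. NAVLIB_SRC_002,
    # where the type field shows the processing the CI is intended for.
    CI_PATTERN = re.compile(r"^[A-Z][A-Z0-9]*_(SRC|OBJ|EXE|DOC)_\d{3}$")

    def check_ci_identifier(identifier):
        """Accept only identifiers that follow the project convention."""
        return CI_PATTERN.match(identifier) is not None

    assert check_ci_identifier("NAVLIB_SRC_002")
    assert not check_ci_identifier("navlib 2")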

4.5.3.2 SCMP/3.2 Baselines

For each baseline, this section should give the:
• identifier of the baseline;
• contents, i.e. the:
  - software itself (e.g. URD, SRD, ADD, DDD, modules, executables, SUM, SVVP);
  - tools for making derived items in the baseline (e.g. compiler, linker and build procedures);
  - test software (e.g. data, harnesses and stubs);
  - RIDs, SPRs etc that relate to the baseline;
• ICDs, if any, which define the interfaces of the software;
• review and approval events, and the acceptance criteria, associated with establishing each baseline;
• participation required of developers and users in establishing baselines.

Because the SCMP is a plan, the precise contents of each baseline may not be known when it is written (e.g. names of modules before the detailed design is started). When this occurs, procedures for getting a report of the contents of the baseline should be defined (e.g. a directory list). An example of a report may be supplied in this section.

If appropriate, the description of each baseline should:
• distinguish software being developed from software being reused or purchased;
• define the hardware environment needed for each configuration;
• trace CIs to deliverable items listed in the SPMP, and, for code, to the software components described in the ADD and DDD.

4.5.4 SCMP/4 CONFIGURATION CONTROL

In ANSI/IEEE Std 610.12-1990 [Ref. 2], configuration control covers only configuration item change control. In ESA PSS-05-0, the definition of configuration control is expanded to include configuration item storage. Sections 4.1, 'Code Control' and 4.2, 'Media Control' therefore describe the procedures for configuration item storage. Section 4.3 of the plan describes the procedures for configuration item change control.

4.5.4.1 SCMP/4.1 Code (and document) control

This section should describe the software library handling procedures. ESA PSS-05-0 calls for three types of library:
• development (or dynamic);
• master (or controlled);
• archive (or static).

Ideally the same set of procedures should be used for each type of library.

While Section 6 of the SCMP, 'Tools, Techniques and Methods', gives background information, this section should describe, in a stepwise manner, how these are applied.

4.5.4.2 SCMP/4.2 Media control

This section should describe the procedure for handling the hardware on which the software resides, such as:
• magnetic disk;
• magnetic tape;
• Read-Only Memory (ROM);
• Erasable Programmable Read-Only Memory (EPROM);
• optical disk.

Whatever media is used, this section should describe the procedures for:
• labelling media (which should be based on ESA PSS-05-0, Part 2, Section 3.2.1);
• storing the media (e.g. fire-proof safes, redundant off-site locations);
• recycling the media (e.g. always use new magnetic tapes when archiving).

4.5.4.3 SCMP/4.3 Change control

This section should define the procedures for processing changes to baselines described in Section 3.2 of the plan.

4.5.4.3.1 SCMP/4.3.1 Levels of authority

This section should define the level of authority required to authorise changes to a baseline (e.g. software librarian, project manager).

4.5.4.3.2 SCMP/4.3.2 Change procedures

This section should define the procedures for processing change proposals to software.

The documentation change process (i.e. URD, SRD, ADD, DDD, SUM) should be based on the procedures defined in ESA PSS-05-0, Part 2, Section 3.2.3.2.1, 'Documentation Change Procedures'. These procedures use the RID, DCR and DSS forms.

The code change process should be based on the procedures in ESA PSS-05-0, Part 2, Section 3.2.3.2.2, 'Software Problem Reporting procedures'. These procedures use the SPR, SCR and SMR forms.

4.5.4.3.3 SCMP/4.3.3 Review board

This section should define the:
• membership of the review board (e.g. DD/R, SRB);
• levels of authority required for different types of change.


The second point implies that the review board may delegate some of their responsibilities for change.

4.5.4.3.4 SCMP/4.3.4 Interface control

This section should define the procedures for controlling the interfaces. In particular it should define the:
• change control process for ICDs;
• status accounting process for ICDs.

4.5.4.3.5 SCMP/4.3.5 Support software change procedures

This section should define the change control procedures for support software items. Support software items are not produced by the developer, but may be components of the delivered system, or may be tools used to make it. Examples are:
• commercial software;
• reused software;
• compilers;
• linkers.

4.5.5 SCMP/5 CONFIGURATION STATUS ACCOUNTING

This section should:
• define how configuration item status information is to be collected, stored, processed and reported;
• identify the periodic reports to be provided about the status of the CIs, and their distribution;
• state what dynamic inquiry capabilities, if any, are to be provided;
• describe how to implement any special status accounting requirements specified by users.

In summary, this section defines how the project will keep an audit trail of changes.


4.5.6 SCMP/6 TOOLS, TECHNIQUES AND METHODS FOR SCM

This section should describe the tools, techniques and methods to support:
• configuration identification (e.g. controlled allocation of identifiers);
• configuration item storage (e.g. source code control systems);
• configuration change control (e.g. online problem reporting systems);
• configuration status accounting (e.g. tools to generate accounts).

4.5.7 SCMP/7 SUPPLIER CONTROL

This section should describe the requirements for software configuration management to be placed on external organisations such as:
• subcontractors;
• suppliers of commercial software (i.e. vendors).

This section should reference the review and audit procedures in the SVVP for evaluating the acceptability of supplied software.

This section may reference contractual requirements.

4.5.8 SCMP/8 RECORDS COLLECTION AND RETENTION

This section should:
• identify the software configuration management records to be retained (e.g. CIDLs, RIDs, DCRs, DSSs, SPRs, SCRs, SMRs, configuration status accounts);
• state the methods to be used for retention (e.g. fire-proof safe, on paper, magnetic tape);
• state the retention period (e.g. retain all records for all baselines or only the last three baselines).

4.6 EVOLUTION

4.6.1 UR phase

By the end of the UR review, the SR phase section of the SCMP must be produced (SCMP/SR) (SCM42). The SCMP/SR must cover the configuration management procedures for all documentation, CASE tool outputs or prototype code produced in the SR phase (SCM43).

Since the SCMP/SR is the first section to be produced, Section 3.2 of the SCMP should identify the libraries in which the URD, SRD, SCMP and SVVP will be stored. Section 4.3.3 of the SCMP should describe the UR/R and SR/R board.

4.6.2 SR phase

During the SR phase, the AD phase section of the SCMP must be produced (SCMP/AD) (SCM44). The SCMP/AD must cover the configuration management procedures for documentation, CASE tool outputs or prototype code produced in the AD phase (SCM45). Unless there is a good reason to change (e.g. different CASE tool used), SR phase procedures should be reused.

Section 3.2 should identify the libraries in which the ADD will be stored. Section 4.3.3 should describe the AD/R board.

4.6.3 AD phase

During the AD phase, the DD phase section of the SCMP must be produced (SCMP/DD) (SCM46). The SCMP/DD must cover the configuration management procedures for documentation, deliverable code, CASE tool outputs or prototype code produced in the DD phase (SCM47). Unless there is a good reason to change, AD phase procedures should be reused.

Section 3.2 should identify the libraries in which the DDD, SUM, and code will be stored. Section 4.3.3 should describe the DD/R board.

4.6.4 DD phase

During the DD phase, the TR phase section of the SCMP must be produced (SCMP/TR) (SCM48). The SCMP/TR must cover the procedures for the configuration management of the deliverables in the operational environment (SCM49).


APPENDIX A
GLOSSARY

A.1 LIST OF TERMS

Terms used in this document are consistent with ESA PSS-05-0 [Ref 1] and ANSI/IEEE Std 610.12 [Ref 2]. This section defines the software configuration management terms used in this guide.

Archive library

A software library used for storing CIs in releases or retired baselines.

Backup

A copy of a software item for use in the event of the loss of the original.

Baseline

A configuration item that has been formally reviewed and agreed upon, that thereafter serves as a basis for further development, and that can be changed only through formal change control procedures.

Change control

An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification [Ref. 2].

Charge-in

The process of inserting or replacing a configuration item in a master library.

Charge-out

The process of obtaining a copy of a configuration item from a master library and preventing charge-out operations on the item until it is charged back in.


Configuration identification

An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation [Ref. 2].

Configuration item

An aggregation of hardware, software, or both, that is designated for configuration management and treated as a single entity in the configuration management process [Ref. 2].

Configuration item identifier

A data structure that describes the name, type and version number of a configuration item.

Configuration item list

A catalogue of configuration items in a baseline or release.

Configuration item storage

The process of storing configuration items in software libraries and on hardware media.

Configuration status accounting

An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively [Ref. 2].

Control authority

The individual or group who decides upon the changes to a configuration item; a configuration item cannot be changed without the control authority's consent.

Controlled configuration item

A configuration item that has been accepted by the project for integration into a baseline. Controlled configuration items are stored in master libraries. Controlled configuration items are subject to formal change control.


Derivation record

A record of one event in the life of the configuration item (e.g. source code change, compilation, testing etc).

Development history

A set of time-ordered derivation records; there is a development history for each configuration item.

Development library

A software library that is used for storing software components that are being coded (or modified) and unit tested.

Document change record

A description of one or more changes to a document.

Document status sheet

An issue-by-issue description of the history of a document.

Formal

Used to describe activities that have explicit and definite rules of procedure (e.g. formal review) or reasoning (e.g. formal method and formal proof).

Issue

A version of a document that has undergone major changes since the previous version.

Level of authority

The position in the project hierarchy that is responsible for deciding upon a change.

Master library

A software library that is used for storing controlled configuration items.

Media control

The process of labelling and storing the hardware items on which the software is stored.


Release

A baseline made available for use.

Review board

The authority responsible for evaluating proposed ... changes, and ensuring implementation of the approved changes; same as configuration control board in Reference 2.

Review item discrepancy

A description of a problem with a document and its recommended solution.

Revision

A version of a document with minor changes from the previous version.

Simultaneous update

The phenomenon of two updates being made by different people on different copies of the same configuration item; one of the updates is always lost.

Software change request

A description of the changes required to documentation and code, the responsible staff, schedule and effort.

Software configuration management

A discipline of applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements [Ref. 2].

Software librarian

A person responsible for establishing, controlling, and maintaining a software library.


Software library

A controlled collection of software and related documentation designed to aid in software development, use, or maintenance [Ref. 2].

Software modification report

A report that describes the implementation of a software change request.

Software problem report

A report that describes a problem with the software, the urgency of the problem, the environment in which it occurs, and the recommended solution.

Software release note

A document that describes the changes and configuration items in a software release.

Software review board

The name for the review board in the OM phase.

Variant

A configuration item that meets special requirements.


A.2 LIST OF ACRONYMS

AD      Architectural Design
AD/R    Architectural Design Review
ADD     Architectural Design Document
ANSI    American National Standards Institute
APSE    Ada Programming Support Environment
AT      Acceptance Test
BSSC    Board for Software Standardisation and Control
CASE    Computer Aided Software Engineering
DBMS    Data Base Management System
DCR     Document Change Record
DD      Detailed Design and production
DD/R    Detailed Design and production Review
DDD     Detailed Design and production Document
DSS     Document Status Sheet
EPROM   Erasable Programmable Read-Only Memory
ESA     European Space Agency
ICD     Interface Control Document
IEEE    Institute of Electrical and Electronics Engineers
ISO     International Standards Organisation
IT      Integration Test
PA      Product Assurance
PCTE    Portable Common Tools Environment
PSS     Procedures, Specifications and Standards
QA      Quality Assurance
RID     Review Item Discrepancy
ROM     Read-Only Memory
SCM     Software Configuration Management
SCMP    Software Configuration Management Plan
SCR     Software Change Request
SMR     Software Modification Report
SPM     Software Project Management
SPMP    Software Project Management Plan
SPR     Software Problem Report
SQA     Software Quality Assurance
SQAP    Software Quality Assurance Plan
SR      Software Requirements
SR/R    Software Requirements Review
SRD     Software Requirements Document
SRN     Software Release Note


ST      System Test
STD     Software Transfer Document
SUM     Software User Manual
SVVP    Software Verification and Validation Plan
UR      User Requirements
UR/R    User Requirements Review
URD     User Requirements Document
UT      Unit Test


APPENDIX B
REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0, Issue 2, February 1991.

2. IEEE Standard Glossary of Software Engineering Terminology, ANSI/IEEE Std 610.12-1990.

3. IEEE Standard for Software Configuration Management Plans, ANSI/IEEE Std 828-1990.

4. IEEE Guide to Software Configuration Management, ANSI/IEEE Std 1042-1987.

5. The STARTs Guide - a guide to methods and software tools for the construction of large real-time systems, NCC Publications, 1987.

6. Managing the Software Process, Watts S. Humphrey, SEI Series in Software Engineering, Addison-Wesley, August 1990.

7. Portable Common Tool Environment (PCTE) Abstract Specification, Standard ECMA-149, European Computer Manufacturers Association, December 1990.

8. Requirements for Ada Programming Support Environments, STONEMAN, U.S. Department of Defense, February 1980.


APPENDIX C
MANDATORY PRACTICES

This appendix is repeated from ESA PSS-05-0, appendix D.9.

SCM01 All software items, for example documentation, source code, object or relocatable code, executable code, files, tools, test software and data, shall be subjected to configuration management procedures.

SCM02 The configuration management procedures shall establish methods for identifying, storing and changing software items through development, integration and transfer.

SCM03 A common set of configuration management procedures shall be used.

Every configuration item shall have an identifier that distinguishes it from other items with different:

SCM04 • requirements, especially functionality and interfaces;

SCM05 • implementation.

SCM06 Each component defined in the design process shall be designated as a CI and include an identifier.

SCM07 The identifier shall include a number or a name related to the purpose of the CI.

SCM08 The identifier shall include an indication of the type of processing the CI is intended for (e.g. filetype information).

SCM09 The identifier of a CI shall include a version number.

SCM10 The identifier of documents shall include an issue number and a revision number.

SCM11 The configuration identification method shall be capable of accommodating new CIs, without requiring the modification of the identifiers of any existing CIs.

SCM12 In the TR phase, a list of configuration items in the first release shall be included in the STD.


SCM13 In the OM phase, a list of changed configuration items shall be included in each Software Release Note (SRN).

SCM14 An SRN shall accompany each release made in the OM phase.

As part of the configuration identification method, a software module shall have a standard header that includes:

SCM15 • configuration item identifier (name, type, version);

SCM16 • original author;

SCM17 • creation date;

SCM18 • change history (version/date/author/description).
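An illustrative header meeting SCM15 to SCM18 is sketched below. The language, layout and all names and dates are invented for the example; the standard mandates only the information content.

    # ------------------------------------------------------------------
    #  Configuration item : NAVLIB_FILTER (type: source, version 3)
    #  Original author    : J. Smith
    #  Creation date      : 1995-03-01
    #  Change history     :
    #    version 1  1995-03-01  J. Smith  first version
    #    version 2  1995-04-12  A. Jones  correction for SPR 101
    #    version 3  1995-05-02  J. Smith  new requirement per SCR 17
    # ------------------------------------------------------------------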

All documentation and storage media shall be clearly labelled in a standard format, with at least the following data:

SCM19 • project name;

SCM20 • configuration item identifier (name, type, version);

SCM21 • date;

SCM22 • content description.

To ensure security and control of the software, at a minimum, the following software libraries shall be implemented for storing all the deliverable components (e.g. documentation, source and executable code, test files, command procedures):

SCM23 • Development (or Dynamic) library;

SCM24 • Master (or Controlled) library;

SCM25 • Static (or Archive) library.

SCM26 Static libraries shall not be modified.

SCM27 Up-to-date security copies of master and static libraries shall always be available.

SCM28 Procedures for the regular backup of development libraries shall be established.


SCM29 The change procedure described (in Part 2, Section 3.2.3.2.1) shall be observed when changes are needed to a delivered document.

SCM30 Software problems and change proposals shall be handled by the procedure described (in Part 2, Section 3.2.3.2.2).

SCM31 The status of all configuration items shall be recorded.

To perform software status accounting, each software project shall record:

SCM32 • the date and version/issue of each baseline;

SCM33 • the date and status of each RID and DCR;

SCM34 • the date and status of each SPR, SCR and SMR;

SCM35 • a summary description of each Configuration Item.

SCM36 As a minimum, the SRN shall record the faults that have been repaired and the new requirements that have been incorporated.

SCM37 For each release, documentation and code shall be consistent.

SCM38 Old releases shall be retained, for reference.

SCM39 Modified software shall be retested before release.

SCM40 All software configuration management activities shall be documented in the Software Configuration Management Plan (SCMP).

SCM41 Configuration management procedures shall be in place before the production of software (code and documentation) starts.

SCM42 By the end of the UR review, the SR phase section of the SCMP shall be produced (SCMP/SR).

SCM43 The SCMP/SR shall cover the configuration management procedures for documentation, and any CASE tool outputs or prototype code, to be produced in the SR phase.

SCM44 During the SR phase, the AD phase section of the SCMP shall be produced (SCMP/AD).


SCM45 The SCMP/AD shall cover the configuration management procedures for documentation, and CASE tool outputs or prototype code, to be produced in the AD phase.

SCM46 During the AD phase, the DD phase section of the SCMP shall be produced (SCMP/DD).

SCM47 The SCMP/DD shall cover the configuration management procedures for documentation, deliverable code, and any CASE tool outputs or prototype code, to be produced in the DD phase.

SCM48 During the DD phase, the TR phase section of the SCMP shall be produced (SCMP/TR).

SCM49 The SCMP/TR shall cover the procedures for the configuration management of the deliverables in the operational environment.


APPENDIX D

FORM TEMPLATES

Template forms are provided for:

DCR  Document Change Record
DSS  Document Status Sheet
RID  Review Item Discrepancy
SCR  Software Change Request
SMR  Software Modification Report
SPR  Software Problem Report
SRN  Software Release Note


DOCUMENT CHANGE RECORD DCR NO

DATE

ORIGINATOR

APPROVED BY

1. DOCUMENT TITLE:

2. DOCUMENT REFERENCE NUMBER:

3. DOCUMENT ISSUE/REVISION NUMBER:

4. PAGE 5. PARAGRAPH 6. REASON FOR CHANGE


DOCUMENT STATUS SHEET

1. DOCUMENT TITLE:

2. DOCUMENT REFERENCE NUMBER:

3. ISSUE 4. REVISION 5. DATE 6. REASON FOR CHANGE


REVIEW ITEM DISCREPANCY RID NO

DATE

ORIGINATOR

1. DOCUMENT TITLE:

2. DOCUMENT REFERENCE NUMBER:

3. DOCUMENT ISSUE/REVISION NUMBER:

4. PROBLEM LOCATION:

5. PROBLEM DESCRIPTION:

6. RECOMMENDED SOLUTION:

7. AUTHOR’S RESPONSE:

8. REVIEW DECISION: CLOSE/UPDATE/ACTION/REJECT (underline choice)


SOFTWARE CHANGE REQUEST SCR NO

DATE

ORIGINATOR

RELATED SPRs

1. SOFTWARE ITEM TITLE:

2. SOFTWARE ITEM VERSION/RELEASE NUMBER:

3. PRIORITY: CRITICAL/URGENT/ROUTINE (underline choice)

4. CHANGES REQUIRED:

5. RESPONSIBLE STAFF:

6. ESTIMATED START DATE, END DATE AND MANPOWER EFFORT

7. ATTACHMENTS:

8. REVIEW DECISION: CLOSE/UPDATE/ACTION/REJECT (underline choice)


SOFTWARE MODIFICATION REPORT SMR NO

DATE

ORIGINATOR

RELATED SCRs

1. SOFTWARE ITEM TITLE:

2. SOFTWARE ITEM VERSION/RELEASE NUMBER:

3. CHANGES IMPLEMENTED:

4. ACTUAL START DATE, END DATE AND MANPOWER EFFORT:

5. ATTACHMENTS (tick as appropriate)


SOFTWARE PROBLEM REPORT SPR NO

DATE

ORIGINATOR

1. SOFTWARE ITEM TITLE:

2. SOFTWARE ITEM VERSION/RELEASE NUMBER:

3. PRIORITY: CRITICAL/URGENT/ROUTINE (underline choice)

4. PROBLEM DESCRIPTION:

5. DESCRIPTION OF ENVIRONMENT:

6. RECOMMENDED SOLUTION:

7. REVIEW DECISION: CLOSE/UPDATE/ACTION/REJECT (underline choice)

8. ATTACHMENTS:


SOFTWARE RELEASE NOTE SRN NO

DATE

ORIGINATOR

1. SOFTWARE ITEM TITLE:

2. SOFTWARE ITEM VERSION/RELEASE NUMBER:

3. CHANGES IN THIS RELEASE:

4. CONFIGURATION ITEMS INCLUDED IN THIS RELEASE:

5. INSTALLATION INSTRUCTIONS:


APPENDIX E
INDEX

ANSI/IEEE Std 1042-1987, 29, 33
ANSI/IEEE Std 610.12-1990, 5, 39
ANSI/IEEE Std 828-1990, 34
archive library, 13
backup, 15
baseline, 8, 17
CAIS, 32
CASE tool, 32, 43
change control, 15
charge-in/charge-out, 15
CI, 4
Common APSE Interface Set, 32
conditional code, 9
configuration control, 15
configuration item, 4
configuration item list, 19
configuration status accounting, 17
control authority, 11
controlled library, 13
DCR, 22
derivation record, 12
derived CI, 10
development history, 12
development library, 13
Document Change Record, 22
Document Status Sheet, 22
DSS, 22
dynamic library, 13
identifier, 9
library, 13
locking mechanism, 15
master library, 13
media, 15
media control, 15
PCTE, 32
Portable Common Tools Environment, 32
release, 19
review board, 11
Review Item Discrepancy, 21
revision, 10
RID, 21
SCM01, 3, 19, 23
SCM02, 3
SCM03, 4
SCM04, 24

SCM05, 24
SCM06, 9
SCM07, 9, 20, 24
SCM08, 9, 20
SCM09, 9
SCM10, 20
SCM11, 10, 20
SCM12, 19
SCM13, 19
SCM14, 19
SCM15, 24
SCM16, 24
SCM17, 24
SCM18, 12, 24
SCM19, 15, 20
SCM20, 15, 20
SCM21, 15, 20
SCM22, 15, 20
SCM23, 13
SCM24, 13
SCM25, 13
SCM26, 13
SCM27, 15
SCM28, 15
SCM29, 20, 28
SCM30, 25
SCM31, 17
SCM32, 17
SCM33, 17, 23
SCM34, 17, 27
SCM35, 17
SCM36, 28
SCM37, 28
SCM38, 28
SCM39, 27
SCM40, 33
SCM41, 33
SCM42, 33, 42
SCM43, 43
SCM44, 33, 43
SCM45, 43
SCM46, 33, 43
SCM47, 43
SCM48, 33, 43
SCM49, 43


SCMP/AD, 43
SCMP/DD, 43
SCMP/SR, 42
SCMP/TR, 43
SCR, 26
simultaneous update, 14
SMR, 27
software authors, 11
Software Change Request, 26
software configuration management, 5
software librarian, 11, 17
Software Modification Report, 27
Software Problem Reports, 26
software project manager, 11
Software Release Note, 28
Software Review Board, 21
source CI, 10
SPR, 26
SRB, 3, 11, 21, 27, 37, 40
SRN, 28
static library, 13
STD, 28
variant, 9
version, 10


ESA PSS-05-10 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to software verification and validation

Prepared by:
ESA Board for Software Standardisation and Control (BSSC)

Approved by:
The Inspector General, ESA


DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: ESA PSS-05-10 Guide to Software Verification and Validation

2. ISSUE 3. REVISION 4. DATE 5. REASON FOR CHANGE

1 0 1994 First issue

1 1 1995 Minor updates for publication

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division,
ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E2
ISSN 0379-4059

Copyright © 1994 by European Space Agency


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION .......... 1
  1.1 PURPOSE .......... 1
  1.2 OVERVIEW .......... 1
  1.3 IEEE STANDARDS USED FOR THIS GUIDE .......... 2
CHAPTER 2 SOFTWARE VERIFICATION AND VALIDATION .......... 3
  2.1 INTRODUCTION .......... 3
  2.2 PRINCIPLES OF SOFTWARE VERIFICATION AND VALIDATION .......... 4
  2.3 REVIEWS .......... 6
    2.3.1 Technical reviews .......... 7
      2.3.1.1 Objectives .......... 8
      2.3.1.2 Organisation .......... 8
      2.3.1.3 Input .......... 9
      2.3.1.4 Activities .......... 9
        2.3.1.4.1 Preparation .......... 10
        2.3.1.4.2 Review meeting .......... 11
      2.3.1.5 Output .......... 12
    2.3.2 Walkthroughs .......... 12
      2.3.2.1 Objectives .......... 13
      2.3.2.2 Organisation .......... 13
      2.3.2.3 Input .......... 14
      2.3.2.4 Activities .......... 14
        2.3.2.4.1 Preparation .......... 14
        2.3.2.4.2 Review meeting .......... 14
      2.3.2.5 Output .......... 15
    2.3.3 Audits .......... 15
      2.3.3.1 Objectives .......... 16
      2.3.3.2 Organisation .......... 16
      2.3.3.3 Input .......... 16
      2.3.3.4 Activities .......... 17
      2.3.3.5 Output .......... 17
  2.4 TRACING .......... 18
  2.5 FORMAL PROOF .......... 19
  2.6 TESTING .......... 19
    2.6.1 Unit tests .......... 22
      2.6.1.1 Unit test planning .......... 22
      2.6.1.2 Unit test design .......... 23
        2.6.1.2.1 White-box unit tests .......... 25
        2.6.1.2.2 Black-box unit tests .......... 26
        2.6.1.2.3 Performance tests .......... 28


      2.6.1.3 Unit test case definition .......... 28
      2.6.1.4 Unit test procedure definition .......... 28
      2.6.1.5 Unit test reporting .......... 29
    2.6.2 Integration tests .......... 29
      2.6.2.1 Integration test planning .......... 29
      2.6.2.2 Integration test design .......... 30
        2.6.2.2.1 White-box integration tests .......... 31
        2.6.2.2.2 Black-box integration tests .......... 32
        2.6.2.2.3 Performance tests .......... 32
      2.6.2.3 Integration test case definition .......... 32
      2.6.2.4 Integration test procedure definition .......... 32
      2.6.2.5 Integration test reporting .......... 33
    2.6.3 System tests .......... 33
      2.6.3.1 System test planning .......... 33
      2.6.3.2 System test design .......... 33
        2.6.3.2.1 Function tests .......... 34
        2.6.3.2.2 Performance tests .......... 34
        2.6.3.2.3 Interface tests .......... 35
        2.6.3.2.4 Operations tests .......... 35
        2.6.3.2.5 Resource tests .......... 36
        2.6.3.2.6 Security tests .......... 36
        2.6.3.2.7 Portability tests .......... 37
        2.6.3.2.8 Reliability tests .......... 37
        2.6.3.2.9 Maintainability tests .......... 37
        2.6.3.2.10 Safety tests .......... 38
        2.6.3.2.11 Miscellaneous tests .......... 38
        2.6.3.2.12 Regression tests .......... 38
        2.6.3.2.13 Stress tests .......... 39
      2.6.3.3 System test case definition .......... 39
      2.6.3.4 System test procedure definition .......... 39
      2.6.3.5 System test reporting .......... 40
    2.6.4 Acceptance tests .......... 40
      2.6.4.1 Acceptance test planning .......... 40
      2.6.4.2 Acceptance test design .......... 40
        2.6.4.2.1 Capability tests .......... 41
        2.6.4.2.2 Constraint tests .......... 41
      2.6.4.3 Acceptance test case specification .......... 41
      2.6.4.4 Acceptance test procedure specification .......... 42
      2.6.4.5 Acceptance test reporting .......... 42


CHAPTER 3 SOFTWARE VERIFICATION AND VALIDATION METHODS .......... 43
  3.1 INTRODUCTION .......... 43
  3.2 SOFTWARE INSPECTIONS .......... 43
    3.2.1 Objectives .......... 44
    3.2.2 Organisation .......... 44
    3.2.3 Input .......... 45
    3.2.4 Activities .......... 45
      3.2.4.1 Overview .......... 46
      3.2.4.2 Preparation .......... 46
      3.2.4.3 Review meeting .......... 47
      3.2.4.4 Rework .......... 48
      3.2.4.5 Follow-up .......... 48
    3.2.5 Output .......... 48
  3.3 FORMAL METHODS .......... 48
  3.4 PROGRAM VERIFICATION TECHNIQUES .......... 49
  3.5 CLEANROOM METHOD .......... 49
  3.6 STRUCTURED TESTING .......... 50
    3.6.1 Testability .......... 50
    3.6.2 Branch testing .......... 54
    3.6.3 Baseline method .......... 54
  3.7 STRUCTURED INTEGRATION TESTING .......... 55
    3.7.1 Testability .......... 56
    3.7.2 Control flow testing .......... 60
    3.7.3 Design integration testing method .......... 60
CHAPTER 4 SOFTWARE VERIFICATION AND VALIDATION TOOLS .......... 63
  4.1 INTRODUCTION .......... 63
  4.2 TOOLS FOR REVIEWING .......... 63
    4.2.1 General administrative tools .......... 63
    4.2.2 Static analysers .......... 64
    4.2.3 Configuration management tools .......... 65
    4.2.4 Reverse engineering tools .......... 65
  4.3 TOOLS FOR TRACING .......... 65
  4.4 TOOLS FOR FORMAL PROOF .......... 67
  4.5 TOOLS FOR TESTING .......... 67
    4.5.1 Static analysers .......... 69
    4.5.2 Test case generators .......... 70
    4.5.3 Test harnesses .......... 70
    4.5.4 Debuggers .......... 72
    4.5.5 Coverage analysers .......... 72
    4.5.6 Performance analysers .......... 73
    4.5.7 Comparators .......... 73
    4.5.8 Test management tools .......... 74


CHAPTER 5 THE SOFTWARE VERIFICATION AND VALIDATION PLAN .......... 75
  5.1 INTRODUCTION .......... 75
  5.2 STYLE .......... 77
  5.3 RESPONSIBILITY .......... 77
  5.4 MEDIUM .......... 77
  5.5 SERVICE INFORMATION .......... 78
  5.6 CONTENT OF SVVP/SR, SVVP/AD & SVVP/DD SECTIONS .......... 78
  5.7 CONTENT OF SVVP/UT, SVVP/IT, SVVP/ST & SVVP/AT SECTIONS .......... 82
  5.8 EVOLUTION .......... 92
    5.8.1 UR phase .......... 92
    5.8.2 SR phase .......... 92
    5.8.3 AD phase .......... 93
    5.8.4 DD phase .......... 93
APPENDIX A GLOSSARY .......... A-1
APPENDIX B REFERENCES .......... B-1
APPENDIX C MANDATORY PRACTICES .......... C-1
APPENDIX D INDEX .......... D-1


PREFACE

This document is one of a series of guides to software engineering produced by the Board for Software Standardisation and Control (BSSC), of the European Space Agency. The guides contain advisory material for software developers conforming to ESA's Software Engineering Standards, ESA PSS-05-0. They have been compiled from discussions with software engineers, research of the software engineering literature, and experience gained from the application of the Software Engineering Standards in projects.

Levels one and two of the document tree at the time of writing are shown in Figure 1. This guide, identified by the shaded box, provides guidance about implementing the mandatory requirements for software verification and validation described in the top level document ESA PSS-05-0.

[Figure: document tree diagram. Level 1: ESA Software Engineering Standards, PSS-05-0. Level 2 guides: PSS-05-01 Guide to the Software Engineering Standards, PSS-05-02 UR Guide, PSS-05-03 SR Guide, PSS-05-04 AD Guide, PSS-05-05 DD Guide, PSS-05-06 TR Guide, PSS-05-07 OM Guide, PSS-05-08 SPM Guide, PSS-05-09 SCM Guide, PSS-05-10 SVV Guide (this guide), PSS-05-11 SQA Guide.]

Figure 1: ESA PSS-05-0 document tree

The Guide to the Software Engineering Standards, ESA PSS-05-01, contains further information about the document tree. The interested reader should consult this guide for current information about the ESA PSS-05-0 standards and guides.

The following past and present BSSC members have contributed to the production of this guide: Carlo Mazza (chairman), Gianfranco Alvisi, Michael Jones, Bryan Melton, Daniel de Pablo and Adriaan Scheffer.


The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and Guides, and all those software engineers in ESA and Industry who have made contributions.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat          BSSC/ESTEC Secretariat
Attention of Mr C Mazza        Attention of Mr B Melton
ESOC                           ESTEC
Robert Bosch Strasse 5         Postbus 299
D-64293 Darmstadt              NL-2200 AG Noordwijk
Germany                        The Netherlands


CHAPTER 1
INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA), either in-house or by industry [Ref 1].

ESA PSS-05-0 requires that software be verified during every phase of its development life cycle and validated when it is transferred. These activities are called 'Software Verification and Validation' (SVV). Each project must define its Software Verification and Validation activities in a Software Verification and Validation Plan (SVVP).

This guide defines and explains what software verification and validation is, provides guidelines on how to do it, and defines in detail what a Software Verification and Validation Plan should contain.

This guide should be read by everyone concerned with developing software, such as software project managers, software engineers and software quality assurance staff. Sections on acceptance testing and formal reviews should be of interest to users.

1.2 OVERVIEW

Chapter 2 contains a general discussion of the principles of software verification and validation, expanding upon the ideas in ESA PSS-05-0. Chapter 3 discusses methods for software verification and validation that can be used to supplement the basic methods described in Chapter 2. Chapter 4 discusses tools for software verification and validation. Chapter 5 describes how to write the SVVP.

All the mandatory practices in ESA PSS-05-0 concerning software verification and validation are repeated in this document. The identifier of the practice is added in parentheses to mark a repetition. This document contains no new mandatory practices.


1.3 IEEE STANDARDS USED FOR THIS GUIDE

Six standards of the Institute of Electrical and Electronics Engineers (IEEE) have been used to ensure that this guide complies as far as possible with internationally accepted standards for verification and validation terminology and documentation. The IEEE standards are listed in Table 1.3 below.

Reference       Title
610.12-1990     Standard Glossary of Software Engineering Terminology
829-1983        Standard for Software Test Documentation
1008-1987       Standard for Software Unit Testing
1012-1986       Standard for Software Verification and Validation Plans
1028-1988       Standard for Software Reviews and Audits

Table 1.3: IEEE Standards used for this guide.

IEEE Standard 829-1983 was used to define the table of contents for the SVVP sections that document the unit, integration, system and acceptance testing activities (i.e. SVVP/UT, SVVP/IT, SVVP/ST, SVVP/AT).

IEEE Standard 1008-1987 provides a detailed specification of the unit testing process. Readers who require further information on unit testing should consult this standard.

IEEE Standard 1012-1986 was used to define the table of contents for the SVVP sections that document the non-testing verification and validation activities (i.e. SVVP/SR, SVVP/AD, SVVP/DD).

IEEE Standard 1028-1988 was used to define the technical review, walkthrough, inspection and audit processes.

Because of the need to integrate the requirements of six standards into a single approach to software verification and validation, users of this guide should not claim complete compliance with any one of the IEEE standards.


CHAPTER 2 SOFTWARE VERIFICATION AND VALIDATION

2.1 INTRODUCTION

Software verification and validation activities check the software against its specifications. Every project must verify and validate the software it produces. This is done by:
• checking that each software item meets specified requirements;
• checking each software item before it is used as an input to another activity;
• ensuring that checks on each software item are done, as far as possible, by someone other than the author;
• ensuring that the amount of verification and validation effort is adequate to show each software item is suitable for operational use.

Project management is responsible for organising software verification and validation activities, defining software verification and validation roles (e.g. review team leaders), and allocating staff to those roles.

Whatever the size of the project, software verification and validation greatly affects software quality. People are not infallible, and software that has not been verified has little chance of working. Typically, 20 to 50 errors per 1000 lines of code are found during development, and 1.5 to 4 per 1000 lines of code remain even after system testing [Ref 20]. Each of these errors could lead to an operational failure or non-compliance with a requirement. The objective of software verification and validation is to reduce software errors to an acceptable level. The effort needed can range from 30% to 90% of the total project resources, depending upon the criticality and complexity of the software [Ref 12].

This chapter summarises the principles of software verification and validation described in ESA PSS-05-0 and then discusses the application of these principles first to documents and then to code.


2.2 PRINCIPLES OF SOFTWARE VERIFICATION AND VALIDATION

Verification can mean the:
• act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services or documents conform to specified requirements [Ref 5];
• process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of the phase [Ref 6];
• formal proof of program correctness [Ref 6].

The first definition of verification in the list above is the most general and includes the other two. In ESA PSS-05-0, the first definition applies.

Validation is, according to its ANSI/IEEE definition, 'the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements'. Validation is, therefore, 'end-to-end' verification.

Verification activities include:
• technical reviews, walkthroughs and software inspections;
• checking that software requirements are traceable to user requirements;
• checking that design components are traceable to software requirements;
• unit testing;
• integration testing;
• system testing;
• acceptance testing;
• audit.

Verification activities may include carrying out formal proofs.

The activities to be conducted in a project are described in the Software Verification and Validation Plan (SVVP).


[Figure 2.2: Life cycle verification approach. The 'V' diagram shows the products of the specification side (Project Request, URD, SRD, ADD, DDD, code) and of the production side (compiled modules, tested units, tested subsystems, tested system, accepted software), with each production product verified against its corresponding specification through the SVVP/SR, SVVP/AD, SVVP/DD, SVVP/UT, SVVP/IT, SVVP/ST and SVVP/AT.]

Figure 2.2 shows the life cycle verification approach. Software development starts in the top left-hand corner, progresses down the left-hand 'specification' side to the bottom of the 'V' and then onwards up the right-hand 'production' side. The V-formation emphasises the need to verify each output specification against its input specification, and the need to verify the software at each stage of production against its corresponding specification.

In particular the:
• SRD must be verified with respect to the URD by means of the SVVP/SR;
• ADD must be verified with respect to the SRD by means of the SVVP/AD;
• DDD must be verified with respect to the ADD by means of the SVVP/DD;
• code must be verified with respect to the DDD by means of the SVVP/DD;
• unit tests verify that the software subsystems and components work correctly in isolation, and as specified in the detailed design, by means of the SVVP/UT;


• integration tests verify that the major software components work correctly with the rest of the system, and as specified in the architectural design, by means of the SVVP/IT;
• system tests verify that the software system meets the software requirements, by means of the SVVP/ST;
• acceptance tests verify that the software system meets the user requirements, by means of the SVVP/AT.

These verification activities demonstrate compliance with specifications. This may be done by showing that the product:
• performs as specified;
• contains no defects that prevent it performing as specified.

Demonstration that a product meets its specification is a mechanical activity that is driven by the specification. This part of verification is efficient for demonstrating conformance to functional requirements (e.g. to validate that the system has a function it is only necessary to exercise the function). In contrast, demonstration that a product contains no defects that prevent it from meeting its specification requires expert knowledge of what the system must do, and the technology the system uses. This expertise is needed if the non-functional requirements (e.g. those for reliability) are to be met. Skill and ingenuity are needed to show up defects.

In summary, software verification and validation should show that the product conforms to all the requirements. Users will have more confidence in a product that has been through a rigorous verification programme than one subjected to minimal examination and testing before release.

2.3 REVIEWS

A review is 'a process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval' [Ref 6].

Reviews may be formal or informal. Formal reviews have explicit and definite rules of procedure. Informal reviews have no predefined procedures. Although informal reviews can be very useful for educating project members and solving problems, this section is only concerned with reviews that have set procedures, i.e. formal reviews.


Three kinds of formal review are normally used for software verification:
• technical reviews;
• walkthroughs;
• audits.

These reviews are all 'formal reviews' in the sense that all have specific objectives and procedures. They seek to identify defects and discrepancies of the software against specifications, plans and standards.

Software inspections are a more rigorous alternative to walkthroughs, and are strongly recommended for software with stringent reliability, security and safety requirements. Methods for software inspections are described in Section 3.2.

The software problem reporting procedure and document change procedure, defined in Part 2, Section 3.2.3.2 of ESA PSS-05-0 and in more detail in ESA PSS-05-09 'Guide to Software Configuration Management', call for a formal review process for all changes to code and documentation. Either of the first two kinds of formal review procedure can be applied for change control. The SRB, for example, may choose to hold a technical review or walkthrough as necessary.

2.3.1 Technical reviews

Technical reviews evaluate specific software elements to verify progress against the plan. The technical review process should be used for the UR/R, SR/R, AD/R, DD/R and any critical design reviews.

The UR/R, SR/R, AD/R and DD/R are formal reviews held at the end of a phase to evaluate the products of the phase, and to decide whether the next phase may be started (UR08, SR09, AD16 and DD11).

Critical design reviews are held in the DD phase to review the detailed design of a major component to certify its readiness for implementation (DD10).

The following sections describe the technical review process. This process is based upon the ANSI/IEEE Std 1028-1988, 'IEEE Standard for Software Reviews and Audits' [Ref 10], and Agency best practice.


2.3.1.1 Objectives

The objective of a technical review is to evaluate a specific set of review items (e.g. document, source module) and provide management with evidence that:
• they conform to specifications made in previous phases;
• they have been produced according to the project standards and procedures;
• any changes have been properly implemented, and affect only those systems identified by the change specification (described in a RID, DCR or SCR).

2.3.1.2 Organisation

The technical review process is carried out by a review team, which is made up of:
• a leader;
• a secretary;
• members.

In large and/or critical projects, the review team may be split into a review board and a technical panel. The technical panel is usually responsible for processing RIDs and the technical assessment of review items, producing as output a technical panel report. The review board oversees the review procedures and then independently assesses the status of the review items based upon the technical panel report.

The review team members should have skills to cover all aspects of the review items. Depending upon the phase, the review team may be drawn from:
• users;
• software project managers;
• software engineers;
• software librarians;
• software quality assurance staff;
• independent software verification and validation staff;
• independent experts not involved in the software development.


Some continuity of representation should be provided to ensure consistency.

The leader's responsibilities include:
• nominating the review team;
• organising the review and informing all participants of its date, place and agenda;
• distributing the review items to all participants before the meeting;
• organising as necessary the work of the review team;
• chairing the review meetings;
• issuing the technical review report.

The secretary will assist the leader as necessary and will be responsible for documenting the findings, decisions and recommendations of the review team.

Team members examine the review items and attend review meetings. If the review items are large, complex, or require a range of specialist skills for effective review, the leader may share the review items among members.

2.3.1.3 Input

Input to the technical review process includes as appropriate:
• a review meeting agenda;
• a statement of objectives;
• the review items;
• specifications for the review items;
• plans, standards and guidelines that apply to the review items;
• RID, SPR and SCR forms concerning the review items;
• marked up copies of the review items;
• reports of software quality assurance staff.

2.3.1.4 Activities

The technical review process consists of the following activities:
• preparation;
• review meeting.


The review process may start when the leader considers the review items to be stable and complete. Obvious signs of instability are the presence of TBDs or of changes recommended at an earlier review meeting not yet implemented.

Adequate time should be allowed for the review process. This depends on the size of the project. A typical schedule for a large project (20 man years or more) is shown in Table 2.3.1.4.

Event                                   Time
Review items distributed                R - 20 days
RIDs categorised and distributed        R - 10 days
Review Meeting                          R
Issue of Report                         R + 20 days

Table 2.3.1.4: Review Schedule for a large project

Members may have to combine their review activities with other commitments, and the review schedule should reflect this.

2.3.1.4.1 Preparation

The leader creates the agenda and distributes it, with the statements of objectives, review items, specifications, plans, standards and guidelines (as appropriate) to the review team.

Members then examine the review items. Each problem is recorded by completing boxes one to six of the RID form. A RID should record only one problem, or group of related problems. Members then pass their RIDs to the secretary, who numbers each RID uniquely and forwards them to the author for comment. Authors add their responses in box seven and then return the RIDs to the secretary.

The leader then categorises each RID as major, minor, or editorial. Major RIDs relate to a problem that would affect capabilities, performance, quality, schedule and cost. Minor RIDs request clarification on specific points and point out inconsistencies. Editorial RIDs point out defects in format, spelling and grammar. Several hundred RIDs can be generated in a large project review, and classification is essential if the RIDs are to be dealt with efficiently. Failure to categorise the RIDs can result in long meetings that concentrate on minor problems at the expense of major ones.

Finally the secretary sorts the RIDs in order of the position of the discrepancy in the review item. The RIDs are now ready for input to the review meeting.

Preparation for a Software Review Board follows a similar pattern, with RIDs being replaced by SPRs and SCRs.

2.3.1.4.2 Review meeting

A typical review meeting agenda consists of:
1. Introduction;
2. Presentation of the review items;
3. Classification of RIDs;
4. Review of the major RIDs;
5. Review of the other RIDs;
6. Conclusion.

The introduction includes agreeing the agenda, approving the report of any previous meetings and reviewing the status of outstanding actions.

After the preliminaries, authors present an overview of the review items. If this is not the first meeting, emphasis should be given to any changes made since the items were last discussed.

The leader then summarises the classification of the RIDs. Members may request that RIDs be reclassified (e.g. the severity of a RID may be changed from minor to major). RIDs that originate during the meeting should be held over for decision at a later meeting, to allow time for authors to respond.

Major RIDs are then discussed, followed by the minor and editorial RIDs. The outcome of the discussion of any defects should be noted by the secretary in the review decision box of the RID form. This may be one of CLOSE, UPDATE, ACTION or REJECT. The reason for each decision should be recorded. Closure should be associated with the successful completion of an update. The nature of an update should be agreed. Actions should be properly formulated, the person responsible identified, and the completion date specified. Rejection is equivalent to closing a RID with no action or update.


The conclusions of a review meeting should be agreed during the meeting. Typical conclusions are:
• authorisation to proceed to the next phase, subject to updates and actions being completed;
• authorisation to proceed with a restricted part of the system;
• a decision to perform additional work.

One or more of the above may be applicable.

If the review meeting cannot reach a consensus on RID dispositions and conclusions, possible actions are:
• recording a minority opinion in the review report;
• asking one or more members to find a solution outside the meeting;
• referring the problem to the next level of management.

2.3.1.5 Output

The output from the review is a technical review report that should contain the following:
• an abstract of the report;
• a list of the members;
• an identification of the review items;
• tables of RIDs, SPRs and SCRs organised according to category, with dispositions marked;
• a list of actions, with persons responsible identified and expected dates for completion defined;
• conclusions.

This output can take the form of the minutes of the meeting, or be a self-standing report. If there are several meetings, the collection of minutes can form the report, or the minutes can be appended to a report summarising the findings. The report should be detailed enough for management to judge what happened. If there have been difficulties in reaching consensus during the review, it is advisable that the output be signed off by members.

2.3.2 Walkthroughs

Walkthroughs should be used for the early evaluation of documents, models, designs and code in the SR, AD and DD phases. The following sections describe the walkthrough process, and are based upon the ANSI/IEEE Std 1028-1988, 'IEEE Standard for Software Reviews and Audits' [Ref 10].

2.3.2.1 Objectives

The objective of a walkthrough is to evaluate a specific software element (e.g. document, source module). A walkthrough should attempt to identify defects and consider possible solutions. In contrast with other forms of review, secondary objectives are to educate, and to resolve stylistic problems.

2.3.2.2 Organisation

The walkthrough process is carried out by a walkthrough team, which is made up of:
• a leader;
• a secretary;
• the author (or authors);
• members.

The leader, helped by the secretary, is responsible for management tasks associated with the walkthrough. The specific responsibilities of the leader include:
• nominating the walkthrough team;
• organising the walkthrough and informing all participants of the date, place and agenda of walkthrough meetings;
• distributing the review items to all participants before walkthrough meetings;
• organising as necessary the work of the walkthrough team;
• chairing the walkthrough meeting;
• issuing the walkthrough report.

The author is responsible for the production of the review items, and for presenting them at the walkthrough meeting.

Members examine review items, report errors and recommend solutions.


2.3.2.3 Input

Input to the walkthrough consists of:
• a statement of objectives in the form of an agenda;
• the review items;
• standards that apply to the review items;
• specifications that apply to the review items.

2.3.2.4 Activities

The walkthrough process consists of the following activities:
• preparation;
• review meeting.

2.3.2.4.1 Preparation

The leader or author distributes the review items when the author decides that they are ready for walkthrough. Members should examine the review items prior to the meeting. Concerns should be noted on RID forms so that they can be raised at the appropriate point in the walkthrough meeting.

2.3.2.4.2 Review meeting

The review meeting begins with a discussion of the agenda and the report of the previous meeting. The author then provides an overview of the review items.

A general discussion follows, during which issues of the structure, function and scope of the review items should be raised.

The author then steps through the review items, such as documents and source modules (in contrast, technical reviews step through RIDs, not the items themselves). Members raise issues about specific points as they are reached in the walkthrough.

As the walkthrough proceeds, errors, suggested changes and improvements are noted on RID forms by the secretary.


2.3.2.5 Output

The output from the walkthrough is a walkthrough report that should contain the following:
• a list of the members;
• an identification of the review items;
• a list of changes and defects noted during the walkthrough;
• completed RID forms;
• a list of actions, with persons responsible identified and expected dates for completion defined;
• recommendations made by the walkthrough team on how to remedy defects and dispose of unresolved issues (e.g. further walkthrough meetings).

This output can take the form of the minutes of the meeting, or be a self-standing report.

2.3.3 Audits

Audits are independent reviews that assess compliance with software requirements, specifications, baselines, standards, procedures, instructions, codes and contractual and licensing requirements. To ensure their objectivity, audits should be carried out by people independent of the development team. The audited organisation should make resources (e.g. development team members, office space) available to support the audit.

A 'physical audit' checks that all items identified as part of the configuration are present in the product baseline. A 'functional audit' checks that unit, integration and system tests have been carried out and records their success or failure. Other types of audits may examine any part of the software development process, and take their name from the part of the process being examined, e.g. a 'code audit' checks code against coding standards.

Audits may be routine or non-routine. Examples of routine audits are the functional and physical audits that must be performed before the release of the software (SVV03). Non-routine audits may be initiated by the organisation receiving the software, or management and quality assurance personnel in the organisation producing the software.


The following sections describe the audit process, and are based upon the ANSI/IEEE Std 1028-1988, 'IEEE Standard for Software Reviews and Audits' [Ref 10].

2.3.3.1 Objectives

The objective of an audit is to verify that software products and processes comply with standards, guidelines, specifications and procedures.

2.3.3.2 Organisation

The audit process is carried out by an audit team, which is made up of:
• a leader;
• members.

The leader is responsible for administrative tasks associated with the audit. The specific responsibilities of the leader include:
• nominating the audit team;
• organising the audit and informing all participants of the schedule of activities;
• issuing the audit report.

Members interview the development team, examine review items, report errors and recommend solutions.

2.3.3.3 Input

The following items should be input to an audit:
• terms of reference defining the purpose and scope of the audit;
• criteria for deciding the correctness of products and processes, such as contracts, plans, specifications, procedures, guidelines and standards;
• software products;
• software process records;
• management plans defining the organisation of the project being audited.


2.3.3.4 Activities

The team formed to carry out the audit should produce a plan that defines the:
• products or processes to be examined;
• schedule of audit activities;
• sampling criteria, if a statistical approach is being used;
• criteria for judging correctness (e.g. the SCM procedures might be audited against the SCMP);
• checklists defining aspects to be audited;
• audit staffing plan;
• date, time and place of the audit kick-off meeting.

The audit team should prepare for the audit by familiarising themselves with the organisation being audited, its products and its processes. All the team must understand the audit criteria and know how to apply them. Training may be necessary.

The audit team then examines the software products and processes, interviewing project team members as necessary. This is the primary activity in any audit. Project team members should co-operate fully with the auditors. Auditors should fully investigate all problems, document them, and make recommendations about how to rectify them. If the system is very large, the audit team may have to employ a sampling approach.

When their investigations are complete, the audit team should issue a draft report for comment by the audited organisation, so that any misunderstandings can be eliminated. After receiving the audited organisation's comments, the audit team should produce a final report. A follow-up audit may be required to check that actions are implemented.

2.3.3.5 Output

The output from an audit is an audit report that:
• identifies the organisation being audited, the audit team, and the date and place of the audit;
• defines the products and processes being audited;
• defines the scope of the audit, particularly the audit criteria for products and processes being audited;
• states conclusions;
• makes recommendations;
• lists actions.


2.4 TRACING

Tracing is 'the act of establishing a relationship between two or more products of the development process; for example, to establish the relationship between a given requirement and the design element that implements that requirement' [Ref 6]. There are two kinds of traceability:
• forward traceability;
• backward traceability.

Forward traceability requires that each input to a phase must be traceable to an output of that phase (SVV01). Forward traceability shows completeness, and is normally done by constructing traceability matrices. These are normally implemented by tabulating the correspondence between input and output (see the example in ESA PSS-05-03, Guide to Software Requirements Definition [Ref 2]). Missing entries in the matrix display incompleteness quite vividly. Forward traceability can also show duplication. Inputs that trace to more than one output may be a sign of duplication.

Backward traceability requires that each output of a phase must be traceable to an input to that phase (SVV02). Outputs that cannot be traced to inputs are superfluous, unless it is acknowledged that the inputs themselves were incomplete. Backward tracing is normally done by including with each item a statement of why it exists (e.g. source of a software requirement, requirements for a software component).
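By way of illustration, the tabulation and checking of traceability is easily automated. The following sketch is not part of the Standards; the requirement identifiers and the Python representation are invented for the example. It flags incompleteness in the forward trace and superfluous outputs in the backward trace:

    # Forward trace: each user requirement (input) maps to the software
    # requirements (outputs) that cover it. All identifiers are invented.
    trace = {
        "UR-01": ["SR-01", "SR-02"],  # one input to two outputs: check for duplication
        "UR-02": ["SR-03"],
        "UR-03": [],                  # missing entry: incompleteness
    }

    for ur, srs in sorted(trace.items()):
        print(f"{ur}: {', '.join(srs) if srs else 'NOT TRACED - incomplete'}")

    # Backward trace: every software requirement must be traceable to a
    # user requirement; anything left over is superfluous.
    all_srs = {"SR-01", "SR-02", "SR-03", "SR-04"}
    traced = {sr for srs in trace.values() for sr in srs}
    for sr in sorted(all_srs - traced):
        print(f"{sr}: no source user requirement - superfluous?")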

During the software life cycle it is necessary to trace:
• user requirements to software requirements and vice-versa;
• software requirements to component descriptions and vice-versa;
• integration tests to architectural units and vice-versa;
• unit tests to the modules of the detailed design;
• system tests to software requirements and vice-versa;
• acceptance tests to user requirements and vice-versa.

To support traceability, all components and requirements are identified. The SVVP should define how tracing is to be done. References to components and requirements should include identifiers. The SCMP defines the identification conventions for documents and software components. The SVVP should define additional identification conventions to be used within documents (e.g. requirements) and software components.


2.5 FORMAL PROOF

Formal proof attempts to demonstrate logically that software is correct. Whereas a test empirically demonstrates that specific inputs result in specific outputs, formal proofs logically demonstrate that all inputs meeting defined preconditions will result in defined postconditions being met.

Where practical, formal proof of the correctness of software may be attempted. Formal proof techniques are often difficult to justify because of the additional effort required above the necessary verification techniques of reviewing, tracing and testing.

The difficulty of expressing software requirements and designs in the mathematical form necessary for formal proof has prevented the wide application of the technique. Some areas where formal methods have been successful are for the specification and verification of:
• protocols;
• secure systems.

Good protocols and very secure systems depend upon having precise, logical specifications with no loopholes.

Ideally, if formal techniques can prove that software is correct, separate verification (e.g. testing) should not be necessary. However, human errors in proofs are still possible, and ways should be sought to avoid them, for example by ensuring that all proofs are checked independently.

Sections 3.3 and 3.4 discuss Formal Methods and Formal Program Verification Techniques.

2.6 TESTING

A test is 'an activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component' [Ref 6]. Compared with other verification techniques, testing is the most direct because it executes the software, and is therefore always to be preferred. When parts of a specification cannot be verified by a test, another verification technique (e.g. inspection) should be substituted in the test plan. For example a test of a portability requirement might be to run the software in the alternative environment. If this is not possible, the substitute approach might be to inspect the code for statements that are not portable.

Testing skills are just as important as the ability to program, design and analyse. Good testers find problems quickly. Myers defines testing as 'the process of executing a program with the intent of finding errors' [Ref 14]. While this definition is too narrow for ESA PSS-05-0, it expresses the sceptical, critical attitude required for effective testing.

The testability of software should be evaluated as it is designed, not when coding is complete. Designs should be iterated until they are testable. Complexity is the enemy of testability. When faced with a complex design, developers should ask themselves:
• can the software be simplified without compromising its capabilities?
• are the resources available to test software of this complexity?

Users, managers and developers all need to be assured that the software does what it is supposed to do. An important objective of testing is to show that software meets its specification. The 'V diagram' in Figure 2.2 shows that unit tests compare code with its detailed design, integration tests compare major components with the architectural design, system tests compare the software with the software requirements, and acceptance tests compare the software with the user requirements. All these tests aim to 'verify' the software, i.e. show that it truly conforms to specifications.

In ESA PSS-05-0 test plans are made as soon as the corresponding specifications exist. These plans outline the approach to testing and are essential for estimating the resources required to complete the project. Tests are specified in more detail in the DD phase. Test designs, test cases and test procedures are defined and included in the SVVP. Tests are then executed and results recorded.

Figure 2.6 shows the testing activities common to unit, integration, system and acceptance tests. Input at the top left of the figure are the Software Under Test (SUT), the test plans in the SVVP, and the URD, SRD, ADD and DDD that define the baselines for testing against. This sequence of activities is executed for unit testing, integration testing, system testing and acceptance testing in turn.

The following paragraphs address each activity depicted in Figure 2.6. Section 4.5 discusses the tools needed to support the activities.
1. The 'specify tests' activity takes the test plan in the SVVP, and the product specification in one of the URD, SRD, ADD or DDD, and produces a test design for each requirement or component. Each design will imply a family of test cases. The Software Under Test (SUT) is required for the specification of unit tests.
2. The 'make test software' activity takes the test case specifications and produces the test code (stubs, drivers, simulators, harnesses), input data files and test procedures needed to run the tests.
3. The 'link SUT' activity takes the test code and links it with the SUT, and (optionally) existing tested code, producing the executable SUT.
4. The 'run tests' activity executes the tests according to the test procedures, by means of the input data. The output data produced may include coverage information, performance information, or data produced by the normal functioning of the SUT.

[Figure 2.6: Testing activities. The figure shows the activity sequence: specify tests, make test software, link SUT, run tests, analyse coverage, analyse performance, check outputs, store test data. Inputs are the SUT and the test plans (URD & SVVP/AT, SRD & SVVP/ST, ADD & SVVP/IT, DDD & SVVP/UT); test data consist of input data plus expected output data, and the outputs are test results for coverage, performance and correctness.]
5. The 'analyse coverage' activity checks that the tests have in fact executed those parts of the SUT that they were intended to test.
6. The 'analyse performance' activity studies the resource consumption of the SUT (e.g. CPU time, disk space, memory).
7. The 'check outputs' activity compares the outputs with the expected output data or the outputs of previous tests, and decides whether the tests have passed or failed.


8. The 'store test data' activity stores test data for reruns of tests. Test output data needs to be retained as evidence that the tests have been performed.
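As a minimal illustration (not part of the Standards; the file and directory names are invented), the 'check outputs' and 'store test data' activities can be automated by comparing the actual output of a test run with the stored expected output, and archiving the result as evidence:

    import filecmp
    import os
    import shutil

    def check_outputs(actual: str, expected: str) -> str:
        """'Check outputs': byte-for-byte comparison with the expected data."""
        return "PASS" if filecmp.cmp(actual, expected, shallow=False) else "FAIL"

    def store_test_data(actual: str, archive_dir: str) -> None:
        """'Store test data': retain output as evidence the test was performed."""
        os.makedirs(archive_dir, exist_ok=True)
        shutil.copy(actual, archive_dir)

    print("TC-01:", check_outputs("test01.out", "test01.expected"))
    store_test_data("test01.out", "archive")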

The following sections discuss the specific approaches to unit testing, integration testing, system testing and acceptance testing. For each type of testing sections are provided on:
• test planning;
• test design;
• test case specification;
• test procedure definition;
• test reporting.

2.6.1 Unit tests

A 'unit' of software is composed of one or more modules. In ESA PSS-05-0, 'unit testing' refers to the process of testing modules against the detailed design. The inputs to unit testing are the successfully compiled modules from the coding process. These are assembled during unit testing to make the largest units, i.e. the components of architectural design. The successfully tested architectural design components are the outputs of unit testing.

An incremental assembly sequence is normally best. When the sequence is top-down, the unit grows during unit testing from a kernel module to the major component required in the architectural design. When the sequence is bottom-up, units are assembled from smaller units. Normally a combination of the two approaches is used, with the objective of minimising the amount of test software, measured both in terms of the number of test modules and the number of lines of test code. This enables the test software to be easily verified by inspection.

Studies of traditional developments show that approximately 65% of bugs can be caught in unit testing, and that half these bugs will be caught by 'white-box' tests [Ref 12]. These results show that unit testing is the most effective type of testing for removing bugs. This is because less software is involved when the test is performed, and so bugs are easier to isolate.

2.6.1.1 Unit test planning

The first step in unit testing is to construct a unit test plan and document it in the SVVP (SVV18). This plan is defined in the DD phase and should describe the scope, approach, resources and schedule of the intended unit tests. The scope of unit testing is to verify the design and implementation of all components from the lowest level defined in the detailed design up to and including the lowest level in the architectural design. The approach should outline the types of tests, and the amounts of testing, required.

The amount of unit testing required is dictated by the need to execute every statement in a module at least once (DD06). The simplest measure of the amount of testing required is therefore just the number of lines of code.

Execution of every statement in the software is normally not sufficient, and coverage of every branch in the logic may be required. The amount of unit testing then depends principally on the complexity of the software. The 'Structured Testing' method (see Section 3.6) uses the cyclomatic complexity metric to evaluate the testability of module designs. The number of test cases necessary to ensure that every branch in the module logic is covered during testing is equivalent to the cyclomatic complexity of the module. The Structured Testing method is strongly recommended when full branch coverage is a requirement.
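As an illustration of the metric (this sketch is not part of the Standards; the graph counts are invented for the example), cyclomatic complexity can be computed from a module's control-flow graph as v(G) = e - n + 2p, where e is the number of edges, n the number of nodes and p the number of connected components:

    # Cyclomatic complexity v(G) = e - n + 2p (McCabe). A module whose
    # control-flow graph has 8 edges, 7 nodes and 1 connected component
    # has v(G) = 3, so three test cases give full branch coverage.

    def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
        return edges - nodes + 2 * components

    print(cyclomatic_complexity(edges=8, nodes=7))   # prints 3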

2.6.1.2 Unit test design

The next step in unit testing is unit test design (SVV19). Unit test designs should specify the details of the test approach for each software component defined in the DDD, and identify the associated test cases and test procedures. The description of the test approach should state the assembly sequence for constructing the architectural design units, and the types of tests necessary for individual modules (e.g. white-box, black-box).

The three rules of incremental assembly are:
• assemble the architectural design units incrementally, module-by-module if possible, because problems that arise in a unit test are most likely to be related to the module that has just been added;
• introduce producer modules before consumer modules, because the former can provide control and data flows required by the latter;
• ensure that each step is reversible, so that rollback to a previous stage in the assembly is always possible.

A simple example of unit test design is shown in Figure 2.6.1.2A. The unit U1 is a major component of the architectural design. U1 is composed of modules M1, M2 and M3. Module M1 calls M2 and then M3, as shown by the structure chart. Two possible assembly sequences are shown. The sequence starting with M1 is 'top-down' and the sequence starting with M2 is 'bottom-up'. Figure 2.6.1.2B shows that data flows from M2 to M3 under the control of M1.

Each sequence in Figure 2.6.1.2A requires two test modules. The top-down sequence requires the two stub modules S2 and S3 to simulate M2 and M3. The bottom-up sequence requires the drivers D2 and D3 to simulate M1, because each driver simulates a different interface. If M1, M2 and M3 were tested individually before assembly, four drivers and stubs would be required. The incremental approach only requires two.
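The stub and driver pattern can be illustrated as follows. The sketch is not part of the Standards: the module names come from Figure 2.6.1.2A, but their bodies are invented for the example.

    def m2():                        # real producer module M2
        return [1, 2, 3]

    def s2():                        # stub S2: stands in for M2 during top-down testing
        return [0, 0, 0]             # fixed, predictable data

    def s3(data):                    # stub S3: stands in for M3
        return len(data)             # trivial, easily checked behaviour

    def m1(producer, consumer):      # M1 controls the data flow from producer to consumer
        return consumer(producer())

    def d2():                        # driver D2: exercises M2 in isolation (bottom-up)
        assert m2() == [1, 2, 3]

    # Top-down step 1: test M1 with both stubs; bottom-up step 1: run D2.
    assert m1(s2, s3) == 3
    d2()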

The rules of incremental assembly argue for top-down assembly instead of bottom-up because the top-down sequence introduces the:
• modules one-by-one;
• producer modules before consumer modules (i.e. M1 before M2 before M3).

[Figure 2.6.1.2A: Example of unit test design. The structure chart shows unit U1 composed of M1, which calls M2 and M3. Top-down assembly: step 1 tests M1 with stubs S2 and S3; step 2 replaces S2 with M2; step 3 replaces S3 with M3. Bottom-up assembly: steps 1 and 2 test M2 and M3 with drivers D2 and D3; step 3 assembles M1, M2 and M3.]


[Figure 2.6.1.2B: Data flow dependencies between the modules of U1. M1 exercises control flow over M2 and M3; data flows from M2 to M3.]

2.6.1.2.1 White-box unit tests

The objective of white-box testing is to check the internal logic of the software. White-box tests are sometimes known as 'path tests', 'structure tests' or 'logic tests'. A more appropriate title for this kind of test is 'glass-box test', as the engineer can see almost everything that the code is doing.

White-box unit tests are designed by examining the internal logic of each module and defining the input data sets that force the execution of different paths through the logic. Each input data set is a test case.

Traditionally, programmers used to insert diagnostic code to follow the internal processing (e.g. statements that print out the values of program variables during execution). Debugging tools that allow programmers to observe the execution of a program step-by-step in a screen display make the insertion of diagnostic code unnecessary, unless manual control of execution is not appropriate, such as when real-time code is tested.

When debugging tools are used for white-box testing, prior preparation of test cases and procedures is still necessary. Test cases and procedures should not be invented during debugging. The Structured Testing method (see Section 3.6) is the best known method for white-box unit testing. The cyclomatic complexity value gives the number of paths that must be executed, and the 'baseline method' is used to define the paths. Lastly, input values are selected that will cause each path to be executed. This is called 'sensitising the path'.
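By way of illustration only (the module below is invented and does not come from the Standards), the following sketch shows a module with cyclomatic complexity three, together with one input value chosen to sensitise each of its three paths:

    def classify(n: int) -> str:
        if n < 0:                # path 1
            return "negative"
        if n == 0:               # path 2
            return "zero"
        return "positive"        # path 3

    # One test case per path through the logic ('sensitising the path').
    assert classify(-5) == "negative"
    assert classify(0) == "zero"
    assert classify(7) == "positive"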

A limitation of white-box testing is its inability to show missing logic. Black-box tests remedy this deficiency.


2.6.1.2.2 Black-box unit tests

The objective of black-box tests is to verify the functionality of the software. The tester treats the module as a 'black-box' whose internals cannot be seen. Black-box tests are sometimes called 'function tests'.

Black-box unit tests are designed by examining the specification of each module and defining input data sets that will result in different behaviour (e.g. outputs). Each input data set is a test case.

Black-box tests should be designed to exercise the software for its whole range of inputs. Most software items will have many possible input data sets and using them all is impractical. Test designers should partition the range of possible inputs into 'equivalence classes'. For any given error, input data sets in the same equivalence class will produce the same error [Ref 14].

[Figure 2.6.1.2.2: Equivalence partitioning example. A number line from -1 to 12 is divided into illegal values below the lower boundary value 1, nominal values from 2 to 9, the upper boundary value 10, and illegal values above it.]

Consider a module that accepts integers in the range 1 to 10 as input, for example. The input data can be partitioned into five equivalence classes as shown in Figure 2.6.1.2.2. The five equivalence classes are the illegal values below the lower boundary, such as 0, the lower boundary value 1, the nominal values 2 to 9, the upper boundary value 10, and the illegal values above the upper boundary, such as 11.

Output values can be used to generate additional equivalence classes. In the example above, if the output of the routine generated the result TRUE for input numbers less than or equal to 5 and FALSE for numbers greater than 5, the nominal value equivalence class should be split into three subclasses:
• nominal values giving a TRUE result, such as 3;
• the boundary nominal value, i.e. 5;
• nominal values giving a FALSE result, such as 7.
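The five input equivalence classes of Figure 2.6.1.2.2 translate directly into test cases. The following sketch is illustrative only and is not part of the Standards; the module is an invented stand-in for the range check described above:

    def accept(n: int) -> bool:
        """Invented module: accepts integers in the range 1 to 10."""
        return 1 <= n <= 10

    # One representative per equivalence class, favouring boundary values.
    test_cases = [
        (0, False),    # illegal, below the lower boundary
        (1, True),     # lower boundary value
        (5, True),     # nominal value
        (10, True),    # upper boundary value
        (11, False),   # illegal, above the upper boundary
    ]

    for value, expected in test_cases:
        assert accept(value) == expected, f"failed for {value}"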

Page 777: ESA Software Engineering Standards

ESA PSS-05-10 Issue 1 Revision 1 (March 1995) 27SOFTWARE VERIFICATION AND VALIDATION

Equivalence classes may be defined by considering all possible data types. For example the module above accepts integers only. Test cases could be devised using real, logical and character data.

Having defined the equivalence classes, the next step is to select suitable input values from each equivalence class. Input values close to the boundary values are normally selected because they are usually more effective in causing test failures (e.g. 11 might be expected to be more likely to produce a test failure than 99).

Although equivalence partitioning combined with boundary-value selection is a useful technique for generating efficient input data sets, it will not expose bugs linked to combinations of input data values. Techniques such as decision tables [Ref 12] and cause-effect graphs [Ref 14] can be very useful for defining tests that will expose such bugs.

                    1       2       3       4
open_pressed        TRUE    TRUE    FALSE   FALSE
close_pressed       TRUE    FALSE   TRUE    FALSE
action              ?       OPEN    CLOSE   ?

Table 2.6.1.2.2: Decision table example

Table 2.6.1.2.2 shows the decision table for a module that has Boolean inputs that indicate whether the OPEN or CLOSE buttons of an elevator door have been pressed. When open_pressed is true and close_pressed is false, the action is OPEN. When close_pressed is true and open_pressed is false, the action is CLOSE. Table 2.6.1.2.2 shows that the outcomes for when open_pressed and close_pressed are both true and both false are undefined. Additional test cases setting open_pressed and close_pressed both true and then both false are likely to expose problems.
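The four columns of Table 2.6.1.2.2 can be executed directly as test cases. This sketch is not part of the Standards; the door-control function is invented, with None marking the undefined outcomes, which are exactly the cases worth testing:

    def door_action(open_pressed: bool, close_pressed: bool):
        if open_pressed and not close_pressed:
            return "OPEN"
        if close_pressed and not open_pressed:
            return "CLOSE"
        return None   # behaviour undefined in the specification

    decision_table = [
        (True,  True,  None),     # column 1: undefined outcome
        (True,  False, "OPEN"),   # column 2
        (False, True,  "CLOSE"),  # column 3
        (False, False, None),     # column 4: undefined outcome
    ]

    for open_p, close_p, expected in decision_table:
        assert door_action(open_p, close_p) == expected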

A useful technique for designing tests for real-time systems is the state-transition table. These tables define what messages can be processed in each state. For example, sending the message 'open doors' to an elevator in the state 'moving' should be rejected. Just as with decision tables, undefined outcomes shown by blank table entries make good candidates for testing.
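A state-transition table can be sketched in the same illustrative spirit (not part of the Standards; the states and messages are invented around the elevator example). Missing entries mean the message must be rejected in that state, and these blanks are the natural test cases:

    transitions = {
        ("stopped", "open doors"): "doors open",
        ("stopped", "move"):       "moving",
        ("moving",  "stop"):       "stopped",
        # no entry for ("moving", "open doors"): must be rejected
    }

    def handle(state: str, message: str) -> str:
        next_state = transitions.get((state, message))
        return state if next_state is None else next_state   # reject if undefined

    assert handle("moving", "open doors") == "moving"        # rejected
    assert handle("stopped", "open doors") == "doors open"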


Decision tables, cause-effect graphs and state-transition diagrams are just three of the many analysis techniques that can be employed for test design. After tests have been devised by means of these techniques, test designers should examine them to see whether additional tests are needed, their judgement being based upon their experience of similar systems or their involvement in the development of the system. This technique, called 'error guessing' [Ref 14], should be risk-driven, focusing on the parts of the design that are novel or difficult to verify by other means, or where quality problems have occurred before.

Test tools that allow the automatic creation of drivers, stubs and test data sets help make black-box testing easier (see Chapter 4). Such tools can define equivalence classes based upon boundary values in the input, but the identification of more complex test cases requires knowledge of how the software should work.

2.6.1.2.3 Performance tests

The DDD may have placed resource constraints on the performance of a module. For example a module may have to execute within a specified elapsed time, or use less than a specified amount of CPU time, or consume less than a specified amount of memory. Compliance with these constraints should be tested as directly as possible, for example by means of:
• performance analysis tools;
• diagnostic code;
• system monitoring tools.
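A direct elapsed-time check might look like the following sketch. It is illustrative only and is not part of the Standards; the module and the 50 millisecond constraint are invented for the example:

    import time

    def module_under_test():
        return sum(range(100000))   # invented workload

    start = time.perf_counter()
    module_under_test()
    elapsed_ms = (time.perf_counter() - start) * 1000.0

    LIMIT_MS = 50.0   # assumed elapsed-time constraint from the DDD
    print(f"elapsed: {elapsed_ms:.1f} ms - "
          f"{'PASS' if elapsed_ms <= LIMIT_MS else 'FAIL'}")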

2.6.1.3 Unit test case definition

Each unit test design will use one or more unit test cases, which must also be documented in the SVVP (SVV20). Test cases should specify the inputs, predicted results and execution conditions for a test case.

2.6.1.4 Unit test procedure definition

The unit test procedures must be described in the SVVP (SVV21). These should provide a step-by-step description of how to carry out each test case. One test procedure may execute one or more test cases. The procedures may use executable 'scripts' that control the operation of test tools. With the incremental approach, the input data required to test a module may be created by executing an already tested module (e.g. M2 is used to create data for M1 and M3 in the example above). The test procedure should define the steps needed to create such data.


2.6.1.5 Unit test reporting

Unit test results may be reported in a variety of ways. Some common means of recording results are:
• unit test result forms, recording the date and outcome of the test cases executed by the procedure;
• execution logfile.

2.6.2 Integration tests

A software system is composed of one or more subsystems, which are composed of one or more units (which are composed of one or more modules). In ESA PSS-05-0, 'integration testing' refers to the process of testing units against the architectural design. During integration testing, the architectural design units are integrated to make the system.

The 'function-by-function' integration method described in Section 3.3.2.1 of ESA PSS-05-05 'Guide to the Detailed Design and Production Phase' [Ref 3] should be used to integrate the software. As with the approach described for unit testing, this method minimises the amount of test software required. The steps are to:
1. select the functions to be integrated;
2. identify the components that carry out the functions;
3. order the components by the number of dependencies (i.e. fewest dependencies first);
4. create a driver to simulate the input of the component later in the order when a component depends on another later in the order;
5. introduce the components with fewest dependencies first.

Though the errors found in integration testing should be much fewer than those found in unit testing, they are more time-consuming to diagnose and fix. Studies of testing [Ref 15] have shown architectural errors can be as much as thirty times as costly to repair as detailed design errors.

2.6.2.1 Integration test planning

The first step in integration testing is to construct an integration test plan and document it in the SVVP (SVV17). This plan is defined in the AD phase and should describe the scope, approach, resources and schedule of the intended integration tests.


The scope of integration testing is to verify the design and implementation of all components from the lowest level defined in the architectural design up to the system level. The approach should outline the types of tests, and the amounts of testing, required.

The amount of integration testing required is dictated by the need to:
• check that all data exchanged across an interface agree with the data structure specifications in the ADD (DD07);
• confirm that all the control flows in the ADD have been implemented (DD08).

The amount of control flow testing required depends on the complexity of the software. The Structured Integration Testing method (see Section 3.7) uses the integration complexity metric to evaluate the testability of architectural designs. The integration complexity value is the number of integration tests required to obtain full coverage of the control flow. The Structured Integration Testing method is strongly recommended for estimating the amount of integration testing.

2.6.2.2 Integration test design

The next step in integration testing is integration test design (SVV19). This and subsequent steps are performed in the DD phase, although integration test design may be attempted in the AD phase. Integration test designs should specify the details of the test approach for each software component defined in the ADD, and identify the associated test cases and test procedures.

The description of the test approach should state the:
• integration sequence for constructing the system;
• types of tests necessary for individual components (e.g. white-box, black-box).

With the function-by-function method, the system grows during integration testing from the kernel units that depend upon few other units, but are depended upon by many other units. The early availability of these kernel units eases subsequent testing.

For incremental delivery, the delivery plan will normally specify what functions are required in each delivery. Even so, the number of dependencies can be used to decide the order of integration of components in each delivery.


[Figure 2.6.2A: Incremental integration sequences. The figure shows programs P1 (0 dependencies), P2 (1 dependency), P3 (2 dependencies) and P4 (3 dependencies), connected by control flows from P1 to the other programs and by data flows from P2 to P3, and from P2 and P3 to P4.]

Figure 2.6.2A shows a system composed of four programs P1, P2, P3 and P4. P1 is the 'program manager', providing the user interface and controlling the other programs. Program P2 supplies data to P3, and both P2 and P3 supply data to P4. User inputs are ignored. P1 has zero dependencies, P2 has one, P3 has two and P4 has three. The integration sequence is therefore P1, P2, P3 and then P4.
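The ordering rule of the function-by-function method can be sketched as follows. This is illustrative only and is not part of the Standards; the dependency data mirror the P1 to P4 example of Figure 2.6.2A:

    # Each program maps to the programs it depends on.
    dependencies = {
        "P1": [],                   # program manager: 0 dependencies
        "P2": ["P1"],               # 1 dependency
        "P3": ["P1", "P2"],         # 2 dependencies
        "P4": ["P1", "P2", "P3"],   # 3 dependencies
    }

    # Fewest dependencies first gives the integration sequence.
    order = sorted(dependencies, key=lambda p: len(dependencies[p]))
    print(order)   # prints ['P1', 'P2', 'P3', 'P4']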

2.6.2.2.1 White-box integration tests

White-box integration tests should be defined to verify the data and control flow across interfaces between the major components defined in the ADD (DD07 and DD08). For file interfaces, test programs that print the contents of the files provide the visibility required. With real-time systems, facilities for trapping messages and copying them to a log file can be employed. Debuggers that set break points at interfaces can also be useful. When control or data flow traverses an interface where a break point is set, control is passed to the debugger, enabling inspection and logging of the flow.

The Structured Integration Testing method (see Section 3.7) is the best known method for white-box integration testing. The integration complexity value gives the number of control flow paths that must be executed, and the 'design integration testing method' is used to define the control flow paths. The function-by-function integration method (see Section 2.6.2) can be used to define the order of testing the required control flow paths.

The addition of new components to a system often introduces new execution paths through it. Integration test design should identify paths suitable for testing and define test cases to check them. This type of path testing is sometimes called 'thread testing'. All new control flows should be tested.

2.6.2.2.2 Black-box integration tests

Black-box integration tests should be used to fully exercise the functions of each component specified in the ADD. Black-box tests may also be used to verify that data exchanged across an interface agree with the data structure specifications in the ADD (DD07).

2.6.2.2.3 Performance tests

The ADD may have placed resource constraints on the performance of a unit. For example a program may have to respond to user input within a specified elapsed time, or process a defined number of records within a specified CPU time, or occupy less than a specified amount of disk space or memory. Compliance with these constraints should be tested as directly as possible, for example by means of:
• performance analysis tools;
• diagnostic code;
• system monitoring tools.

2.6.2.3 Integration test case definition

Each integration test design will use one or more integration test cases, which must also be documented in the SVVP (SVV20). Test cases should specify the inputs, predicted results and execution conditions for a test case.

2.6.2.4 Integration test procedure definition

The integration test procedures must be described in the SVVP (SVV21). These should provide a step-by-step description of how to carry out each test case. One test procedure may execute one or more test cases. The procedures may use executable 'scripts' that control the operation of test tools.


2.6.2.5 Integration test reporting

Integration test results may be reported in a variety of ways. Some common means of recording results are:
• integration test result forms, recording the date and outcome of the test cases executed by the procedure;
• execution logfile.

2.6.3 System tests

In ESA PSS-05-0, 'system testing' refers to the process of testing the system against the software requirements. The input to system testing is the successfully integrated system.

Wherever possible, system tests should be specified and performed by an independent testing team. This increases the objectivity of the tests and reduces the likelihood of defects escaping the software verification and validation net.

2.6.3.1 System test planning

The first step in system testing is to construct a system test plan and document it in the SVVP (SVV14). This plan is defined in the SR phase and should describe the scope, approach, resources and schedule of the intended system tests. The scope of system testing is to verify compliance with the system objectives, as stated in the SRD (DD09). System testing must continue until readiness for transfer can be demonstrated.

The amount of testing required is dictated by the need to cover all the software requirements in the SRD. A test should be defined for every essential software requirement, and for every desirable requirement that has been implemented.

2.6.3.2 System test design

The next step in system testing is system test design (SVV19). This and subsequent steps are performed in the DD phase, although system test design may be attempted in the SR and AD phases. System test designs should specify the details of the test approach for each software requirement specified in the SRD, and identify the associated test cases and test procedures. The description of the test approach should state the types of tests necessary (e.g. function test, stress test etc).


Knowledge of the internal workings of the software should not be required for system testing, and so white-box tests should be avoided. Black-box and other types of test should be used wherever possible. When a test of a requirement is not possible, an alternative method of verification should be used (e.g. inspection).

System testing tools can often be used for problem investigation during the TR and OM phases. Effort invested in producing efficient, easy-to-use diagnostic tools at this stage of development is often worthwhile.

If an incremental delivery or evolutionary development approach is being used, system tests of each release of the system should include regression tests of software requirements verified in earlier releases.

The SRD will contain several types of requirements, each of which needs a distinct test approach. The following subsections discuss possible approaches.

2.6.3.2.1 Function tests

System test design should begin by designing black-box tests to verify each functional requirement. Working from the functional requirements in the SRD, techniques such as decision tables, state-transition tables and error guessing are used to design function tests.

2.6.3.2.2 Performance tests

Performance requirements should contain quantitative statements about system performance. They may be specified by stating the:
• worst case that is acceptable;
• nominal value, to be used for design;
• best case value, to show where growth potential is needed.

System test cases should be designed to verify:
• that all worst case performance targets have been met;
• that nominal performance targets are usually achieved;
• whether any best-case performance targets have been met.

In addition, stress tests (see Section 2.6.3.2.13) should be designed to measure the absolute limits of performance.
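As a minimal sketch of testing such constraints directly (the response-time targets of 2.0 s worst case and 1.0 s nominal are illustrative assumptions):

    import time

    WORST_CASE_S = 2.0   # hypothetical worst-case target
    NOMINAL_S = 1.0      # hypothetical nominal target

    def measure(func, *args):
        """Return the elapsed wall-clock time of one call."""
        start = time.perf_counter()
        func(*args)
        return time.perf_counter() - start

    def check_performance(samples):
        """Fail if the worst-case target is exceeded; report how often
        the nominal target is achieved."""
        assert max(samples) <= WORST_CASE_S, 'worst-case target exceeded'
        return sum(t <= NOMINAL_S for t in samples) / len(samples)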


2.6.3.2.3 Interface tests

System tests should be designed to verify conformance to external interface requirements. Interface Control Documents (ICDs) form the baseline for testing external interfaces. Simulators and other test tools will be necessary if the software cannot be tested in the operational environment.

Tools (not debuggers) should be provided to:
• convert data flows into a form readable by human operators;
• edit the contents of data stores.

2.6.3.2.4 Operations tests

Operations tests include all tests of the user interface, man-machine interface, or human-computer interaction requirements. They also cover the logistical and organisational requirements. These are essential before the software is delivered to the users.

Operations tests should be designed to show up deficiencies in usability such as:
• instructions that are difficult to follow;
• screens that are difficult to read;
• commonly-used operations with too many steps;
• meaningless error messages.

The operational requirements may have defined the time required to learn and operate the software. Such requirements can be made the basis of straightforward tests. For example a test of usability might be to measure the time an operator with average skill takes to learn how to restart the system.

Other kinds of tests may be run throughout the system-testing period, for example:
• do all warning messages have a red background?
• is there help on this command?

If there is a help system, every topic should be systematically inspected for accuracy and appropriateness.

Response times should normally be specified in the performance requirements (as opposed to operational requirements). Even so, system tests should verify that the response time is short enough to make the system usable.

2.6.3.2.5 Resource tests

Requirements for the usage of resources such as CPU time, storage space and memory may have been set in the SRD. The best way to test for compliance to these requirements is to allocate these resources and no more, so that a failure occurs if a resource is exhausted. If this is not suitable (e.g. it is usually not possible to specify the maximum size of a particular file), alternative approaches are to:
• use a system monitoring tool to collect statistics on resource consumption;
• check directories for file space used.
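For the second approach, a small script can sum the file space used under a directory; the path and the 100 megabyte allocation below are illustrative assumptions:

    import os

    def directory_usage(path):
        """Total size, in bytes, of all files below 'path'."""
        total = 0
        for root, _, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        return total

    # Hypothetical resource test: fail if the data area exceeds its allocation.
    assert directory_usage('/var/app/data') <= 100 * 1024 * 1024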

2.6.3.2.6 Security tests

Security tests should check that the system is protected against threats to confidentiality, integrity and availability.

Tests should be designed to verify that basic security mechanisms specified in the SRD have been provided, for example:
• password protection;
• resource locking.

Deliberate attempts to break the security mechanisms are an effective way of detecting security errors. Possible tests are attempts to:
• access the files of another user;
• break into the system authorisation files;
• access a resource when it is locked;
• stop processes being run by other users.

Security problems can often arise when users are granted system privileges unnecessarily. The Software User Manual should clearly state the privileges required to run the software.

Experience of past security problems should be used to check new systems. Security loopholes often recur.


2.6.3.2.7 Portability tests

Portability requirements may require the software to be run in a variety of environments. Attempts should be made to verify portability by running a representative selection of system tests in all the required environments. If this is not possible, indirect techniques may be attempted. For example if a program is supposed to run on two different platforms, a programming language standard (e.g. ANSI C) might be specified and a static analyser tool used to check conformance to the standard. Successfully executing the program on one platform and passing the static analysis checks might be adequate proof that the software will run on the other platform.

2.6.3.2.8 Reliability tests

Reliability requirements should define the Mean Time Between Failure (MTBF) of the software. Separate MTBF values may have been specified for different parts of the software.

Reliability can be estimated from the software problems reported during system testing. Tests designed to measure the performance limits should be excluded from the counts, and test case failures should be categorised (e.g. critical, non-critical). The mean time between failures can then be estimated by dividing the system testing time by the number of critical failures.
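A worked sketch of this estimate, with illustrative figures (400 hours of system testing and 5 critical failures):

    def estimate_mtbf(test_hours, n_critical_failures):
        """MTBF estimate: system testing time divided by critical failures."""
        return test_hours / n_critical_failures

    print(estimate_mtbf(400, 5))   # 80.0 hours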

2.6.3.2.9 Maintainability tests

Maintainability requirements should define the Mean Time To Repair (MTTR) of the software. Separate MTTR values may have been specified for different parts of the software.

Maintainability should be estimated by averaging the difference between the dates of Software Problem Reports (SPRs) reporting critical failures that occur during system testing, and the corresponding Software Modification Reports (SMRs) reporting the completion of the repairs.
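A minimal sketch of this calculation, using illustrative SPR and SMR dates:

    from datetime import date

    def estimate_mttr(repairs):
        """Average days between each critical-failure SPR and its closing SMR."""
        deltas = [(smr - spr).days for spr, smr in repairs]
        return sum(deltas) / len(deltas)

    repairs = [(date(1995, 3, 1), date(1995, 3, 4)),
               (date(1995, 3, 10), date(1995, 3, 12))]
    print(estimate_mttr(repairs))   # 2.5 days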

Maintainability requirements may have included restrictions on the size and complexity of modules, or even the use of the programming language. These should be tested by means of a static analysis tool. If a static analysis tool is not available, samples of the code should be manually inspected.


2.6.3.2.10 Safety tests

Safety requirements may specify that the software must avoid injury to people, or damage to property, when it fails. Compliance to safety requirements can be tested by:
• deliberately causing problems under controlled conditions and observing the system behaviour (e.g. disconnecting the power during system operations);
• observing system behaviour when faults occur during tests.

Simulators may have to be built to perform safety tests.

Safety analysis classifies events and states according to how much of a hazard they cause to people or property. Hazards may be catastrophic (i.e. life-threatening), critical, marginal or negligible [Ref 24]. Safety requirements may identify functions whose failure may cause a catastrophic or critical hazard. Safety tests may require exhaustive testing of these functions to establish their reliability.

2.6.3.2.11 Miscellaneous tests

An SRD may contain other requirements for:
• documentation (particularly the SUM);
• verification;
• acceptance testing;
• quality, other than reliability, maintainability and safety.

It is usually not possible to test for compliance to these requirements, and they are normally verified by inspection.

2.6.3.2.12 Regression tests

Regression testing is 'selective retesting of a system or component, to verify that modifications have not caused unintended effects, and that the system or component still complies with its specified requirements' [Ref 6].

Regression tests should be performed before every release of the software in the OM phase. If an incremental delivery or evolutionary development approach is being used, regression tests should be performed to verify that the capabilities of earlier releases are unchanged.

Traditionally, regression testing often requires much effort, increasing the cost of change and reducing its speed. Test tools that automate regression testing are now widely available and can greatly increase the speed and accuracy of regression testing (see Chapter 4). Careful selection of test cases also reduces the cost of regression testing, and increases its effectiveness.

2.6.3.2.13 Stress tests

Stress tests 'evaluate a system or software component at or beyond the limits of its specified requirements' [Ref 6]. The most common kind of stress test is to measure the maximum load the SUT can sustain for a time, for example the:
• maximum number of activities that can be supported simultaneously;
• maximum quantity of data that can be processed in a given time.

Another kind of stress test, sometimes called a 'volume test' [Ref 14], exercises the SUT with an abnormally large quantity of input data. For example a compiler might be fed a source file with very many lines of code, or a database management system with a file containing very many records. Time is not of the essence in a volume test.

Most software has capacity limits. Testers should examine the software documentation for statements about the amount of input the software can accept, and design tests to check that the stated capacity is provided. In addition, testers should look for inputs that have no constraint on capacity, and design tests to check whether undocumented constraints do exist.
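A hedged sketch of a volume-test input generator; the record format and record count are hypothetical:

    def make_volume_input(path, records=1_000_000):
        """Write an abnormally large input file for a volume test."""
        with open(path, 'w') as f:
            for i in range(records):
                f.write(f'record-{i},payload\n')

    make_volume_input('volume_test.dat')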

2.6.3.3 System test case definition

The system test cases must be described in the SVVP (SVV20). These should specify the inputs, predicted results and execution conditions for a test case.

2.6.3.4 System test procedure definition

The system test procedures must be described in the SVVP (SVV21). These should provide a step-by-step description of how to carry out each test case. One test procedure may execute one or more test cases. The procedures may use executable 'scripts' that control the operation of test tools.


2.6.3.5 System test reporting

System test results may be reported in a variety of ways. Some common means of recording results are:
• system test result forms recording the date and outcome of the test cases executed by the procedure;
• execution logfile.

System test results should reference any Software Problem Reports raised during the test.

2.6.4 Acceptance tests

In ESA PSS-05-0, 'acceptance testing' refers to the process of testing the system against the user requirements. The input to acceptance testing is the software that has been successfully tested at system level.

Acceptance tests should always be done by the users or their representatives. If this is not possible, the users should witness the acceptance tests and sign off the results.

2.6.4.1 Acceptance test planning

The first step in acceptance testing is to construct an acceptance test plan and document it in the SVVP (SVV11). This plan is defined in the UR phase and should describe the scope, approach, resources and schedule of the intended acceptance tests. The scope of acceptance testing is to validate that the software is compliant with the user requirements, as stated in the URD. Acceptance tests are performed in the TR phase, although some acceptance tests of quality, reliability, maintainability and safety may continue into the OM phase until final acceptance is possible.

The amount of testing required is dictated by the need to cover all the user requirements in the URD. A test should be defined for every essential user requirement, and for every desirable requirement that has been implemented.

2.6.4.2 Acceptance test design

The next step in acceptance testing is acceptance test design (SVV19). This and subsequent steps are performed in the DD phase, although acceptance test design may be attempted in the UR, SR and AD phases. Acceptance test designs should specify the details of the test approach for a user requirement, or combination of user requirements, and identify the associated test cases and test procedures. The description of the test approach should state the necessary types of tests.

Acceptance testing should require no knowledge of the internal workings of the software, so white-box tests cannot be used.

If an incremental delivery or evolutionary development approach is being used, acceptance tests should only address the user requirements of the new release. Regression tests should have been performed in system testing.

Dry-runs of acceptance tests should be performed before transfer of the software. Besides exposing any faults that have been overlooked, dry-runs allow the acceptance test procedures to be checked for accuracy and ease of understanding.

The specific requirements in the URD should be divided into capability requirements and constraint requirements. The following subsections describe approaches to testing each type of user requirement.

2.6.4.2.1 Capability tests

Capability requirements describe what the user can do with the software. Tests should be designed that exercise each capability. System test cases that verify functional, performance and operational requirements may be reused to validate capability requirements.

2.6.4.2.2 Constraint tests

Constraint requirements place restrictions on how the software can be built and operated. They may predefine external interfaces or specify attributes such as adaptability, availability, portability and security. System test cases that verify compliance with requirements for interfaces, resources, security, portability, reliability, maintainability and safety may be reused to validate constraint requirements.

2.6.4.3 Acceptance test case specification

The acceptance test cases must be described in the SVVP (SVV20). These should specify the inputs, predicted results and execution conditions for a test case.


2.6.4.4 Acceptance test procedure specification

The acceptance test procedures must be described in the SVVP (SVV21). These should provide a step-by-step description of how to carry out each test case. The effort required of users to validate the software should be minimised by means of test tools.

2.6.4.5 Acceptance test reporting

Acceptance test results may be reported in a variety of ways. Some common means of recording results are:
• acceptance test result forms recording the date and outcome of the test cases executed by the procedure;
• execution logfile.

Acceptance test results should reference any Software Problem Reports raised during the test.


CHAPTER 3
SOFTWARE VERIFICATION AND VALIDATION METHODS

3.1 INTRODUCTION

This chapter discusses methods for software verification and validation that may be used to enhance the basic approach described in Chapter 2. The structure of this chapter follows that of the previous chapter, as shown in Table 3.1. Supplementary methods are described for reviews, formal proof and testing.

Activity        Supplementary method
review          software inspection
tracing         none
formal proof    formal methods; program verification techniques
testing         structured testing; structured integration testing

Table 3.1: Structure of Chapter 3

3.2 SOFTWARE INSPECTIONS

Software inspections can be used for the detection of defects in detailed designs before coding, and in code before testing. They may also be used to verify test designs, test cases and test procedures. More generally, inspections can be used for verifying the products of any development process that is defined in terms of:
• operations (e.g. 'code module');
• exit criteria (e.g. 'module successfully compiles').

Software inspections are efficient. Projects can detect over 50% of the total number of defects introduced in development by doing them [Ref 21, 22].

Software inspections are economical because they result in significant reductions in both the number of defects and the cost of their removal. Detection of a defect as close as possible to the time of its introduction results in:
• an increase in the developers' awareness of the reason for the defect's occurrence, so that the likelihood that a similar defect will recur is reduced;
• reduced effort in locating the defect, since no effort is required to diagnose which component, out of many possible components, contains the defect.

Software inspections are formal processes. They differ from walkthroughs (see Section 2.3.2) by:
• repeating the process until an acceptable defect rate (e.g. number of errors per thousand lines of code) has been achieved;
• analysing the results of the process and feeding them back to improve the production process, and forward to give early measurements of software quality;
• avoiding discussion of solutions;
• including rework and follow-up activities.

The following subsections summarise the software inspection process. The discussion is based closely on the description given by Fagan [Ref 21 and 22] and ANSI/IEEE Std 1028-1988, 'IEEE Standard for Software Reviews and Audits' [Ref 10].

3.2.1 Objectives

The objective of a software inspection is to detect defects in documents or code.

3.2.2 Organisation

There are five roles in a software inspection:
• moderator;
• secretary;
• reader;
• inspector;
• author.

The moderator leads the inspection and chairs the inspection meeting. The person should have implementation skills, but not necessarily be knowledgeable about the item under inspection. He or she must be impartial and objective. For this reason moderators are often drawn from staff outside the project. Ideally they should receive some training in inspection procedures.

The secretary is responsible for recording the minutes of inspection meetings, particularly the details about each defect found.

The reader guides the inspection team through the review items during the inspection meetings.

Inspectors identify and describe defects in the review items under inspection. They should be selected to represent a variety of viewpoints (e.g. designer, coder and tester).

The author is the person who has produced the items under inspection. The author is present to answer questions about the items under inspection, and is responsible for all rework.

A person may have one or more of the roles above. In the interests of objectivity, no person may share the author role with another role.

3.2.3 Input

The inputs to an inspection are the:
• review items;
• specifications of the review items;
• inspection checklist;
• standards and guidelines that apply to the review items;
• inspection reporting forms;
• defect list from a previous inspection.

3.2.4 Activities

A software inspection consists of the following activities:
• overview;
• preparation;
• review meeting;
• rework;
• follow-up.


The overview is a presentation of the items being inspected. Inspectors then prepare themselves for the review meeting by familiarising themselves with the review items. At the review meeting they examine the review items, identify defects, and decide whether or not each defect should be corrected. Rework activities consist of the repair of faults. Follow-up activities check that all decisions made by the review meeting are carried out.

Before the overview, the moderator should:
• check that the review items are ready for inspection;
• arrange a date, time and place for the overview and review meetings;
• distribute the inputs if no overview meeting is scheduled.

Organisations should collect their own inspection statistics and use them for deciding the number and duration of inspections. The following figures may be used as the starting point for inspections of code [Ref 22]:
• preparation: 125 non-comment lines of source code per hour;
• review meeting: 90 non-comment lines of source code per hour.

These figures should be doubled for inspections of pseudo code or program design language.

Review meetings should not last more than two hours. The efficiency of defect detection falls significantly when meetings last longer than this.
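Applying these starting-point rates, the effort for inspecting a module of 500 non-comment lines (an illustrative size) can be estimated as follows:

    NCLOC = 500                  # illustrative module size

    prep_hours = NCLOC / 125     # 4.0 hours of preparation per inspector
    meeting_hours = NCLOC / 90   # about 5.6 hours of meetings, i.e. at
                                 # least three two-hour sessions
    print(prep_hours, round(meeting_hours, 1))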

3.2.4.1 Overview

The purpose of the overview is to introduce the review items to the inspection team. The moderator describes the area being addressed and then the specific area that has been designed in detail.

For a reinspection, the moderator should flag areas that have been subject to rework since the previous inspection.

The moderator then distributes the inputs to participants.

3.2.4.2 Preparation

Moderators, readers and inspectors then familiarise themselves with the inputs. They might prepare for a code inspection by reading:
• design specifications for the code under inspection;
• coding standards;
• checklists of common coding errors derived from previous inspections;
• code to be inspected.

Any defects in the review items should be noted on RID forms and declared at the appropriate point in the examination. Preparation should be done individually and not in a meeting.

3.2.4.3 Review meeting

The moderator checks that all the members have performed the preparatory activities (see Section 3.2.4.2). The amount of time spent by each member should be reported and noted.

The reader then leads the meeting through the review items. For documents, the reader may summarise the contents of some sections and cover others line-by-line, as appropriate. For code, the reader covers every piece of logic, traversing every branch at least once. Data declarations should be summarised. Inspectors use the checklist to find common errors.

Defects discovered during the reading should be immediately noted by the secretary. The defect list should cover the:
• severity (e.g. major, minor);
• technical area (e.g. logic error, logic omission, comment error);
• location;
• description.

Any solutions identified should be noted. The inspection team should avoid searching for solutions and concentrate on finding defects.

At the end of the meeting, the inspection team takes one of the following decisions:
• accept the item when the rework (if any) is completed;
• make the moderator responsible for accepting the item when the rework is completed;
• reinspect the whole item (usually necessary if more than 5% of the material requires rework).

The secretary should produce the minutes immediately after the review meeting, so that rework can start without delay.


3.2.4.4 Rework

After examination, software authors correct the defects described in the defect list.

3.2.4.5 Follow-up

After rework, follow-up activities verify that all the defects have been properly corrected and that no secondary defects have been introduced. The moderator is responsible for follow-up.

Other follow-up activities are the:
• updating of the checklist as the frequency of different types of errors changes;
• analysis of defect statistics, perhaps resulting in the redirection of SVV effort.

3.2.5 Output

The outputs of an inspection are the:
• defect list;
• defect statistics;
• inspection report.

The inspection report should give the:
• names of the participants;
• duration of the meeting;
• amount of material inspected;
• amount of preparation time spent;
• review decision on acceptance;
• estimates of rework effort and schedule.

3.3 FORMAL METHODS

Formal Methods, such as LOTOS, Z and VDM, possess an agreed notation, with well-defined semantics, and a calculus, which allows proofs to be constructed. The first property is shared with other methods for software specification, but the second sets them apart.


Formal Methods may be used in the software requirements definition phase for the construction of specifications. They are discussed in ESA PSS-05-03, 'Guide to the Software Requirements Definition Phase' [Ref 2].

3.4 PROGRAM VERIFICATION TECHNIQUES

Program verification techniques may be used in the detailed design and production phase to show that a program is consistent with its specification. These techniques require that the:
• semantics of the programming language are formally defined;
• program be formally specified in a notation that is consistent with the mathematical verification techniques used.

If these conditions are not met, formal program verification cannot be attempted [Ref 16].

A common approach to formal program verification is to derive, by stepwise refinement of the formal specification, 'assertions' (e.g. preconditions or postconditions) that must be true at each stage in the processing. Formal proof of the program is achieved by demonstrating that program statements separating assertions transform each assertion into its successor. In addition, it is necessary to show that the program will always terminate (i.e. one or more of the postconditions will always be met).

Formal program verification is usually not possible because the programming language has not been formally defined. Even so, a more pragmatic approach to formal proof is to show that the:
• program code is logically consistent with the program specification;
• program will always terminate.

Assertions are placed in the code as comments. Verification is achieved by arguing that the code complies with the requirements present in the assertions.
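A minimal sketch of this pragmatic style, using executable assertions for the pre- and postconditions of a simple integer-division routine (the routine is illustrative); termination follows because the remainder decreases by b > 0 on every iteration:

    def integer_divide(a, b):
        assert a >= 0 and b > 0                 # precondition
        q, r = 0, a
        while r >= b:                           # invariant: a == q*b + r and r >= 0
            q, r = q + 1, r - b
        assert a == q * b + r and 0 <= r < b    # postcondition
        return q, r

    print(integer_divide(17, 5))   # (3, 2)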

3.5 CLEANROOM METHOD

The cleanroom method [Ref 23] replaces unit testing and integration testing with software inspections and program verification techniques. System testing is carried out by an independent testing team.


The cleanroom method is not fully compliant with ESA PSS-05-0 because:
• full statement coverage is not achieved (DD06);
• unit and integration testing are omitted (DD07, DD08).

3.6 STRUCTURED TESTING

Structured Testing is a method for verifying software based upon the mathematical properties of control graphs [Ref 13]. The method:
• improves testability by limiting complexity during detailed design;
• guides the definition of test cases during unit testing.

Software with high complexity is hard to test. The Structured Testing method uses the cyclomatic complexity metric for measuring complexity, and recommends that module designs be simplified until they are within the complexity limits.

Structured Testing provides a technique, called the 'baseline method', for defining test cases. The objective is to cover every branch of the program logic during unit testing. The minimum number of test cases is the cyclomatic complexity value measured in the first step of the method.

The discussion of Structured Testing given below is a summary of that given in Reference 13. The reader is encouraged to consult the reference for a full discussion.

3.6.1 Testability

The testability of software should be evaluated during the detailed design phase by measuring its complexity.

The relationships between the parts of an entity determine its complexity. The parts of a software module are the statements in it. These are related to each other by sequence, selection (i.e. branches or conditions) and iteration (i.e. loops). As loops can be simulated by branching on a condition, McCabe defined a metric that gives a complexity of 1 for a simple sequence of statements with no branches. Only one test is required to execute every statement in the sequence. Each branch added to a module increases the complexity by one, and requires an extra test to cover it.


A control graph represents the control flow in a module. Control graphs are simplified flowcharts. Blocks of statements with sequential logic are represented as 'nodes' in the graph. Branches between blocks of statements (called 'edges') are represented as arrows connecting the nodes. McCabe defines the cyclomatic complexity 'v' of a control graph as:

v = e - n + 2

where:
• e is the number of edges;
• n is the number of nodes.
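The formula is straightforward to apply; a minimal sketch:

    def cyclomatic_complexity(edges, nodes):
        """McCabe's v = e - n + 2 for a single module's control graph."""
        return edges - nodes + 2

    print(cyclomatic_complexity(1, 2))   # simple sequence -> 1
    print(cyclomatic_complexity(4, 4))   # IF..THEN..ELSE..ENDIF -> 2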

Figures 3.6.1A, B and C show several examples of control graphs.

[Figure: basic control graphs for a sequence (e = 1, n = 2, v = 1), a selection (e = 3, n = 3, v = 2) and an iteration (e = 1, n = 1, v = 2), with nodes and edges marked]

Figure 3.6.1A: Basic control graphs


[Figure: control graphs for IF..THEN..ELSE..ENDIF (e = 4, n = 4, v = 2), UNTIL (e = 3, n = 3, v = 2), WHILE (e = 3, n = 3, v = 2) and CASE (e = 6, n = 5, v = 3)]

Figure 3.6.1B: Control graphs for structured programming elements

[Figure: example control graphs with e = 3, n = 4, v = 1; e = 6, n = 5, v = 3; and e = 9, n = 5, v = 6]

Figure 3.6.1C: Example control graphs


Alternative ways to measure cyclomatic complexity are to count the:
• number of separate regions in the control graph (i.e. areas separated by edges);
• number of decisions (i.e. the second and subsequent branches emanating from a node) and add one.

Myers [Ref 14] has pointed out that decision statements with multiple predicates must be separated into simple predicates before cyclomatic complexity is measured. The decision IF (A .AND. B .AND. C) THEN ... is really three separate decisions, not one.

The Structured Testing Method recommends that the cyclomatic complexity of a module be limited to 10. Studies have shown that errors concentrate in modules with complexities greater than this value. During detailed design, the limit of 7 should be applied because complexity always increases during coding. Modules that exceed these limits should be redesigned. Case statements are the only exception permitted.

The total cyclomatic complexity of a program can be obtained by summing the cyclomatic complexities of the constituent modules. The full cyclomatic complexity formula given by McCabe is:

v = e - n + 2p

where p is the number of modules. Each module has a separate control graph. Figure 3.6.1D shows how the total cyclomatic complexity can be evaluated by:
• counting all edges (18), nodes (14) and modules (3) and applying the complexity formula, 18 - 14 + 2*3 = 10;
• adding the complexities of the components, 1 + 3 + 6 = 10.
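The equivalence of the two evaluations can be checked directly:

    def total_cyclomatic_complexity(edges, nodes, modules):
        """McCabe's full formula v = e - n + 2p for p separate control graphs."""
        return edges - nodes + 2 * modules

    assert total_cyclomatic_complexity(18, 14, 3) == 10   # Figure 3.6.1D
    assert 1 + 3 + 6 == 10                                # sum of module complexities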

Combining the modules in Figure 3.6.1D into one module gives a module with complexity of eight. Although the total complexity is reduced, this is higher than the complexity of any of the separate modules. In general, decomposing a module into smaller modules increases the total complexity but reduces the maximum module complexity. A useful rule for design decomposition is to continue decomposition until the complexity of each of the modules is 10 or less.


[Figure: Module A (v = 1) calls Modules B (v = 3) and C (v = 6); in total e = 18, n = 14, v = 18 - 14 + (2 x 3) = 10]

Figure 3.6.1D: Evaluating total cyclomatic complexity

3.6.2 Branch testing

Each branch added to a control graph adds a new path, and in this lies the importance of McCabe's complexity metric for the software tester: the cyclomatic complexity metric, denoted as 'v' by McCabe, measures the minimum number of paths through the software required to ensure that:
• every statement in a program is executed at least once (DD06);
• each decision outcome is executed at least once.

These two criteria imply full 'branch testing' of the software. Every clause of every statement is executed. In simple statement testing, every clause of every statement may not be executed. For example, branch testing of IF (A .EQ. B) X = X/Y requires that test cases be provided for both 'A equal to B' and 'A not equal to B'. For statement testing, only one test case needs to be provided. ESA PSS-05-0 places statement coverage as the minimum requirement. Branch testing should be the verification requirement in most projects.

3.6.3 Baseline method

The baseline method is used in structured testing to decide what paths should be used to traverse every branch. The test designer should examine the main function of the module and define the 'baseline path' that directly achieves it. Special cases and errors are ignored.

[Figure: a module with the code IF (A) THEN X=1 ELSEIF (B) THEN X=2 ELSE X=3 ENDIF; complexity = 7 - 6 + 2 = 3; test case 1 follows the baseline path, and test cases 2 and 3 switch decisions 1 and 2 respectively]

Figure 3.6.3: Baseline Method example.

Figure 3.6.3 shows the principles. The cyclomatic complexity of the module is three, so three test cases are required for full branch coverage. Test case 1 traces the baseline path. The baseline method proceeds by taking each decision in turn and switching it, as shown in test cases 2 and 3. Each switch is reset before switching the next, resulting in the checking of each deviation from the baseline. The baseline method does not require the testing of all possible paths (see the example of an untested path in Figure 3.6.3). Tests for paths that are not tested by the baseline method may have to be added to the test design (e.g. paths for testing safety-critical functions).
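A minimal sketch of the three test cases for the module of Figure 3.6.3, transcribed into Python:

    def module(a, b):
        if a:       # decision 1
            x = 1
        elif b:     # decision 2
            x = 2
        else:
            x = 3
        return x

    assert module(True, False) == 1    # test case 1: baseline path
    assert module(False, True) == 2    # test case 2: switch decision 1
    assert module(False, False) == 3   # test case 3: switch decision 2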

3.7 STRUCTURED INTEGRATION TESTING

Structured Integration Testing [Ref 15] is a method based upon the Structured Testing Method that:
• improves testability by limiting complexity during software architectural design;
• guides the definition of test cases during integration testing.


The method can be applied at all levels of design above the module level. Therefore it may also be applied in unit testing when units assembled from modules are tested.

The discussion of Structured Integration Testing given below is a summary of that given in Reference 15. The reader is encouraged to consult the reference for a full discussion.

3.7.1 Testability

Structured Integration Testing defines three metrics for measuring testability:
• module design complexity;
• design complexity;
• integration complexity.

Module design complexity, denoted as 'iv' by McCabe, measures the individual effect of a module upon the program design [Ref 15]. The module design complexity is evaluated by drawing the control graph of the module and then marking the nodes that contain calls to external modules. The control graph is then 'reduced' according to the rules listed below and shown in Figure 3.7.1A:
1. marked nodes cannot be removed;
2. unmarked nodes that contain no decisions are removed;
3. edges that return control to the start of a loop that only contains unmarked nodes are removed;
4. edges that branch from the start of a case statement to the end are removed if none of the other cases contain marked nodes.

As in all control graphs, edges that 'duplicate' other edges are removed. The module design complexity is the cyclomatic complexity of the reduced graph.

In summary, module design complexity ignores paths covered in module testing that do not result in calls to external modules. The remaining paths are needed to test module interfaces.


[Figure: illustrations of reduction rules 1 to 4, with worked examples A and B]

Figure 3.7.1A: Reduction rules

[Figure: a full control graph of nodes 1 to 9; reduction steps: 1. remove node 2; 2. remove the node 4 loop return edge; 3. remove node 4; 4. remove the node 7 decision branch with no nodes]

Figure 3.7.1B: Calculating module design complexity - part 1


[Figure: reduction continues: 5. remove node 8; 6. remove node 9; 7. remove node 1; the reduced graph gives iv = 5 - 4 + 2 = 3]

Figure 3.7.1C: Calculating module design complexity - part 2

Figures 3.7.1B and C show an example of the application of these rules. The shaded nodes 5 and 6 call lower level modules. The unshaded nodes do not. In the first reduction step, rule 2 is applied to remove node 2. Rule 3 is then applied to remove the edge returning control in the loop around node 4. Rule 2 is applied to remove node 4. Rule 4 is then applied to remove the edge branching from node 7 to node 9. Rule 2 is then applied to remove nodes 8, 9 and 1. The module design complexity is the cyclomatic complexity (see Section 3.6.1) of the reduced control graph, 3.

The design complexity, denoted as 'S0' by McCabe, of an assembly of modules is evaluated by summing the module design complexities of each module in the assembly.

The integration complexity, denoted as 'S1' by McCabe, of an assembly of modules counts the number of paths through the control flow. The integration complexity of an assembly of 'N' modules is given by the formula:

S1 = S0 - N + 1

The integration complexity of N modules each containing no branches is therefore 1.

The testability of a design is measured by evaluating its integration complexity. Formally the integration complexity depends on measuring the module design complexity of each module. During architectural design, the full control graph of each constituent module will not usually be available. However sufficient information should be available to define the module design complexity without knowing all the module logic.

[Figure: structure chart in which A (iv = 2) calls B (iv = 1) or C (iv = 4); B calls D and then E; C calls E, F or G; the leaf components D, E, F and G each have iv = 1; S0 = 11, S1 = 11 - 7 + 1 = 5]

Figure 3.7.1D: Estimating integration complexity from a structure chart

Figure 3.7.1D shows a structure chart illustrating how S1 can be evaluated just from knowing the conditions that govern the invocation of each component, i.e. the control flow. Boxes in the chart correspond to design components, and arrows mark transfer of the control flow. A diamond indicates that the transfer is conditional. The control flow is defined to be:
• component A calls either component B or component C;
• component B sequentially calls component D and then component E;
• component C calls either component E or component F or component G or none of them.

The integration complexity of the design shown in Figure 3.7.1D is therefore 5.
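The calculation for Figure 3.7.1D can be reproduced directly from the module design complexities (the iv values shown in the figure):

    def integration_complexity(module_ivs):
        """S1 = S0 - N + 1, where S0 sums the module design complexities."""
        return sum(module_ivs) - len(module_ivs) + 1

    # A (iv=2), B (iv=1), C (iv=4) and the leaf modules D to G (iv=1 each):
    assert integration_complexity([2, 1, 4, 1, 1, 1, 1]) == 5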


3.7.2 Control flow testing

Just as cyclomatic complexity measures the number of test cases required to cover every branch of a module, integration complexity measures the number of tests required to cover all the control flow paths. Structured integration testing can therefore be used for verifying that all the control flows defined in the ADD have been implemented (DD08).

3.7.3 Design integration testing method

The design integration testing method is used in structured integration testing to enable the test designer to decide what control flow paths should be tested. The test designer should:
• evaluate the integration complexity, S1, for the N modules in the design (see Section 3.7.1);
• construct a blank matrix of S1 rows by N columns, called the 'integration path test matrix';
• mark each column whose module is conditionally called with a 'p' (for predicate), followed by a sequence number for the predicate;
• fill in the matrix with 1's or 0's to show whether the module is called in each test case.

The example design shown in Figure 3.7.1D has an integration complexity of 5. The five integration test cases to verify the control flow are shown in Table 3.7.3.

Case   A   B (P1)   C (P2)   D   E (P3)   F (P4)   G (P5)   Control flow path
1      1   1        0        1   1        0        0        A calls B; B calls D and E
2      1   0        1        0   0        0        0        A calls C and then returns
3      1   0        1        0   1        0        0        A calls C and then C calls E
4      1   0        1        0   0        1        0        A calls C and then C calls F
5      1   0        1        0   0        0        1        A calls C and then C calls G

Table 3.7.3: Example Integration Path Test Matrix


In Table 3.7.3, P1 and P2 are two predicates (contained within module A) that decide whether B or C are called. Similarly, P3, P4 and P5 are three predicates (contained within module C) that decide whether E, F or G are called.


CHAPTER 4
SOFTWARE VERIFICATION AND VALIDATION TOOLS

4.1 INTRODUCTION

A software tool is a 'computer program used in the development, testing, analysis, or maintenance of a program or its documentation' [Ref 6]. Software tools, more commonly called 'Computer Aided Software Engineering' (CASE) tools, enhance the software verification and validation process by:
• reducing the effort needed for mechanical tasks, thereby increasing the amount of software verification and validation work that can be done;
• improving the accuracy of software verification and validation (for example, measuring test coverage is not only very time-consuming, it is also very difficult to do accurately without tools).

Both these benefits result in improved software quality and productivity.

ESA PSS-05-0 defines the primary software verification and validation activities as:
• reviewing;
• tracing;
• formal proof;
• testing;
• auditing.

The following sections discuss tools for supporting each activity. This chapter does not describe specific products, but contains guidelines for their selection.

4.2 TOOLS FOR REVIEWING

4.2.1 General administrative tools

General administrative tools may be used for supporting reviewing, as appropriate. Examples are:
• word processors that allow commenting on documents;
• word processors that can show changes made to documents;
• electronic mail systems that support the distribution of review items;
• notes systems that enable communal commenting on a review item;
• conferencing systems that allow remote participation in reviews.

4.2.2 Static analysers

Static analysis is the process of evaluating a system or component based on its form, structure, content or documentation [Ref 6]. Reviews, especially software inspections, may include activities such as:
• control flow analysis to find errors such as unreachable code, endless loops, violations of recursion conventions, and loops with multiple entry and exit points;
• data-use analysis to find errors such as data used before initialisation, variables declared but not used, and redundant writes;
• range-bound analysis to find errors such as array indices outside the boundaries of the array;
• interface analysis to find errors such as mismatches in argument lists between called modules and calling modules;
• information flow analysis to check the dependency relations between input and output;
• verification of conformance to language standards (e.g. ANSI C);
• verification of conformance to project coding standards, such as departures from naming conventions;
• code volume analysis, such as counts of the numbers of modules and the number of lines of code in each module;
• complexity analysis, such as measurements of cyclomatic complexity and integration complexity.

Tools that support one or more of these static analysis activities are available and their use is strongly recommended.

Compilers often perform some control flow and data-use analysis. Some compile/link systems, such as those for Ada and Modula-2, automatically do interface analysis to enforce the strong type checking demanded by the language standards. Compilers for many languages, such as FORTRAN and C, do not do any interface analysis. Static analysis tools are especially necessary when compilers do not check control flow, data flow, range bounds and interfaces. The use of a static analyser should be an essential step in C program development, for example.

Static analysers for measuring complexity and constructing call graphs are essential when the Structured Testing method is used for unit testing (see Section 4.5.1). Producing call graphs at higher levels (i.e. module tree and program tree) is also of use in the review and testing of the detailed design and architectural design.

4.2.3 Configuration management tools

Reviewing is an essential part of the change control process and therefore some configuration management tools also support review processes. The preparation and tracking of RIDs and SPRs can be supported by database management systems, for example. See ESA PSS-05-09, 'Guide to Software Configuration Management' [Ref 4].

4.2.4 Reverse engineering tools

Although reverse engineering tools are more commonly used during maintenance to enable code to be understood, they can be used during development to permit verification that 'as-built' conforms to 'as-designed'. For example the as-built structure chart of a program can be generated from the code by means of a reverse engineering tool, and then compared with the as-designed structure chart in the ADD or DDD.

4.3 TOOLS FOR TRACING

The basic tracing method is to:
• uniquely identify the items to be tracked;
• record relationships between items.

ESA PSS-05-0 states that cross-reference matrices must be used to record relationships between:
• user requirements and software requirements;
• software requirements and architectural design components;
• software requirements and detailed design components.

In addition, test designs should be traced to software components and requirements.


Tracing tools should:
• ensure that identifiers obey the naming conventions of the project;
• ensure that identifiers are unique;
• allow attributes to be attached to the identified items, such as the need, priority, or stability of a requirement, or the status of a software component (e.g. coded, unit tested, integration tested, system tested, acceptance tested);
• record all instances where an identified part of a document references an identified part of another document (e.g. when software requirement SR100 references user requirement UR49, the tool should record the relationship SR100-UR49);
• accept input in a variety of formats;
• store the traceability records (e.g. a table can be used to store a traceability matrix);
• allow easy querying of the traceability records;
• allow the extraction of information associated with a specific traceability record (e.g. the user requirement related to a software requirement);
• generate traceability reports (e.g. cross-reference matrices);
• inform users of the entities that may be affected when a related entity is updated or deleted;
• be integrated with the project repository so that the relationship database is automatically updated after a change to a document or a software component;
• provide access to the change history of a review item so that the consistency of changes can be monitored.

In summary, tracing tools should allow easy and efficient navigation through the software.
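A minimal sketch of such a relationship database, assuming project identifiers like 'SR100' and 'UR49'; the class and method names are illustrative, not those of any particular tool:

    from collections import defaultdict

    class TraceabilityStore:
        """Record and query relationships between identified items."""
        def __init__(self):
            self.links = defaultdict(set)    # e.g. 'SR100' -> {'UR49'}

        def add(self, source, target):
            self.links[source].add(target)

        def trace(self, source):
            return sorted(self.links[source])

        def matrix(self):
            """Cross-reference pairs for a traceability report."""
            return [(s, t) for s in sorted(self.links)
                           for t in sorted(self.links[s])]

    store = TraceabilityStore()
    store.add('SR100', 'UR49')
    print(store.matrix())   # [('SR100', 'UR49')]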

Most word processors have indexing and cross-referencing capabilities. These can also support traceability; for example, traceability from software requirements to user requirements can be done by creating index entries for the user requirement identifiers in the SRD. The disadvantage of this approach is that there is no system-wide database of relationships. Consequently dedicated tracing tools, normally based upon commercial database management systems, are used to build relationship databases. Customised frontends are required to make tracing tool capabilities easily accessible. The database may form part of, or be integrated with, the software development environment's repository. Tracing tools should accept review items (with embedded identifiers) and generate the traceability records from that input.

Traceability tool functionality is also provided by 'requirements engineering' tools, which directly support the requirements creation and maintenance process. Requirements engineering tools contain all the specific requirements information in addition to the information about the relationships between the requirements and the other parts of the software.

4.4 TOOLS FOR FORMAL PROOF

Formal Methods (e.g. Z and VDM) are discussed in ESA PSS-05-03, 'Guide to the Software Requirements Definition Phase' [Ref 2]. This guide also discusses the criteria for the selection of CASE tools for software requirements definition. These criteria should be used for the selection of tools to support Formal Methods.

Completely automated program verification is not possible without a formally defined programming language. However subsets of some languages (e.g. Pascal, Ada) can be formally defined. Preprocessors are available that automatically check that the code is consistent with assertion statements placed in it. These tools can be very effective in verifying small, well-structured programs.

Semantics is the relationship of symbols and groups of symbols to their meaning in a given language [Ref 6]. In software engineering, semantic analysis is the symbolic execution of a program by means of algebraic symbols instead of test input data. Semantic analysers use a source code interpreter to substitute algebraic symbols into the program variables and present the results as algebraic formulae [Ref 17]. Like program verifiers, semantic analysers may be useful for the verification of small, well-structured programs.

4.5 TOOLS FOR TESTING

Testing involves many activities, most of which benefit from tool support. Figure 4.5 shows what test tools can be used to support the testing activities defined in Section 2.6.


[Figure 4.5 is a data-flow diagram. It shows the flow from the URD, SRD, ADD and DDD (with the corresponding SVVP test plans) through the activities 'specify tests', 'make test software', 'link SUT', 'run tests', 'analyse coverage', 'analyse performance', 'check outputs' and 'store test data', together with the tools supporting each activity: static analyser, test case generator, test harness, debugger, coverage analyser, performance analyser, comparator, test manager tool and system monitor. Test Data = Input Data + Expected Output Data.]

Figure 4.5: Testing Tools

The following paragraphs step through each activity depicted in Figure 4.5, discussing the tools indicated by the arrows into the bottom of the activity boxes.

1. The 'specify tests' activity may use the Structured Testing and Structured Integration Testing methods (see Sections 3.6 and 3.7). Static analysers should be used to measure the cyclomatic complexity and integration complexity metrics. This activity may also use the equivalence partitioning method for defining unit test cases. Test case generators should support this method.

2. The 'make test software' activity should be supported with a test harness tool. At the unit level, test harness tools that link to the SUT act as drivers for software modules, providing all the user interface functions required for the control of unit tests. At the integration and system levels, test harness tools external to the SUT (e.g. simulators) may be required. Test procedures should be encoded in scripts that can be read by the test harness tool.

3. The 'link SUT' activity links the test code, SUT, and existing tested code, producing the executable SUT. Debuggers, coverage analysers and performance analysers may also be built into the executable SUT. Coverage analysers and performance analysers (collectively known as 'dynamic analysers') require 'instrumentation' of the code so that post-test analysis data can be collected.


4. The 'run tests' activity executes the tests according to the test procedures, using the input data. Test harnesses permit control of the test. These should have a capture and playback capability if the SUT has an interactive user interface. White-box test runs may be conducted under the control of a debugger. This permits monitoring of each step of execution, and immediate detection of defects that cause test failures. System monitoring tools can be used to study program resource usage during tests.

5. The 'analyse coverage' activity checks that the tests have executed the parts of the SUT they were intended to. Coverage analysers show the paths that have been executed, helping testers keep track of coverage.

6. The 'analyse performance' activity should be supported with a performance analyser (also called a 'profiler'). These examine the performance data collected during the test run and produce reports of resource usage at component level and statement level.

7. The 'check outputs' activity should be supported with a comparator. This tool can automatically compare the test output data with the outputs of previous tests or the expected output data.

8. The 'store test data' activity should be supported by test manager tools that store the test scripts, test input and output data for future use.

Tool support for testing is weakest in the area of test design and test case generation. Tool support is strongest in the area of running tests, analysing the results and managing test software. Human involvement is required to specify the tests, but tools should do most of the tedious, repetitive work involved in running them. Automated support for regression testing is especially good. In an evolving system this is where it is most needed. The bulk of software costs usually accrue during the operations and maintenance phase of a project. Often this is due to the amount of regression testing required to verify that changes have not introduced faults in the system.

The following subsections discuss the capabilities of test tools in more detail.

4.5.1 Static analysers

Static analysers that measure complexity are needed to support the Structured Testing method and for checking adherence to coding standards. These tools may also support review activities (see Section 4.2.2). The measure of complexity defines the minimum number of test cases required for full branch coverage. Static analysers should also produce module control graphs to support path definition.

The output of a static analyser should be readable by the test harness tool so that checks on values measured during static analysis can be included in test scripts (e.g. check that the complexity of a module has not exceeded some threshold, such as 10). This capability eases regression testing.
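As a small illustration (not part of the guide), cyclomatic complexity can be computed directly from McCabe's formula v(G) = e - n + 2 for a single connected control graph with e edges and n nodes:

```python
def cyclomatic_complexity(edges, nodes):
    """McCabe's metric v(G) = e - n + 2 for one connected control
    graph; it gives the minimum number of test cases needed for
    full branch coverage."""
    return len(edges) - len(nodes) + 2

# Hypothetical control graph of a module with a single if/else:
nodes = {'entry', 'cond', 'then', 'else', 'exit'}
edges = {('entry', 'cond'), ('cond', 'then'), ('cond', 'else'),
         ('then', 'exit'), ('else', 'exit')}
print(cyclomatic_complexity(edges, nodes))  # 2
```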

4.5.2 Test case generators

A test case generator (or automated test generator) is 'a software tool that accepts as input source code, test criteria, specifications, or data structure definitions; uses these inputs to generate test input data; and, sometimes, determines the expected results' [Ref 6].

The test case generation methods of equivalence partitioning and boundary value analysis can be supported by test case generators. These methods are used in unit testing. Automated test case generation at integration, system and acceptance test level is not usually possible.

Expected output data need to be in a form usable by comparators.
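A minimal sketch of these two methods for a hypothetical routine that validates integers in the range 1..100; the oracle function supplies the expected results, yielding test data in the input-plus-expected-output form used by comparators:

```python
def generate_test_inputs(low, high):
    """Equivalence partitioning: one representative per class
    (below, inside, above the valid range), plus boundary value
    analysis: the values at and adjacent to each boundary."""
    partitions = [low - 10, (low + high) // 2, high + 10]
    boundaries = [low - 1, low, low + 1, high - 1, high, high + 1]
    return partitions + boundaries

def expected_result(value, low=1, high=100):
    # Oracle for the hypothetical 'accept values in 1..100' routine.
    return low <= value <= high

test_data = [(v, expected_result(v)) for v in generate_test_inputs(1, 100)]
for value, expected in test_data:
    print(value, expected)
```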

4.5.3 Test harnesses

A test harness (or test driver) is a ‘software module used to invoke a module under test and, often, provide test inputs, control and monitor execution, and report test results’ [Ref 6]. Test harness tools should:
• provide a language for programming test procedures (i.e. a script language);
• generate stubs for software components called by the SUT;
• not require any modifications to the SUT;
• provide a means of interactive control (for debugging);
• provide a batch mode (for regression testing);
• enable stubs to be informed about the test cases in use, so that they will read the appropriate test input data;
• execute all test cases required of a unit, subsystem or system;
• handle exceptions, so that testing can continue after test failures;
• record the results of all test cases in a log file;
• check all returns from the SUT;
• record in the test results log whether the test was passed or failed;
• be usable in development and target environments.


A test harness is written as a driver for the SUT. The harness is written in a scripting language that provides module, sequence, selection and iteration constructs. Scripting languages are normally based upon a standard programming language. The ability to call one script module from another, just as in a conventional programming language, is very important, as test cases may only differ in their input data. The scripting language should include directives to set up, execute, stop and check the results of each test case. Software components that implement these directives are provided as part of the test harness tool. Test harnesses may be compiled and linked with the SUT, or exist externally as a control program.
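The sketch below shows the shape of such a script in plain Python; the run_test_case directive and the sut_add routine are hypothetical names invented for the example, since the guide does not prescribe a particular scripting language.

```python
import logging

logging.basicConfig(level=logging.INFO, format='%(message)s')
log = logging.getLogger('harness')

def run_test_case(name, set_up, execute, expected):
    """One test-case directive: set up, execute the SUT, check the
    result, and record pass/fail in the log so that testing can
    continue after a failure."""
    try:
        set_up()
        actual = execute()
        passed = (actual == expected)
    except Exception as exc:      # exception handling: keep testing
        log.error('%s: exception %s', name, exc)
        passed = False
    log.info('%s: %s', name, 'PASSED' if passed else 'FAILED')
    return passed

def sut_add(a, b):                # hypothetical software under test
    return a + b

# Two test cases differing only in their input data.
run_test_case('TC-01', lambda: None, lambda: sut_add(2, 2), 4)
run_test_case('TC-02', lambda: None, lambda: sut_add(-1, 1), 0)
```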

Test harness tools can generate input at one or more of the following levels [Ref 19]:
• application software level, either as input files or call arguments;
• operating system level, for example as X-server messages;
• hardware level, as keyboard or mouse input.

Test harness tools should capture output at the level at which they input it. It should be possible to generate multiple input and output data streams. This is required in stress tests (e.g. multiple users logging on simultaneously) and security tests (e.g. resource locking).

A synchronisation mechanism is required to co-ordinate the test harness and the software under test. During unit testing this is rarely a problem, as the driver and the SUT form a single program. Synchronisation may be difficult to achieve during integration and system testing, when the test harness and SUT are separate programs. Synchronisation techniques include:
• recording and using the delays of the human operators;
• waiting for a specific output (i.e. a handshake technique);
• using an algorithm to decide the wait time;
• monitoring keyboard lock indicators.

The first technique is not very robust. The other techniques are preferable.
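As an illustration of the handshake technique (a sketch only; read_line stands for whatever function gives the harness access to the SUT's output stream):

```python
import time

def wait_for_output(read_line, expected, timeout=10.0):
    """Handshake synchronisation: block until the SUT emits a line
    containing the expected text, or give up after the timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        line = read_line()        # returns None while no output ready
        if line and expected in line:
            return True
        time.sleep(0.05)          # poll gently; avoid busy-waiting
    return False

# In a test procedure the harness would proceed only once the SUT
# signals readiness, e.g.: wait_for_output(sut.readline, 'READY')
```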

For interactive testing, the most efficient way to define the test script is to capture the manually generated input and store it for later playback. For graphical user interfaces this may be the only practical method.


Graphical user interfaces have window managers to control the use of screen resources by multiple applications. This can cause problems for test harness tools because the meaning of a mouse click depends upon what is beneath the cursor. The window manager decides this, not the test harness. This type of problem can mean that test scripts need to be frequently modified to cope with changes in layout. Comparators that can filter out irrelevant changes in screen layout (see Section 4.5.7) reduce the problems of testing software with a graphical user interface.

4.5.4 Debuggers

Debuggers are used in white-box testing for controlling the execution of the SUT. Capabilities required for testing include the ability to:
• display and interact with the tester using the symbols employed in the source code;
• step through code instruction-by-instruction;
• set watch points on variables;
• set break points;
• maintain screen displays of the source code during test execution;
• log the session for inclusion in the test results.

Session logs can act as test procedure files for subsequent tests and also as coverage reports, since they describe what happened.
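For illustration, a white-box run under Python's standard debugger pdb; any debugger offering the capabilities listed above could take its place.

```python
import pdb

def unit_under_test(x):
    y = x * 2          # a break point here lets the tester inspect y
    return y + 1

# pdb.run() starts the SUT under debugger control; at the (Pdb)
# prompt, commands such as 'break', 'step' and 'p y' step through
# the code and display variables using source-level symbols.
if __name__ == '__main__':
    pdb.run('unit_under_test(21)')
```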

4.5.5 Coverage analysers

Coverage analysis is the process of:
• instrumenting the code so that information is collected on the parts of it that are executed when it is run;
• analysing the data collected to see what parts of the SUT have been executed.

Instrumentation of the code should not affect its logic. Any effects on performance should be easy to allow for. Simple coverage analysers will just provide information about statement coverage, such as indications of the program statements executed in one run. More advanced coverage analysers can:
• sum coverage data to make reports of total coverage achieved in a series of runs;
• display control graphs, so that branch coverage and path coverage can be monitored;
• output test coverage information so that coverage can be checked by the test harness, therefore aiding regression testing;
• operate in development and target environments.
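A toy sketch of both steps, using Python's sys.settrace hook as the instrumentation (an assumption made for the example; real coverage analysers instrument the source or object code):

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Instrumentation: record every (file, line) the SUT executes.
    if event == 'line':
        executed.add((frame.f_code.co_filename, frame.f_lineno))
    return tracer

def run_with_coverage(func, *args):
    sys.settrace(tracer)
    try:
        return func(*args)
    finally:
        sys.settrace(None)

def sut(x):                       # hypothetical SUT with one branch
    if x > 0:
        return 'positive'
    return 'non-positive'

run_with_coverage(sut, 1)
# Analysis: report the statements executed in this run.
print(sorted(line for _, line in executed))
```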

4.5.6 Performance analysers

Performance analysis is the process of:
• instrumenting the code so that information is collected about resources used when it is run;
• analysing the data collected to evaluate resource utilisation;
• optimising performance.

Instrumentation of the code should not affect the measured performance. Performance analysers should provide information on a variety of metrics such as:
• CPU usage by each line of code;
• CPU usage by each module;
• memory usage;
• input/output volume.
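For illustration, Python's built-in cProfile and pstats modules report CPU usage at module and function level (line-level and memory metrics need other tools):

```python
import cProfile
import pstats

def busy(n):
    # Stand-in for the software under test.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
busy(100_000)
profiler.disable()

# Report resource usage, most expensive calls first.
pstats.Stats(profiler).sort_stats('cumulative').print_stats(5)
```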

Coverage analysers are often integrated with performance analysers to make 'dynamic analysers'.

4.5.7 Comparators

A comparator is a ‘software tool used to compare two computer programs, files, or sets of data to identify commonalities and differences’ [Ref 6]. Differencing tools are types of comparators. In testing, comparators are needed to compare actual test output data with expected test output data.

Expected test output data may have originated from:
• test case generators;
• previous runs.

Comparators should report differences between actual and expected output data. It should be possible to specify tolerances on differences (floating point operations can produce slightly different results each time they are done). The output of comparators should be usable by test harness tools, so that differences can be used to flag a test failure. This makes regression testing more efficient.
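A minimal sketch of such a comparator, built on the standard math.isclose; the boolean result is in a form a test harness can turn into a pass/fail flag:

```python
import math

def compare_outputs(actual, expected, rel_tol=1e-6, abs_tol=0.0):
    """Compare two sequences of floating-point outputs, tolerating
    the small differences that repeated runs can produce."""
    if len(actual) != len(expected):
        return False
    return all(math.isclose(a, e, rel_tol=rel_tol, abs_tol=abs_tol)
               for a, e in zip(actual, expected))

print(compare_outputs([1.0, 2.0000001], [1.0, 2.0]))  # True
print(compare_outputs([1.0, 2.1], [1.0, 2.0]))        # False
```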


Screen comparators [Ref 19] are useful for testing interactive software. These may operate at the character or bitmap level. The ability to select parts of the data, and attributes of the data, for comparison is a key requirement of a screen comparator. Powerful screen comparison capabilities are important when testing software with a graphical user interface.

Some comparators may be run during tests. When a difference is detected the test procedure is suspended. Other comparators run off-line, checking for differences after the test run.

4.5.8 Test management tools

All the software associated with testing: test specifications, SUT, drivers, stubs, scripts, tools, input data, expected output data and output data, must be placed under configuration management (SCM01). The configuration management system is responsible for identifying, storing, controlling and accounting for all configuration items.

Test data management tools provide configuration management functions for the test data and scripts. Specifically designed for supporting testing, they should:
• enable tests to be set up and run with the minimum of steps;
• automatically manage the storage of outputs and results;
• provide test report generation facilities;
• provide quality and reliability statistics;
• provide support for capture and playback of input and output data during interactive testing.

Test managers are essential for efficient regression testing.


CHAPTER 5
THE SOFTWARE VERIFICATION AND VALIDATION PLAN

5.1 INTRODUCTION

All software verification and validation activities must be documented in the Software Verification and Validation Plan (SVVP) (SVV04). The SVVP is divided into seven sections that contain the verification plans for the SR, AD and DD phases and the unit, integration, system and acceptance test specifications. Figure 5.1A summarises when and where each software verification and validation activity is documented in the SVVP.

SVVP        UR             SR             AD             DD                              TR
SECTION
SVVP/SR     SR Phase Plan
SVVP/AD                    AD Phase Plan
SVVP/DD                                   DD Phase Plan
SVVP/AT     AT Plan                                      AT Designs, AT Cases,           AT Reports
                                                         AT Procedures
SVVP/ST                    ST Plan                       ST Designs, ST Cases,
                                                         ST Procedures, ST Reports
SVVP/IT                                   IT Plan        IT Designs, IT Cases,
                                                         IT Procedures, IT Reports
SVVP/UT                                                  UT Plan, UT Designs, UT Cases,
                                                         UT Procedures, UT Reports

(UR = user requirements definition; SR = software requirements definition; AD = architectural design; DD = detailed design and production; TR = transfer)

Figure 5.1A: Life cycle production of SVV documentation

Each row of Figure 5.1A corresponds to a section of the SVVP and each column entry corresponds to a subsection. For example, the entry 'ST Plan' in the SVVP/ST row means that the System Test Plan is drawn up in the SR phase and placed in the SVVP section 'System Tests', subsection 'Test Plan'.


The SVVP must ensure that the verification activities:
• are appropriate for the degree of criticality of the software (SVV05);
• meet the verification and acceptance testing requirements (stated in the SRD) (SVV06);
• verify that the product will meet the quality, reliability, maintainability and safety requirements (stated in the SRD) (SVV07);
• are sufficient to assure the quality of the product (SVV08).

The table of contents for the verification sections is derived from the IEEE Standard for Software Verification and Validation Plans (ANSI/IEEE Std 1012-1986). For the test sections it is derived from the IEEE Standard for Software Test Documentation (ANSI/IEEE Std 829-1983).

The relationship between test specifications, test plans, test designs, test cases, test procedures and test results may sometimes be a simple hierarchy, but usually it will not be. Figure 5.1B shows the relationships between the sections and subsections of the SVVP. Sections are shown in boxes. Relationships between sections are shown by lines labelled with the verb in the relationship (e.g. 'contains'). The one-to-one relationships are shown by a plain line and one-to-many relationships are shown by the crow's feet.

[Figure 5.1B is a diagram: the SVVP contains phase plans (SR, AD, DD) and test specifications (UT, IT, ST, AT). Each test specification contains one test plan and one or more (1,n) test designs, test cases, test procedures and test reports. The test plan controls the test designs; the test designs and the test procedures use the test cases; each test procedure outputs test reports.]

Figure 5.1B: Relationships between sections of the SVVP

Figure 5.1B illustrates that a test plan controls the test design process, defining the software items to be tested. Test designs define the features of each software item to be tested, and specify the test cases and test procedures that will be used to test the features. Test cases may be used by many test designs and many test procedures. Each execution of a test procedure produces a new set of test results.

Software verification and validation procedures should be easy to follow, efficient and, wherever possible, reusable in later phases. Poor test definition and record keeping can significantly reduce the maintainability of the software.

The key criterion for deciding the level of documentation of testing is repeatability. Tests should be sufficiently documented to allow repetition by different people, yet still yield the same results for the same software. The level of test documentation depends very much upon the software tools used to support testing. Good testing tools should relieve the developer from much of the effort of documenting tests.

5.2 STYLE

The SVVP should be plain and concise. The document should be clear, consistent and modifiable.

Authors should assume familiarity with the purpose of the software, and not repeat information that is explained in other documents.

5.3 RESPONSIBILITY

The developer is normally responsible for the production of the SVVP. The user may take responsibility for producing the Acceptance Test Specification (SVVP/AT), especially when the software is to be embedded in a larger system.

5.4 MEDIUM

It is usually assumed that the SVVP is a paper document. The SVVP could be distributed electronically to participants with the necessary equipment.


5.5 SERVICE INFORMATION

The SR, AD, DD, UT, IT, ST and AT sections of the SVVP are produced at different times in a software project. Each section should be kept separately under configuration control and contain the following service information:

a - Abstract
b - Table of Contents
c - Document Status Sheet
d - Document Change records made since last issue

5.6 CONTENT OF SVVP/SR, SVVP/AD & SVVP/DD SECTIONS

These sections define the review, proof and tracing activities in the SR, AD and DD phases of the lifecycle. While the SPMP may summarise these activities, the SVVP should provide the detailed information. For example, the SPMP may schedule an AD/R, but this section of the SVVP defines the activities for the whole AD phase review process.

These sections of the SVVP should avoid repeating material from the standards and guidelines and instead define how the procedures will be applied.

ESA PSS-05-0 recommends the following table of contents for the SVVP/SR, SVVP/AD and SVVP/DD sections:

1 Purpose
2 Reference Documents
3 Definitions
4 Verification Overview
  4.1 Organisation
  4.2 Master schedule
  4.3 Resources summary
  4.4 Responsibilities
  4.5 Tools, techniques and methods
5 Verification Administrative Procedures
  5.1 Anomaly reporting and resolution
  5.2 Task iteration policy
  5.3 Deviation policy
  5.4 Control procedures
  5.5 Standards, practices and conventions


6 Verification Activities
  6.1 Tracing¹
  6.2 Formal proofs
  6.3 Reviews
7 Software Verification Reporting

Additional material should be inserted in additional appendices. If there is no material for a section then the phrase 'Not Applicable' should be inserted and the section numbering preserved.

1 PURPOSE

This section should:
• briefly define the purpose of this part of the SVVP, stating the part of the lifecycle to which it applies;
• identify the software project for which the SVVP is written;
• identify the products to be verified, and the specifications that they are to be verified against;
• outline the goals of verification and validation;
• specify the intended readers of this part of the SVVP.

2 REFERENCE DOCUMENTS

This section should provide a complete list of all the applicable and reference documents, such as the ESA PSS-05 series of standards and guides. Each document should be identified by title, author and date. Each document should be marked as applicable or reference. If appropriate, report number, journal name and publishing organisation should be included.

3 DEFINITIONS

This section should provide the definitions of all terms, acronyms, and abbreviations used in the plan, or refer to other documents where the definitions can be found.

¹ Note that in ESA PSS-05-0 Issue 2 this section is called 'traceability matrix template'.


4 VERIFICATION OVERVIEW

This section should describe the organisation, schedule, resources, responsibilities, tools, techniques and methods necessary to perform reviews, proofs and tracing.

4.1 Organisation

This section should describe the organisation of the review, proofs and tracing activities for the phase. Topics that should be included are:
• roles;
• reporting channels;
• levels of authority for resolving problems;
• relationships to the other activities such as project management, development, configuration management and quality assurance.

The description should identify the people associated with the roles. Elsewhere the plan should only refer to roles.

4.2 Master schedule

This section should define the schedule for the review, proofs and tracing activities in the phase.

Reviews of large systems should be broken down by subsystem. In the DD phase, critical design reviews of subsystems should be held when the subsystem is ready, not when all subsystems have been designed.

4.3 Resources summary

This section should summarise the resources needed to perform reviews, proofs and tracing, such as staff, computer facilities and software tools.

4.4 Responsibilities

This section should define the specific responsibilities associated with the roles described in section 4.1.

4.5 Tools, techniques and methods

This section should identify the software tools, techniques and methods used for reviews, proofs and tracing in the phase. Training plans for the tools, techniques and methods may be included.


5 VERIFICATION ADMINISTRATIVE PROCEDURES

5.1 Anomaly reporting and resolution

The Review Item Discrepancy (RID) form is normally used for reporting and resolving anomalies found in documents and code submitted for formal review. The procedure for handling this form is normally described in Section 4.3.2 of the SCMP, which should be referenced here and not repeated. This section should define the criteria for activating the anomaly reporting and resolution process.

5.2 Task iteration policy

This section should define the criteria for deciding whether a task should be repeated when a change has been made. The criteria may include assessments of the scope of a change, the criticality of the function(s) affected, and any quality effects.

5.3 Deviation policy

This section should describe the procedures for deviating from the plan, and define the levels of authorisation required for the approval of deviations. The information required for deviations should include task identification, deviation rationale and the effect on software quality.

5.4 Control procedures

This section should identify the configuration management procedures for the products of review, proofs and tracing. Adequate assurance is required that these products are secure from accidental or deliberate alteration.

5.5 Standards, practices and conventions

This section should identify the standards, practices and conventions that govern review, proof and tracing tasks, including internal organisational standards, practices and policies (e.g. practices for safety-critical software).

6 VERIFICATION ACTIVITIES

This section should describe the procedures for review, proof and tracing activities.


6.1 Tracing

This section should describe the procedures for tracing each part of the input products to the output products, and vice-versa.

6.2 Formal proofs

This section should define or reference the methods and procedures used (if any) for proving theorems about the behaviour of the software.

6.3 Reviews

This section should define or reference the methods and procedures used for technical reviews, walkthroughs, software inspections and audits.

This section should list the reviews, walkthroughs and audits that will take place during the phase and identify the roles of the people participating in them.

This section should not repeat material found in the standards and guides, but should specify project-specific modifications and additions.

7 SOFTWARE VERIFICATION REPORTING

This section should describe how the results of implementing the plan will be documented. Types of reports might include:
• summary report for the phase;
• technical review report;
• walkthrough report;
• audit report.

RIDs should be attached to the appropriate verification report.

5.7 CONTENT OF SVVP/UT, SVVP/IT, SVVP/ST & SVVP/AT SECTIONS

The SVVP contains four sections, one dedicated to each test development phase. These sections are called:
• Unit Test Specification (SVVP/UT);
• Integration Test Specification (SVVP/IT);
• System Test Specification (SVVP/ST);
• Acceptance Test Specification (SVVP/AT).


ESA PSS-05-0 recommends the following table of contents for each test section of the SVVP.

1 Test Plan
  1.1 Introduction
  1.2 Test items
  1.3 Features to be tested
  1.4 Features not to be tested
  1.5 Approach
  1.6 Item pass/fail criteria
  1.7 Suspension criteria and resumption requirements
  1.8 Test deliverables
  1.9 Testing tasks
  1.10 Environmental needs
  1.11 Responsibilities
  1.12 Staffing and training needs
  1.13 Schedule
  1.14 Risks and contingencies
  1.15 Approvals

2 Test Designs (for each test design...)
  2.n.1 Test design identifier
  2.n.2 Features to be tested
  2.n.3 Approach refinements
  2.n.4 Test case identification
  2.n.5 Feature pass/fail criteria

3 Test Case Specifications (for each test case...)
  3.n.1 Test case identifier
  3.n.2 Test items
  3.n.3 Input specifications
  3.n.4 Output specifications
  3.n.5 Environmental needs
  3.n.6 Special procedural requirements
  3.n.7 Intercase dependencies

4 Test Procedures (for each test procedure...)
  4.n.1 Test procedure identifier
  4.n.2 Purpose
  4.n.3 Special requirements
  4.n.4 Procedure steps

5 Test Reports (for each execution of a test procedure...)
  5.n.1 Test report identifier
  5.n.2 Description
  5.n.3 Activity and event entries


1 TEST PLAN

1.1 Introduction

This section should summarise the software items and software features to be tested. A justification of the need for testing may be included.

1.2 Test items

This section should identify the test items. References to other software documents should be supplied to provide information about what the test items are supposed to do, how they work, and how they are operated.

Test items should be grouped according to release number when delivery is incremental.

1.3 Features to be tested

This section should identify all the features and combinations of features that are to be tested. This may be done by referencing sections of requirements or design documents.

References should be precise yet economical, e.g.:
• 'the acceptance tests will cover all requirements in the User Requirements Document except those identified in Section 1.4';
• 'the unit tests will cover all modules specified in the Detailed Design Document except those modules listed in Section 1.4'.

Features should be grouped according to release number when delivery is incremental.

1.4 Features not to be tested

This section should identify all the features and significant combinations of features that are not to be tested, and why.

1.5 Approach

This section should specify the major activities, methods (e.g. structured testing) and tools that are to be used to test the designated groups of features.


Activities should be described in sufficient detail to allow identification of the major testing tasks and estimation of the resources and time needed for the tests. The coverage required should be specified.

1.6 Item pass/fail criteria

This section should specify the criteria to be used to decide whether each test item has passed or failed testing.

1.7 Suspension criteria and resumption requirements

This section should specify the criteria used to suspend all, or a part of, the testing activities on the test items associated with the plan.

This section should specify the testing activities that must be repeated when testing is resumed.

1.8 Test deliverables

This section should identify the items that must be delivered before testing begins, which should include:
• test plan;
• test designs;
• test cases;
• test procedures;
• test input data;
• test tools.

This section should identify the items that must be delivered when testing is finished, which should include:
• test reports;
• test output data;
• problem reports.

1.9 Testing tasks

This section should identify the set of tasks necessary to prepare for and perform testing. This section should identify all inter-task dependencies and any special skills required.


Testing tasks should be grouped according to release number when delivery is incremental.

1.10 Environmental needs

This section should specify both the necessary and desired properties of the test environment, including:
• physical characteristics of the facilities including hardware;
• communications software;
• system software;
• mode of use (i.e. standalone, networked);
• security;
• test tools.

Environmental needs should be grouped according to release number when delivery is incremental.

1.11 Responsibilities

This section should identify the groups responsible for managing, designing, preparing, executing, witnessing, and checking tests.

Groups may include developers, operations staff, user representatives, technical support staff, data administration staff, independent verification and validation personnel and quality assurance staff.

1.12 Staffing and training needs

This section should specify staffing needs according to skill, and identify training options for providing the necessary skills.

1.13 Schedule

This section should include test milestones identified in the software project schedule and all item delivery events, for example:
• programmer delivers unit for integration testing;
• developers deliver system for independent verification.


This section should specify:
• any additional test milestones and state the time required for each testing task;
• the schedule for each testing task and test milestone;
• the period of use for all test resources (e.g. facilities, tools, staff).

1.14 Risks and contingencies

This section should identify the high-risk assumptions of the test plan. It should specify contingency plans for each.

1.15 Approvals

This section should specify the names and titles of all persons who must approve this plan. Alternatively, approvals may be shown on the title page of the plan.

2 TEST DESIGNS

2.n.1 Test Design identifier

The title of this section should specify the test design uniquely. The content of this section should briefly describe the test design.

2.n.2 Features to be tested

This section should identify the test items and describe the features, and combinations of features, that are to be tested.

For each feature or feature combination, a reference to its associated requirements in the item requirement specification (URD, SRD) or design description (ADD, DDD) should be included.

2.n.3 Approach refinements

This section should describe the results of the application of the methods described in the approach section of the test plan. Specifically it may define the:
• module assembly sequence (for unit testing);
• paths through the module logic (for unit testing);
• component integration sequence (for integration testing);
• paths through the control flow (for integration testing);
• types of test (e.g. white-box, black-box, performance, stress etc.).


The description should provide the rationale for test-case selection and the packaging of test cases into procedures. The method for analysing test results should be identified (e.g. compare with expected output, compare with old results, proof of consistency etc.).

The tools required to support testing should be identified.

2.n.4 Test case identification

This section should list the test cases associated with the design and give a brief description of each.

2.n.5 Feature pass/fail criteria

This section should specify the criteria to be used to decide whether the feature or feature combination has passed or failed.

3 TEST CASE SPECIFICATION

3.n.1 Test Case identifier

The title of this section should specify the test case uniquely. The content of this section should briefly describe the test case.

3.n.2 Test items

This section should identify the test items. References to other software documents should be supplied to help understand the purpose of the test items, how they work and how they are operated.

3.n.3 Input specifications

This section should specify the inputs required to execute the test case. File names, parameter values and user responses are possible types of input specification. This section should not duplicate information held elsewhere (e.g. in test data files).

3.n.4 Output specifications

This section should specify the outputs expected from executing the test case relevant to deciding upon pass or failure. File names and system messages are possible types of output specification. This section should not duplicate information held elsewhere (e.g. in log files).


3.n.5 Environmental needs

3.n.5.1 Hardware

This section should specify the characteristics and configurations of the hardware required to execute this test case.

3.n.5.2 Software

This section should specify the system and application software required to execute this test case.

3.n.5.3 Other

This section should specify any other requirements such as special equipment or specially trained personnel.

3.n.6 Special procedural requirements

This section should describe any special constraints on the test procedures that execute this test case.

3.n.7 Intercase dependencies

This section should list the identifiers of test cases that must be executed before this test case. The nature of the dependencies should be summarised.

4 TEST PROCEDURES

4.n.1 Test Procedure identifier

The title of this section should specify the test procedure uniquely. This section should reference the related test design.

4.n.2 Purpose

This section should describe the purpose of this procedure. A reference for each test case the test procedure uses should be given.

4.n.3 Special requirements

This section should identify any special requirements for the execution of this procedure.


4.n.4 Procedure steps

This section should include the steps described in the subsections below as applicable.

4.n.4.1 Log

This section should describe any special methods or formats for logging the results of test execution, the incidents observed, and any other events pertinent to the test.

4.n.4.2 Set up

This section should describe the sequence of actions necessary to prepare for execution of the procedure.

4.n.4.3 Start

This section should describe the actions necessary to begin execution of the procedure.

4.n.4.4 Proceed

This section should describe the actions necessary during the execution of the procedure.

4.n.4.5 Measure

This section should describe how the test measurements will be made.

4.n.4.6 Shut down

This section should describe the actions necessary to suspend testing when interruption is forced by unscheduled events.

4.n.4.7 Restart

This section should identify any procedural restart points and describe the actions necessary to restart the procedure at each of these points.

4.n.4.8 Stop

This section should describe the actions necessary to bring execution to an orderly halt.


4.n.4.9 Wrap up

This section should describe the actions necessary to terminate testing.

4.n.4.10 Contingencies

This section should describe the actions necessary to deal with anomalous events that may occur during execution.

5 TEST REPORTS

5.n.1 Test Report identifier

The title of this section should specify the test report uniquely.

5.n.2 Description

This section should identify the items being tested, including their version numbers. The attributes of the environment in which testing was conducted should be identified.

5.n.3 Activity and event entries

This section should define the start and end time of each activity or event. The author should be identified.

One or more of the descriptions in the following subsections should be included.

5.n.3.1 Execution description

This section should identify the test procedure being executed and supply a reference to its specification.

The people who witnessed each event should be identified.

5.n.3.2 Procedure results

For each execution, this section should record the visually observable results (e.g. error messages generated, aborts and requests for operator action). The location of any output, and the result of the test, should be recorded.


5.n.3.3 Environmental information

This section should record any environmental conditions specific to this entry, particularly deviations from the normal.

5.8 EVOLUTION

5.8.1 UR phase

By the end of the UR review, the SR phase section of the SVVP must be produced (SVVP/SR) (SVV09). The SVVP/SR must define how to trace user requirements to software requirements so that each software requirement can be justified (SVV10). It should describe how the SRD is to be evaluated by defining the review procedures. The SVVP/SR may include specifications of the tests to be done with prototypes.

The initiator(s) of the user requirements should lay down the principles upon which the acceptance tests should be based. The developer must construct an acceptance test plan in the UR phase and document it in the SVVP (SVV11). This plan should define the scope, approach, resources and schedule of acceptance testing activities.

5.8.2 SR phase

During the SR phase, the AD phase section of the SVVP must be produced (SVVP/AD) (SVV12). The SVVP/AD must define how to trace software requirements to components, so that each software component can be justified (SVV13). It should describe how the ADD is to be evaluated by defining the review procedures. The SVVP/AD may include specifications of the tests to be done with prototypes.

During the SR phase, the developer analyses the user requirements and may insert 'acceptance testing requirements' in the SRD. These requirements constrain the design of the acceptance tests. This must be recognised in the statement of the purpose and scope of the acceptance tests.

The planning of the system tests should proceed in parallel with the definition of the software requirements. The developer may identify 'verification requirements' for the software. These are additional constraints on the unit, integration and system testing activities. These requirements are also stated in the SRD.


The developer must construct a system test plan in the SR phase and document it in the SVVP (SVV14). This plan should define the scope, approach, resources and schedule of system testing activities.

5.8.3 AD phase

During the AD phase, the DD phase section of the SVVP must be produced (SVVP/DD) (SVV15). The SVVP/DD must describe how the DDD and code are to be evaluated by defining the review and traceability procedures (SVV16).

The developer must construct an integration test plan in the AD phase and document it in the SVVP (SVV17). This plan should describe the scope, approach, resources and schedule of intended integration tests. Note that the items to be integrated are the software components described in the ADD.

5.8.4 DD phase

In the DD phase, the SVVP sections on testing are developed as the detailed design and implementation information becomes available.

The developer must construct a unit test plan in the DD phase and document it in the SVVP (SVV18). This plan should describe the scope, approach, resources and schedule of the intended unit tests.

The test items are the software components described in the DDD.

The unit, integration, system and acceptance test designs must be described in the SVVP (SVV19). These should specify the details of the test approach for a software feature, or combination of software features, and identify the associated test cases and test procedures.

The unit, integration, system and acceptance test cases must be described in the SVVP (SVV20). These should specify the inputs, predicted results and execution conditions for a test case.

The unit, integration, system and acceptance test procedures must be described in the SVVP (SVV21). These should provide a step-by-step description of how to carry out each test case.

The unit, integration, system and acceptance test reports must be contained in the SVVP (SVV22).


APPENDIX A
GLOSSARY

A.1 LIST OF TERMS

Definitions of SVV terms are taken from the IEEE Standard Glossary of Software Engineering Terminology, ANSI/IEEE Std 610.12-1990 [Ref 6]. If no suitable definition is found in the glossary, the definition is taken from a referenced text or the Concise Oxford Dictionary.

Acceptance testing

Formal testing conducted to determine whether or not a system satisfies its acceptance criteria (i.e. the user requirements) to enable the customer (i.e. initiator) to determine whether or not to accept the system [Ref 6].

Assertion

A logical expression specifying a program state that must exist or a set of conditions that program variables must satisfy at a particular point during program execution [Ref 6].

Audit

An independent examination of a work product or set of work products to assess compliance with specifications, baselines, standards, contractual agreements or other criteria [Ref 6].

Back-to-back test

Back-to-back tests execute two or more variants of a program with the same inputs. The outputs are compared, and any discrepancies are analysed to check whether or not they indicate a fault [Ref 6].

Comparator

A software tool that compares two computer programs, files, or sets of data to identify commonalities and differences [Ref 6].

Component

One of the parts that make up a system [Ref 6]. A component may be a module, a unit or a subsystem. This definition is similar to that used in Reference 1 and more general than the one in Reference 5.


Critical design review

A review conducted to verify that the detailed design of one or more configuration items satisfies specified requirements [Ref 6]. Critical design reviews must be held in the DD phase to review the detailed design of a major component to certify its readiness for implementation (DD10).

Decision table

A table used to show sets of conditions and the actions resulting from them [Ref 6].

Defect

An instance in which a requirement is not satisfied [Ref 22].

Driver

A software module that invokes and, perhaps, controls and monitors the execution of one or more other software modules [Ref 6].

Dynamic analysis

The process of evaluating a computer program based upon its behaviour during execution [Ref 6].

Formal

Used to describe activities that have explicit and definite rules of procedure (e.g. formal review) or reasoning (e.g. formal method and formal proof).

Formal review

A review that has explicit and definite rules of procedure, such as a technical review, walkthrough or software inspection [Ref 1].

Inspection

A static analysis technique that relies on visual examination of development products to detect errors, violations of development standards, and other problems [Ref 6]. Same as 'software inspection'.

Integration

The process of combining software elements, hardware elements or both into an overall system [Ref 6].


Integration testing

Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them [Ref 6]. In ESA PSS-05-0, the lowest level software elements tested during integration are the lowest level components of the architectural design.

Module

A program unit that is discrete and identifiable with respect to compiling, combining with other units, and loading; for example, the input or output from a compiler or linker; also a logically separable part of a program [Ref 6].

Regression test

Selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements [Ref 6].

Review

A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval [Ref 6].

Semantics

The relationships of symbols and groups of symbols to their meanings in a given language [Ref 6].

Semantic analyser

A software tool that substitutes algebraic symbols into the program variables and presents the results as algebraic formulae [Ref 17].

Static analysis

The process of evaluating a system or component based on its form, structure, content or documentation [Ref 6].

Stress test

A test that evaluates a system or software component at or beyond the limits of its specified requirements [Ref 6].


System

A collection of components organised to accomplish a specific function or set of functions [Ref 6]. A system is composed of one or more subsystems.

Subsystem

A secondary or subordinate system within a larger system [Ref 6]. A subsystem is composed of one or more units.

System testing

Testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements [Ref 6].

Test

An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component [Ref 6].

Test case

A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specified requirement [Ref 6].

Test design

Documentation specifying the details of the test approach for a software feature or combination of software features and identifying associated tests [Ref 6].

Test case generator

A software tool that accepts as input source code, test criteria, specifications, or data structure definitions; uses these inputs to generate test input data; and, sometimes, determines the expected results [Ref 6].

Test coverage

The degree to which a given test or set of tests addresses all specified requirements for a given system or component [Ref 6]; the proportion of branches in the logic that have been traversed during testing.


Test harness

A software module used to invoke a module under test and, often, provide test inputs, control and monitor execution, and report test results [Ref 6].

Test plan

A document prescribing the scope, approach, resources, and schedule of intended test activities [Ref 6].

Test procedure

Detailed instructions for the setup, operation, and evaluation of the results for a given test [Ref 6].

Test report

A document that describes the conduct and results of the testing carried out for a system or component [Ref 6].

Testability

The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met [Ref 6].

Tool

A computer program used in the development, testing, analysis, or maintenance of a program or its documentation [Ref 6].

Tracing

The act of establishing a relationship between two or more products of the development process; for example, to establish the relationship between a given requirement and the design element that implements that requirement [Ref 6].

Unit

A separately testable element specified in the design of a computer software component [Ref 6]. A unit is composed of one or more modules.


Validation

The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements [Ref 6].

Verification

The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services or documents conform to specified requirements [Ref 5].

Walkthrough

A static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a segment of documentation or code, and the participants ask questions and make comments about possible errors, violation of development standards, and other problems [Ref 6].


A.2 LIST OF ACRONYMS

AD      Architectural Design
AD/R    Architectural Design Review
ADD     Architectural Design Document
ANSI    American National Standards Institute
AT      Acceptance Test
BSSC    Board for Software Standardisation and Control
CASE    Computer Aided Software Engineering
DCR     Document Change Record
DD      Detailed Design and production
DD/R    Detailed Design and production Review
DDD     Detailed Design and production Document
ESA     European Space Agency
ICD     Interface Control Document
IEEE    Institute of Electrical and Electronics Engineers
IT      Integration Test
PSS     Procedures, Specifications and Standards
QA      Quality Assurance
RID     Review Item Discrepancy
SCMP    Software Configuration Management Plan
SCR     Software Change Request
SMR     Software Modification Report
SPR     Software Problem Report
SR      Software Requirements
SR/R    Software Requirements Review
SRD     Software Requirements Document
ST      System Test
SUT     Software Under Test
SVV     Software Verification and Validation
SVVP    Software Verification and Validation Plan
UR      User Requirements
UR/R    User Requirements Review
URD     User Requirements Document
UT      Unit Test


APPENDIX B
REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0 Issue 2, February 1991.
2. Guide to the Software Requirements Definition Phase, ESA PSS-05-03, Issue 1, October 1991.
3. Guide to the Detailed Design and Production Phase, ESA PSS-05-05, Issue 1, May 1992.
4. Guide to Software Configuration Management, ESA PSS-05-09, Issue 1, November 1992.
5. ANSI/ASQC A3-1978, Quality Systems Terminology.
6. IEEE Standard Glossary of Software Engineering Terminology, ANSI/IEEE Std 610.12-1990.
7. IEEE Standard for Software Test Documentation, ANSI/IEEE Std 829-1983.
8. IEEE Standard for Software Unit Testing, ANSI/IEEE Std 1008-1987.
9. IEEE Standard for Software Verification and Validation Plans, ANSI/IEEE Std 1012-1986.
10. IEEE Standard for Software Reviews and Audits, ANSI/IEEE Std 1028-1988.
11. Managing the Software Process, Watts S. Humphrey, SEI Series in Software Engineering, Addison-Wesley, August 1990.
12. Software Testing Techniques, B. Beizer, Van Nostrand Reinhold, 1983.
13. Structured Testing: A Software Testing Methodology Using the Cyclomatic Complexity Metric, T.J. McCabe, National Bureau of Standards Special Publication 500-99, 1982.
14. The Art of Software Testing, G.J. Myers, Wiley-Interscience, 1979.
15. Design Complexity Measurement and Testing, T.J. McCabe and C.W. Butler, Communications of the ACM, Vol 32, No 12, December 1989.
16. Software Engineering, I. Sommerville, Addison-Wesley, 1992.
17. The STARTs Guide - a guide to methods and software tools for the construction of large real-time systems, NCC Publications, 1987.
18. Engineering software under statistical quality control, R.H. Cobb and H.D. Mills, IEEE Software, 7 (6), 1990.
19. Dynamic Testing Tools, a Detailed Product Evaluation, S. Norman, Ovum, 1992.
20. Managing Computer Projects, R. Gibson, Prentice-Hall, 1992.
21. Design and Code Inspections to Reduce Errors in Program Development, M.E. Fagan, IBM Systems Journal, No 3, 1976.
22. Advances in Software Inspections, M.E. Fagan, IEEE Transactions on Software Engineering, Vol. SE-12, No. 7, July 1986.
23. The Cleanroom Approach to Quality Software Development, Dyer, Wiley, 1992.
24. System safety requirements for ESA space systems and related equipment, ESA PSS-01-40 Issue 2, September 1988, ESTEC.


APPENDIX C
MANDATORY PRACTICES

This appendix is repeated from ESA PSS-05-0, appendix D.10.

SVV01 Forward traceability requires that each input to a phase shall be traceable to an output of that phase.
SVV02 Backward traceability requires that each output of a phase shall be traceable to an input to that phase.
SVV03 Functional and physical audits shall be performed before the release of the software.
SVV04 All software verification and validation activities shall be documented in the Software Verification and Validation Plan (SVVP).

The SVVP shall ensure that the verification activities:
SVV05 • are appropriate for the degree of criticality of the software;
SVV06 • meet the verification and acceptance testing requirements (stated in the SRD);
SVV07 • verify that the product will meet the quality, reliability, maintainability and safety requirements (stated in the SRD);
SVV08 • are sufficient to assure the quality of the product.

SVV09 By the end of the UR review, the SR phase section of the SVVP shall be produced (SVVP/SR).
SVV10 The SVVP/SR shall define how to trace user requirements to software requirements, so that each software requirement can be justified.
SVV11 The developer shall construct an acceptance test plan in the UR phase and document it in the SVVP.
SVV12 During the SR phase, the AD phase section of the SVVP shall be produced (SVVP/AD).
SVV13 The SVVP/AD shall define how to trace software requirements to components, so that each software component can be justified.
SVV14 The developer shall construct a system test plan in the SR phase and document it in the SVVP.
SVV15 During the AD phase, the DD phase section of the SVVP shall be produced (SVVP/DD).
SVV16 The SVVP/AD shall describe how the DDD and code are to be evaluated by defining the review and traceability procedures.
SVV17 The developer shall construct an integration test plan in the AD phase and document it in the SVVP.
SVV18 The developer shall construct a unit test plan in the DD phase and document it in the SVVP.
SVV19 The unit, integration, system and acceptance test designs shall be described in the SVVP.
SVV20 The unit, integration, system and acceptance test cases shall be described in the SVVP.
SVV21 The unit, integration, system and acceptance test procedures shall be described in the SVVP.
SVV22 The unit, integration, system and acceptance test reports shall be described in the SVVP.


APPENDIX D
INDEX


acceptance test, 6, 40
acceptance test case, 41
acceptance test design, 40
acceptance test plan, 40, 92
acceptance test procedure, 42
acceptance test result, 42
acceptance testing requirement, 92
AD phase, 93
AD16, 7
ANSI/IEEE Std 1012-1986, 76
ANSI/IEEE Std 1028-1988, 7, 13, 16, 44
ANSI/IEEE Std 829-1983, 76
assertion, 49
audit, 7, 15
backward traceability, 18
baseline method, 25, 54
black-box integration test, 32
branch testing, 54
cause-effect graph, 27
cleanroom method, 49
comparator, 69, 73
control flow analysis, 64
control flow testing, 60
control graph, 51
coverage analyser, 68, 72
criticality, 76
cyclomatic complexity, 23, 51, 68
data-use analysis, 64
DD06, 23, 50, 54
DD07, 30, 31, 32, 50
DD08, 30, 31, 50, 60
DD09, 33
DD10, 7, 2
DD11, 7
debugger, 31, 68, 72
debugging, 25
decision table, 27, 34
design integration testing method, 31, 60
diagnostic code, 25
driver, 24, 29
dynamic analyser, 68, 73
equivalence classes, 26
equivalence partitioning, 68
error guessing, 28, 34
formal method, 48
formal proof, 19
forward traceability, 18
function test, 26, 34
functional audit, 15
informal review, 6
information flow analysis, 64
inspection, 43
instrumentation, 68
integration complexity, 68
integration complexity metric, 30
integration test, 6
integration test case, 32
integration test design, 30
integration test plan, 93
integration test procedure, 32
integration test result, 33
interface analysis, 64
interface test, 35
life cycle verification approach, 5
logic test, 25
LOTOS, 48
maintainability test, 37
miscellaneous test, 38
operations test, 35
partition, 26
path test, 25
performance analyser, 68, 69, 73
performance test, 34
physical audit, 15
portability test, 37
profiler, 69
program verification, 49
range-bound analysis, 64
regression test, 38
reliability test, 37
resource test, 36
reverse engineering tools, 65
review, 6
safety test, 38
SCM01, 74
script, 68
security test, 36
semantic analyser, 67
semantic analysis, 67
sensitising the path, 25
software inspection, 43
software verification and validation, 3
SR09, 7
state-transition table, 27, 34
static analyser, 68, 69
static analysers, 64
stress test, 34, 39
structure test, 25
structured integration testing, 30, 31, 55, 68
structured testing, 23, 25, 50, 68
SVV, 3
SVV01, 18
SVV02, 18
SVV03, 15
SVV04, 75
SVV05, 76
SVV06, 76
SVV07, 76
SVV08, 76
SVV09, 92
SVV10, 92
SVV11, 40, 92
SVV12, 92
SVV13, 92
SVV14, 33, 93
SVV15, 93
SVV16, 93
SVV17, 29, 93
SVV18, 22, 93
SVV19, 23, 30, 33, 40, 93
SVV20, 28, 32, 39, 41, 93
SVV21, 28, 32, 39, 42, 93
SVV22, 93
system monitoring, 69
system test, 6, 33
system test design, 33
system test plan, 33, 93
system test result, 40
technical review, 7
test case generator, 70
test case generators, 68
test harness, 68, 70
test manager, 69, 74
testability, 20, 50, 56
testing, 19
thread testing, 32
tools, 63
traceability matrix, 18
tracing, 18
tracing tool, 65
unit test, 5
unit test case, 28
unit test design, 23
unit test plan, 22, 93
unit test procedure, 28
unit test result, 29
UR phase, 92
UR08, 7
validation, 4
VDM, 48
verification, 4
volume test, 39
walkthrough, 7, 12
white-box integration test, 31
white-box unit test, 25
Z, 48


ESA PSS-05-11 Issue 1 Revision 1
March 1995

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to software quality assurance

Prepared by:
ESA Board for Software Standardisation and Control (BSSC)

Approved by:
The Inspector General, ESA

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: ESA PSS-05-11 Guide to software quality assurance

2. ISSUE 3. REVISION 4. DATE 5. REASON FOR CHANGE

1 0 1993 First issue

1 1 1995 Minor updates for publication

Issue 1 Revision 1 approved, May 1995
Board for Software Standardisation and Control
M. Jones and U. Mortensen, co-chairmen

Issue 1 approved, July 13th 1993
Telematics Supervisory Board

Issue 1 approved by:
The Inspector General, ESA

Published by ESA Publications Division,
ESTEC, Noordwijk, The Netherlands.
Printed in the Netherlands.
ESA Price code: E1
ISSN 0379-4059

Copyright © 1995 by European Space Agency


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION..................................................................................1
  1.1 PURPOSE.............................................................................................................1
  1.2 OVERVIEW...........................................................................................................1

CHAPTER 2 SOFTWARE QUALITY ASSURANCE.................................................3
  2.1 INTRODUCTION..................................................................................................3
  2.2 USER REQUIREMENTS DEFINITION PHASE.....................................................6
    2.2.1 Checking technical activities........................................................................6
    2.2.2 Checking the management plans................................................................7
      2.2.2.1 Software project management plan.....................................................8
      2.2.2.2 Software configuration management plan...........................................9
      2.2.2.3 Software verification and validation plan...........................................11
  2.3 SOFTWARE REQUIREMENTS DEFINITION PHASE.........................................12
    2.3.1 Checking technical activities......................................................................12
    2.3.2 Checking the management plans..............................................................14
      2.3.2.1 Software project management plan...................................................14
      2.3.2.2 Software configuration management plan.........................................14
      2.3.2.3 Software verification and validation plan...........................................15
  2.4 ARCHITECTURAL DESIGN PHASE...................................................................15
    2.4.1 Checking technical activities......................................................................15
    2.4.2 Checking the management plans..............................................................17
      2.4.2.1 Software project management plan...................................................17
      2.4.2.2 Software configuration management plan.........................................18
      2.4.2.3 Software verification and validation plan...........................................20
  2.5 DETAILED DESIGN AND PRODUCTION PHASE..............................................21
    2.5.1 Checking technical activities......................................................................21
      2.5.1.1 Detailed design...................................................................................21
      2.5.1.2 Coding.................................................................................................22
      2.5.1.3 Reuse..................................................................................................22
      2.5.1.4 Testing.................................................................................................23
      2.5.1.5 Formal review......................................................................................25
    2.5.2 Checking the management plans..............................................................26
      2.5.2.1 Software project management plan...................................................27
      2.5.2.2 Software configuration management plan.........................................27
      2.5.2.3 Software verification and validation plan...........................................28
  2.6 TRANSFER PHASE............................................................................................28

CHAPTER 3 THE SOFTWARE QUALITY ASSURANCE PLAN.............................31
  3.1 INTRODUCTION................................................................................................31
  3.2 STYLE.................................................................................................................31
  3.3 RESPONSIBILITY...............................................................................................31
  3.4 MEDIUM.............................................................................................................32
  3.5 CONTENT..........................................................................................................32
  3.6 EVOLUTION.......................................................................................................39
    3.6.1 UR phase....................................................................................................39
    3.6.2 SR phase....................................................................................................40
    3.6.3 AD phase....................................................................................................40
    3.6.4 DD phase....................................................................................................40

APPENDIX A GLOSSARY....................................................................................A-1
APPENDIX B REFERENCES................................................................................B-1
APPENDIX C MANDATORY PRACTICES...........................................................C-1
APPENDIX D INDEX.............................................................................................D-1


PREFACE

This document is one of a series of guides to software engineering produced by the Board for Software Standardisation and Control (BSSC), of the European Space Agency. The guides contain advisory material for software developers conforming to ESA's Software Engineering Standards, ESA PSS-05-0. They have been compiled from discussions with software engineers, research of the software engineering literature, and experience gained from the application of the Software Engineering Standards in projects.

Levels one and two of the document tree at the time of writing are shown in Figure 1. This guide, identified by the shaded box, provides guidance about implementing the mandatory requirements for software quality assurance described in the top level document ESA PSS-05-0.

[Figure 1 shows the two-level document tree. Level 1: ESA PSS-05-0, ESA Software Engineering Standards. Level 2: PSS-05-01 Guide to the Software Engineering Standards, PSS-05-02 UR Guide, PSS-05-03 SR Guide, PSS-05-04 AD Guide, PSS-05-05 DD Guide, PSS-05-06 TR Guide, PSS-05-07 OM Guide, PSS-05-08 SPM Guide, PSS-05-09 SCM Guide, PSS-05-10 SVV Guide, and PSS-05-11 SQA Guide (this guide).]

Figure 1: ESA PSS-05-0 document tree

The Guide to the Software Engineering Standards, ESA PSS-05-01, contains further information about the document tree. The interested reader should consult this guide for current information about the ESA PSS-05-0 standards and guides.

The following past and present BSSC members have contributed to the production of this guide: Carlo Mazza (chairman), Gianfranco Alvisi, Michael Jones, Bryan Melton, Daniel de Pablo and Adriaan Scheffer.


The BSSC wishes to thank Jon Fairclough for his assistance in the development of the Standards and Guides, and all those software engineers in ESA and Industry who have made contributions.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat
Attention of Mr C Mazza
ESOC
Robert Bosch Strasse 5
D-64293 Darmstadt
Germany

BSSC/ESTEC Secretariat
Attention of Mr B Melton
ESTEC
Postbus 299
NL-2200 AG Noordwijk
The Netherlands


CHAPTER 1
INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards that apply to all deliverable software implemented for the European Space Agency (ESA), either in house or by industry [Ref 1].

ESA PSS-05-0 requires that all software projects assure that products and procedures conform to standards and plans. This is called ‘Software Quality Assurance' (SQA). Projects must define their software quality assurance activities in a Software Quality Assurance Plan (SQAP)¹.

This guide defines and explains what software quality assurance is, provides guidelines on how to do it, and defines in detail what a Software Quality Assurance Plan should contain.

Everyone who is concerned with software quality should read this guide, i.e. software project managers, software engineers and software quality assurance personnel.

1.2 OVERVIEW

Chapter 2 contains a general discussion of the principles of Software Quality Assurance, expanding upon ESA PSS-05-0. Chapter 3 describes how to write the SQAP, in particular how to fill out the document template.

All the mandatory practices in ESA PSS-05-0 that apply to software quality assurance are repeated in this document. The identifier of the practice in parentheses marks a repetition. Practices that apply to other chapters of ESA PSS-05-0 are referenced in the same way. This document contains no new mandatory practices.

¹ Where ESA PSS-01 series documents are applicable, ESA PSS-01-21 ‘Software Product Assurance Requirements for ESA Space Systems' also applies [Ref 9].


CHAPTER 2
SOFTWARE QUALITY ASSURANCE

2.1 INTRODUCTION

ESA PSS-05-0 defines Software Quality Assurance (SQA) as a ‘planned and systematic pattern of all actions necessary to provide adequate confidence that the item or product conforms to established technical requirements'. SQA does this by checking that:
• plans are defined according to standards;
• procedures are performed according to plans;
• products are implemented according to standards.

A procedure defines how an activity will be conducted. Procedures are defined in plans, such as a Software Configuration Management Plan. A product is a deliverable to a customer. Software products are code, user manuals and technical documents, such as an Architectural Design Document. Examples of product standards are design and coding standards, and the standard document templates in ESA PSS-05-0.

SQA is not the only checking activity in a project. Whereas SQA checks procedures against plans and output products against standards, Software Verification and Validation (SVV) checks output products against input products. Figure 2.1A illustrates the difference.

[Figure 2.1A shows a development activity transforming input products into output products. SVV checks output products against input products, producing SVV reports; SQA checks output products against standards, and procedures against plans, producing SQA reports.]

Figure 2.1A: Differences between SQA and SVV


Complete checking is never possible, and the amount required depends upon the type of project. All projects must do some checks. Projects developing safety-critical software should be checked more rigorously.

SQA must plan what checks to do early in the project. The most important selection criterion for software quality assurance planning is risk. Common risk areas in software development are novelty, complexity, staff capability, staff experience, manual procedures and organisational maturity.

SQA staff should concentrate on those items that have a strong influence on product quality. They should check as early as possible that the:
• project is properly organised, with an appropriate life cycle;
• development team members have defined tasks and responsibilities;
• documentation plans are implemented;
• documentation contains what it should contain;
• documentation and coding standards are followed;
• standards, practices and conventions are adhered to;
• metric data is collected and used to improve products and processes;
• reviews and audits take place and are properly conducted;
• tests are specified and rigorously carried out;
• problems are recorded and tracked;
• projects use appropriate tools, techniques and methods;
• software is stored in controlled libraries;
• software is stored safely and securely;
• software from external suppliers meets applicable standards;
• proper records are kept of all activities;
• staff are properly trained;
• risks to the project are minimised.

Project management is responsible for the organisation of SQA activities, the definition of SQA roles and the allocation of staff to those roles.


Within a project, different groups have their own characteristic requirements for SQA. Project management needs to know that the software is built according to the plans and procedures it has defined. Development personnel need to know that their work is of good quality and meets the standards. Users should get visibility of SQA activities, so that they will be assured that the product will be fit for its purpose.

The effective management of a project depends upon the control loop shown in Figure 2.1B.

[Figure 2.1B shows the management control loop. User requirements and standards drive the ‘make plans' activity, which produces the SPMP, SCMP, SVVP and SQAP for control of the ‘make products' activity; the resulting products and reports are fed back for monitoring and replanning.]

Figure 2.1B: Management control loop

SQA staff make a critical contribution to both control and monitoring activities in the management of a project by:
• writing the SQAP;
• reviewing the SPMP, SCMP and SVVP;
• advising project management on how to apply the standards to planning;
• advising development staff on how to apply the standards to production tasks;
• reporting to the development organisation's management about the implementation of plans and adherence to standards.

The project manager supervises the work of the development staff, and normally carries out the SQA role in small projects (two man-years or less). In large projects (twenty man-years or more), specialist personnel should be dedicated to the SQA role because of the amount of work required. It is normal for the SQA staff to report to both the project manager and the corporate SQA manager.

On some projects, SQA functions may be performed in parallel by a separate team, independent of the development organisation, which reports directly to the user or customer. Such teams should have their own SQAP.

In a multi-contractor project, the SQA staff of the prime contractor oversee the activities of the subcontractors by participating in reviews and conducting audits.

The rest of this chapter discusses the software quality assurance activities required in each life cycle phase. As verifying conformance to ESA PSS-05-0 is the primary SQA task, each section discusses all the mandatory practices for the phase from the SQA point of view. Each practice is marked in parentheses and indexed for ease of reference. Management practices that are of special importance in the phase are also included. Each section is therefore an annotated SQA checklist for a life cycle phase. The guidelines should be used to prepare the SQAP for each phase.

In each phase, SQA staff should prepare a report of their activities in time for the formal review (SR/R, AD/R, DD/R). The report should cover all the activities described in the SQAP and should contain recommendations about the readiness to proceed. SQA may also prepare interim reports during the phase to bring concerns to the attention of management.

2.2 USER REQUIREMENTS DEFINITION PHASE

Software quality assurance staff should be involved in a software development project at the earliest opportunity. They should check the user requirements and plans for conformance to standards. These documents have a strong influence on the whole development, and it is extremely cost-effective for any problems in them to be identified and corrected as early as possible.

2.2.1 Checking technical activities

The user requirements document is the users' input to the project. The users are responsible for it (UR01), and SQA should not permit a software development project to start until it has been produced (UR10, UR11).


SQA staff should check the URD for conformance to standards by reading the whole document. They should look for a general description of the software (UR12). This must contain a step-by-step account of what the user wants the software to do (UR14). The URD must also contain specific requirements with identifiers, need attributes, priority attributes and references to sources (UR02, UR03, UR04 and UR05). The URD should contain all the known user requirements (UR13). To confirm completeness, SQA should look for evidence of requirements capture activities such as surveys and interviews.

SQA should check that users have stated in the URD all the constraints they want to impose on the software, such as portability, availability and usability (UR15). The URD must also describe all the external interfaces, or reference them in Interface Control Documents (UR16). External interfaces constrain the design.

The user requirements must be verifiable (UR06). The acceptance test plan defines the scope and approach towards validation, and SQA staff should check that the plan will allow the user requirements to be validated. This plan is documented in the Software Verification and Validation Plan, Acceptance Test section (SVVP/AT).

An important responsibility of SQA staff is to check that the user has described the consequences of losses of availability or breaches of security (UR07). This is needed if the developers are to fully appreciate the criticality of each function.

At the end of the phase the URD is reviewed at the formal user requirements review (UR08). SQA staff should take part in the review, and check that the proper procedures are observed. The review may decide that some requirements should not be made applicable. Such requirements must be clearly flagged in the URD (UR09). They should not be deleted, as they indicate to developers where future growth potential may be needed.

2.2.2 Checking the management plans

Four plans must be produced by the developers by the end of the user requirements review to define how the development will be managed in the SR phase. These are the:
• Software Project Management Plan (SPMP/SR) (SPM02);
• Software Configuration Management Plan (SCMP/SR) (SCM42);
• Software Verification and Validation Plan (SVVP/SR) (SVV09);
• Software Quality Assurance Plan (SQAP/SR) (SQA03).


SQA staff must produce the SQAP/SR. They also need to review the other plans to check that they conform to standards. The SQAP/SR also outlines SQA activities for the whole of the project.

The same structure is used for each plan in every phase of the project. The importance of different parts of the structure varies with the phase. The rest of this section picks out the parts of each plan that are important for the SQA activities in the SR phase.

2.2.2.1 Software project management plan

Every project must produce a software project management plan (SPM01). The plan should declare the objectives of the project. These may take the form of statements of deliverables and delivery dates, or of statements of required functionality, performance and quality. The objectives should be prioritised. SQA staff should examine the statement of objectives and check that all plans are consistent with them.

The plan should describe all the SR phase activities and contain a precise estimate of the SR phase cost (SPM04). The plan should outline activities in the remaining phases (SPM03) and estimate the total cost of the project.

The plan should define the life cycle approach to be used in the project, such as waterfall, incremental delivery or evolutionary. The choice of life cycle approach is crucial to the success of the project. SQA should check that the life cycle approach is appropriate and has been correctly tailored to the project.

A ‘process model', based upon the selected life cycle approach, should be described in the plan. The process model defines all the major activities. For each activity it should define:
• entry criteria;
• inputs;
• tasks;
• outputs;
• exit criteria.

SQA staff should check the model for consistency. Common problems are documents with no destination, impossible entry criteria and the absence of exit criteria (e.g. successful review).
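Such consistency checks are mechanical enough to automate. The following is a minimal sketch, assuming a simple in-house representation of the process model; the class and field names are illustrative only, not taken from the standard.

    from dataclasses import dataclass, field

    @dataclass
    class Activity:
        name: str
        inputs: set = field(default_factory=set)       # documents consumed
        outputs: set = field(default_factory=set)      # documents produced
        exit_criteria: list = field(default_factory=list)

    def check_model(activities, deliverables):
        """Report two common process-model problems: outputs with no
        destination, and activities without exit criteria."""
        consumed = set()
        for a in activities:
            consumed |= a.inputs
        problems = []
        for a in activities:
            for doc in a.outputs - consumed - deliverables:
                problems.append(f"{a.name}: output '{doc}' has no destination")
            if not a.exit_criteria:
                problems.append(f"{a.name}: no exit criteria defined")
        return problems

    acts = [Activity("SR phase", {"URD"}, {"SRD", "draft ICD"}, ["SR/R passed"]),
            Activity("AD phase", {"SRD"}, {"ADD"}, [])]
    print(check_model(acts, deliverables={"ADD"}))
    # ["SR phase: output 'draft ICD' has no destination",
    #  "AD phase: no exit criteria defined"]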


SQA should check that the plan is sufficient to minimise the risks on the project. For example, prototyping is an important risk reduction activity, especially when the project is developing novel software.

SQA should check that the project management plan is realistic. The plan should explicitly declare the:
• assumptions made in planning (e.g. feasibility);
• dependencies upon external events (e.g. supplier delivery);
• constraints on the plan (e.g. availability of staff).

All plans make assumptions, have dependencies, and are subject to constraints. SQA should check that the assumptions, dependencies and constraints have been addressed in the risk analysis.

Quantitative measures (metrics) are important for evaluating project performance [Ref 5]. SQA should check that any metrics proposed in plans are appropriate.

2.2.2.2 Software configuration management plan

SQA need to check that the development of the software is properly controlled. Each project defines the control procedures in the SCMP before production starts (SCM41). SQA should verify that they are properly defined and carried out. Examples of poor software configuration management planning are:
• no SCMP;
• an incomplete SCMP;
• no-one with overall responsibility for SCM;
• no identification conventions;
• no change control procedures;
• inefficient change control procedures;
• no approval procedures;
• no storage procedures;
• no mechanisms for tracking changes.

Documents are the primary output of the SR phase. The software configuration management plan for the phase must describe the documentation configuration management procedures. These should be flexible enough to be usable throughout the project. Key aspects that must be addressed are:
• document identification;
• document storage;
• document change control;
• document status accounting.

All relevant project documentation must be uniquely identified (SCM06). Document identifiers should include the company name, the project name, the document type, the document number and/or name, and the document version number (SCM07, SCM08, SCM09, SCM10). The identification scheme must be extensible (SCM11). Every page of the document should be marked with the identifier. SQA staff should check that all project documents have been included in the configuration identification scheme. One way to do this is to examine the process model and decide what kind of documents are input to and output from each activity in the model.
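As an illustration only, such a scheme can be enforced mechanically. The identifier format below is a hypothetical project convention satisfying these practices; it is not mandated by ESA PSS-05-0, and the company and project names are invented.

    import re

    # company-project-doctype-number-issue.revision, e.g. ACME-ROSEX-SRD-001-1.0
    ID_PATTERN = re.compile(
        r"^[A-Z]+-[A-Z0-9]+-"                       # company, project
        r"(URD|SRD|ADD|DDD|SPMP|SCMP|SVVP|SQAP)-"   # document type
        r"\d{3}-\d+\.\d+$"                          # number, issue.revision
    )

    def check_identifier(doc_id):
        """True if the identifier conforms to the project convention."""
        return ID_PATTERN.match(doc_id) is not None

    print(check_identifier("ACME-ROSEX-SRD-001-1.0"))   # True
    print(check_identifier("requirements v2"))          # False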

Multiple versions of documents will be generated during a project. A copy of the current issue of every project document must be stored in the master library. Another copy must be archived. SQA staff check that master and archive libraries are complete by performing physical audits.

ESA PSS-05-0 defines the Review Item Discrepancy (RID) procedure for document change control (SCM29). This procedure must be used for reporting problems in documents that undergo formal review. SQA should confirm that all problems discovered during formal review are reported on RID forms and processed by the review.

Document status accounting includes the management of the RID status information (one of CLOSE/UPDATE/ACTION/REJECT). Hundreds of RIDs can be generated on small projects. Large projects, with many reviewers, can produce thousands. Tool support for analysing and reporting RID status is therefore usually needed. SQA staff should ensure that the RID status accounting system is efficient, accurate and gives good visibility of status.
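The kind of tool support meant here can be very simple. The sketch below assumes each RID record carries one of the four status values named above; the record fields are illustrative.

    from collections import Counter

    STATUSES = ("CLOSE", "UPDATE", "ACTION", "REJECT")

    def status_summary(rids):
        """rids: iterable of records like {'id': 'RID-0042', 'status': 'ACTION'}.
        Returns a count per status plus the number still open."""
        counts = Counter(r["status"] for r in rids)
        unknown = set(counts) - set(STATUSES)
        if unknown:
            raise ValueError(f"invalid status values: {unknown}")
        summary = {s: counts[s] for s in STATUSES}
        summary["open"] = counts["UPDATE"] + counts["ACTION"]
        return summary

    print(status_summary([{"id": "RID-0001", "status": "CLOSE"},
                          {"id": "RID-0002", "status": "ACTION"}]))
    # {'CLOSE': 1, 'UPDATE': 0, 'ACTION': 1, 'REJECT': 0, 'open': 1}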

The software configuration management plan should address the identification, storage, control and status accounting of CASE tool outputs, such as files containing design information (SCM43). Plans for the configuration management of prototype software should also be made. Key project decisions may turn upon the results of prototyping, and SQA staff should ensure that the prototypes can be trusted. Poor configuration management of prototypes can cause spurious results.

2.2.2.3 Software verification and validation plan

By the end of the UR phase, the software verification and validation plan needs to include:
• a plan for verifying the outputs of the SR phase;
• an acceptance test plan (SVV11).

Verification techniques used in the SR phase include reviewing, formal proof and tracing. Plans for reviews should include intermediate and formal reviews. Intermediate reviews are used to verify products, and later to clear up problems before formal review. The primary formal review is the Software Requirements Review (SR/R).

SQA should examine the software verification and validation plan and check that the review procedures are well-defined, the right people are involved, and that enough reviews are held. Documents that have not been reviewed are just as likely to contain problems as code that has not been tested.

SQA staff should examine the software verification and validation plan and check that the verification approach is sufficiently rigorous to assure the correctness of the software requirements document. Formal methods for specification and verification of software are often necessary when safety and security issues are involved.

The software verification and validation plan needs to define how user requirements will be traced to software requirements. SQA staff often have to check the traceability matrix later in the project, and it is important that traceability procedures are easy to use.
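Checking a traceability matrix is equally mechanical. The sketch below assumes the matrix is held as a mapping from user requirement identifiers to the software requirements that implement them; untraced user requirements break forward traceability (SVV01), and software requirements traced to no user requirement break backward traceability (SVV02). The identifiers are invented.

    def check_traceability(matrix, user_reqs, software_reqs):
        """Return (user reqs with no software req, software reqs with no origin)."""
        forward_gaps = [ur for ur in user_reqs if not matrix.get(ur)]
        traced = {sr for srs in matrix.values() for sr in srs}
        backward_gaps = [sr for sr in software_reqs if sr not in traced]
        return forward_gaps, backward_gaps

    matrix = {"UR-1": ["SR-1", "SR-2"], "UR-2": []}
    print(check_traceability(matrix, ["UR-1", "UR-2"], ["SR-1", "SR-2", "SR-3"]))
    # (['UR-2'], ['SR-3'])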

The acceptance test plan must be produced as soon as the URD is available (SVV11). Although the level of detail required depends upon the project, all plans should define the approach to validating the user requirements. SQA staff should check that the plan is sufficiently detailed to allow estimation of the effort required for acceptance testing. Overlooking the need for expensive items of test equipment, such as a simulator, is all too common.


2.3 SOFTWARE REQUIREMENTS DEFINITION PHASE

Software quality assurance staff's primary responsibility in the SR phase is to check that the planned activities are carried out (SR01).

2.3.1 Checking technical activities

All projects should adopt a recognised method for software requirements analysis that is appropriate for the project, and then apply it consistently. The best sign of recognition of a method is publication in a book or journal. All methods should produce a ‘logical model' that expresses the logic of the system without using implementation terminology, such as ‘record', ‘file' or ‘event flag', to describe how the software will work (SR02, SR15, SR16, SR17).

Modelling is an essential technical activity in the SR phase and SQA should check that modelling is being done. SQA should check at the beginning of the phase that:
• an analysis method will be used;
• staff have used the analysis method before, or will receive training;
• the analysis method can be supported with CASE tools.

SQA should check that methods, tools and techniques are being properly applied. An important reason for using a tool is that it enforces the correct use of the method. Even though using a tool can slow the creative stages, it pays off in the long run by minimising the amount of work that needs to be redone because rules were broken. CASE tools are not mandatory, but they are strongly recommended.

Techniques such as:
• Failure Modes, Effects and Criticality Analysis (FMECA);
• Common Mode Failure Analysis (CMFA);
• Fault Tree Analysis (FTA);
• Hazard Analysis;
may be employed by developers to verify that software will meet requirements for reliability, maintainability and safety [Ref 8]. The role of SQA is to check that appropriate analysis methods have been properly applied. SQA may introduce requirements for quality, reliability, maintainability and safety based upon the results of the analysis.


The software requirements should specify metrics for measuring quality, reliability (e.g. MTBF) and maintainability (e.g. MTTR). Additional quality-related metrics may be defined by the project. Values of complexity metrics may be defined in the quality requirements to limit design complexity, for example.
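For concreteness, the sketch below shows how such metrics might be computed from operational records; the data is invented, and steady-state availability, MTBF/(MTBF + MTTR), is shown as a common derived metric.

    def mtbf(operating_hours, failure_count):
        """Mean Time Between Failures: operating time per failure."""
        return operating_hours / failure_count

    def mttr(repair_hours):
        """Mean Time To Repair: average duration of the repairs."""
        return sum(repair_hours) / len(repair_hours)

    hours, repairs = 1000.0, [2.0, 4.0, 3.0, 1.0]
    m1, m2 = mtbf(hours, len(repairs)), mttr(repairs)
    print(m1, m2, m1 / (m1 + m2))   # 250.0 hours, 2.5 hours, availability 0.990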

The Software Requirements Document (SRD) is the primary product of the SR phase (SR10). SQA should read the whole document and check that it has the structure and contents required by the standards (SR18). The general description section should contain the logical model, and SQA should check that it conforms to the rules of the method. If no CASE tool has been used, SQA should check the consistency of the model. An unbalanced data flow can give rise to an interface problem at a later stage, for example. During the intermediate review process, SQA should help analysts make the model understandable to non-experts.

Each requirement in the specific requirements section needs to have an identifier, for traceability (SR04), a flag marking it as essential or desirable (SR05), a flag marking priority (SR06), and references to the URD (SR07). The requirements should be structured according to the logical model.

The SRD needs to be complete, consistent and cover all the requirements in the URD (SR11, SR12, and SR14). SQA should verify completeness by checking the traceability matrix in the SRD (SR13).

The software requirements must be verifiable (SR08). Developers check verifiability by examining each specific requirement and deciding whether it can be verified by testing. If a requirement cannot be tested, other methods, such as inspection or formal proof, must be found. The system test plan defines the scope and approach towards verification of the SRD. This plan is documented in the Software Verification and Validation Plan, System Test section (SVVP/ST). SQA should check that the SVVP/ST defines, in outline, how the software requirements will be verified.

At the end of the phase the SRD is reviewed at the formal software requirements review (SR09). SQA staff should take part in the review, and check that the proper procedures are observed.


2.3.2 Checking the management plans

During the SR phase, four plans must be produced by the developers to define how the development will be managed in the AD phase. These are the:
• Software Project Management Plan (SPMP/AD) (SPM05);
• Software Configuration Management Plan (SCMP/AD) (SCM44);
• Software Verification and Validation Plan (SVVP/AD) (SVV12);
• Software Quality Assurance Plan (SQAP/AD) (SQA06).

SQA staff must produce the SQAP/AD. They also need to review the other plans to check that they conform to standards. The rest of this section picks out the parts of each plan that are important for the AD phase.

2.3.2.1 Software project management plan

In most respects, the SPMP/AD will resemble the SPMP/SR. SQA staff should check that the plan analyses the risks to the project and devotes adequate resources to them. Experimental prototyping may be required for the demonstration of feasibility, for example.

The plan should describe all the AD phase activities and contain a precise estimate of the AD phase cost (SPM07). The most important new component of the plan is the refined cost estimate for the whole project (SPM06). Using the logical model, managers should be able to produce a cost estimate accurate to 30%. SQA staff should check that the cost estimate has been obtained methodically and includes all the necessary development activities. Possible methods include using historical cost data from similar projects, and Albrecht's function point analysis method [Ref 6].
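By way of illustration, the sketch below computes an unadjusted function point count in the style of Albrecht's method [Ref 6], using the method's average complexity weights; the counts and the productivity figure are invented, and a real estimate would also apply technical adjustment factors.

    # Average complexity weights from Albrecht's function point analysis.
    WEIGHTS = {"external inputs": 4, "external outputs": 5,
               "external inquiries": 4, "internal logical files": 10,
               "external interface files": 7}

    def unadjusted_function_points(counts):
        return sum(WEIGHTS[kind] * n for kind, n in counts.items())

    counts = {"external inputs": 20, "external outputs": 15,
              "external inquiries": 10, "internal logical files": 6,
              "external interface files": 4}
    fp = unadjusted_function_points(counts)      # 283 function points
    print(fp, fp / 10.0)   # effort, assuming 10 FP per person-month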

Projects with few technical risks should require little or no experimental prototyping. Accurate cost estimates should therefore be possible. SQA staff should check that there is no discrepancy between the amount of prototyping planned and the accuracy of the cost estimates.

2.3.2.2 Software configuration management plan

Software configuration management procedures in the AD phase are normally similar to those used in the SR phase. New procedures might be necessary for the CASE tools used for design, however.


2.3.2.3 Software verification and validation plan

The software verification and validation plan needs to be extended during the SR phase to include:
• a plan for verifying the outputs of the AD phase;
• a system test plan (SVV14).

Verification techniques used in the AD phase are similar to those used in the SR phase. Intermediate reviews are used to agree the design layer by layer, and to clear up problems in the ADD before formal review. The primary formal review is the Architectural Design Review (AD/R).

The system test plan must be produced as soon as the SRD is available (SVV14). Although the level of detail required depends upon the project, all plans should define the approach to verifying the software requirements. Implementation of the verification requirements in the SRD should be considered in the plan.

2.4 ARCHITECTURAL DESIGN PHASE

Software quality assurance staff's primary responsibility in the AD phase is to check that the planned activities are carried out (AD01).

2.4.1 Checking technical activities

A recognised method for software design shall be adopted and applied consistently in the AD phase (AD02). This method must support the construction of a ‘physical model' that defines how the software works (AD03). The method used to decompose the software into its components must permit a top-down approach (AD04).

Just as in the SR phase, SQA should check at the beginning of the phase that:
• an appropriate design method will be used;
• the developers have used the design method before, or will receive training;
• the design method can be supported with CASE tools;
• the lowest level of the architectural design has been agreed.


The architectural design should be reviewed layer-by-layer as it is developed. SQA should attend these intermediate reviews to help, for example:
• prevent the design becoming too complex to test;
• check that the feasibility of each major component has been proven;
• ensure that the design will be reliable, maintainable and safe.

SQA should check that design quality has been optimised using the guidelines in ESA PSS-05-04, Guide to the Software Architectural Design Phase [Ref 11].

The primary output of the AD phase is the Architectural Design Document (ADD) (AD19). Although several designs may have been considered in the AD phase, only one design should be presented (AD05). Records of the investigation of alternatives should have been retained for inclusion in the Project History Document.

SQA should check that the ADD defines the functions, inputs and outputs of all the major components of the software (AD06, AD07, AD08, AD17). The ADD must also contain definitions of the data structures that interface components (AD09, AD10, AD11, AD12, AD13, AD18), or reference them in Interface Control Documents (ICDs). The ADD must define the control flow between the components (AD14).

The ADD must define the computer resources needed (AD15). Trade-offs should have been performed during the AD phase to define them. Prototyping may have been required. SQA staff should look for evidence that the resource estimates have been obtained methodically.

The ADD gives everyone on the project visibility of the system as a whole. SQA staff have a special responsibility to ensure that the designers make the document easy to understand. They should review the whole ADD and familiarise themselves with the design. This will improve their effectiveness during the DD phase.

The ADD needs to be complete (AD20) and should include a cross-reference matrix tracing software requirements to components (AD21). SQA should check this table, and that the ADD contains the standard contents (AD24), to confirm completeness. SQA staff should confirm that the ADD has been checked for inconsistencies (AD22), such as mismatched interfaces. The checks need to be very thorough when CASE tools have not been used. The ADD needs to be detailed enough for subgroups of developers to be able to work independently in the DD phase (AD23). The functions of major components and their interfaces need to be well defined for this to be possible.

At the end of the phase the ADD is reviewed at the formal architectural design review (AD16). SQA staff should take part in the review, and check that the proper procedures are observed.

2.4.2 Checking the management plans

During the AD phase, four plans must be produced by the developers to define how the development will be managed in the DD phase. These are the:
• Software Project Management Plan (SPMP/DD) (SPM08);
• Software Configuration Management Plan (SCMP/DD) (SCM46);
• Software Verification and Validation Plan (SVVP/DD) (SVV15);
• Software Quality Assurance Plan (SQAP/DD) (SQA08).

SQA staff must produce the SQAP/DD. They also need to review the other plans to check that they conform to standards. The rest of this section picks out the parts of each plan that are important for the DD phase.

2.4.2.1 Software project management plan

The software project management plan for the DD phase is larger and more complex than SR and AD phase plans because:
• most of the development effort is expended in the DD phase;
• software production work packages must not consume more than one man-month of effort (SPM12);
• parallel working is usually required.

The work-breakdown structure is based upon the component breakdown (SPM10). The level of detail required should result in a cost estimate that is accurate to 10% (SPM09).


SQA should check that the plan contains a network showing the relation of coding, integration and testing activities (SPM11). This network should have been optimised to make the best use of resources such as staff and computing equipment. SQA should check that the plan:
• defines a critical path showing the time required for completion of the phase;
• includes essential activities such as reviews;
• makes realistic estimates of the effort required for each activity (e.g. it is usual to spend at least half the time in the DD phase in testing);
• schedules SQA activities such as audits.

Projects should define criteria, ideally based upon software metrics, to help decide such issues as the readiness of:
• a software component for integration;
• the software for system testing;
• the software for delivery.

SQA should check that the management plan describes the criteria for making these decisions.

2.4.2.2 Software configuration management plan

The primary output of the DD phase is the code, and the software configuration management plan needs to be extended to cover:
• code identification;
• code storage;
• code change control;
• code status accounting.

An efficient configuration management system is vital to the success of the DD phase, and this system must be described in detail in the SCMP/DD (SCM02).

SQA staff should review the SCMP/DD to confirm that all software items (documentation, source code, object code, executable code, data files and test software) are covered by the plan (SCM01). The parts of the SCMP related to document handling can be carried over from earlier phases.


All code must be uniquely identified (SCM06). Identifiers must include the module name, type and version number (SCM07, SCM08, SCM09, SCM10, and SCM11). SQA staff should check that the names of configuration items reflect their purpose.

The SCMP should define the standard header for all source code modules (SCM15, SCM16, SCM17 and SCM18). Documentation and storage media should be clearly labelled (SCM19, SCM20, SCM21 and SCM22). Clear labelling is a sign of a well-run project. Poor labelling is readily apparent in a physical audit. SQA conduct physical audits to verify that the tangible software items such as disks, tapes and files are present and in their correct location.
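As a purely hypothetical example, an SCMP might prescribe a source module header of the following form; the fields and values are invented, and each project defines its own.

    # MODULE   : telemetry_decoder.py
    # TYPE     : source code module
    # VERSION  : 1.2
    # PROJECT  : ROSEX (hypothetical)
    # AUTHOR   : J. Smith
    # HISTORY  : 1.0  1995-01-10  first issue
    #            1.1  1995-02-02  fix for SPR-0012
    #            1.2  1995-03-15  new interface per SCR-0005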

The core of the software configuration management plan is the software library system (SCM23, SCM24, SCM25). The library procedures should be vetted by SQA to check that access to libraries is controlled (SCM26). SQA should check that software cannot be lost through simultaneous update.

The SCMP must contain a backup plan (SCM27, SCM28). All versions of software should be retained. SQA should check backup logs and stores in physical audits.

Good change control procedures are essential. SQA should examine these procedures and estimate the total time needed to process a change through the configuration management system. This time should be small compared with the time required to do the necessary technical work. If the change process is too time-consuming then there is a high probability of it breaking down at periods of stress, such as when a delivery is approaching. When this happens, configuration management problems will add to technical problems, increasing the pressure on developers. SQA staff should check that the configuration management system can handle the expected volume of change, and if necessary help make it more efficient. Special fast-track procedures may be required for urgent changes, for example.

Configuration status accounting is needed for the effective control of a project. It also gives customers and users visibility of problem tracking and corrective action. SQA should examine the configuration status accounting procedures and confirm that the evolution of baselines is tracked (SCM32, SCM33) and records are kept of the status of RIDs, SPRs, SCRs and SMRs (SCM34). SQA may define software quality metrics that use data in the configuration status accounts. Tools for monitoring quality attributes (e.g. fault rate, repair time) may use the accounts data.

SQA should monitor the execution of the procedures described in the SCMP, and examine trends in problem occurrence. To make meaningful comparisons, SQA should check that problems are classified. A possible classification is:
• user error;
• user documentation error;
• coding error;
• design error;
• requirements specification error.

Problems may also be categorised by subsystem, or even software component. For each problem category, projects should record:
• number of problems reported;
• number of problems open;
• number of problems closed;
• problem reporting rate;
• problem close-out time.

SQA should use these statistics to evaluate product quality and readiness for transfer.
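These statistics lend themselves to simple tooling. The sketch below assumes each problem report carries a category, an opening date and an optional closing date; the field names are illustrative, and computing the reporting rate would additionally need a time window.

    from datetime import date

    def category_stats(reports):
        """Tally reported/open/closed counts and close-out times per category."""
        stats = {}
        for r in reports:
            s = stats.setdefault(r["category"],
                                 {"reported": 0, "open": 0, "closed": 0,
                                  "close_out_days": []})
            s["reported"] += 1
            if r.get("closed_on"):
                s["closed"] += 1
                s["close_out_days"].append((r["closed_on"] - r["opened_on"]).days)
            else:
                s["open"] += 1
        return stats

    reports = [{"category": "coding error",
                "opened_on": date(1995, 3, 1), "closed_on": date(1995, 3, 8)},
               {"category": "coding error",
                "opened_on": date(1995, 3, 5), "closed_on": None}]
    print(category_stats(reports))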

2.4.2.3 Software verification and validation plan

The software verification and validation plan needs to be extended during the AD phase to include:
• a plan for verifying the outputs of the DD phase;
• an integration test plan (SVV17).


The techniques used to verify the detailed design are similar to those used in the AD phase. As before, intermediate reviews are used to agree the design layer by layer. The last review is the code inspection. This is done after modules have been successfully compiled but before unit testing. Peer review of code before unit testing has been shown to be very cost-effective. SQA staff should ensure that this vital step is not missed. Inspection metrics should include:
• lines of code per module;
• cyclomatic complexity per module;
• number of errors per module.

SQA should check that the values of these metrics are within the limits defined in the design and coding standards.
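A minimal sketch of such a check follows; the limit values are invented, and a real project would take them from its own design and coding standards.

    LIMITS = {"lines of code": 200, "cyclomatic complexity": 10,
              "errors found": 5}

    def check_module(name, metrics):
        """Return the limit violations for one inspected module."""
        return [f"{name}: {key} = {metrics[key]} exceeds limit {limit}"
                for key, limit in LIMITS.items() if metrics.get(key, 0) > limit]

    print(check_module("command_parser",
                       {"lines of code": 250, "cyclomatic complexity": 7,
                        "errors found": 2}))
    # ['command_parser: lines of code = 250 exceeds limit 200']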

The primary formal review is the Detailed Design Review (DD/R). This is held at the end of the phase to decide whether the software is ready for transfer.

The integration test plan should be produced as soon as the ADD is available. The plan should define the integration approach, and should aim to put the major infrastructure functions in place as early as possible. Thereafter the integration plan should maximise the use of integrated software and minimise the use of test software that substitutes for components that are integrated later.

2.5 DETAILED DESIGN AND PRODUCTION PHASE

Software quality assurance staff's primary responsibility in the DD phase is to check that the planned activities are carried out (DD01).

2.5.1 Checking technical activities

2.5.1.1 Detailed design

ESA PSS-05-0 calls for the detailed design and production of software to be based upon the principles of top-down decomposition and structured programming (DD02, DD03). Technical staff should describe how they will implement these design practices in part 1 of the DDD. SQA should examine the DDD and check that the practices are implemented when they review the software.


SQA should check that the software is documented as it is produced (DD04). Documentation is often deferred when staff are under pressure, and this is always a mistake. SQA can ensure that documentation and production are concurrent by reviewing software as soon as it is produced. If SQA delay their review, developers may defer writing documentation until just before SQA are ready.

The detailed design should be reviewed layer-by-layer. These reviews should include technical managers and the designers concerned. When the design of a major component is finished, a critical design review of the relevant DDD sections must be held to decide upon its readiness for coding (DD10). SQA should attend some of these review meetings, especially when they relate to the implementation of quality, reliability, maintainability and safety requirements. SQA should also check that part 2 of the DDD is a logical extension of the structure defined in the ADD, and that there is a section for every software component (DD14).

2.5.1.2 Coding

SQA should inspect the code. This activity may be part of an inspection process used by development staff, or may be a standalone quality assurance activity. They should verify that the coding standards have been observed. The quality of code should be evaluated with the aid of standard metrics such as module length, complexity and comment rate. Static analysis tools should be used to support metric data collection and evaluation.

2.5.1.3 Reuse

The ‘reuse' of software from project to project is increasingly common. SQA should check that the reused software was developed according to acceptable standards.

Software may be reliable in one operational environment but not in another. SQA should treat with extreme caution claims that the quality of reusable software is proven through successful operational use. SQA should check that the development and maintenance standards and records of the software make it fit for reuse.

Reused software is frequently purchased off-the-shelf. Such software is usually called ‘commercial software'. SQA should check that the quality certification of the commercial software supplier meets the standards of the project.


2.5.1.4 Testing

Software that has not been tested is very unlikely to work. SQA have a vital role to play in reassuring management and the users that testing has been done properly. To do this, SQA should check that testing activities:
• are appropriate for the degree of criticality of the software (SVV05);
• comply with verification and acceptance testing requirements (stated in the SRD) (SVV06);
• are properly documented in the SVVP (SVV17, SVV18, SVV19, SVV20, SVV21, SVV22).

SQA carry out these checks by reviewing test specifications, observing tests being carried out, participating in selected tests, and reviewing the results. As part of this reviewing activity, SQA may request additions or modifications to the test specifications.

Normally SQA will not be able to observe all the tests. The SQAP should identify the tests they intend to observe or participate in.

SQA should monitor the progress of the project through the results of tests. SQA should define a metrics programme that includes measures such as:
• number of failures;
• number of failures for each test;
• number of failures per test cycle.

Progress can be measured by a declining number of failures per test cycle (i.e. edit, compile, link, test).
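
A minimal sketch of such a metrics programme (illustrative only; the data layout is an assumption, not taken from the standard) could tabulate failures per test cycle and check whether the trend is declining:

    # Illustrative sketch: count failures per test cycle and report the
    # trend. 'results' maps a cycle number to (test id, passed) outcomes.
    def failures_per_cycle(results):
        return {cycle: sum(1 for _, passed in outcomes if not passed)
                for cycle, outcomes in results.items()}

    def trend_is_declining(counts):
        values = [counts[cycle] for cycle in sorted(counts)]
        return all(later <= earlier for earlier, later in zip(values, values[1:]))

    results = {1: [("T1", False), ("T2", False)],  # hypothetical test data
               2: [("T1", True), ("T2", False)],
               3: [("T1", True), ("T2", True)]}
    counts = failures_per_cycle(results)
    print(counts, "declining" if trend_is_declining(counts) else "not declining")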

The unit test plan should be written early in the DD phase (SVV18) and SQA should review it. They should check that its level of detail is consistent with the software quality, reliability and safety requirements. The first unit tests should be white-box tests, because they give assurance that the software is performing its job in the way it was intended. When full coverage has been achieved (see below), black-box tests should be applied to verify functionality. Black-box tests should also be used to check for the occurrence of likely errors (e.g. invalid input data).

SQA should check that the unit test plan defines the coverage requirements. ESA PSS-05-0's basic requirement is for full statement coverage (DD06). This is a minimum requirement. For most projects, full branch coverage should be achieved during unit testing [Ref 7]. This is because coverage verification requires the tester to trace the path of execution. Debuggers, dynamic testing tools and test harnesses are usually needed to do this, and are most conveniently applied during unit testing. SQA should examine the coverage records. Dynamic testing tools should be used to produce them.

The software verification and validation plan is expanded in the DD phase to include:
• unit, integration and system test designs (SVV19);
• unit, integration and system test cases (SVV20);
• unit, integration and system test procedures (SVV21).

Each test design may have several test cases. A test procedure should be defined for each test case. SQA should review all these extensions to the SVVP test sections.

An important SQA check is to confirm that enough test cases have been specified. When full branch coverage has been specified, SQA should check that the:
• number of test cases for each module is greater than or equal to the cyclomatic complexity of the module [Ref 7];
• paths traversed in the test cases actually result in every branch being traversed.

In applying these rules, a module is assumed to contain a single software component such as a FORTRAN subroutine or a PASCAL procedure. Path coverage should be done for a representative set of test cases. Complete checking is most efficiently done during the testing process with a dynamic testing tool.

There should be an integration test design for every interface (DD07). The test cases should vary the input data so that nominal and limit situations are explored. Integration tests should also exercise the control flows defined in the ADD (DD08).

The system tests must verify compliance with system objectives (DD09). There should be a system test design for each software requirement. System test cases should check for likely ways in which the system may fail to comply with the software requirement.

Test procedures should be written so that they can be executed by someone other than the software author. SQA should review the test procedures and confirm that they are self-explanatory.

The outcome of testing activities must be recorded (SVV22). Reviewing test results is an important SQA responsibility. SQA should check that the causes of test failures are diagnosed and corrective action taken. The problem reporting and corrective action mechanism should be invoked whenever a fault is detected in an item of software whose control authority is not the software author. Normally this means that SPRs are used to record problems discovered during integration testing and system testing. However, SPRs also need to be used to record problems when unit tests are done by an independent SVV team.

SQA staff should ensure that failed tests are repeated after repairs have been made. SPRs cannot be closed until the tests that originated them have been passed.

Whereas the integration test plan defines the sequence for building the system from the major components, the software configuration management plan defines the procedures to be used when components are promoted to the master libraries for integration testing (DD05). SQA should check that control authority rights are formally transferred when each component is promoted. The usual means of formal notification is the signing-off of unit test results. SQA should also check that software authors cannot modify components that are stored in master libraries.

2.5.1.5 Formal review

Every project must hold a DD/R before delivering software to check readiness for transfer (DD11). SQA should participate in the review process and make recommendations based upon:
• test results;
• audit results;
• analysis of outstanding problems.

SQA should conduct a physical audit of the software by checking the configuration item list before transfer. The list should contain every deliverable software item specified in the project plan. The DDD, code and SUM must be included, as these are mandatory outputs of the phase (DD12, DD13 and DD17).

SQA should conduct a functional audit of the software by checking the software requirements versus software components traceability matrix in the DDD (DD16). This checks that the DDD is complete and accounts for all software requirements (DD15). A second activity is to check that the SVVP/ST contains tests for every software requirement, and that these tests have been run. Functional audit activities should start as early as possible, as soon as the inputs, such as the DDD and test specifications, are available.
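
Part of this functional audit can be mechanised. In the hypothetical sketch below, the traceability data are represented as simple mappings from requirement identifiers to components and to system tests; the identifiers are invented for illustration:

    # Illustrative sketch of the functional audit: every software
    # requirement should be traced to a component in the DDD matrix and
    # covered by at least one system test that has been run.
    def functional_audit(requirements, ddd_matrix, test_matrix):
        untraced = [r for r in requirements if not ddd_matrix.get(r)]
        untested = [r for r in requirements if not test_matrix.get(r)]
        return untraced, untested

    reqs = ["SR01", "SR02", "SR03"]               # hypothetical identifiers
    ddd = {"SR01": ["CompA"], "SR02": ["CompB"]}  # requirement -> components
    tests = {"SR01": ["ST-1"], "SR03": ["ST-2"]}  # requirement -> tests run
    print(functional_audit(reqs, ddd, tests))     # (['SR03'], ['SR02'])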

SQA's most important role at the DD/R is to analyse the trends in problem occurrence and repair, and advise management about readiness. SQA should categorise failures into major and minor. Major problems put provisional acceptance at risk. Using the records of SPRs in the configuration status accounts, SQA should estimate for each failure category:
• Mean Time Between Failures (MTBF);
• Mean Time To Repair (MTTR);
• number of outstanding problems;
• time required to repair the outstanding problems.

Using this data, management should be able to decide when the software will be ready for transfer. SQA should not approve the transfer of software with major problems outstanding.
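
As a hedged illustration of these estimates (the SPR record layout is an assumption; the standard does not prescribe one), MTBF and MTTR for a failure category can be computed from opened/closed timestamps in the configuration status accounts:

    # Illustrative sketch: estimate MTBF and MTTR for one failure category
    # from SPR records, each an (opened, closed) pair of timestamps, with
    # closed set to None while the problem is still outstanding.
    from datetime import datetime, timedelta

    def mtbf(n_failures, observation_period):
        return observation_period / n_failures if n_failures else observation_period

    def mttr(spr_records):
        repairs = [closed - opened for opened, closed in spr_records
                   if closed is not None]
        return sum(repairs, timedelta()) / len(repairs) if repairs else None

    sprs = [(datetime(1995, 3, 1), datetime(1995, 3, 4)),    # hypothetical
            (datetime(1995, 3, 10), datetime(1995, 3, 12)),
            (datetime(1995, 3, 20), None)]                   # still open
    print(mtbf(len(sprs), timedelta(days=30)), mttr(sprs))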

2.5.2 Checking the management plans

During the DD phase, three plans must be produced by the developers to define how the development will be managed in the TR phase. These are the:
• Software Project Management Plan (SPMP/TR) (SPM13);
• Software Configuration Management Plan (SCMP/TR) (SCM48);
• Software Quality Assurance Plan (SQAP/TR) (SQA10).

In addition, the Software Verification and Validation Plan must be extended to define the acceptance tests in detail.

SQA staff must produce the SQAP/TR. They also need to review the other plans to check that they conform to standards. The rest of this section picks out the parts of each plan that are important for the TR phase.

2.5.2.1 Software project management plan

Customer confidence in software is greatly increased when the transfer phase is trouble-free. This is most likely when the installation and acceptance testing activities are carefully planned.

The transfer phase plan can be simple for a standalone product developed in the same environment as that used for operations. The software is installed and acceptance tests are run. SQA should check that both these activities have been rehearsed in the DD phase. These rehearsals should allow accurate estimation of TR phase effort.

When the software is to be embedded in a larger system or operated in a different environment, the transfer of software can be much more complicated. The acceptance tests check that the software integrates correctly with the target environment. Problems will always occur when software meets its target environment for the first time. Changes will be necessary, and the software project management plan should allow for them.

SQA should check the SPMP/TR for realistic estimates of the testing and repair work. They should also check that key development staff will be retained during the phase, so that new problems can be diagnosed and corrected quickly. The SPMP/TR should say who is to be involved during the phase. Users, development staff and SQA should attend acceptance tests.

2.5.2.2 Software configuration management plan

The SCMP/TR must define the software configuration management procedures for deliverables in the operational environment (SCM49). SQA should check that the SCMP/TR:
• identifies the deliverable items;
• defines procedures for the storage and backup of deliverables;
• defines the change control procedures;
• defines the problem reporting procedures.

Users may have to apply the procedures defined in the SCMP/TR. This section of the SCMP should be simple and easy to follow, and integrate well with the SUM and STD. SQA should confirm this.

The change control section of the SCMP/TR should include a definition of the terms of reference and procedures of the Software Review Board (SRB). SQA staff should be members of the SRB.

2.5.2.3 Software verification and validation plan

The SVVP is expanded in the DD phase to include test designs, test cases, test procedures and test results for unit tests, integration tests and system tests, as discussed in Section 2.5.1.4. This section deals with the additions to the SVVP necessary for the TR phase.

During the DD phase, the Acceptance Test section of the Software Verification and Validation Plan (SVVP/AT) is extended to contain:
• acceptance test designs (SVV19);
• acceptance test cases (SVV20);
• acceptance test procedures (SVV21).

SQA should confirm that there is a test design for each user requirement. A test design versus user requirement cross-reference matrix may be inserted in the SVVP/AT to demonstrate compliance.

Test cases should be flagged for use in provisional or final acceptance (TR05). Some properties, such as reliability, may only be demonstrable after a period of operations.

Every project should rehearse the acceptance tests before the DD/R to confirm readiness for transfer. SQA should ensure that the rehearsals (often called 'dry runs') are done.

2.6 TRANSFER PHASE

Software quality assurance staff's primary responsibility in the TR phase is to check that the planned activities are carried out (TR03).

The first TR phase activity is to build the software. This should be done using the components directly modifiable by the maintenance team (TR04). The build procedures must have been defined in the Software Transfer Document (STD). After building, the software is installed using the procedures also defined in the STD. SQA should monitor the installation and build process.

The acceptance tests necessary for provisional acceptance are then run (TR05). SQA should check that the users are present to witness the acceptance tests, and that each test is signed off by developers and users (TR01).

A Software Review Board (SRB) meeting is held after the acceptance tests to review the software's performance and decide whether the software can be provisionally accepted (TR02). For the purposes of acceptance, the 'software' is the outputs of the SR, AD, DD and TR phases. Together with the URD, these outputs constitute the software system (TR07). SQA should attend this meeting. Three outcomes are possible:
• rejection of the software;
• unconditional provisional acceptance of the software;
• conditional provisional acceptance of the software.

The third outcome is the most common. Acceptance is made conditional upon the completion of actions defined by the SRB meeting. These 'close-out' actions usually include the completion of modifications found to be necessary during the acceptance tests.

SQA should check that the statement of provisional acceptance (TR06) is signed by the:
• initiator;
• project manager;
• project SQA manager.

The STD is a mandatory output of the TR phase and SQA should inspect it (TR08). The first STD inspection should be done at the beginning of the phase, before delivery. At this stage the STD contains sections listing the deliverables (i.e. the configuration item list), installation procedures and build procedures. The second inspection should be done at the end of the TR phase, when the sections describing the acceptance test results, software problem reports, software change requests and software modification results are added (TR10). The last step in the TR phase is the formal hand-over of the STD (TR09).

2.7 OPERATIONS AND MAINTENANCE PHASE

Good software can be ruined by poor maintenance. SQA should monitor software quality throughout the OM phase to check that it is not degraded. SQA should check that the:
• software configuration is properly managed (OM05);
• documentation and code are kept up-to-date (OM06);
• MTBF increases;
• MTTR decreases.

MTBF and MTTR should be regularly estimated from the data in the configuration status accounts.

The SRB authorises all modifications to the software (OM08). SQA should participate in SRB meetings and advise the SRB on issues related to quality, reliability, maintainability and safety.

Development plans should cover the period up to final acceptance (OM01). All projects must have a final acceptance milestone (OM03). Before it arrives, SQA should check that:
• all acceptance tests have been successfully completed (OM02);
• the Project History Document is ready for delivery (OM10).

SQA should check that the statement of final acceptance (OM09) is signed by the:
• initiator;
• project manager;
• project SQA manager.

After final acceptance, an organisation must be defined to take over maintenance (OM04). SQA should check that the maintenance organisation is properly resourced (OM07). Estimates of the resource requirements should be based upon:
• MTBF;
• MTTR;
• size of the system;
• number of subsystems;
• type of system;
• usage pattern.

CHAPTER 3 THE SOFTWARE QUALITY ASSURANCE PLAN

3.1 INTRODUCTION

Plans for software quality assurance activities must be documented in the Software Quality Assurance Plan (SQAP) (SQA02). The first issue of the SQAP must be prepared by the end of the UR review. This issue must outline the SQA activities for the whole project and define in detail SR phase SQA activities (SQA04 and SQA05). Sections of the SQAP must be produced for the AD, DD and TR phases (SQA06, SQA08, SQA10), to cover in detail all the SQA activities that will take place in those phases (SQA07, SQA09 and SQA11).

The table of contents for each section is derived from the IEEE Standard for Software Quality Assurance Plans (ANSI/IEEE Std 730-1989) [Ref 3], which adds the proviso that the table of contents 'should not be construed to prohibit additional content in the SQAP'. The size and content of the SQAP should reflect the complexity of the project. Additional guidelines on the completion of an SQAP can be found in the IEEE Guide for Software Quality Assurance Planning (ANSI/IEEE Std 983-1986) [Ref 4].

3.2 STYLE

The SQAP should be plain and concise. The document should be clear, consistent and modifiable.

The author of the SQAP should assume familiarity with the purpose of the software, and not repeat information that is explained in other documents.

3.3 RESPONSIBILITY

An SQAP must be produced by each contractor developing software (SQA01). Review of the SQAPs produced by each contractor is part of the supplier control activity.

The SQAP should be produced by the SQA staff. It should be reviewed by those to whom the SQA personnel report.

3.4 MEDIUM

It is usually assumed that the SQAP is a paper document. There is no reason why the SQAP should not be distributed electronically to participants with the necessary equipment.

3.5 CONTENT

The SQAP is divided into four sections, one for each development phase. These sections are called:
• Software Quality Assurance Plan for the SR phase (SQAP/SR);
• Software Quality Assurance Plan for the AD phase (SQAP/AD);
• Software Quality Assurance Plan for the DD phase (SQAP/DD);
• Software Quality Assurance Plan for the TR phase (SQAP/TR).

ESA PSS-05-0 recommends the following table of contents for each section of the SQAP:

Service Information:
  a - Abstract
  b - Table of Contents
  c - Document Status Sheet
  d - Document Change records made since last issue
1 Purpose
2 Reference Documents
3 Management
4 Documentation
5 Standards, practices, conventions and metrics
  5.1 Documentation standards
  5.2 Design standards
  5.3 Coding standards
  5.4 Commentary standards
  5.5 Testing standards and practices
  5.6 Selected software quality assurance metrics
  5.7 Statement of how compliance is to be monitored
6 Review and audits (*)
7 Test
8 Problem reporting and corrective action
9 Tools, techniques and methods
10 Code control
11 Media control
12 Supplier control
13 Records collection, maintenance and retention
14 Training
15 Risk Management
16 Outline of the rest of the project
Appendix A Glossary

(*) Subsections 6.1 'Purpose' and 6.2 'Minimum Requirements' in ESA PSS-05-0 should not be used. They will be removed from the next issue of ESA PSS-05-0.

Material unsuitable for the above contents list should be inserted in additional appendices. If there is no material for a section then the phrase 'Not Applicable' should be inserted and the section numbering preserved.

Sections 3 to 15 of the SQAP should describe how the technical activities and management plans will be checked. Table 3.5 traces each section of the SQAP to the complementary documents or document sections that describe what is to be checked. The SQAP should identify (i.e. reference) the material in each of the other documents, but not repeat it. Sections 3 to 15 of the SQAP should also describe how SQA activities will be reported.

SQAP section                                Document/Document section
Management                                  SPMP
                                            SCMP/Management
                                            SVVP/SR-AD-DD-TR/Reporting
Documentation                               SPMP/Software documentation
Standards, practices and conventions        SPMP/Methods, tools and techniques
                                            SVVP/SR-AD-DD-TR/Administrative procedures
                                            DDD/Project standards, conventions and procedures
Reviews and audits                          SVVP/SR-AD-DD-TR/Activities
Test                                        SVVP/AT-ST-IT-UT
Problem reporting and corrective action     SCMP/Change control
                                            SVVP/SR-AD-DD-TR/Administrative procedures
Tools, techniques and methods               SPMP/Methods, tools and techniques
                                            SCMP/Tools, techniques and methods
                                            SVVP/SR-AD-DD-TR/Overview
                                            ADD/Design method
Code control                                SCMP/Code control
Media control                               SCMP/Media control
Records collection, maintenance             SCMP/Configuration status accounting
and retention                               SCMP/Records collection and retention
Supplier control                            SCMP/Supplier control
Training                                    SPMP
Risk management                             SPMP/Risk Management

Table 3.5: Software quality assurance plan sections traced to other project documents

3.5.1 SQAP/1 Purpose

This section should briefly:
• describe the purpose of the SQAP;
• specify the intended readership of the SQAP;
• list the software products to be developed;
• describe the intended use of the software;
• identify the phase of the life cycle to which the plan applies.

3.5.2 SQAP/2 Reference Documents

This section should provide a complete list of all the applicable and reference documents, identified by title, author and date. Each document should be marked as applicable or reference. If appropriate, report number, journal name and publishing organisation should be included.

3.5.3 SQAP/3 Management

This section should describe the organisation of quality assurance, and the associated responsibilities. The SQAP should define the roles to be carried out, but not allocate people to roles, or define the effort and schedule. ANSI/IEEE Std 730-1989, 'Standard for Software Quality Assurance Plans' [Ref 3], recommends that the following structure be used for this section.

3.5.3.1 SQAP/3.1 Organisation

This section should:
• identify the organisational roles that control and monitor software quality (e.g. project manager, team leaders, software engineers, software librarian, software verification and validation team leader, software quality assurance engineer);
• describe the relationships between the organisational roles;
• describe the interface with the user organisation.

Relationships between the organisational roles may be shown by means of an organigram. This section may reference the SPMP, SCMP and SVVP.

This section should describe how the implementation of the organisational plan will be verified.

3.5.3.2 SQAP/3.2 Tasks

This section should define the SQA tasks (selected from SQAP sections 3.4 to 3.13) that are to be carried out in the phase of the life cycle to which this SQAP applies. This section should define the sequencing of the selected tasks. Additional tasks may be included.

3.5.3.3 SQAP/3.3 Responsibilities

This section should identify the SQA tasks for which each organisational role is responsible. A cross-reference matrix may be used to show that each SQA task has been allocated.

3.5.4 SQAP/4 Documentation

This section should identify the documents to be produced in the phase. The SPMP normally contains (or references) a documentation plan listing all the documents to be produced in the phase.

This section should state how the documents will be checked for conformance to ESA PSS-05-0 and the project documentation plan.

3.5.5 SQAP/5 Standards, practices, conventions and metrics

The following subsections should identify the standards, practices, conventions and metrics used to specify software quality, and explain how SQA will check that the required quality will be achieved.

3.5.5.1 SQAP/5.1 Documentation standards

This section should identify the standards, practices and conventions that will be used to produce the documents of the phase. Documentation standards are normally defined in the documentation plan (see section 4 of the SQAP).

3.5.5.2 SQAP/5.2 Design standards

This section should identify the standards, practices and conventions that will be used in the phase to design the software. Design standards are normally defined or referenced in the ADD and DDD.

3.5.5.3 SQAP/5.3 Coding standards

This section should identify the standards, practices and conventions that will be used in the phase to write code. Coding standards are normally defined or referenced in the DDD.

3.5.5.4 SQAP/5.4 Commentary standards

This section should identify the standards, practices and conventions that will be used in the phase to comment code. Commentary standards are normally included in the coding standards.

3.5.5.5 SQAP/5.5 Testing standards and practices

This section should identify the standards, practices and conventions that will be used in the phase to test the software. Testing standards are normally defined in the SRD and SVVP.

3.5.5.6 SQAP/5.6 Selected software quality assurance metrics

This section should identify the metrics that will be used in the phase to measure the quality of the software. Metrics are normally defined in the project standards and plans.

3.5.5.7 SQAP/5.7 Statement of how compliance is to be monitored

This section should describe how SQA will monitor compliance with the standards, practices, conventions and metrics.

3.5.6 SQAP/6 Review and audits

This section should identify the technical reviews, inspections, walkthroughs and audits that will be held during the phase, and the purpose of each. It should describe how adherence to the review and audit procedures (defined in the SVVP) will be monitored, and the role of SQA personnel in the review and audit process.

3.5.7 SQAP/7 Test

This section should describe how the testing activities described in the SVVP will be monitored, and how compliance with verification and acceptance-testing requirements in the SRD will be checked.

3.5.8 SQAP/8 Problem reporting and corrective action

This section should identify (by referencing the SCMP) the problem reporting and corrective action procedures (e.g. RID and SPR handling procedures).

This section should describe how adherence to the problem reporting procedures described in the SCMP will be monitored.

This section may describe the metrics that will be applied to the problem reporting process to estimate the software quality.

3.5.9 SQAP/9 Tools, techniques and methods

This section should identify the tools, techniques and methods used to develop the software.

This section should describe how the use of the tools, techniques and methods will be monitored.

Additional tools, techniques and methods for supporting SQA tasks may be described here (or in the section on the task).

3.5.10 SQAP/10 Code (and document) control

This section should identify the procedures used to maintain, store, secure and document software. These procedures should be defined in the SCMP.

This section should describe how adherence to the procedures will be monitored.

3.5.11 SQAP/11 Media control

This section should identify the procedures used to maintain, store, secure and document controlled versions of the physical media on which the identified software resides. These procedures should be defined in the SCMP.

This section should describe how adherence to the procedures will be monitored.

3.5.12 SQAP/12 Supplier control

A supplier is any external organisation that develops or provides software to the project (e.g. a subcontractor developing software for the project, or a company providing off-the-shelf commercial software).

This section should identify the standards that will be applied by suppliers.

This section should describe how adherence of the suppliers to the applicable standards will be monitored.

This section should identify the procedures that will be applied to goods supplied, such as commercial software and hardware.

This section should describe how adherence to the incoming inspections (i.e. goods-in procedures) will be monitored.

3.5.13 SQAP/13 Records collection, maintenance and retention

This section should identify the procedures kept by the project for recording activities such as meetings, reviews, walkthroughs, audits and correspondence. It should describe where the records are kept, and for how long.

This section should describe how adherence to the record-keeping procedures will be monitored.

3.5.14 SQAP/14 Training

This section should identify any training programmes defined for the project staff and explain how SQA will check that they have been implemented.

3.5.15 SQAP/15 Risk Management

This section should identify the risk management procedures used in the project (which should be described in the SPMP).

This section should describe how adherence to the risk management procedures will be monitored.

3.5.16 SQAP/16 Outline of the rest of the project

This section should be included in the SQAP/SR to provide an overview of SQA activities in the AD, DD and TR phases.

3.5.17 SQAP/APPENDIX A Glossary

This section should provide the definitions of all terms, acronyms and abbreviations used in the plan, or refer to other documents where the definitions can be found.

3.6 EVOLUTION

3.6.1 UR phase

By the end of the UR review, the SR phase section of the SQAP must be produced (SQAP/SR) (SQA03). The SQAP/SR must describe, in detail, the quality assurance activities to be carried out in the SR phase (SQA04). The SQAP/SR must outline the quality assurance plan for the rest of the project (SQA05).

3.6.2 SR phase

During the SR phase, the AD phase section of the SQAP must be produced (SQAP/AD) (SQA06). The SQAP/AD must cover in detail all the quality assurance activities to be carried out in the AD phase (SQA07).

In the SR phase, the SRD should be analysed to extract any constraints that relate to software quality assurance (e.g. standards and documentation requirements).

3.6.3 AD phase

During the AD phase, the DD phase section of the SQAP must be produced (SQAP/DD) (SQA08). The SQAP/DD must cover in detail all the quality assurance activities to be carried out in the DD phase (SQA09).

3.6.4 DD phase

During the DD phase, the TR phase section of the SQAP must be produced (SQAP/TR) (SQA10). The SQAP/TR must cover in detail all the quality assurance activities to be carried out from the start of the TR phase until final acceptance in the OM phase (SQA11).

APPENDIX A
GLOSSARY

A.1 LIST OF ACRONYMS

AD      Architectural Design
AD/R    Architectural Design Review
ADD     Architectural Design Document
ANSI    American National Standards Institute
AT      Acceptance Test
BSSC    Board for Software Standardisation and Control
CASE    Computer-Aided Software Engineering
CMFA    Common Mode Failure Analysis
DCR     Document Change Record
DD      Detailed Design and production
DD/R    Detailed Design and production Review
DDD     Detailed Design and production Document
DSS     Document Status Sheet
ESA     European Space Agency
FMECA   Failure Modes, Effects and Criticality Analysis
FTA     Fault Tree Analysis
IEEE    Institute of Electrical and Electronics Engineers
IT      Integration Test
MTBF    Mean Time Between Failures
MTTR    Mean Time To Repair
PA      Product Assurance
PSS     Procedures, Specifications and Standards
QA      Quality Assurance
RID     Review Item Discrepancy
SCM     Software Configuration Management
SCMP    Software Configuration Management Plan
SCR     Software Change Request
SMR     Software Modification Report
SPM     Software Project Management
SPMP    Software Project Management Plan
SPR     Software Problem Report
SQA     Software Quality Assurance
SQAP    Software Quality Assurance Plan
SR      Software Requirements
SR/R    Software Requirements Review

SRD     Software Requirements Document
ST      System Test
STD     Software Transfer Document
SVVP    Software Verification and Validation Plan
UR      User Requirements
UR/R    User Requirements Review
URD     User Requirements Document
UT      Unit Tests

APPENDIX B
REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0, Issue 2, February 1991.

2. IEEE Standard Glossary of Software Engineering Terminology, ANSI/IEEE Std 610.12-1990.

3. IEEE Standard for Software Quality Assurance Plans, ANSI/IEEE Std 730-1989.

4. IEEE Guide for Software Quality Assurance Planning, ANSI/IEEE Std 983-1986.

5. Application of Metrics in Industry Handbook, AMI Project, Centre for Systems and Software Engineering, South Bank University, London, UK, 1992.

6. Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation, A.J. Albrecht and J.E. Gaffney, IEEE Transactions on Software Engineering, Vol. SE-9, No. 6, November 1983.

7. Structured Testing: A Software Testing Methodology Using the Cyclomatic Complexity Metric, T.J. McCabe, NBS Special Publication 500-99, 1982.

8. Software Engineering: Design, Reliability and Management, M.L. Shooman, McGraw-Hill, 1983.

9. Software Product Assurance Requirements for ESA Space Systems, ESA PSS-01-21, Issue 2, April 1991.

10. Guide to the Software Engineering Standards, ESA PSS-05-01, Issue 1, October 1991.

11. Guide to the Software Architectural Design Phase, ESA PSS-05-04, Issue 1, January 1992.


APPENDIX C
MANDATORY PRACTICES

This appendix is repeated from ESA PSS-05-0, appendix D.12

SQA01 An SQAP shall be produced by each contractor developing software.

SQA02 All software quality assurance activities shall be documented in the Software Quality Assurance Plan (SQAP).

SQA03 By the end of the UR review, the SR phase section of the SQAP shall be produced (SQAP/SR).

SQA04 The SQAP/SR shall describe, in detail, the quality assurance activities to be carried out in the SR phase.

SQA05 The SQAP/SR shall outline the quality assurance plan for the rest of the project.

SQA06 During the SR phase, the AD phase section of the SQAP shall be produced (SQAP/AD).

SQA07 The SQAP/AD shall cover in detail all the quality assurance activities to be carried out in the AD phase.

SQA08 During the AD phase, the DD phase section of the SQAP shall be produced (SQAP/DD).

SQA09 The SQAP/DD shall cover in detail all the quality assurance activities to be carried out in the DD phase.

SQA10 During the DD phase, the TR phase section of the SQAP shall be produced (SQAP/TR).

SQA11 The SQAP/TR shall cover in detail all the quality assurance activities to be carried out from the start of the TR phase until final acceptance in the OM phase.


BSSC(96)2 Issue 1
May 1996

european space agency / agence spatiale européenne
8-10, rue Mario-Nikis, 75738 PARIS CEDEX, France

Guide to applying the ESA software engineering standards to small software projects

Prepared by: ESA Board for Software Standardisation and Control (BSSC)

DOCUMENT STATUS SHEET

1. DOCUMENT TITLE: BSSC(96)2

2. ISSUE   3. REVISION   4. DATE   5. REASON FOR CHANGE
     1          0          1996

Approved, May 8th, 1996
Board for Software Standardisation and Control
M. Jones and U. Mortensen, BSSC co-chairmen

Copyright © 1996 by European Space Agency

TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION .......................................................... 1
  1.1 PURPOSE ................................................................... 1
  1.2 OVERVIEW .................................................................. 1
CHAPTER 2 SMALL SOFTWARE PROJECTS ............................................... 3
  2.1 INTRODUCTION .............................................................. 3
  2.2 COMBINE THE SR AND AD PHASES .............................................. 4
  2.3 SIMPLIFY DOCUMENTATION .................................................... 4
  2.4 SIMPLIFY PLANS ............................................................ 4
  2.5 REDUCE THE RELIABILITY REQUIREMENTS ....................................... 5
  2.6 USE THE SYSTEM TEST SPECIFICATION FOR ACCEPTANCE TESTING .................. 6
CHAPTER 3 PRACTICES FOR SMALL SOFTWARE PROJECTS ................................. 7
  3.1 INTRODUCTION .............................................................. 7
  3.2 SOFTWARE LIFE CYCLE ....................................................... 8
  3.3 UR PHASE .................................................................. 8
  3.4 SR/AD PHASE ............................................................... 9
  3.5 DD PHASE .................................................................. 12
  3.6 TR PHASE .................................................................. 14
  3.7 OM PHASE .................................................................. 15
  3.8 SOFTWARE PROJECT MANAGEMENT ............................................... 15
  3.9 SOFTWARE CONFIGURATION MANAGEMENT ......................................... 16
  3.10 SOFTWARE VERIFICATION AND VALIDATION ..................................... 19
  3.11 SOFTWARE QUALITY ASSURANCE ............................................... 21
APPENDIX A GLOSSARY ............................................................. A-1
APPENDIX B REFERENCES ........................................................... B-1
APPENDIX C DOCUMENT TEMPLATES ................................................... C-1

PREFACE

The ESA Software Engineering Standards, ESA PSS-05-0, define the software practices that must be applied in all the Agency's projects. While their application in large projects is quite straightforward, experience has shown that a simplified approach is appropriate for small software projects. This document provides guidelines about how to apply the ESA Software Engineering Standards to small software projects. The guide has accordingly been given the nickname of 'PSS-05 lite'.

The following past and present BSSC members have contributed to the production of this guide: Michael Jones (co-chairman), Uffe Mortensen (co-chairman), Gianfranco Alvisi, Bryan Melton, Daniel de Pablo, Adriaan Scheffer and Jacques Marcoux. The BSSC wishes to thank Jon Fairclough for drafting and editing the guide. The authors wish to thank all the people who contributed ideas about applying ESA PSS-05-0 to small projects.

Requests for clarifications, change proposals or any other comment concerning this guide should be addressed to:

BSSC/ESOC Secretariat          BSSC/ESTEC Secretariat
Attention of Mr M Jones        Attention of Mr U Mortensen
ESOC                           ESTEC
Robert Bosch Strasse 5         Postbus 299
D-64293 Darmstadt              NL-2200 AG Noordwijk
Germany                        The Netherlands

CHAPTER 1
INTRODUCTION

1.1 PURPOSE

ESA PSS-05-0 describes the software engineering standards to be applied for all deliverable software implemented for the European Space Agency (ESA) [Ref 1].

This document has been produced to provide organisations and software project managers with guidelines for the application of the standards to small software projects. Additional guidelines on the application of the standards are provided in the series of documents described in ESA PSS-05-01, 'Guide to the Software Engineering Standards' [Ref 2 to 12].

There are several criteria for deciding whether a software project is small, and therefore whether this guide can be applied. These are discussed in Chapter 2. Whatever the result of the evaluation, this guide should not be applied when the software is 'critical' (e.g. software failure would result in the loss of life, or loss of property, or major inconvenience to users) or when ESA PSS-01-21, 'Software Product Assurance Requirements for ESA Space Systems', applies.

1.2 OVERVIEW

Chapter 2 defines what is meant by a small software project and discusses some simple strategies for applying ESA PSS-05-0 to them. Chapter 3 explains how the mandatory practices of ESA PSS-05-0 should be applied in a small software project. Appendix A contains a glossary of acronyms and abbreviations. Appendix B contains a list of references. Appendix C contains simplified document templates.


CHAPTER 2
SMALL SOFTWARE PROJECTS

2.1 INTRODUCTION

ESA PSS-05-0 lists several factors to consider when applying it to a software project (see Introduction, Section 4.1). The factors that are related to the 'size' of a software development project are:
• project development cost
• number of people required to develop the software
• amount of software to be produced.

A software project can be considered to be small if one or more of the following criteria apply:
• less than two man years of development effort is needed
• a single development team of five people or less is required
• the amount of source code is less than 10000 lines, excluding comments.

One or more of the following strategies are often suitable for small projects producing non-critical software:
• combine the software requirements and architectural design phases
• simplify documentation
• simplify plans
• reduce the reliability requirements
• use the system test specification for acceptance testing.

The following sections review these strategies.

2.2 COMBINE THE SR AND AD PHASES

ESA PSS-05-0 requires that software requirements definition and architectural design be performed in separate phases. These phases end with formal reviews of the Software Requirements Document and the Architectural Design Document. Each of these reviews normally involves the user, and can last from two weeks to a month. For a small software project, separate reviews of the SRD and ADD lengthen the project significantly. An efficient way to organise the project is therefore to:
• combine the SR and AD phases into a single SR/AD phase
• combine the SR/R and AD/R into a single formal review at the end of the SR/AD phase.

2.3 SIMPLIFY DOCUMENTATION

The document templates provided in ESA PSS-05-0 are based upon ANSI/IEEE standards, and are designed to cover the documentation requirements of all projects. Developers should use the simplified document templates provided in Appendix C.

Developers should combine the SRD and ADD into a single Software Specification Document (SSD) when the SR and AD phases are combined.

Developers should document the detailed design by putting most detailed design information into the source code, and extending the SSD to contain any detailed design information that cannot be located in the source code (e.g. information about the software structure).

The production of a Project History Document is considered to be good practice, but its delivery is optional.

2.4 SIMPLIFY PLANS

In small software projects it is highly desirable to plan all phases at the start of the project. This means that the phase sections of the Software Project Management Plan, Software Configuration Management Plan and Software Verification and Validation Plan are combined. The plans may be generated when writing the proposal.

The purpose of the software quality assurance function in a project is to check that the project is adhering to standards and plans. In a small software project this is normally done informally by the project manager. The practice of producing a Software Quality Assurance Plan may be waived.

ESA PSS-05-0 requires that the test approach be outlined in the test plan and refined in the test design. In small software projects, it is sufficient to outline the test approach only. This means that the Test Design sections of the SVVP can be omitted.

Developers may find it convenient to combine the technical process section of the Software Project Management Plan, the Software Configuration Management Plan, the Software Verification and Validation Plan and the Software Quality Assurance Plan into a single document (called a 'Quality Plan' in ISO 9000 terminology).

2.5 REDUCE THE RELIABILITY REQUIREMENTS

The reliability requirements on software should be set by trading off the cost of correcting defects during development against the cost of:
• correcting the defects during operations
• penalties resulting from failure during operation
• loss of reputation suffered by producing a faulty product.

Reliability is built into software by:
• designing the software to be reliable
• reviewing documents and code
• testing the code.

ESA PSS-05-0 requires that every statement be executed at least once. This is sometimes known as the 'statement coverage requirement'. There is good evidence that significant numbers of defects remain in the software when the statement coverage falls below 70% [Ref 14], and that the number of defects becomes very low when the statement coverage exceeds 90%. However, achieving high coverage in testing can be quite expensive in effort; test cases have to be invented to force execution of all the statements. The cost of this exhaustive testing may exceed the cost of repairing the defects during operations.

Developers should:
• set a statement testing target (e.g. 80% coverage)
• review every statement not covered in testing.

Software tools are available for measuring test coverage. They should be used whenever possible.
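
As one modern illustration (the guide does not prescribe any particular tool, and the module and function names below are hypothetical), statement coverage of Python code can be measured with the 'coverage' package and compared against the suggested 80% target:

    # Illustrative sketch: measure statement coverage against the 80%
    # target suggested above, using the Python 'coverage' package.
    import coverage

    cov = coverage.Coverage()
    cov.start()

    import my_module            # hypothetical module under test
    my_module.run_unit_tests()  # hypothetical test entry point

    cov.stop()
    percent = cov.report()      # prints per-file figures, returns total %
    if percent < 80.0:
        print("Below target: review every statement not covered")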

2.6 USE THE SYSTEM TEST SPECIFICATION FOR ACCEPTANCE TESTING

When the developer is responsible for producing the acceptance test specification, the acceptance tests often repeat selected system test cases and procedures. One approach to documenting acceptance tests is simply to indicate in the system test specification which test cases and procedures should be used in acceptance testing.

CHAPTER 3 PRACTICES FOR SMALL SOFTWARE PROJECTS

3.1 INTRODUCTION

Figure 3.1 illustrates a small software project life cycle approach with the following features:
• the software requirements and architectural design phases have been combined
• the simplified document templates described in Appendix C are used
• detailed design information has been included in the source code
• plans have been combined
• reliability requirements have been reduced
• no test design documentation is produced
• selected system tests are used for acceptance testing
• all SQA practices are not applicable
• the Project History Document is not a required deliverable.

[Figure 3.1 (diagram): the life cycle runs through the UR, SR/AD, DD, TR and OM phases. The SPMP, SCMP and SVVP (reviews and tracing, AT plan, ST plan, IT plan, ST specification) are prepared alongside the phases, and the URD, SSD, code, SUM and STD are produced from informal inputs to make up the software system.]

Figure 3.1: Small software project life cycle

The following sections describe the small software project mandatory practices. The practices are tabulated with their ESA PSS-05-0 identifier. Guidance (in italics) is attached where appropriate to explain how the practice is applied in a small project. This guidance is based upon the strategies described in Chapter 2.

3.2 SOFTWARE LIFE CYCLE

SLC01 The products of a software development project shall be delivered in a timely manner and be fit for their purpose.

SLC02 Software development activities shall be systematically planned and carried out.

SLC03 All software projects shall have a life cycle approach which includes the basic phases shown in Part 1, Figure 1.1 (of PSS-05):
  UR phase - Definition of the user requirements
  SR phase - Definition of the software requirements
  AD phase - Definition of the architectural design
  DD phase - Detailed design and production of the code
  TR phase - Transfer of the software to operations
  OM phase - Operations and maintenance
In small projects, the SR and AD phases are combined into an SR/AD phase.

3.3 UR PHASE

UR01 The definition of the user requirements shall be the responsibility of the user.

UR02 Each user requirement shall include an identifier.

UR03 Essential user requirements shall be marked as such.

UR04 For incremental delivery, each user requirement shall include a measure of priority so that the developer can decide the production schedule.

UR05 The source of each user requirement shall be stated.

UR06 Each user requirement shall be verifiable.

UR07 The user shall describe the consequences of losses of availability, or breaches of security, so that developers can fully appreciate the criticality of each function.

UR08 The outputs of the UR phase shall be formally reviewed during the User Requirements Review.

UR09 Non-applicable user requirements shall be clearly flagged in the URD.

UR10 An output of the UR phase shall be the User Requirements Document (URD).

UR11 The URD shall always be produced before a software project is started.

UR12 The URD shall provide a general description of what the user expects the software to do.

UR13 All known user requirements shall be included in the URD.

UR14 The URD shall describe the operations the user wants to perform with the software system.

UR15 The URD shall define all the constraints that the user wishes to impose upon any solution.

UR16 The URD shall describe the external interfaces to the software system or reference them in ICDs that exist or are to be written.

3.4 SR/AD PHASE

SR01 SR phase activities shall be carried out according to the plans defined in the UR phase.

SR02 The developer shall construct an implementation-independent model of what is needed by the user.

SR03 A recognised method for software requirements analysis shall be adopted and applied consistently in the SR phase.
This practice applies to the SR/AD phase.

SR04 Each software requirement shall include an identifier.

SR05 Essential software requirements shall be marked as such.

SR06 For incremental delivery, each software requirement shall include a measure of priority so that the developer can decide the production schedule.

SR07 References that trace software requirements back to the URD shall accompany each software requirement.

SR08 Each software requirement shall be verifiable.

SR09 The outputs of the SR phase shall be formally reviewed during the Software Requirements Review.

The outputs of software requirements definition activities are reviewed in the SR/AD phase review.

SR10 An output of the SR phase shall be the Software Requirements Document (SRD).
This practice applies to the SR/AD phase. The SRD information is placed in the Software Specification Document (SSD).

SR11 The SRD shall be complete.
This practice applies to the SSD.

SR12 The SRD shall cover all the requirements stated in the URD.
This practice applies to the SSD.

SR13 A table showing how user requirements correspond to software requirements shall be placed in the SRD.
This practice applies to the SSD.

SR14 The SRD shall be consistent.
This practice applies to the SSD.

SR15 The SRD shall not include implementation details or terminology, unless it has to be present as a constraint.
This practice applies to the SSD.

SR16 Descriptions of functions ... shall say what the software is to do, and must avoid saying how it is to be done.

SR17 The SRD shall avoid specifying the hardware or equipment, unless it is a constraint placed by the user.
This practice is not applicable.

SR18 The SRD shall be compiled according to the table of contents provided in Appendix C.
This practice applies to the SSD. The SSD should be compiled according to the table of contents in Appendix C of this guide.

AD01 AD phase activities shall be carried out according to the plans defined in the SR phase.
In small projects, the AD plans should be made in the UR phase.

AD02 A recognised method for software design shall be adopted and applied consistently in the AD phase.
This practice applies to the SR/AD phase.

Page 929: ESA Software Engineering Standards

BSSC(96)2 Issue 1 11PRACTICES FOR SMALL SOFTWARE PROJECTS

AD03 The developer shall construct a 'physical model', which describes the design of the software using implementation terminology.

AD04 The method used to decompose the software into its component parts shall permit a top-down approach.

AD05 Only the selected design approach shall be reflected in the ADD.
This practice applies to the SSD.

For each component the following information shall be detailed in the ADD:
AD06 • data input;
AD07 • functions to be performed;
AD08 • data output.
AD06, 7, 8 apply to the SSD.

AD09 Data structures that interface components shall be defined in the ADD.
This practice applies to the SSD.

Data structure definitions shall include the:
AD10 • description of each element (e.g. name, type, dimension);
AD11 • relationships between the elements (i.e. the structure);
AD12 • range of possible values of each element;
AD13 • initial values of each element.
AD10, 11, 12, 13 apply to the SSD.

AD14 The control flow between the components shall be defined in the ADD.
This practice applies to the SSD.

AD15 The computer resources (e.g. CPU speed, memory, storage, system software) needed in the development environment and the operational environment shall be estimated in the AD phase and defined in the ADD.
This practice applies to the SSD.

AD16 The outputs of the AD phase shall be formally reviewed during the Architectural Design Review.
The outputs of software architectural design activities are reviewed in the SR/AD phase review.


AD17 The ADD shall define the major components of the software and the interfaces between them.
This practice applies to the SSD.

AD18 The ADD shall define or reference all external interfaces.
This practice applies to the SSD.

AD19 The ADD shall be an output from the AD phase.
This practice applies to the SSD.

AD20 The ADD shall be complete, covering all the software requirements described in the SRD.
This practice applies to the SSD.

AD21 A table cross-referencing software requirements to parts of the architectural design shall be placed in the ADD.
This practice applies to the SSD.

AD22 The ADD shall be consistent.
This practice applies to the SSD.

AD23 The ADD shall be sufficiently detailed to allow the project leader to draw up a detailed implementation plan and to control the overall project during the remaining development phases.
This practice applies to the SSD.

AD24 The ADD shall be compiled according to the table of contents provided in Appendix C.
This practice applies to the SSD. The SSD should be compiled according to the table of contents in Appendix C of this guide.

3.5 DD PHASE

DD01 DD phase activities shall be carried out according to the plans defined in the AD phase.
DD phase plans are contained in the plans made in the UR phase and updated, as appropriate, during the project.

The detailed design and production of software shall be based on the following three principles:

DD02 • top-down decomposition;
DD03 • structured programming;
DD04 • concurrent production and documentation.
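As a minimal illustration of DD03, the following C function uses only the structured constructs of sequence, selection and iteration, with a single entry and a single exit (the function itself is invented and is not taken from the Standards):

    int count_negatives(const int *data, int n)
    {
        int i, count = 0;
        for (i = 0; i < n; i++) {   /* iteration */
            if (data[i] < 0) {      /* selection */
                count++;            /* sequence */
            }
        }
        return count;               /* single exit point */
    }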

Page 931: ESA Software Engineering Standards

BSSC(96)2 Issue 1 13PRACTICES FOR SMALL SOFTWARE PROJECTS

DD05 The integration process shall be controlled by the software configuration management procedures defined in the SCMP.

DD06 Before a module can be accepted, every statement in a module shall be executed successfully at least once.
Acceptance should be performed by the project manager after unit testing and by the customer at the end of the phase.

Software projects should:

- set a statement testing target (e.g. 80% coverage)

- review every statement not covered in testing.

Software tools are available for measuring test coverage. They should be used whenever possible.
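As a sketch of what DD06 requires at unit level, the test driver below executes every statement of a small module at least once; the module and its test values are invented. A coverage tool (for example gcov, when the code is compiled with GCC's --coverage option) can then confirm the figure:

    #include <assert.h>

    /* module under test (hypothetical) */
    static int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;   /* statement A */
        if (value > hi) return hi;   /* statement B */
        return value;                /* statement C */
    }

    int main(void)
    {
        assert(clamp(-5, 0, 10) ==  0);  /* executes statement A */
        assert(clamp(15, 0, 10) == 10);  /* executes statement B */
        assert(clamp( 5, 0, 10) ==  5);  /* executes statement C */
        return 0;
    }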

DD07 Integration testing shall check that all the data exchanged across an interface agrees with the data structure specifications in the ADD.
This practice applies to the SSD.

DD08 Integration testing shall confirm that the control flows defined in the ADD have been implemented.
This practice applies to the SSD.

DD09 System testing shall verify compliance with system objectives, as stated in the SRD.
This practice applies to the SSD.

DD10 When the design of a major component is finished, a critical design review shall be convened to certify its readiness for implementation.

DD11 After production, the DD Review (DD/R) shall consider the results of the verification activities and decide whether to transfer the software.

DD12 All deliverable code shall be identified in a configuration item list.
DD13 The DDD shall be an output of the DD phase.

Not applicable. Detailed design information is placed in the SSD and source code.

DD14 Part 2 of the DDD shall have the same structure and identification scheme as the code itself, with a 1:1 correspondence between sections of the documentation and the software components.
Not applicable.


DD15 The DDD shall be complete, accounting for all the software requirements in the SRD.
This practice means that all requirements in the SSD must be implemented in the code.

DD16 A table cross-referencing software requirements to the detailed design components shall be placed in the DDD.
The traceability matrix in the SSD must be updated instead.

DD17 A Software User Manual (SUM) shall be an output of the DD phase.

3.6 TR PHASE

TR01 Representatives of users and operations personnel shall participate in acceptance tests.
TR02 The Software Review Board (SRB) shall review the software’s performance in the acceptance tests and recommend, to the initiator, whether the software can be provisionally accepted or not.

TR03 TR phase activities shall be carried out according to the plans defined in the DD phase.
TR phase plans are established in the UR phase and updated as appropriate.

TR04 The capability of building the system from the components that are directly modifiable by the maintenance team shall be established.

TR05 Acceptance tests necessary for provisional acceptance shall be indicated in the SVVP.

TR06 The statement of provisional acceptance shall be produced by the initiator, on behalf of the users, and sent to the developer.

TR07 The provisionally accepted software system shall consist of the outputs of all previous phases and modifications found necessary in the TR phase.

TR08 An output of the TR phase shall be the STD.
TR09 The STD shall be handed over from the developer to the maintenance organisation at provisional acceptance.
TR10 The STD shall contain the summary of the acceptance test reports, and all documentation about software changes performed during the TR phase.


3.7 OM PHASE

OM01 Until final acceptance, OM phase activities that involve the developer shall be carried out according to the plans defined in the SPMP/TR.

OM02 All the acceptance tests shall have been successfully completed before the software is finally accepted.

OM03 Even when no contractor is involved, there shall be a final acceptance milestone to arrange the formal hand-over from software development to maintenance.

OM04 A maintenance organisation shall be designated for every software product in operational use.

OM05 Procedures for software modification shall be defined.
OM06 Consistency between code and documentation shall be maintained.
OM07 Resources shall be assigned to a product’s maintenance until it is retired.
OM08 The SRB ... shall authorise all modifications to the software.
OM09 The statement of final acceptance shall be produced by the initiator, on behalf of the users, and sent to the developer.
OM10 The PHD shall be delivered to the initiator after final acceptance.

Not applicable. However, developers are encouraged to produce a PHD for internal use.

3.8 SOFTWARE PROJECT MANAGEMENT

SPM01 All software project management activities shall be documented in the Software Project Management Plan (SPMP).
SPM02 By the end of the UR review, the SR phase section of the SPMP shall be produced (SPMP/SR).
The SPMP does not have phase sections: there is a single plan for all the SR/AD, DD and TR phases.

SPM03 The SPMP/SR shall outline a plan for the whole project.
This practice applies to the SPMP.

SPM04 A precise estimate of the effort involved in the SR phase shall be included in the SPMP/SR.
This practice applies to the SPMP.


SPM05 During the SR phase, the AD phase section of the SPMP shall be produced (SPMP/AD).
Not applicable.

SPM06 An estimate of the total project cost shall be included in the SPMP/AD.
This practice applies to the SPMP.

SPM07 A precise estimate of the effort involved in the AD phase shall be included in the SPMP/AD.
This practice applies to the SPMP.

SPM08 During the AD phase, the DD phase section of the SPMP shall be produced (SPMP/DD).
Not applicable. However, the SPMP should be updated when the SSD has been produced.

SPM09 An estimate of the total project cost shall be included in the SPMP/DD.
This practice applies to the SPMP.

SPM10 The SPMP/DD shall contain a WBS that is directly related to the decomposition of the software into components.
This practice applies to the SPMP.

SPM11 The SPMP/DD shall contain a planning network showing relationships of coding, integration and testing activities.
A bar chart (i.e. Gantt chart) should be used to show these relationships.

SPM12 No software production work packages in the SPMP/DD shall last longer than 1 man-month.
This practice applies to the SPMP.

SPM13 During the DD phase, the TR phase section of the SPMP shall be produced (SPMP/TR).
This practice applies to the SPMP.

3.9 SOFTWARE CONFIGURATION MANAGEMENT

SCM01 All software items, for example documentation, source code, object or relocatable code, executable code, files, tools, test software and data, shall be subjected to configuration management procedures.


SCM02 The configuration management procedures shall establish methods for identifying, storing and changing software items through development, integration and transfer.

SCM03 A common set of configuration management procedures shall be used.

Every configuration item shall have an identifier that distinguishes it from other items with different:
SCM04 • requirements, especially functionality and interfaces;
SCM05 • implementation.

SCM06 Each component defined in the design process shall be designated as a CI and include an identifier.
SCM07 The identifier shall include a number or a name related to the purpose of the CI.
SCM08 The identifier shall include an indication of the type of processing the CI is intended for (e.g. filetype information).
SCM09 The identifier of a CI shall include a version number.
SCM10 The identifier of documents shall include an issue number and a revision number.
SCM11 The configuration identification method shall be capable of accommodating new CIs, without requiring the modification of the identifiers of any existing CIs.
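For illustration, an identification convention meeting SCM07 to SCM10 might yield identifiers such as the following (the project prefix and names are invented):

    TMDECOM_READER.C   v1.2             code CI: purpose-related name, filetype, version
    TMDECOM_SUM        Issue 2, Rev 0   document CI: issue and revision numbers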

SCM12 In the TR phase, a list of configuration items in the first release shall be included in the STD.

SCM13 In the OM phase, a list of changed configuration items shall be included in each Software Release Note (SRN).

SCM14 An SRN shall accompany each release made in the OM phase.

As part of the configuration identification method, a software module shall have a standard header that includes:
SCM15 • configuration item identifier (name, type, version);
SCM16 • original author;
SCM17 • creation date;
SCM18 • change history (version/date/author/description).
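For example, a header meeting SCM15 to SCM18 could take the following form in C (a sketch only; all names, dates and history entries are invented):

    /*
     * CI identifier:   TMDECOM_READER.C  v1.2   (name, type, version)
     * Original author: A. Author
     * Creation date:   1996-03-01
     * Change history:  1.0 / 1996-03-01 / A. Author / first issue
     *                  1.1 / 1996-04-15 / B. Author / fix for SPR-0007
     *                  1.2 / 1996-05-02 / A. Author / new decode option
     */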

All documentation and storage media shall be clearly labelled in a standard format, with at least the following data:


SCM19 • project name;
SCM20 • configuration item identifier (name, type, version);
SCM21 • date;
SCM22 • content description.

To ensure security and control of the software, at a minimum, the following software libraries shall be implemented for storing all the deliverable components (e.g. documentation, source and executable code, test files, command procedures):

SCM23 • Development (or Dynamic) library;
SCM24 • Master (or Controlled) library;
SCM25 • Static (or Archive) library.

SCM26 Static libraries shall not be modified.
SCM27 Up-to-date security copies of master and static libraries shall always be available.
SCM28 Procedures for the regular backup of development libraries shall be established.
SCM29 The change procedure described (in Part 2, Section 3.2.3.2.1) shall be observed when changes are needed to a delivered document.
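One possible realisation of the three libraries in SCM23 to SCM28 is a set of directories per project (the layout and names below are invented, not prescribed):

    project/dev/      Development (Dynamic) library: work in progress; backed up regularly (SCM28)
    project/master/   Master (Controlled) library: items that passed review; security copies kept (SCM27)
    project/static/   Static (Archive) library: released baselines; never modified (SCM26)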

SCM30 Software problems and change proposals shall be handled by the procedure described (in Part 2, Section 3.2.3.2.2).

SCM31 The status of all configuration items shall be recorded.

To perform software status accounting, each software project shall record:
SCM32 • the date and version/issue of each baseline;
SCM33 • the date and status of each RID and DCR;
SCM34 • the date and status of each SPR, SCR and SMR;
SCM35 • a summary description of each Configuration Item.

SCM36 As a minimum, the SRN shall record the faults that have been repaired and the new requirements that have been incorporated.
SCM37 For each release, documentation and code shall be consistent.
SCM38 Old releases shall be retained, for reference.
SCM39 Modified software shall be retested before release.


SCM40 All software configuration management activities shall be documented in the Software Configuration Management Plan (SCMP).

SCM41 Configuration management procedures shall be in place before the production of software (code and documentation) starts.

SCM42 By the end of the UR review, the SR phase section of the SCMP shall be produced (SCMP/SR).
This practice applies to the SCMP.

SCM43 The SCMP/SR shall cover the configuration management procedures for documentation, and any CASE tool outputs or prototype code, to be produced in the SR phase.
This practice applies to the SCMP.

SCM44 During the SR phase, the AD phase section of the SCMP shall be produced (SCMP/AD).
Not applicable.

SCM45 The SCMP/AD shall cover the configuration management procedures for documentation, and CASE tool outputs or prototype code, to be produced in the AD phase.
This practice applies to the SCMP.

SCM46 During the AD phase, the DD phase section of the SCMP shall be produced (SCMP/DD).
Not applicable.

SCM47 The SCMP/DD shall cover the configuration management procedures for documentation, deliverable code, and any CASE tool outputs or prototype code, to be produced in the DD phase.
This practice applies to the SCMP.

SCM48 During the DD phase, the TR phase section of the SCMP shall be produced (SCMP/TR).
Not applicable.

SCM49 The SCMP/TR shall cover the procedures for the configuration management of the deliverables in the operational environment.
Not applicable.

3.10 SOFTWARE VERIFICATION AND VALIDATION

SVV01 Forwards traceability requires that each input to a phase shall be traceable to an output of that phase.


SVV02 Backwards traceability requires that each output of a phase shall be traceable to an input to that phase.

SVV03 Functional and physical audits shall be performed before the release of the software.

SVV04 All software verification and validation activities shall be documented in the Software Verification and Validation Plan (SVVP).

The SVVP shall ensure that the verification activities:
SVV05 • are appropriate for the degree of criticality of the software;
SVV06 • meet the verification and acceptance testing requirements (stated in the SRD);
SVV07 • verify that the product will meet the quality, reliability, maintainability and safety requirements ...;
SVV08 • are sufficient to assure the quality of the product.

SVV09 By the end of the UR review, the SR phase section of the SVVP shall be produced (SVVP/SR).
This practice applies to the SVVP.

SVV10 The SVVP/SR shall define how to trace user requirements to software requirements, so that each software requirement can be justified.
This practice applies to the SVVP.

SVV11 The developer shall construct an acceptance test plan in the UR phase and document it in the SVVP.

SVV12 During the SR phase, the AD phase section of the SVVP shall be produced (SVVP/AD).
Not applicable.

SVV13 The SVVP/AD shall define how to trace software requirements to components, so that each software component can be justified.
This practice applies to the SVVP.

SVV14 The developer shall construct a system test plan in the SR phase and document it in the SVVP.

SVV15 During the AD phase, the DD phase section of the SVVP shall be produced (SVVP/DD).
Not applicable.


SVV16 The SVVP/DD shall describe how the DDD and code are to be evaluated by defining the review and traceability procedures.
This practice applies to the SVVP.

SVV17 The developer shall construct an integration test plan in the AD phase and document it in the SVVP.
This is done in the SR/AD phase.

SVV18 The developer shall construct a unit test plan in the DD phase and document it in the SVVP.

SVV19 The unit, integration, system and acceptance test designs shall be described in the SVVP.
Not applicable.

SVV20 The unit, integration, system and acceptance test cases shall be described in the SVVP.

SVV21 The unit, integration, system and acceptance test procedures shall be described in the SVVP.

SVV22 The unit, integration, system and acceptance test reports shall be described in the SVVP.

3.11 SOFTWARE QUALITY ASSURANCE

SQA01 An SQAP shall be produced by each contractor developing software.
Not applicable.

SQA02 All software quality assurance activities shall be documented in the Software Quality Assurance Plan (SQAP).
Not applicable.

SQA03 By the end of the UR review, the SR phase section of the SQAP shall be produced (SQAP/SR).
Not applicable.

SQA04 The SQAP/SR shall describe, in detail, the quality assurance activities to be carried out in the SR phase.
Not applicable.

SQA05 The SQAP/SR shall outline the quality assurance plan for the rest of the project.
Not applicable.


SQA06 During the SR phase, the AD phase section of the SQAP shall be produced (SQAP/AD).
Not applicable.

SQA07 The SQAP/AD shall cover in detail all the quality assurance activities to be carried out in the AD phase.
Not applicable.

SQA08 During the AD phase, the DD phase section of the SQAP shall be produced (SQAP/DD).
Not applicable.

SQA09 The SQAP/DD shall cover in detail all the quality assurance activities to be carried out in the DD phase.
Not applicable.

SQA10 During the DD phase, the TR phase section of the SQAP shall be produced (SQAP/TR).
Not applicable.

SQA11 The SQAP/TR shall cover in detail all the quality assurance activities to be carried out from the start of the TR phase until final acceptance in the OM phase.
Not applicable.


APPENDIX A
GLOSSARY

A.1 LIST OF ACRONYMS AND ABBREVIATIONS

AD    Architectural Design
ADD   Architectural Design Document
ANSI  American National Standards Institute
AT    Acceptance Test
DD    Detailed Design and production
DDD   Detailed Design Document
ESA   European Space Agency
IEEE  Institute of Electrical and Electronics Engineers
IT    Integration Test
PHD   Project History Document
PSS   Procedures, Standards and Specifications (1)
SCMP  Software Configuration Management Plan
SPMP  Software Project Management Plan
SQAP  Software Quality Assurance Plan
SR    Software Requirements
SRD   Software Requirements Document
SSD   Software Specification Document
ST    System Test
STD   Software Transfer Document
SUM   Software User Manual
SVVP  Software Verification and Validation Plan
URD   User Requirements Document
UT    Unit Test
WBS   Work Breakdown Structure

(1) Not ‘Programming Saturdays and Sundays’



APPENDIX B
REFERENCES

1. ESA Software Engineering Standards, ESA PSS-05-0, Issue 2, February 1991.

2. Guide to the ESA Software Engineering Standards, ESA PSS-05-01, Issue 1, October 1991.

3. Guide to the User Requirements Definition Phase, ESA PSS-05-02, Issue 1, October 1991.

4. Guide to the Software Requirements Definition Phase, ESA PSS-05-03, Issue 1, October 1991.

5. Guide to the Software Architectural Design Phase, ESA PSS-05-04, Issue 1, January 1992.

6. Guide to the Software Detailed Design and Production Phase, ESA PSS-05-05, Issue 1, May 1992.

7. Guide to the Software Transfer Phase, ESA PSS-05-06, Issue 1, October 1994.

8. Guide to the Software Operations and Maintenance Phase, ESA PSS-05-07, Issue 1, December 1994.

9. Guide to Software Project Management, ESA PSS-05-08, Issue 1, June 1994.

10. Guide to Software Configuration Management, ESA PSS-05-09, Issue 1, November 1992.

11. Guide to Software Verification and Validation, ESA PSS-05-10, Issue 1, February 1994.

12. Guide to Software Quality Assurance, ESA PSS-05-11, Issue 1, July 1993.

13. Software Engineering Economics, B. Boehm, Prentice-Hall, 1981.

14. Achieving Software Quality with Testing Coverage Measures, J. Horgan, S. London and M. Lyu, Computer, IEEE, September 1994.



APPENDIX C
DOCUMENT TEMPLATES

All documents should contain the following service information:

a - Abstract
b - Table of contents
c - Document Status Sheet
d - Document Change Records made since last issue

If there is no information pertinent to a section, the section should be omitted and the sections renumbered.

Guidelines on the contents of document sections are given in italics. Section titles which are to be provided by document authors are enclosed in square brackets.


C.1 URD TABLE OF CONTENTS

1 Introduction
1.1 Purpose of the document
1.2 Definitions, acronyms and abbreviations
1.3 References
1.4 Overview of the document

2 General Description
2.1 General capabilities
Describe the process to be supported by the software.
Describe the main capabilities required and why they are needed.

2.2 General constraints
Describe the main constraints that apply and why they exist.

2.3 User characteristics
Describe who will use the software and when.

2.4 Operational environment
Describe what external systems do and their interfaces with the product.
Include a context diagram or system block diagram.

3 Specific Requirements
List the specific requirements, with attributes.
3.1 Capability requirements
3.2 Constraint requirements


C.2 SSD TABLE OF CONTENTS

1 Introduction
1.1 Purpose of the document
1.2 Definitions, acronyms and abbreviations
1.3 References
1.4 Overview of the document

2 Model description
Describe the logical model using a recognised analysis method.

3 Specific Requirements
List the specific requirements, with attributes.
Subsections may be regrouped around high-level functions.
3.1 Functional requirements
3.2 Performance requirements
3.3 Interface requirements
3.4 Operational requirements
3.5 Resource requirements
3.6 Verification requirements
3.7 Acceptance testing requirements
3.8 Documentation requirements
3.9 Security requirements
3.10 Portability requirements
3.11 Quality requirements
3.12 Reliability requirements
3.13 Maintainability requirements
3.14 Safety requirements

4 System design
4.1 Design method
Describe or reference the design method used.

4.2 Decomposition description
Describe the physical model, with diagrams.
Show the components and the control and data flow between them.


5 Component description
Describe each component.
Structure this section according to the physical model.
5.n [Component identifier]

5.n.1 Type
Say what the component is (e.g. module, file, program, etc.).

5.n.2 Function
Say what the component does.

5.n.3 Interfaces
Define the control and data flow to and from the component.

5.n.4 Dependencies
Describe the preconditions for using this component.

5.n.5 Processing
Describe the control and data flow within the component.
Outline the processing of bottom-level components.

5.n.6 Data
Define in detail the data internal to components, such as files used for interfacing major components. Otherwise give an outline description.

5.n.7 Resources
List the resources required, such as displays and printers.

6 Feasibility and Resource Estimates
Summarise the computer resources required to build, operate and maintain the software.

7 User Requirements vs Software Requirements Traceability matrix
Provide tables cross-referencing user requirements to software requirements and vice versa.

8 Software Requirements vs Components Traceability matrix
Provide tables cross-referencing software requirements to components and vice versa.


C.3 SOURCE CODE DOCUMENTATION REQUIREMENTS

Each module should contain a header that defines:

Module Name
Author
Creation Date
Change History
    Version/Date/Author/Description
Function
    Say what the component does.
Interfaces
    Define the inputs and outputs.
Dependencies
    Describe the preconditions for using this component.
Processing
    Summarise the processing using pseudo-code or a PDL.
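A header following this template might look as follows in C (a sketch only; the module, names, dates and history are invented):

    /*
     * Module Name:    tm_decom.c
     * Author:         A. Author
     * Creation Date:  1996-03-01
     * Change History: 1.0 / 1996-03-01 / A. Author / first issue
     * Function:       Decommutates raw telemetry frames into samples.
     * Interfaces:     Input: raw frame buffer. Output: array of sample records.
     * Dependencies:   The frame buffer must hold one complete frame.
     * Processing:     for each word in the frame:
     *                     extract the channel number and value
     *                     append a sample record to the output array
     */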

C.4 SUM TABLE OF CONTENTS

1 Introduction
1.1 Intended readership
Describe who should read the SUM.
1.2 Applicability statement
State which software release the SUM applies to.
1.3 Purpose
Describe the purpose of the document.
Describe the purpose of the software.
1.4 How to use this document
Say how the document is intended to be read.
1.5 Related documents
Describe the place of the SUM in the project documentation.
1.6 Conventions
Describe any stylistic and command syntax conventions used.
1.7 Problem reporting instructions
Summarise the SPR system for reporting problems.

2 [Overview section]
Describe the process to be supported by the software, what the software does to support the process, and what the user and/or operator needs to supply to the software.


3 [Installation section]
Describe the procedures needed to install the software on the target machine.

4 [Instruction section]
From the trainee’s viewpoint, for each task, provide:

(a) Functional description
What the task will achieve.

(b) Cautions and warnings
Do’s and don’ts.

(c) Procedures
Include:
Set-up and initialisation
Input operations
What results to expect

(d) Probable errors and possible causes
What to do when things go wrong.

5 [Reference section]
From the expert’s viewpoint, for each basic operation, provide:

(a) Functional description
What the operation does.

(b) Cautions and warnings
Do’s and don’ts.

(c) Formal description
Required parameters
Optional parameters
Default options
Parameter order and syntax

(d) Examples
Give worked examples of the operation.

(e) Possible error messages and causes
List the possible errors and likely causes.

(f) Cross references to other operations
Refer to any complementary, predecessor or successor operations.

Appendix A Error messages and recovery procedures
List all the error messages.

Appendix B Glossary
List all terms with specialised meanings.

Appendix C Index (for manuals of 40 pages or more)
List all important terms and their locations.


C.5 STD TABLE OF CONTENTS

1 Introduction
1.1 Purpose of the document
1.2 Definitions, acronyms and abbreviations
1.3 References
1.4 Overview of the document

2 Build Report
Describe what happened when the software was built from source code.

3 Installation Report
Describe what happened when the software was installed on the target machine.

4 Configuration Item List
List all the deliverable configuration items.

5 Acceptance Test Report Summary
For each acceptance test, give the:
user requirement identifier and summary
test report identifier in the SVVP/AT/Test Reports
test result summary

6 Software Problem Reports
List the SPRs raised during the TR phase and their status at STD issue.

7 Software Change Requests
List the SCRs raised during the TR phase and their status at STD issue.

8 Software Modification Reports
List the SMRs completed during the TR phase.


C.6 SPMP TABLE OF CONTENTS

1 Introduction
1.1 Purpose of the document
1.2 Definitions, acronyms and abbreviations
1.3 References
1.4 Overview of the document

2 Project organisation
2.1 Organisational roles and responsibilities
Describe project roles, responsibilities and reporting lines.
2.2 Organisational boundaries and interfaces
Describe the interfaces with customers and suppliers.

3 Technical Process
3.1 Project inputs
List what documents will be input to the project, when and by whom.
3.2 Project outputs
List what will be delivered, when and where.
3.3 Process model
Define the life cycle approach to the project.
3.4 Methods and tools
Describe or reference the development methods used.
Define and reference the design and production tools.
Describe the format, style and tools for documentation.
Define and reference the coding standards.
3.5 Project support functions
Summarise the procedures for SCM and SVV and identify the related procedures and plans.

4 Work Packages, Schedule, and Budget
4.1 Work packages
For each work package, describe the inputs, tasks, outputs, effort requirements, resources allocated and verification process.
4.2 Schedule
Describe when each work package will start and end (Gantt chart).
4.3 Budget
Describe the total cost of the project.


C.7 SCMP TABLE OF CONTENTS

1 Introduction
1.1 Purpose of the document
1.2 Definitions, acronyms and abbreviations
1.3 References
1.4 Overview of the document

2 SCM Activities
2.1 Configuration Identification
Describe the CI identification conventions.
For each baseline:
Describe when it will be created;
Describe what it will contain;
Describe how it will be established;
Describe the level of authority required to change it.

2.2 Configuration Item Storage
Describe the procedures for storing CIs in software libraries.
Describe the procedures for storing media containing CIs.

2.3 Configuration Item Change Control
Describe the change control procedures.

2.4 Configuration Status Accounting
Describe the procedures for keeping an audit trail of changes.

2.5 Build procedures
Describe the procedures for building the software from source.

2.6 Release procedures
Describe the procedures for the release of code and documentation.


C.8 SVVP TABLE OF CONTENTS

Reviews and tracing section

1 Introduction
1.1 Purpose of the document
1.2 Definitions, acronyms and abbreviations
1.3 References
1.4 Overview of the document

2 Reviews
Describe the inspection, walkthrough and technical review procedures.

3 Tracing
Describe how to trace phase inputs to outputs.

Unit, Integration, System and Acceptance Testing Sections

1 Introduction
1.1 Purpose of the document
1.2 Definitions, acronyms and abbreviations
1.3 References
1.4 Overview of the document

2 Test Plan
2.1 Test items
List the items to be tested.
2.2 Features to be tested
Identify the features to be tested.
2.3 Test deliverables
List the items that must be delivered before testing starts.
List the items that must be delivered when testing ends.
2.4 Testing tasks
Describe the tasks needed to prepare for and carry out the tests.
2.5 Environmental needs
Describe the properties required of the test environment.
2.6 Test case pass/fail criteria
Define the criteria for passing or failing a test case.


3 Test Case Specifications (for each test case...)
3.n.1 Test case identifier
Give a unique identifier for the test case.
3.n.2 Test items
List the items to be tested.
3.n.3 Input specifications
Describe the input for the test case.
3.n.4 Output specifications
Describe the output required from the test case.
3.n.5 Environmental needs
Describe the test environment.
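For illustration, a completed unit test case specification might read (all identifiers and values are invented):

    3.1.1 Test case identifier: UT-CLAMP-01
    3.1.2 Test items: function clamp() in module LIMIT
    3.1.3 Input specifications: value = -5, lo = 0, hi = 10
    3.1.4 Output specifications: return value 0
    3.1.5 Environmental needs: host development workstation only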

4 Test Procedures (for each test procedure...)
4.n.1 Test procedure identifier
Give a unique identifier for the test procedure.
4.n.2 Purpose
Describe the purpose of the procedure. List the test cases this procedure executes.
4.n.3 Procedure steps
Describe how to log, set up, start, proceed, measure, shut down, restart, stop and wrap up the test, and how to handle contingencies.

5 Test Report template
5.n.1 Test report identifier
Give a unique identifier for the test report.
5.n.2 Description
List the items being tested.
5.n.3 Activity and event entries
Identify the test procedure. Say when the test was done, who did it, and who witnessed it.
Say whether the software passed or failed each test case run by the procedure.
Describe the problems.
