
Introducing Structured Testing

Master's Thesis
Lund, 2008-06-16

Author: Hanna Färnstrand
Supervisors: Henrik Andersson, Sogeti; Lars Hall, Apptus Technologies; Patrik Wåhlin, Apptus Technologies
Examiner: Per Runeson, Department of Computer Science, Lund University


Abstract

Software development often runs on a tight schedule. When facing a fast approaching deadline, immature companies tend not to want to spend time on formal test processes, but would rather focus their effort on adding the required functionality to the system. Testing is often viewed as a 'necessary evil'. However, with testing costs often reaching half of the project costs, more and more software companies become aware of the advantages of structuring the test process to save money and, at the same time, lessen the risks of releasing faulty products on the market.

This Master's thesis strives to introduce a structured testing process in a company where the testing is currently performed ad hoc. A comparison of five existing test process framework models was done. Then, a summary of test process elements viewed as basic in all of the framework models was drawn up. These elements were considered necessary to introduce in the start-up phase of the process improvement. Action research was then performed through interviews, document studies, observations and questionnaires in order to assess the strong and the weak parts of the current test process, followed by improvement suggestions for the weak parts. The improvement actions were influenced by the start-up test process elements while considering the development process. The result is a set of templates, diagrams and instructions guiding the user in how to introduce some first important steps of structured software testing. Some of the improvement actions were then implemented in a chosen project and later evaluated.

The introductory test process elements in the improved test process model include recognizing testing as its own process, early access to the test basis, review of requirements for testability, early test planning, use of test techniques, test case specification and scripting, defect management and test reporting. The difficulties in the studied test process are thought to be common in immature test processes, and the improved test process elements suggested in this thesis could thus be generalized for use in other organizations.

The thesis report provides the reader with some basic understanding of the concepts of software testing. Also, existing models for test process improvement are described.


Acknowledgements

The author would like to thank the following people for their valuable support during the making of this Master's thesis:

Henrik Andersson at Sogeti, for coming up with the subject, giving me advice on test process assessment and improvement, and commenting on my work.

Per Runeson at SERG, for helping me decide which way to go, commenting on my ideas and thoughts, and reviewing my written work.

Jörgen Porath at Apptus, for providing me with insight into and information about the organization and the test process, and for letting me try out my ideas.

All the other people at Apptus: Patrik Wåhlin, Lars Hall, Louise Nygren, the developers in the Web Crawler project, and everybody else who in any way helped me in assessing and improving the test process.


Table of contents

ABSTRACT
ACKNOWLEDGEMENTS
1 INTRODUCTION
   1.1 BACKGROUND
   1.2 PURPOSE
   1.3 A PRESENTATION OF THE COMPANIES INVOLVED
      1.3.1 Sogeti
      1.3.2 Apptus Technologies
   1.4 OUTLINE
   1.5 TERMINOLOGY
2 SOFTWARE TESTING
   2.1 WHAT IS SOFTWARE TESTING?
   2.2 SOFTWARE QUALITY
   2.3 LEVELS OF TESTING
      2.3.1 Unit test
      2.3.2 Integration test
      2.3.3 System test
      2.3.4 Acceptance test
      2.3.5 Regression test
   2.4 THE RELATIONSHIP BETWEEN THE TEST PROCESS AND THE DEVELOPMENT PROCESS
   2.5 PROBLEM AREAS CONCERNING TESTING
   2.6 THE NEED FOR A STRUCTURED TESTING APPROACH
   2.7 RISKS IN INTRODUCING STRUCTURED TESTING
   2.8 BASIC TESTING TECHNIQUES
      2.8.1 Static techniques
      2.8.2 Dynamic techniques
   2.9 TEST DOCUMENTATION
      2.9.1 The test plan
      2.9.2 The test specification
      2.9.3 The test report
   2.10 WHO SHOULD DO THE TESTING?
   2.11 WHEN SHOULD TESTING START?
   2.12 WHEN SHOULD TESTING STOP?
   2.13 TEST TOOLS
3 TEST PROCESS IMPROVEMENT
   3.1 FRAMEWORK MODELS
      3.1.1 The Test Process Improvement (TPI) model
      3.1.2 The Testing Maturity Model (TMM)
      3.1.3 The Testability Maturity Model
      3.1.4 The Test Improvement Model (TIM)
      3.1.5 The Minimal Test Practice Framework (MTPF)
4 METHODOLOGY
   4.1 RESEARCH METHODS
      4.1.1 Survey
      4.1.2 Case Study
      4.1.3 Experiment
      4.1.4 Action research
   4.2 THE APPLICABLE METHOD
   4.3 DATA COLLECTION
      4.3.1 Available sources
      4.3.2 Data collection methods
      4.3.3 Drawing conclusions from collected data
      4.3.4 Validity
5 THE ACTION RESEARCH WORK OUTLINE
   5.1 ASSESSING THE TEST PROCESS
   5.2 IMPROVING THE TEST PROCESS
   5.3 THE ASSESSMENT INTERVIEW
      5.3.1 Collected data from the assessment interview
      5.3.2 Analysis of data from the assessment interview
   5.4 ARCHIVAL RECORDS
      5.4.1 Collection of data from archival records
      5.4.2 Analysis of data from archival records
   5.5 OBSERVATIONS
      5.5.1 Collection of data from observations
      5.5.2 Analysis of data from observations
   5.6 MEETING WITH THE TEST MANAGER
   5.7 EXISTING TERMS, CONCEPTS AND PRACTICES
   5.8 PRE-IMPROVEMENT QUESTIONNAIRE
      5.8.1 Collection of data from the pre-improvement questionnaire
      5.8.2 Analysis of data from the pre-improvement questionnaire
   5.9 POST-IMPROVEMENT QUESTIONNAIRES
      5.9.1 Collection of data from the post-improvement questionnaires
      5.9.2 Analysis of data from the post-improvement questionnaire
6 RESULTS
   6.1 RESULTS FROM THE RESEARCH
   6.2 IMPROVEMENT MEASURES
      6.2.1 Specific improvements in the chosen project
      6.2.2 Results from the specific improvements in the chosen project
7 CONCLUSIONS
   7.1 CONCLUSIONS
8 REFERENCES
   8.1 BOOKS
   8.2 JOURNAL ARTICLES
   8.3 CONFERENCE PAPERS
APPENDIX I – THE ASSESSMENT INTERVIEW
APPENDIX II – THE PRE-IMPROVEMENT QUESTIONNAIRE
APPENDIX III – THE CURRENT TEST PROCESS MODEL
APPENDIX IV – THE IMPROVED TEST PROCESS MODEL
APPENDIX V – A GUIDE TO TEST PLANNING
APPENDIX VI – A REQUIREMENTS REVIEW CHECKLIST
APPENDIX VII – A GUIDE TO TEST TECHNIQUES
APPENDIX VIII – A GUIDE TO TEST EVALUATION
APPENDIX IX – THE IMPROVED TEST PROCESS ASSESSMENT QUESTIONNAIRE


1 Introduction

1.1 Background

This report is the description of the work behind, and the conclusions drawn in, a Master's Thesis in the field of Information and Communication Engineering. The thesis work was performed by the author in collaboration with Sogeti, Apptus Technologies, and the Software Engineering Research Group (SERG) at the Department of Computer Science, Lund University. The subject of the thesis was proposed by Sogeti, and the research was performed on-site at Apptus Technologies. The intended target group for this report is readers with some basic knowledge of software engineering, including software development and/or software testing, but with no further expertise in the software testing area. A table explaining some words and expressions used in the report is provided at the end of chapter 1.

1.2 Purpose

The problem stated by Apptus Technologies is that a structured testing process is lacking. Testing is currently performed in an ad-hoc manner, and the results depend to a large extent on the individual tester. The company has not yet reached a point where faulty software on the market constitutes a problem, but attributes this to its success in finding very competent developers and to not having projects large enough to make structured testing necessary. As the company grows and the magnitude of its projects increases, the need for a more structured process becomes evident, since the costs of and risks related to releasing products that are not thoroughly tested will almost certainly rise.

The purpose of this Master's Thesis is to fulfil the following goals:

◊ Evaluate the current test process at Apptus Technologies. The assessment includes finding the strong and the weak parts of the test process, i.e. the elements that are in need of improvement.

◊ Create an improved test process, adapted to the needs, wishes and possibilities stated by Apptus Technologies.

◊ Implement the improved test process in a chosen project at Apptus Technologies.

Note that in this report, the current test process refers to the test process as it was at the start of the research period, whereas the improved test process refers to the new, changed process suggested as a result of the research and implemented at the end of the thesis work.

1.3 A Presentation of the Companies Involved

The thesis was performed by the author in cooperation with the companies Sogeti and Apptus Technologies.

1.3.1 Sogeti

Sogeti is Cap Gemini's local professional services division. The Swedish offices of the Sogeti Center of Test Excellence (SCTE) are located in Lund and Stockholm. SCTE focuses on providing companies with consulting in software testing-related activities such as test process improvement, test automation and test execution. Sogeti is involved in the development of the test process improvement model TPI and the renowned test method TMap (Koomen et al. 2006). The company's goal with this thesis is to find a generic method for test process improvement. Sogeti's contribution to the thesis work was to provide material, assistance and support regarding test-related questions and activities. For more information about Sogeti, the reader is referred to <www.sogeti.com>.

1.3.2 Apptus Technologies

Apptus Technologies, headquartered in Lund, Sweden, is a software company that provides database solutions for e-commerce and e-directory applications at some of the world's most heavily trafficked sites. Customers include Eniro, CDON and the Swedish Public Radio. At the time of writing, Apptus had about 60 employees, most of whom were developers. The developers worked in small project groups of up to ten people each. The company's aim with the thesis is to find and introduce a more structured test process. Apptus' contribution to the thesis work was to create the possibility for a study and assessment of their current test process and the chance to create and implement an improved test process in one of their projects. For further information regarding Apptus Technologies, the reader is referred to <www.apptus.com>.

1.4 Outline

The outline of this report is as follows:

Chapter 1 – Introduction
In the introductory chapter, the background and the purpose of the Master's Thesis are explained. The companies involved are also presented. Finally, some testing-related terms are explained to the reader.

Chapter 2 – Software Testing
In chapter 2, some fundamentals of software testing are described. The theory explained in this chapter strives to provide the reader with enough background information to understand the concepts and terms in the following chapters.

Chapter 3 – Test Process Improvement
Chapter 3 explains existing models for test process improvement. Elements from these models are used in the test process assessment in chapter 5.

Chapter 4 – Methodology
Chapter 4 addresses research methodologies and explains the methodology used in this thesis. Data collection methods are also presented.

Chapter 5 – The Action Research Work Outline
The assessment process and the data collected in it are presented in chapter 5. An analysis of the collected data, resulting in a definition of the problem areas in the current test process, is also found in this chapter.

Chapter 6 – Results
The results from the assessment of the current test process and the implementation of the new test process are described in chapter 6.

Chapter 7 – Conclusions
In chapter 7, the conclusions drawn from the thesis work are presented.

Chapter 8 – References
The references used and referred to in this report are presented in chapter 8.

Appendix I – The Assessment Interview
The questions asked in the assessment interview can be found in Appendix I.

Appendix II – The Pre-Improvement Questionnaire
The questionnaire used to assess the testing in a specific project is presented in Appendix II.

Appendix III – The Current Test Process Model
A flowchart of the current test process model, in relation to the development process, can be found in Appendix III.

Appendix IV – The Improved Test Process Model
The improvements suggested to the test process are presented in a flowchart in Appendix IV.

Appendix V – A Guide to Test Planning
In Appendix V, a guide to help testers plan for tests and complete the test plan template is found.

Appendix VI – A Requirements Review Checklist
A checklist that can be used for reviewing requirements for testability is presented in Appendix VI.

Appendix VII – A Guide to Test Techniques
A guide to choosing the appropriate test technique can be found in Appendix VII.

Appendix VIII – A Guide to Test Evaluation
In Appendix VIII, a guide on how to evaluate test results is presented.

Appendix IX – The Improved Test Process Assessment
In Appendix IX, the assessment of the implemented improved test process is presented.


1.5 Terminology

An explanation of terms used in this report, in alphabetical order:

Black-box testing – Testing without considering the inner structures of the code; viewing the item to be tested as a 'black box'.
Metrics – Quantified observations of the characteristics of a product or process.
Milestone – A tangible event that is expected to occur at a certain time in the project's lifetime, used by managers to determine project status (Burnstein 2003).
MTPF – Minimal Test Practice Framework (Karlström et al. 2005).
Risk – The chance that a failure occurs, in relation to the expected damage when the failure really happens (Koomen and Baarda 2005).
Test – A group of related test cases and test procedures (Burnstein 2003).
Testability – The ability to perform cost-effective testing on a system (Koomen and Pol 1999).
Test basis – The documentation that the testing is based on, for example the requirements, functional or technical specification.
Test case – The items that make up the basis for the execution of the test. Test cases contain, at the least, the test inputs, the execution conditions and the expected outputs.
Test environment – The components making up the environment where the test takes place; the hardware, software, procedures, means of communication, etc.
Tester – The person planning and/or executing the test cases.
Test object – The (part of the) software or information system to be tested (Koomen and Baarda 2005).
Test phase – The phase in a development project that comprises test activities.
Test plan – A framework based on the planning of the test process, defining what to test, when to test, and how to test.
Test process – The set of activities involved in testing, made up of at least three phases: test planning, test setup and preparation, and test execution (Yutao et al. 2000).
Test process improvement – Optimizing the quality, costs, and lead time of the test process in relation to the total information services (Koomen and Pol 1999).
Test technique – A structured way of defining test cases from the test basis.
Test tools – Automated aids for the test process.
Test unit – A collection of processes, transactions and/or functions that are tested collectively (Koomen and Baarda 2005).
Testware – The products of testing, such as specifications, test plans, files, etc.
TIM – Test Improvement Model (Ericson et al. 1997).
TMM – Testing Maturity Model (Burnstein 2003).
TPI – Test Process Improvement (Koomen and Pol 1999).
White-box testing – Testing based on the code or on the technical design documents, requiring knowledge of the internal structure of the system.

Table 1 – Terminology


2 Software Testing

2.1 What is software testing?

In the software development process, testing is used to show that the products work as intended or as required. Burnstein (2003, p. 7) gives two alternate descriptions of the definition of 'testing':

Testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software.

Testing can be described as a process used for revealing defects in software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes.

Koomen and Pol (1999, p. 5) use the following definition of testing:

Testing is a process of planning, preparation, and measuring aimed at establishing the characteristics of an information system and demonstrating the difference between the actual and the required status.

Software testing involves comparison between a test object and a frame of reference with which that object should comply (Koomen and Pol 1999, p. 5). Testing gives an indication of the difference between the actual status of an object or output of an action, and the desired or required status or output. If the test case shows that the actual status and the desired status are one and the same, it is considered passed. On the other hand, if the actual and the desired status are not the same, the differences provide the tester with useful information about the system.

The process of software testing in turn encompasses two processes. Burnstein (2003, p. 6) defines these as validation – the process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements; and verification – the process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. Validation includes inspecting the final product, and is done by executing the software (Koomen and Pol 1999, p. 10). In short, the validation process answers the question

Have we built the right product?

The verification process means examining intermediate products of the development phases. This is sometimes referred to as 'evaluation', and is mostly done by inspecting or reviewing documents (Koomen and Pol 1999, p. 10). Verification answers the question

Have we built the product right?


2.2 Software quality

If the purpose of testing is to establish that the software has sufficient quality, the need for a definition of 'quality' is evident. What constitutes good software quality depends on what is considered important for a specific system. Criteria to base software quality on could be:

◊ The product meets the user's expectations. This definition of software quality is very subjective and depends on who the user is and the way he or she is going to use the product. All users will probably not have the same expectations of the product. Also, the user will only judge the product on what he or she can see and experience.

◊ The product meets all of its specified requirements. Meeting all the requirements is not on its own a sufficient measure of software quality. To a large extent, this depends on the quality of the requirements. If requirements are wrong or missing, the quality of the software might be severely reduced.

◊ The product meets all of its specified quality metrics. Burnstein (2003, p. 24) describes a quality metric as a quantitative measurement of the degree to which an item possesses a given quality attribute. Examples of quality attributes are the so-called '-ilities', such as reliability, portability, usability and maintainability (Sibisi and van Waveren 2007). This is a good basis for establishing software quality, but quality attributes are often difficult to measure. Different quality attributes can also influence and counteract each other. If, for example, extra security is added to the product through a log-in feature, usability might be decreased because it will take longer and be more difficult to access the product.

2.3 Levels of testing

The process of testing is divided into different levels. These levels reach from low (unit tests) to high (system and acceptance tests), as shown in figure 1 below.

Figure 1 – Levels of testing (a unit test is run on a unit, e.g. a class; an integration test on integrated units; a system test on the system or a sub-system; an acceptance test on the finished system)


2.3.1 Unit test

A unit test is a test performed on a single unit in the system. A unit is the smallest possible testable software component (Burnstein 2003, p. 137). What is viewed as a unit depends on, for example, the development method, but it is often a single method or a class. Since the component is relatively small and can often be isolated, unit testing is usually the easiest level at which to test. In many organizations, the person performing the unit tests is also the developer of the unit, since unit testing focuses on evaluating the correctness of the code. Wohlin (2005, p. 155) states that the purpose of unit testing is to ensure that the unit works as intended. Mainly the functionality is tested, meaning that a given set of input data should generate the expected output data.
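To make the idea concrete, the following is a minimal sketch of a unit test, written here in Python with the standard unittest module (the thesis itself prescribes no language or framework). The function under test, price_with_vat, is purely hypothetical; the point is that one valid and one invalid set of input data are checked against the expected outcome.

    import unittest

    def price_with_vat(net_price, vat_rate=0.25):
        """Hypothetical unit under test: add VAT to a net price."""
        if net_price < 0:
            raise ValueError("net price must be non-negative")
        return round(net_price * (1 + vat_rate), 2)

    class PriceWithVatTest(unittest.TestCase):
        def test_valid_input_gives_expected_output(self):
            self.assertEqual(price_with_vat(100.0), 125.0)

        def test_invalid_input_is_rejected(self):
            with self.assertRaises(ValueError):
                price_with_vat(-1.0)

    if __name__ == "__main__":
        unittest.main()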

2.3.2 Integration test

Integration testing strives to ensure that the single units work together. Interfaces between units or systems are thus tested for defects. Experienced testers recognize that many defects occur at module interfaces (Burnstein 2003, p. 153). Integration tests are often iterative. They first take place between single units and then between sub-systems (or clusters in object-oriented programming), eventually building up a complete system. This can be achieved either by a top-down or a bottom-up strategy. The purpose of integration testing is to validate that the system and its interfaces work according to the specified design or system architecture (Wohlin 2005, p. 155).
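As an equally hypothetical sketch, the test below exercises the interface between two small units rather than either unit in isolation; a defect such as one unit emitting a field format the other does not expect would surface here rather than in the unit tests.

    import unittest

    def parse_order_line(line):
        """Unit A: parse 'item;quantity' into a (name, quantity) pair."""
        name, quantity = line.split(";")
        return name.strip(), int(quantity)

    def order_total(items, prices):
        """Unit B: sum quantity times unit price over all items."""
        return sum(prices[name] * quantity for name, quantity in items)

    class OrderIntegrationTest(unittest.TestCase):
        def test_parser_output_feeds_the_calculator(self):
            items = [parse_order_line(line) for line in ["apple; 2", "pear; 3"]]
            self.assertEqual(order_total(items, {"apple": 5.0, "pear": 4.0}), 22.0)

    if __name__ == "__main__":
        unittest.main()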

2.3.3 System test

A system test tests the system as a whole. The test cases in a system test are often based on the requirements in a black-box manner (see section 2.8.2). System tests do not only test functional behaviour, but also non-functional behaviour such as performance and usability. Also, system tests can detect defects stemming from hardware/software interfaces. When the software product is intended for use in systems with heavy traffic loads, such as database access systems with many simultaneous users, it is important to perform load tests. A load is a series of inputs that simulates a group of transactions (Burnstein 2003, p. 165). Load tests can be either stress tests or volume tests. A stress test tests a system with a load of transactions and data that causes it to allocate its resources in maximum amounts for a short period of time (Burnstein 2003, p. 169). The goal of stress tests is to find the circumstances under which the system will crash. They detect defects that could occur in the software's actual working environment, and which would not turn up in other tests under normal testing conditions. Volume tests aim at measuring the maximum load of data that the software can handle by feeding the system large volumes of data. The purpose of a system test is to ensure that the system performs according to its requirements (Burnstein 2003, p. 163).
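The following rough sketch illustrates the load test idea on a hypothetical handle_request entry point: a volume of simulated transactions is fired with many in flight at once, and the observed response times are compared with a budget. A real load test would use a dedicated tool and a realistic transaction mix.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request(query):
        """Stand-in for the system under test."""
        time.sleep(0.01)  # pretend to do some work
        return "result for " + query

    def timed_request(query):
        start = time.perf_counter()
        handle_request(query)
        return time.perf_counter() - start

    if __name__ == "__main__":
        queries = ["query-%d" % i for i in range(500)]        # volume of transactions
        with ThreadPoolExecutor(max_workers=50) as pool:       # many at the same time
            durations = list(pool.map(timed_request, queries))
        worst = max(durations)
        print("slowest response: %.3f s" % worst)
        assert worst < 1.0, "response time budget exceeded under load"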

2.3.4 Acceptance test

The acceptance test resembles the system test in many ways, with the exception that it is usually performed by the customer, allowing them to validate that the product meets their goals and expectations (Burnstein 2003, p. 177). The acceptance test is also based on requirements, and system test cases may be reused. When the acceptance test is approved, the system is installed at the customer's site. The approval of the acceptance test often marks the end of the software development project. If there are no customers, but the product is intended for the market, the acceptance test often takes place in the form of a beta test (Wohlin 2005, p. 156). In a beta test, a limited number of potential users use the software for a certain time, and then evaluate it. If the beta users are satisfied, the software is released to the market.

2.3.5 Regression test

Regression testing is not a level of testing, but rather the retesting that is needed on any level after a change in the software or in the requirements. If a change has been made to the code, regression testing is performed by running the old test cases again to ensure that the new version works and that no new faults have been introduced because of the change.
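A minimal way to picture regression testing is to keep the old test cases as data and rerun them unchanged against the new version; the cases and the function below reuse the hypothetical VAT example from section 2.3.1.

    # Input/expected-output pairs collected from earlier releases.
    OLD_TEST_CASES = [(100.0, 125.0), (0.0, 0.0), (19.99, 24.99)]

    def price_with_vat(net_price, vat_rate=0.25):
        """The changed unit; the old cases must still pass."""
        return round(net_price * (1 + vat_rate), 2)

    def test_old_cases_still_pass():
        for net_price, expected in OLD_TEST_CASES:
            assert price_with_vat(net_price) == expected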

2.4 The relationship between the test process and the development process

A software process is a set of activities and associated results which produce a software product (Sommerville 2001, p. 8). Fundamental elements of all software processes are software specification (definition of functionality, requirements), software development (production of design and code), software validation (testing), and software evolution (further development to meet changing needs). Different processes organize these elements differently according to their needs. To describe a software process, a software process model is used. Examples of common process models are the waterfall model and evolutionary development. For further information about software process models, the reader is referred to Sommerville (2001).

The development process follows a life-cycle model from the initial idea to the finished and delivered product. A common presentation of the relationship between the development process and the test process is the V-model, as shown in figure 2. In this model, the activities, divided into phases, are sequential. When one phase is finished, the next phase starts.


Figure 2 - The V-model (Koomen and Pol 1999, p. 15); the phases shown are initial idea, requirements and functional design, technical design, coding, unit and integration test, system test, acceptance test, and operation

The left side of the model represents the phases of building the system, from the initial idea to the implementation (coding) of the system. The right side of the model shows the test execution phases, or levels. The V-shaped line shows the sequence of the phases. The arrows represent the paths from the test basis to the test execution. The dotted line roughly defines the formal responsibilities for the phases; above the line, the customer is responsible, while the developers are responsible for the phases below the line.

Another common process methodology is agile development (Highsmith 2002). Agile is a collection of different process models including eXtreme Programming (XP), Scrum, and Lean Software Development (Ryber 2006, p. 49). Common to all agile models is putting the person at the centre instead of tools and processes, and supporting software development that responds to quick changes while avoiding unnecessary work and comprehensive documentation (Highsmith 2002, p. 29). Developers should be allowed to work fast and at the same time handle changes by using a flexible process. In agile methods, test cases are defined before programming starts. Unit tests determine if the product is ready for release, and regression tests are run constantly since deliveries come at a fast pace. In agile programming, the tester can be viewed as an internal customer, who writes test cases like requirements and accepts the software after a successful test. A graphical representation of an agile process model is shown in figure 3.



Figure 3 - Agile development (Koomen and Baarda 2004, p. 133); the phases shown are project definition, design and adjustment, coding, evaluation and production

The relationship between the test process model and the development process model depends on the development process and the current test level. However, according to Koomen et al. (2006, p. 64), there are two fixed relationships that can be determined; the start of the preparation phase (requirements, functional design) directs when the test basis becomes available, and the execution phase (realization) determines when the test object becomes available.

2.5 Problem areas concerning testing

Common test process problem areas are described below (Koomen and Pol 1999).

Insufficient requirements
It is difficult, if not impossible, to write a perfect requirements specification. Since the requirements specification is one of the most important bases for testing, it is important for testers to engage in and provide input to the specification phase. The requirements specification should be (Lauesen 2002, pp. 376-380):

◊ Correct – each requirement is correct and reflects a customer need or expectation.
◊ Complete – all necessary requirements are included.
◊ Unambiguous – all parties know and agree upon what each requirement means.
◊ Consistent – requirements match and do not conflict.
◊ Ranked for importance and stability – requirements state a priority and the expected frequency of changes.
◊ Modifiable – requirements are easy to change without losing consistency.
◊ Verifiable – there exists an economically feasible way to check that the product meets a requirement.
◊ Traceable – it is possible to see where requirements come from and where they are used in code and design.

The tester's task in the specification of requirements is first and foremost to make sure that the requirements are testable. Testability includes unambiguousness, consistency, completeness, and verifiability (Ryber 2006, p. 29). Testers should be included in the formal inspection of requirements documents, so as to provide a tester's view. Use case models, diagrams, or flowcharts of the system idea could also be used as a means of communication when checking requirements.



An infeasible number of test cases
Even for a small system, there can be an infinite number of possible test cases. This is especially true for a system that can take a large number of different combinations of input data, and where many actions can take place at the same time. The tester needs to find out which input data and actions are most important to test. Since it is impossible to cover every single combination of input data, the issue is to find the subset of test cases that has the largest possibility of detecting the largest number of serious defects (Ryber 2006, p. 31).

Costs of testing
In many projects, testing-related costs account for up to 50% of the total software development cost. Boehm (1976) argues that these large costs stem from introducing testing too late in the process, and that they could be lessened if a defect could be detected and fixed at the same stage as it is introduced. This practice requires good planning and a structured test plan.

2.6 The need for a structured testing approach

The characterization of an unstructured software testing process (ad-hoc testing) is that the situation is somewhat chaotic; it is nearly impossible to predict the test effort, to execute tests feasibly or to measure results effectively. Koomen et al. (2006, p. 52) report the following characteristics of unstructured testing, based on findings from studies of both structured and unstructured testing:

◊ Time pressures – Time pressures that could be avoided mainly stem from the absence of a good test plan and budgeting method, the absence of an approach stating which test activities should be carried out when and by whom, and the absence of solid agreements on terms and procedures for delivery and reworking of the applications.

◊ No insight into the quality of the produced system – There is no insight into, or ability to supply advice on, the quality of the system due to the absence of both a risk strategy and a test strategy. Also, both the quality and the quantity of the test cases are inadequate since no test design techniques are being used.

◊ Inefficiency and ineffectiveness of the test process – Lack of coordination between the various parties involved in the testing could result in objects being tested more than once or not at all. Inefficiency is also caused by lack of configuration and change management, incorrect use or non-use of testing tools, and the lack of prioritization and risk analysis in the test planning.

A structured test process is, according to Koomen et al. (2006), a process that lacks the disadvantages mentioned above. In other words, a structured testing approach is one that:

◊ can be adapted to any situation, regardless of the system being developed and irrespective of who the client is,
◊ delivers insight into the qualities of the system,
◊ finds defects at an early stage,
◊ prevents defects,
◊ puts testing on the critical path for as short a time as possible,
◊ generates reusable test products (e.g. test cases), and
◊ is comprehensible and manageable.


2.7 Risks in introducing structured testing

Risks in introducing structured testing include:

◊ Introducing a test process seems too expensive – The solution to this problem is to put the cost into perspective. Give managers tools to compare the cost of introducing structured testing to the cost of not finding an adequate number of faults and problem areas in time.

◊ Employees are reluctant to work in new ways – Improve the process gradually, in small steps. Koomen and Pol (1999, p. 25) state that it is of utmost importance that employees recognize the need for improvement and agree to it. If not, the chances of failure are much greater. And if the change process has failed once, the next attempt at a process change will face even greater reluctance.

◊ There is not enough time to perform testing – Many testing activities can be performed in parallel with development. Although staff need to be appointed to testing, testing does not have to be awarded much more time.

◊ The number of possible test cases is too large to handle – There are techniques that can reduce the number of test cases needed. See section 2.5.

2.8 Basic testing techniques

A test technique is a structured approach to deriving test cases from the test basis (Koomen and Pol 1999, p. 11). Different techniques are aimed at finding different kinds of defects. Using a structured test technique most often results in more effective detection of defects than ad-hoc identification of test cases. According to Koomen et al. (2006, p. 71), some advantages of using test techniques are:

◊ the tests are reproducible, since the content of the test execution is described in detail,
◊ the test process is independent of the individual who specifies and executes the test cases, and
◊ the test specifications are transferable and manageable.

There are two kinds of testing, static and dynamic, which are explained in sections 2.8.1 and 2.8.2. When to use the respective techniques depends on what item is under test. In figure 4 below, the division between static techniques (reviews) and dynamic techniques (execution) is shown.


Figure 4 - The roles of static and dynamic testing techniques (Burnstein 2003, p. 305); static techniques (reviews) are applied to requirements, specifications, design, code, test plans and user manuals, while dynamic techniques (execution) are applied to the code

Dynamic testing techniques are further divided into three basic categories: black-box, white-box, and grey-box testing techniques. Which testing technique to use depends on the level at which the testing is carried out. Figure 5 below shows the hierarchy of these categories (Ryber 2006, p. 89):

Figure 5 - Testing techniques; testing techniques divide into static and dynamic techniques, dynamic techniques divide into black-box, grey-box and white-box techniques, and black-box techniques cover both functional and non-functional testing

2.8.1 Static techniques

A static testing technique is a verification activity and involves reviews, i.e. evaluating documents, design, or code. The code is however not executed. Burnstein (2003, p. 304) defines a review as

a group meeting whose purpose is to evaluate a software artefact or software artefacts.

This could be done with varying levels of formality and either manually or with the help of tools. Two common types of reviews are inspections and walkthroughs.



Inspections are usually of a more formal nature, and participants are provided with the material to be inspected before the meeting. They follow a prepared checklist when inspecting the document. Walkthroughs, on the other hand, are less formal. The producer of the material guides the participants through the material, often in a line-by-line manner as if he or she were manually executing the code (Burnstein 2003, p. 310). The participants can ask questions during the walkthrough.

The main advantage of using reviews is that defects can be found at a very early stage, before programming has started, and are therefore prevented from being propagated into the code. Thus, the defects are cheaper to repair, and the rework time is reduced. Also, some types of faults which cannot be found during execution of the code can often be found in a review (Ryber 2006, p. 91). An additional advantage is that the persons involved in the review acquire an understanding of the material under review, and an understanding of the role of their own work.

2.8.2 Dynamic techniques

Burnstein (2003, p. 304) describes dynamic testing techniques as techniques where the software code is tested by executing it. Input data is fed into the system, and the actual output data is compared to the expected output data. Dynamic techniques can only be applied to the code.

Black-box techniques
Black-box techniques are used to perform so-called behavioural testing (Ryber 2006, p. 94). Input is fed into the system, and the output is compared to the expected output, verifying that the system has performed the right action from what the tester can see. The tester does not care what happens to the data within the system; he or she views the system as a black box. Black-box techniques can be used for testing both functional and non-functional features. Functional tests are used for testing the functionality of a system, i.e. the functions the system should perform. Functional features of a system are related to data: what it is used for, how it is recorded, computed, transformed, updated, transmitted, and so on (Lauesen 2002, p. 71). Functional tests focus on the inputs, valid as well as invalid, and the outputs for each function. Non-functional tests are supposed to test the non-functional features, sometimes referred to as quality features, of the system. Examples of non-functional features are performance, security, robustness, and maintenance. Non-functional tests do not focus only on the output, but also on measurements such as the speed of a transaction.

White-box techniques
White-box techniques are also known as structural testing, or glass-box testing. The purpose of white-box testing is to verify the internal logic or structure of the system. Thus, knowledge of the internal structure is necessary. The tests include data-flow or control-flow tests. Validating only the output from a test is often not enough. Tests often have an impact on the internal state of the system. The impact does not show up in the black-box tests, but may affect subsequent test execution and cause test cases to fail. Guptha and Singha (1994) argue that observing the internal state of the system is essential to assure that tests execute successfully. Therefore, a combination of black-box and white-box techniques is preferable.

Grey-box techniques
A grey-box technique is merely a combination of white-box and black-box techniques. This technique can be used, for example, for looking into the internal structure of the system in order to get an understanding of how to write better black-box test cases for the system.

Examples of dynamic testing techniques are described below (Ryber 2006, p. 92):

Data
Data testing is the most common form of black-box testing. It involves categorizing data into groups. Data test techniques include:

◊ Equivalence partitioning (EP) – The input data domain is partitioned into equivalence classes of both valid and invalid data, where all instances of data within an equivalence class are assumed to be processed in the same way by the software. An example is a system that can take the numbers 1 to 10 as input. In this example, three equivalence classes are needed. One equivalence class is the range of valid numbers, 1-10. Another class contains the invalid numbers below the range, < 1. The last class contains the invalid numbers above the range, > 10. The tester can then select a chosen number of test input values from each equivalence class. In the example above, input values could be -7, 5, and 14.

◊ Boundary value analysis (BVA) – Experienced testers know that many defects occur on, above or below the edge of an equivalence class (Burnstein 2003, p. 72). In BVA, the tester chooses input values close to the edges. In the previous example, possible input values could be 0 (invalid, below lower edge), 1 (valid, lower edge), 10 (valid, upper edge), and 11 (invalid, above upper edge). Both EP and BVA are illustrated in the sketch after this list.

◊ Domain testing – When several variables work together and thus need to be tested together, domain testing can be used. Domain testing works in the same way as EP and BVA, but with more variables at the same time, creating domains from combining equivalence classes. One example is an insurance premium which is calculated from a combination of a person's age, gender, and profession. If the number of variables is three or less, the domains can be displayed graphically. Then, BVA is used by choosing input values on the edges between domains (Ryber 2006, p. 130).

Flow Flow tests involve testing flow of information within the system. Examples of flow test techniques are: ◊ Use case model testing – A use case is a description of a workflow where an actor

(a person, the system itself, or an external system) interacts with the system and performs a task. The use case describes all the steps taken in order to accomplish the task along with alternative actions which can be taken. Input parameters which ensure that the entire workflow is run through are then chosen as test cases.

◊ Coverage and control flow graphing – Coverage and control flow graphing requires knowledge of the internal structure of the code. Coverage graphs show

Page 21: Introducing Structured Testingfileadmin.cs.lth.se/cs/Education/Examensarbete/Rap... · This Master’s thesis strives to introduce a structured testing process in a company where

21

how large a portion of a specific program part – or prime – is covered by test cases. For example, the coverage of program statements, branches, conditions or paths can be measured. Combinations of primes are used to derive test cases. When a certain percentage of primes have been tested, the system is said to have reached that percentage of coverage. If, for example, all possible paths have been covered by the test, the system has achieved 100% path coverage.

◊ State transition graph testing – State transition testing is often used in embedded systems. All possible system states are first defined. Graphs show the system states as nodes and transitions between states as arcs. The actions required to change states are then defined and connected to the arcs. Test cases can then be derived from different paths of state transitions, e.g. risk-based, probability-based or coverage-based.
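A minimal sketch of the technique, assuming a hypothetical door-lock state machine, is shown below; the transition table and the chosen paths are invented for illustration only.

transitions = {
    ("locked", "unlock"): "unlocked",
    ("unlocked", "lock"): "locked",
    ("unlocked", "open"): "open",
    ("open", "close"): "unlocked",
}

def run_path(start, actions):
    """Walk a sequence of actions through the machine and return the visited arcs."""
    state, visited = start, []
    for action in actions:
        visited.append((state, action))
        state = transitions[(state, action)]
    return state, visited

# Test cases derived as paths through the graph; together they cover every arc.
test_paths = [
    ("locked", ["unlock", "open", "close", "lock"]),
    ("locked", ["unlock", "lock"]),
]

covered = set()
for start, actions in test_paths:
    end_state, visited = run_path(start, actions)
    covered.update(visited)

assert covered == set(transitions), "some transitions were not exercised"
print("All state transitions covered by the chosen test paths.")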

Logic
Logic testing is primarily used when testing complicated combinations of variables. Examples of logic testing techniques are:

◊ Decision trees – Rules for the system, based on the requirements, are created. The rules cover all combinations of input data. The rules are then combined in a decision table. Then, the rules are checked for consistency using a decision tree. The branches of the decision tree represent the different groups of data on each level, further divided into other levels of branches. The test cases are derived from the leaves, i.e. the ends of each branch.

◊ Cause-and-effect graphing – Cause-and-effect graphing is used when the tester needs to test combinations of conditions. Causes (distinct input conditions) and effects (an output condition or a system state change) are displayed as nodes. Arcs between the causes and the effects show what effects come from what causes. If two or more causes need to be combined in order to reach an effect, Boolean logic (AND, OR, and NOT) is used to combine the arcs. The graph is converted into a decision table. From the table, test cases can be derived.
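A minimal sketch of the idea is shown below, assuming two hypothetical causes (the user is logged in, the account has funds) combined with Boolean AND into one effect; the decision table is enumerated and one test case is derived per row.

from itertools import product

def withdrawal_allowed(logged_in, has_funds):
    """Hypothetical effect: a withdrawal is allowed only when both causes hold."""
    return logged_in and has_funds

# Decision table: every combination of the two causes with its expected effect.
decision_table = [(c1, c2, c1 and c2) for c1, c2 in product([True, False], repeat=2)]

# One test case per table row.
for logged_in, has_funds, expected in decision_table:
    assert withdrawal_allowed(logged_in, has_funds) == expected
print(f"{len(decision_table)} decision-table test cases passed.")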

Risk-based testing
Risk-based testing includes techniques that derive test cases in an ad-hoc manner. Much experience is needed on the part of the tester. Examples of techniques are:

◊ Error guessing – Experienced testers usually know where in a system defects tend to occur. On those grounds, they can make an educated guess about where defects will occur in a similar system, and can thus design test cases to address those problem areas.

◊ Risk-based testing – Ryber (2006, p. 219) defines a risk as the probability of a defect occurring multiplied by the influence the defect will have on the system. In risk-based testing, risks are identified and test cases addressing risk areas are prioritized (a small sketch of such prioritization follows after this list).

◊ Exploratory testing – Testers explore the system by testing without using predesigned test cases. While the tester learns and experiences the system, he or she comes up with new test cases. The benefits include utilizing the tester's creativity while at the same time lessening the documentation overhead. Exploratory testing is not always viewed as a testing technique of its own, but rather as a testing approach that can be applied to other testing techniques (Itkonen et al. 2007).
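A minimal sketch of such prioritization, using Ryber's probability-times-influence definition of risk on a few invented test areas, could look as follows.

# Hypothetical test areas: (name, probability of a defect, influence/impact).
areas = [
    ("payment handling", 0.4, 5),
    ("search ranking", 0.6, 3),
    ("report layout", 0.7, 1),
]

# Risk = probability * influence; test the highest-risk areas first.
prioritized = sorted(areas, key=lambda area: area[1] * area[2], reverse=True)

for name, probability, influence in prioritized:
    print(f"{name}: risk = {probability * influence:.2f}")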


2.9 Test documentation

Software test documentation includes documentation produced before, during and after test execution. The IEEE Standard for Software Test Documentation (IEEE 829-1998, 1998) describes a set of basic software test documents and specifies their form and contents, but does not specify which documents are required. The purpose of the standard is to provide a common frame of reference, a checklist for test documentation contents, or a baseline for evaluating the current test process. The standard includes a test plan, a test specification, and a test report; the documents included in these are also deliverables.

When implementing test documentation in an organization, the IEEE standard recommends introducing the different documents in phases. In the initial phase, the planning and reporting documents should be introduced. The process should begin at the system level, since the need for control during system testing is critical. In subsequent phases, the needs of the organization and the results of the prior phases should dictate in what sequence to introduce the documents (IEEE 829-1998, 1998).

The IEEE standard might not be appropriate for immature organizations. Some of these might not be able to produce the information and the metrics needed for the IEEE test documentation. The standard could, however, be seen as a target to aim at once the organization reaches the appropriate maturity level.

2.9.1 The test plan

Burnstein (2003, p. 197) defines a plan as

a document that provides a framework or approach for achieving a set of goals

According to Burnstein, test planning is an essential practice for any organization that wishes to develop a test process that is repeatable, manageable and consistent. All of the studied test process improvement models (see chapter 3) mention test planning and the production of a test plan as vital elements of the test process. When properly planned, resources can be optimally distributed and thus more efficiently used, which in turn leads to higher productivity (Samson 1993). Since test planning identifies the tools, test basis documents, and other elements needed for testing, these can be produced and made available in time; thus, no time on the critical path is wasted on producing them. Also, since test planning can provide earlier access to the test basis and thus enable earlier test specification, test activities can run in parallel with development activities. This shortens the time that testing alone is on the critical path. Test planning should begin early in the software life-cycle, preferably already in the requirements definition phase.

The test plan is the result of the test planning, and contains milestones to help determine the project status. These milestones make good points in time to follow up and evaluate the process, so that the project leader can make sure that the project runs as expected (Burnstein 2003, p. 197). Also, the test plan serves as a means of communication between different parties and enables transfer of learning between them, and can also serve as a method of communicating the quality of the test process to the customer (Koomen et al. 2006).

Depending on the organizational policy, test plans can be organized in several ways. The structure may be hierarchical, with separate test plans for each level of testing (see section 2.3), or with just one master test plan. Koomen et al. (2006, p. 87) claim that the hierarchical approach is better suited for larger companies and for projects which adopt the waterfall development method, while agile development methods usually integrate test planning into the project plan or use a single test plan.

In the IEEE standard (IEEE 829-1998, 1998), the test plan prescribes the scope, approach, resources, and schedule of the testing activities. It identifies the items to be tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan. The standard further specifies structuring the test plan with, but not limited to, the following elements:

a) Test plan identifier – a unique identifier to each test plan
b) Introduction – a summary of the software items and features to be tested, and references to relevant documents such as the project plan
c) Test items – the test items and their versions, and references to relevant documents such as the requirements specification
d) Features to be tested – the features and combinations of features to be tested, and a reference to their design specifications
e) Features not to be tested – the features that are not to be tested and the reasons why
f) Approach – the activities, tools and techniques that will be used in testing each group of features, the completion criteria and constraints on testing
g) Item pass/fail criteria – the criteria on how to determine whether a test passes or fails
h) Suspension criteria and resumption requirements – the criteria to determine whether to suspend all or just a portion of the testing, and what needs to be re-tested when testing is resumed
i) Test deliverables – the deliverable documents, such as the test plan and the test design specification
j) Testing tasks – the tasks necessary to prepare for and perform testing
k) Environmental needs – the necessary and desired properties of the test environment, including hardware, security settings and test tools
l) Responsibilities – the appointed responsibilities, such as test management, design and execution
m) Staffing and training needs – the skills needed by the staff, or the training needed to acquire skills
n) Schedule – the identified milestones, time needed and deadlines
o) Risks and contingencies – the high-risk parts of the project, and contingency plans for each
p) Approvals – the names of each person who must approve of the test plan

2.9.2 The test specification

In the IEEE standard, the test specification is covered by three document types:


◊ the test design specification, which refines the test strategy and identifies features to be covered by the design and its associated tests. It also identifies test cases and procedures and specifies pass/fail criteria;

◊ the test case specification, where the input and expected output values are specified; and

◊ the test procedure specification, containing the steps required to exercise the specified test cases to accomplish testing according to the test design.

The test case specification should contain the following elements:

a) Test case specification identifier
b) Test items
c) Input specifications
d) Output specifications
e) Environmental needs
f) Special procedural requirements
g) Intercase dependencies
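A minimal sketch of how these elements could be captured is shown below; the identifiers and values are invented and refer back to the 1-10 input example used earlier, not to any template from the standard.

test_case_specification = {
    "test_case_specification_identifier": "TC-EX-001",
    "test_items": ["number input validator, version 1.0"],
    "input_specifications": {"value": 11},
    "output_specifications": {"accepted": False, "message": "value must be 1-10"},
    "environmental_needs": "standard test server, no special hardware",
    "special_procedural_requirements": "none",
    "intercase_dependencies": [],
}

# Such a structure can be rendered as a template document or reused in regression testing.
for element, content in test_case_specification.items():
    print(f"{element}: {content}")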

For further information regarding the test specification standard, the reader is referred to IEEE 829-1998 (1998).

2.9.3 The test report

In the test report, defects are reported. The test report should work as a means of communication between testers and developers, so that the developers can understand what kind of defect the testers have discovered, and where in the system to find it. Test reports can also be used for obtaining and presenting metrics which can be used for test planning in later projects. The IEEE standard defines test reporting as made up of the following four types of documents:

◊ A test item transmittal report, which identifies the test items to be transmitted to testing in case the testing group is separated from the development group.
◊ A test log, which is used to record things that happen during test execution.
◊ A test incident report, describing things happening during test execution that need further investigation.
◊ A test summary report, which summarizes the testing activities that took place during the test process.

For further information about the test item transmittal report, the test log, and the test incident report from the test report standard, the reader is referred to IEEE 829-1998 (1998).

2.10 Who should do the testing?

According to Burnstein (2003, p. 163), the best scenario for system tests is that they are performed by a team of testers that is part of an independent testing group. Since system tests should detect weak areas in the software, it is a good idea that the testers are not too involved in the software. While unit tests can be performed by the developer, who has extensive knowledge and understanding of the code, many test experts recommend using a separate test organization for integration, system, and regression tests. Koomen et al. (2006, p. 373) define a permanent test organization as a line organization that offers test services. Benefits from using a permanent test organization are that the test process can easily be executed according to a fixed method, that testing in different projects will be of consistent quality, and that resources and testware are reusable. Also, test expertise (which is often scarce within an organization) can be centralized and thereby made available to the entire organization.

2.11 When should testing start?

In many companies, the test phase starts when testable code units are delivered. Sometimes, even test planning is delayed until this time. However, there is much to be gained from earlier test planning, such as the ability to run development and test activities in parallel (see section 2.9.1). Test planning can take place already in the requirements phase. Besides the benefits mentioned above, early test planning also means that the needed test basis can be evaluated for testability.

The closer to the source the defects are discovered, the cheaper they are to repair. This is also valid for the different test levels. Inspections and unit tests can be run earlier in the process than integration and system tests, and thus it is cheaper to find and fix defects in those phases. In the integration test phase, it is usually more efficient to test between every step of integration compared to testing after a larger number of units have been integrated. The more units that have been integrated, the harder it is to trace the defects (Koomen et al. 2006).

Many researchers recommend starting the testing with an analysis of the test basis before starting implementation or test case specification. Test basis analysis includes requirements analysis such as inspections. Studies show that requirements analysis costs 5% of the total development cost while providing 50% higher quality to the system (Samson 1993). Other studies show that it takes 100 times more resources to fix requirement-related defects during the development phase than during the requirements specification phase. The mean time to fix these defects found in inspections was 0.5 hours, while fixing the same defects during the testing phase took 5-17 hours in the studied projects (Samson 1993).

Many defects that are introduced in the requirements specification phase can be found and corrected before implementation has even begun by using inspections. In a case study of the trade-off between inspection and testing performed by Berling and Thelin (2003), in some tests more than half of the faults could have been found and repaired already in the requirements specification phase. All in all, at least 14% of the total number of faults discovered in testing should have been discovered by inspections. Other studies show that reviews might remove between 60 and 90% of the defects introduced before implementation (Denger and Shull 2007). Denger and Shull state that companies may, however, be reluctant to introduce reviews in the existing process. Reasons may be that the gain in quality is not always directly observable, or that a review could take valuable time from other activities such as implementation in a project that already suffers from time pressure. This time might, however, be reclaimed in the test execution phase, since the number of defects has already been reduced.


2.12 When should testing stop?

Deciding when to stop testing is one of the most problematic parts of the test process. Even in the most mature and structured test organizations, knowing when enough testing has been carried out is difficult. A decision must be made that balances the risks of stopping too late against those of stopping too early. If testing is stopped too late, resources are wasted, costs increase, and the product release and schedule are delayed. If testing is stopped too early, severe and maybe even fatal defects may remain, customers become dissatisfied, and repair and support costs increase (Burnstein 2003, p. 416). Koomen and Pol (1999, p. 10) suggest that

Testing should continue as long as the costs of finding and correcting a defect are lower than the costs of failure in operation.

In many organizations, testing stops when time or money runs out. This is poor practice, as it cannot guarantee a certain degree of quality. Ryber (2006, p. 269) suggests these five criteria for ending the test phase:

◊ The coverage goals stated in the test plan are met.
◊ The number of discovered defects per time unit is below a fixed boundary.
◊ The costs of finding more defects are larger than the costs of failure in operation.
◊ The project team decides that the system is good enough to be released.
◊ The management orders a release.

There are risks involved in all of these criteria. Setting a coverage goal may cause testers to write fewer or worse test cases so that testing can be performed in time. An ordered release from the management could result in the system not being thoroughly tested. In most cases, the criticality of the system must determine the criteria for when to stop testing. Many commercial products are released with known bugs, since the release date is extremely important for keeping up with the competition on the market, while safety-critical systems such as nuclear power-plant software must be tested thoroughly so as not to cause fatalities.

There are a number of mathematical techniques that can aid in determining when to stop testing. Software reliability growth models, for example, measure the failure rate in the system. Changes in the failure rate can help testers make decisions about when to stop testing. Software reliability models are predictive models, based on statistics and probability. They can be used to predict the mean time to failure (MTTF), which indicates the expected length of failure-free operation of the software (Burnstein 2003, p. 412).

The more testing performed within a process, the more the costs rise. Hence it is desirable to release the product as soon as possible. However, defects are much more expensive to correct once the product is on the market, and therefore testing is best continued for as long as there are still defects in the product. To calculate the best time to release the product, these two factors must be balanced (Vienneau 1991). Pham (2007, p. 320) defines the total cost of a software system at time T as E(T). This cost includes the cost to perform testing, the cost incurred in removing errors during the testing phase, and the risk cost due to software failure. The reliability function is defined as R(x|T) and the desired reliability level as R0. The reliability function can be calculated by using, for example, the Goel-Okumoto software reliability model (Gokhale 2005). The optimal time to stop testing and release the software is, according to Pham, the time T when E(T) is minimized under the condition that R(x|T) ≥ R0.
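A minimal sketch of this decision is shown below. The Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) and the conditional reliability R(x|T) = exp(-(m(T+x) - m(T))) are the standard formulations; the parameter values and the simple cost function, however, are invented stand-ins for illustration rather than Pham's actual E(T).

import math

a, b = 100.0, 0.05                         # assumed defect content and detection rate
c_test, c_fix, c_field = 10.0, 1.0, 50.0   # assumed cost factors
x, R0 = 10.0, 0.95                         # mission time after release and required reliability

def m(t):
    """Goel-Okumoto mean value function: expected failures found by time t."""
    return a * (1.0 - math.exp(-b * t))

def reliability(T):
    """Probability of failure-free operation for x time units after release at T."""
    return math.exp(-(m(T + x) - m(T)))

def total_cost(T):
    """Invented stand-in for E(T): testing cost + repair cost + field failure risk."""
    return c_test * T + c_fix * m(T) + c_field * (a - m(T))

candidates = [T for T in range(1, 500) if reliability(T) >= R0]
best_T = min(candidates, key=total_cost)
print(f"Stop testing around T = {best_T}, R(x|T) = {reliability(best_T):.3f}")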

2.13 Test tools

Test tools are aids that help testers with their tasks. Koomen et al. (2006, p. 430) define a test tool as

an automated instrument that supports one or more test activities, such as planning, control, specification and execution.

Test tools range from simple word processing programs to advanced process management systems. There are several categories of test tools, grouped by what functionalities they support. Examples of categories are planning tools, test design tools, test execution tools, code analysis tools, debugging tools, and measurement tools. The use of tools can increase productivity and improve product and process quality, given that the tools are suited to the maturity of the organization and the test process, and to the abilities of the staff. If the organization is not ready for the implementation of tools, the tools cannot be fully utilized, and the investments made will not pay off. Koomen et al. (2006, p. 442) state that acquiring test tools should not be a goal in itself. Rather, the reason for introducing tools should be to achieve goals regarding time, money or quality. Tools can add value if the process is well organized, but can be counter-productive in an insufficiently organized process (Koomen and Pol 1999, p. 47). Graham et al. (1995) state that:

Automating chaos leads to faster chaos!

Burnstein (2003, p. 466) recommends ensuring that the following factors are fulfilled when introducing tools:

◊ Testers have the proper education and training to use tools;
◊ the organizational culture supports tool evaluation, tool use and technology transfer;
◊ the tools are introduced into the process incrementally;
◊ the tools are appropriate for the maturity level of the test process and the skill levels of the testers; and
◊ the tools support incremental test process growth and improvement.

If the test process is at a low maturity level, a set of basic tools requiring little training could be introduced. The tools should show developers the benefits of tool usage and support measurement collection. Burnstein (2003, p. 472) states that a spreadsheet program is a necessary basic tool, since it can be used for many purposes such as recording measurements. Burnstein further suggests the following as introductory tools:

◊ Interactive debuggers – An interactive debugger helps locate defects in the code. It also helps developers understand the code and how the program is executed.


◊ Configuration building tools – These tools aid in configuring software systems and support manageability and repeatability in the system construction process.

◊ Line of code (LOC) counters – A size measurement tool is useful in e.g. cost estimations and calculations of the relative number of defects in the code.
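A minimal sketch of such a counter is shown below; the directory, the file suffix and the comment convention are assumptions made only for illustration.

from pathlib import Path

def count_loc(root=".", suffix=".py"):
    """Count non-blank, non-comment lines in all matching source files under root."""
    total = 0
    for source_file in Path(root).rglob(f"*{suffix}"):
        text = source_file.read_text(encoding="utf-8", errors="ignore")
        for line in text.splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                total += 1
    return total

print(f"Lines of code: {count_loc()}")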


3 Test Process Improvement

3.1 Framework models

The quality of a test process is limited by the quality of each of its elements. These elements, for example the test planning process, the test tools, and the communication within the project, can be referred to as the key areas of the test process. There are a number of different framework models available, such as the Testability Maturity Model (Burnstein et al. 1999), Testing Maturity Model (TMM) (Burnstein 2003), Test Improvement Model (TIM) (Ericson et al. 1997), Test Process Improvement (TPI) model (Koomen and Pol 1999) and Minimal Test Practice Framework (MTPF) (Karlström et al. 2005). These models specify different levels of maturity of the test process, based on how well developed a process is with respect to a certain key area. A test process can thus be at different maturity levels for different key areas. Also, the key areas differ in importance depending on factors such as project size and product complexity, and a test process does not have to reach the highest level in all areas to be sufficient. What maturity level is desirable also depends on such factors. When a process is at a certain level, it has also passed all criteria for the levels beneath. There are also continuous models, where process improvement steps and goals are not grouped by levels.

The models described below have many similarities. There are elements that are emphasized as lower-level or basic elements in all or most of the models, and which should be addressed first when improving the test process. These elements include:

◊ Defining testing as a process separate from development
◊ Creating the test process to be manageable and reusable
◊ Defining testing as separate from debugging
◊ Establishing a test strategy with testing goals
◊ Performing risk analysis and risk management
◊ Establishing basic testing techniques and methods
◊ Planning testing, and documenting the planning in a test plan according to a set standard or template
◊ Establishing activities, schedule, roles and responsibilities, and tools and techniques, and including them in the test plan
◊ Allowing for early detection of defects, partly by ensuring that test basis deliveries are on time
◊ Documenting and reporting test results

A framework model can be used both as an assessment tool to evaluate the maturity of the current test process, and as an indicator of what areas need to be improved and how to improve them. After the test process has been improved, the framework model can again be used to determine the maturity level of the new process and thus make the improvement visible and measurable.

3.1.1 The Test Process Improvement (TPI) model

The TPI model lists twenty key areas, namely test strategy, life-cycle model, moment of involvement, estimating and planning, test specification techniques, static test techniques, metrics, test tools, test environment, office environment, commitment and motivation, test functions and training, scope of methodology, communication, reporting, defect management, testware management, test process management, evaluation, and low-level testing. The key areas are each classified into a varying number of maturity levels, named A to D. The ascending levels improve in terms of time, money and/or quality. The key areas are categorized into four cornerstones: life cycle, techniques, infrastructure and tools, and organization. To describe the importance of each key area in relation to the other areas and the dependencies between them, the levels are described in a Test Maturity Matrix. Checkpoints are assigned to each level, and to proceed to the next level, each checkpoint must be crossed off. The model also contains suggestions on how to improve in order to reach the next level (Koomen and Pol 1999, pp. 31-32).

3.1.2 The Testing Maturity Model (TMM)

The TMM uses a staged architecture and contains five maturity levels (initial, phase definition, integration, management and measurement, and optimization), each of which contains a number of Key Process Areas (KPA). Maturity goals are related to each level, and achievement of the maturity goals results in reaching a higher maturity level. Activities, tasks and responsibilities (ATR) defined at each level indicate what steps need to be taken to achieve the maturity goals. A TMM assessment model that supports test process evaluation is also provided (Burnstein 2003, pp. 8-16).

3.1.3 The Testability Maturity Model

The Testability Maturity Model also uses a staged architecture, where six key areas and three maturity levels (weak, basic and strong) are defined. The key areas are test-friendly infrastructure, test-aware project planning, test-friendly product information, test-aware software design, testware, and test environment design. A scorecard where twenty test process-related issues are covered is used as an assessment tool to determine the current test process level (Burnstein et al. 1999).

3.1.4 The Test Improvement Model (TIM)

The TIM defines five key areas (organisation, planning and tracking, testware, test cases, and reviews) and five levels of maturity for each key area. It contains two parts: a framework and an assessment procedure. The TIM is based on the Testability Maturity Model but refines it by introducing smaller steps between the levels in order to make them easier to reach (Ericson et al. 1997).

3.1.5 The Minimal Test Practice Framework (MTPF)

The MTPF (Karlström et al. 2005) was developed to suit small but rapidly growing organizations that do not have the resources to implement the models above. The MTPF focuses on creating low thresholds which are easy to climb, and also on implementing the right practice at the right time as the company is growing. The model comprises five categories of test-related elements. These categories are problem and experience reporting, roles and organization issues, verification and validation, test administration, and test planning. The model is further divided into three levels, based on the number of employees involved in development within the organization (<10, <20, and >30).


4 Methodology

4.1 Research methods

A considerable amount of the effort needed in conducting research is put into collecting and analyzing some type of data. The data can be either quantitative or qualitative. Quantitative data is data that is countable or classifiable, for example 'X persons are employed as testers' or 'Y percent of the project budget is awarded to testing'. Qualitative data, on the other hand, is descriptive data that is not countable, e.g. 'the tester thinks that test process A is easier to use'. Some of the most common research methods, as described in Höst et al. (2006), are explained below.

4.1.1 Survey

The purpose of a survey is to describe and explain an occurrence through an investigation or examination of a sample of individual study objects; a survey can be used at any time during the project life cycle. This is often done with the help of questionnaires or interviews. The advantage of this method is that if a large enough sample is used, generalizations can be made about the entire population (Rubin 1994, p. 20). One major disadvantage is that it is necessary to pick a sample that is representative of the entire population aimed at in the research. Another disadvantage is that the method is fixed, which means that once the surveying has started, questions cannot be changed or added, since this would change the conditions of the survey.

4.1.2 Case study

The purpose of a case study is to study and describe an object in depth. It is often used for describing a process or a work-flow. A case study describes a specific case chosen for a specific cause, and is not chosen as a random sample. Conclusions drawn from a case study need not be, and usually are not, possible to generalize. An important advantage of this method is that it often provides the researcher with in-depth knowledge of the studied object. It is also a flexible method, which means that questions can be changed or added during the study without influencing the outcome negatively. A case study is usually carried out through interviews, observations and document analysis.

4.1.3 Experiment

An experiment aims at finding relationships between causes and effects and describing why a specific phenomenon occurs. An experiment is a strictly planned, controlled and fixed method, and is often carried out by varying input parameters and studying the outputs. The major advantage of this method is that it allows for a quantitative and fair comparison between two study objects. A disadvantage is that it can rarely take place in a real work environment.

4.1.4 Action research

The purpose of action research is to study and document the improvement of a process and evaluate it while continuously improving it. It is often viewed as a variation of a case study. Four steps are described: plan, do, study and learn. First, a situation or phenomenon is observed in order to identify the problem that needs to be solved. This is often done through a survey or a case study. Then, a solution to the problem is proposed and implemented. When this is done, an evaluation of the implemented solution takes place. If the solution is not considered to be adequate, improvements are made. Action research is hence an iterative process. A disadvantage of this method is that it is difficult for researchers to be unbiased when evaluating an object that they have worked on themselves. The major advantage is, of course, that studying and improving the object can be done in parallel.

4.2 The applicable method

Since the purpose of this thesis is to evaluate the current test process, improve it, implement it and then evaluate it, action research seemed to be the most applicable research method out of those mentioned above. Activities involved in the project were applied to the four steps described in the method as in table 2.

Step – Activity

Plan – Study the current test process by interviewing testers, test managers and developers, studying documents such as the test plan, observing test sessions and handing out questionnaires. Compare the characteristics of the test process to those of the framework in order to find the strengths and weaknesses. Also, make an analysis of the company's needs and requirements.

Do – Compare different test methods and propose changes and improvements to the test process. Implement changes in a chosen project.

Study – Study the effects of the new and improved test process or have the improvement suggestions evaluated by the company.

Learn – Are the improvements satisfactory? If so, make the new process permanent. If not, revise the new test process.

Table 2 - The action research method applied to the test process improvement in this thesis

Performing the fourth step is out of the scope of this thesis.

4.3 Data collection

4.3.1 Available sources

Available sources for data collection are:

◊ The test manager
◊ Developers/testers
◊ Project managers
◊ Test cases
◊ Test plans
◊ The test strategy
◊ Test sessions

4.3.2 Data collection methods

Interviews with testers, the product manager, the project manager and a company manager were carried out and resulted in mainly qualitative data. Available documents, such as test plans and test case specifications, were studied. These studies generated both qualitative and quantitative data. Test sessions were observed, and both qualitative and quantitative data were collected. Questionnaires were handed out to members of a chosen project, and the results were then collected and analyzed.

4.3.3 Drawing conclusions from collected data

The data was divided into different categories based on which key area it concerned. Then, the data was analyzed and compared to the maturity level ratings of the TPI and the TMM. This helped point out the weaker parts of the test process, and thus helped prioritize which areas needed the most attention. The TPI and TMM could also help determine what steps needed to be taken in order to reach a higher maturity level.

4.3.4 Validity

The validity of the data was ensured by asking the participants in the interviews, observations, and questionnaires if the collected data and the conclusions drawn were correct and corresponded to their views and thoughts.


5 The Action Research Work Outline

5.1 Assessing the test process

Based on the Plan-Do-Study-Learn activities proposed in section 4.2, the assessment of the test process included the following steps.

The first step was to prepare the study objectives. The questions to be answered were:

◊ What are the strong and weak parts of the current test process?
◊ Why is there a need for a more structured test process?
◊ What are the attitudes towards introducing a new test process?
◊ How is testing structurally organized in the company?
◊ What competence exists within the company in the testing area, what is needed, and how should this be provided?

Next, the subjects to study were chosen. The subjects included interviewees and other persons who would participate, and also available documentation. Examples of documentation are test plans and test reports (Koomen and Pol 1999). The assessment procedure was then prepared and the interviews, questionnaires and observations were planned. Then, information was collected through interviews, studying documentation, observations and questionnaires. The next step was to analyze the collected information. The information was characterized into groups loosely based on the key areas in TMM and TPI (see chapter 3). The last step was to record and present the results of the analysis. The presentation should include a summary of strengths and weaknesses, and the recommended areas for improvement (Burnstein 2003, p. 553).

5.2 Improving the test process

Following the assessment, the improvement actions were prioritized, implemented, and evaluated.

First, the needs and requirements that the organization places on the testing process were established through interviews with the appropriate persons. Then, criteria for determining what improvements needed to be made, based on the organisation's needs and available resources, were drawn up. In the next step, important improvements were suggested with the help of the framework models and selected based on the criteria mentioned above. The needed improvements were weighed against the needs and requirements of the organization. A test process improvement plan containing the improvement suggestions was then presented. The improvement plan should include goals and improvement targets, tasks, activities, responsibilities and benefits (Burnstein 2003, p. 556).

After the generic improvement steps had been drawn up, a specific project in which to evaluate the current test process was chosen. When choosing such a project, it was important to ensure that the project was representative of the entire process used in the organization. The project should also be representative of the software products, and have an impact on the business in terms of revenue, profit or strategic values (Burnstein 2003, p. 552). The test process was evaluated through questionnaires, and resulted in specific improvement suggestions.

Next, the project in which to try the improved test process was chosen. The chosen project was the succeeding release to the project assessed in the previous step, and thus comparisons between the current test process and the improved test process could be made. After the introduction of the new test practices, the improvement suggestions were evaluated in questionnaires. The improvement suggestions were revised based on the evaluation and then incorporated into the generic improvement suggestions. An overview of the workflow can be found in figure 6.

Figure 6 - The action research workflow. The grey boxes represent elements that the author can influence and contribute to.

5.3 The assessment interview

The first part of the assessment was in the form of a semi-structured interview. First, the interviewees answered a number of background questions. Then, a number of questions about the interviewee's perception of the strong and weak parts of the current test process were asked. In the last part of the interview, the questions concerned certain characteristics of the current test process. These questions were loosely based on the key areas in TPI and TMM. Some of these questions could be answered with "yes" or "no", but follow-up questions gave the interviewees a chance to elaborate on the answers. The interview questions (in Swedish) can be found in Appendix I.

The interview took place in a conference room and three persons with different responsibilities (a project manager, a product manager and a company executive) at the company took part. All three interviewees were interviewed at the same time, so as to promote discussion about the topics. The answers were recorded on a tape recorder and later transcribed and summarized. The resulting data from the assessment interview is presented in the next section.

5.3.1 Collected data from the assessment interview

The assessment interview resulted in both qualitative and quantitative data, providing a description of the current test process and some of the wishes and needs for the improved test process. The data was divided into categories, showing the status of the current test process and the expressed wishes with respect to each area. A summary of the interview, with the data categorized, is shown below.

Attitudes and motivation
The attitudes towards testing vary a lot depending on the experience of the individual tester. One interviewee claimed that "less experienced testers tend to think that testing is not that necessary and do not want to put too much effort into it, while testers with more experience have a better understanding of the importance and usefulness of a structured testing process". The experienced tester also knows that there is a risk that the code needs to be re-tested, and using a script is thus useful so that the test case does not have to be re-written again and again. Also, by using a script the test case can be handed over to somebody else.

Most developers are willing to unit test their own code, whereas they would prefer to leave integration and system testing to someone else. It was mentioned that a common attitude among the developers is that "when they have performed their share of unit testing, their work is done". The attitude towards introducing more, but structured, testing would probably be very positive. All employees want better testing, but nobody wants to have to do the work involved in starting up the process. Since the developers and the testers are one and the same, testing is a personal responsibility and therefore the quality of the testing varies a lot. Some employees think that "testing is not my duty", while others put a lot of effort into conducting good and thorough testing to make sure that the product they deliver is good. The company needs help in structuring the testing to overcome this problem.

Test organization, roles and responsibilities
There are no appointed testers in the organization. The developers test their own code and are thus also responsible for its quality. There is no formal test leader; however, the company is about to employ one. The project leader is in charge of planning the tests. There is one employee who is responsible for testing-related issues and who is also the most experienced in the area.


In general, the developers are not educated in testing, except for a two-hour lecture held by a test consultant. There is a need for more training, especially in learning how to adapt the testing to the current project and situation. There is a need for some kind of "framework" that says what to test, when to test, and how to test. The communication within the organization works satisfactorily. The project groups are small in size and close in proximity, so it is easy to just go and talk to the other project members. Information can thus easily be distributed on a daily basis. The project leader also arranges weekly meetings. The project leader does not check what has been tested or how, but only wants to ensure that the object works in full. When the product is delivered, it is also assumed that it has been tested thoroughly.

Test planning
The life-cycle model of the test process differs from project to project. The planning of the test mainly considers the time that needs to be awarded to testing. The quality of the estimations and planning depends on who is responsible for planning. Usually, planning is based on experience and instinct, but also on the experience and abilities of the individual developers involved in the project. A time slot is specified, and developers are asked to what extent they can develop the desired functionality within that time slot. It is more important to keep the time than to implement 100% of the functionality in that release. About 30-35% of the estimated total project time is allotted to testing. The planning usually turns out to be fairly correct, within an error margin of 10% of the estimated project time.

A test plan (here defined as a separate document which lays down the outline for what objects are to be tested, the test strategy, and the test activity schedule) is not always produced. Instead, the design specification constitutes the test basis. There is no structured risk analysis in the projects. Instead, planners know from experience what parts of a system can cause the most severe failures if badly implemented. These parts are tested more thoroughly. The largest problem is that it is impossible to test all problem areas, e.g. if real data (as used by the customer later on) is not accessible, or if the amount of data suddenly grows rapidly and unexpectedly. There are also hardware or network problems that cannot be foreseen and therefore not tested.

The test phase starts when there is something to test. The development within the company is divided into two departments. The Core Department develops the core products, i.e. the search engines. The Delivery Department builds applications on top of the core products. An issue is that products developed in the Core Department do not get tested until they reach the Delivery Department. In projects where there is no test plan, the test phase begins when the product to be tested is delivered.

Test environment
The test environment also differs from project to project. It often consists of a server somewhere, either at the company premises or at the customer's site. The testers seldom sit at their own workplace when they perform testing. The test environment should simulate the real working environment, but this is not always possible since real data is not always accessible. Nobody is assigned responsibility for launching and maintaining the test environment; this also varies from project to project. The company believes that the responsibility should be appointed with the help of configuration management.

Test techniques and specification
System tests are based on the requirements specification. Use cases are used. Each tester is responsible for formulating his or her own test cases and decides what technique to use. A problem here is that the requirements specifications are often poorly written, and need to be re-structured. Another issue in specifying test cases is that it is often problematic to know what the expected output of a test should be. In most cases, this is based on subjective judgements, since the result of a test case (in this case a search) is a number of alternatives, and which alternatives are good depends on what the customer wants. However, the customer does not always know what he or she wants or what a relevant output is. The tester needs to know what has been said between customer and seller in order to be able to judge the output.

Integration and acceptance tests are almost always specified in test cases. Since unit tests are the responsibility of the developers, the design varies depending on who is performing them. Test cases for unit tests are rarely specified in writing. The regression testing needs improvement; the test cases should be scripted so that regression tests can be run more easily. Walkthroughs and inspections are very rare. On some occasions, developers inspect each other's code. The interviewees claim that there is not enough time to perform walkthroughs or inspections, but agree that they could be useful at least for less experienced developers.

Test reporting and defect management
The practices of test reporting and defect management are not well structured and differ between projects. Unit tests are not reported at all, since the developers perform them. Instead, faults that are found in unit tests are fixed immediately by the developer. For integration and system tests, a simple spreadsheet is often used. The project leader is responsible for collecting and compiling the test reports for these levels. The test reports should include the type of fault, the severity, a description and possibly a suggestion for solving the problem. The Core Department uses a report template, while the customer dictates the conditions in the Delivery Department.

According to the interviewees, the best working practice currently is the defect management post-delivery. The company offers a support website where customers report faults and failures. The quality of the reports is dependent on the customer's ability to describe the problem, though. In some projects, the customer has their own defect management system where they report faults during acceptance testing, and where the developers can log in to see the reports. The interviewees like the idea of having a central defect management system. A Subversion system, capable of handling defect management, is going to be incorporated into the organization. The interviewees state that it is important to make sure that all processes, such as defect management and release management, are put together instead of being isolated. This is preferably accomplished by using professional tools.

5.3.2 Analysis of data from the assessment interview

One of the main problems in the current test process is that it is inconsistent. The quality of the testing is mainly dependent on the individual testers. Since all testers are not equally experienced, some tests are done very thoroughly while others are of a more arbitrary nature. This is largely because the testing process is not well planned and communicated, and there is not always a written test plan. Less experienced testers also find it difficult to analyze the outcome of a test. A better planning process and a common testing method among the testers are needed. To solve this problem, a template test plan could be constructed. Also, guidelines on test case specification could be distributed to test designers.

Another problem is that there is a lack of knowledge of software testing. It is difficult to know what to test and when to test it, and there are no guidelines for the testers to follow. A wish was expressed to introduce testing earlier in the development process, and to have a way of knowing the 'ultimate' time to start testing. A flowchart showing the testing process in relation to the development process could be developed.

There is no testing organization, and thus developers test their own code to a large extent. The company wants to introduce a testing organization with a test manager and dedicated testers; this is however not done overnight. To address this problem in the meantime, developers could assume the role of testers, and not test their own code in the system tests. Kit (1995, pp. 163-170) states that developers should not test their own code, since human nature prevents developers from finding defects in their own code. This problem could be partly solved by letting developers test each other's code, although the optimal solution would be to have a formal test organization with assigned testers. The interviewees are however positive towards letting developers perform their own unit tests, since they have the best understanding of their own code.

Other than the problems stated above, the interviewees expressed other needs for improvements. These improvements included, for example, the use of test tools and the setup of a test organization. However, because of time and resource limitations, the improvements stated above are seen as the most important and should be introduced in the first phase, while other needs can be saved for further improvement later on.

5.4 Archival records

Documentation that was made available for research included examples of system level test cases, a test case template, a test plan developed by a customer, a test summary, and a test strategy describing the testing procedures that should be used in projects within the company. The documents belonged to three different projects with different project managers.


The study of the archival records was carried out as a comparison between the contents of the documents and the contents of the equivalent documents according to the IEEE Standard for Software Test Documentation (IEEE 829-1998, 1998) and the contents of the TMap test products (Koomen et al. 2006). See section 2.9 for an explanation of the contents in the IEEE standard.

5.4.1 Collection of data from archival records

When studying the archival records, it should be taken into consideration that the documents are of varying quality since there are no templates to follow. The test plans, for example, are often based on the customer's template. The documentation studied here was chosen as random samples, and might not be representative of all documentation used within the organization (in some projects, test plans are not used at all, while customers provide sub-optimal test plan templates in other projects). The archival records are compared to the templates suggested in IEEE 829-1998 (1998) (see section 2.9).

The test plan
The test plan did not have a unique test plan identifier, though it did have a version number. The introduction stated the purpose of the system to be tested, and also the purpose of testing the system. The system to be tested was declared in the introduction, though no details were specified. A division between features to be tested and features not to be tested was lacking. A test process, with a start, test phases and an end, was described in writing and in a diagram. In the introduction, it was mentioned that the system functionality should be confirmed against the requirements specification and the design specification before being launched at the customer's site. The test plan mentions a functional specification, a test specification, a test plan, a bug report and a test report as test deliverables. Pre-conditions for the pre-test, the test phases, and the acceptance tests are also mentioned. The tasks needed to meet these conditions, or the relationships between them, are not stated. Requirements on the test environment as well as the physical working environment are stated. The roles and responsibilities of the test leader, test persons, client, project manager, technical project manager, and functional requirements manager are appointed and described. The schedule (specified in terms of weeks) is visualized in a time table. Dates for delivery of bug fixes and status reports are also shown in the table. The risks identified in the testing process are stated, along with suggestions on how to avoid the risks.

The test case specification
The test case has a unique number, but no identifier to show what project it belongs to. A description of the function to be tested is given, but not which item contains the function. The category to which the test case belongs is mentioned. Input specifications are lacking; there is a description of what kind of input is to be given, but not the exact input. There is a description of what happens in the system if the input is a) correct, or b) incorrect, but there is no expected output defined. One precondition is stated (a table needs to be defined). The test case specification also contained the person responsible for executing the test, and the expected date of execution.


The test summary report
The report has a name and a version number. An introduction is present, stating the test environment but saying nothing about the results of the tests; this information is instead given in a separate spreadsheet document. The summaries of the results of the different tests are shown in diagrams. Evaluation information is found in a separate spreadsheet document, but it is not summarized.

5.4.2 Analysis of data from archival records

The test plan
The test plan studied here contained some of the basic elements proposed by the IEEE standard, but several items were lacking. Among the most vital elements, the testing tasks are not described as recommended in the standard, and the testing techniques to be used are missing. Describing what to test and how to test it is part of the purpose of the test plan, and can be communicated to both testers and customers through the plan.

The test case specification
The advantage of a well-defined test case specification is that another tester can run the same test, or a regression test can be executed, under the same conditions. Since the exact input and output are lacking in the studied test case specification, this is not possible. What input/output to use is solely up to the tester, which makes the test cases inconsistent.

The test summary report
The test summary report is fairly good – the results of the tests that have been run are presented in easy-to-read diagrams. An overall summary of the results is lacking, though, and would be a worthwhile addition since it could serve as a means of communication with the customer.
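To make the point about exact input and expected output concrete, the sketch below shows what a fully specified, automatable test case could look like. It is only an illustration under assumed names: the discountPercent method and the test class are hypothetical, and JUnit 4 is assumed as the unit test framework, since the thesis does not prescribe one.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PriceCalculatorTest {

    // Hypothetical unit under test, included here only so that the example is self-contained.
    static int discountPercent(int orderTotal) {
        return orderTotal >= 1000 ? 10 : 0;
    }

    // The exact input and the exact expected output are written into the test itself,
    // so another tester, or an automated regression run, repeats the test under
    // identical conditions and gets a comparable verdict.
    @Test
    public void orderBelowThresholdGetsNoDiscount() {
        assertEquals(0, discountPercent(999));
    }

    @Test
    public void orderAtThresholdGetsTenPercentDiscount() {
        assertEquals(10, discountPercent(1000));
    }
}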

5.5 Observations

When observing, it is important to influence the observed process as little as possible. The observer must be careful when asking questions or making comments, so as not to bias the participants. It is also important to make the participants feel comfortable in the situation, so that they act as normally as possible, which increases the odds that the observed session is representative of a normal session. The observations were planned using the basic elements of usability testing as suggested by Rubin (1994, p. 29), translated into elements needed to conduct observations of test sessions. These elements include:
◊ Development of problem statements or test objectives rather than hypotheses.
◊ Use of a representative sample of participants (testers), who may or may not be randomly chosen.
◊ Representation of the actual test environment.
◊ Observations of testers during the preparation and execution of test cases.
◊ Controlled and sometimes extensive interrogation and probing of the participants by the test monitor.
◊ Collection of quantitative and qualitative measures.
◊ Recommendations of improvements to the test process.

The objectives of the observations were to gain insight into how information from project management and developers is distributed to testers, to observe problems in the preparation or execution of a test, and to study the analysis and reporting of the test results. A participant (here denoted tester A) was chosen at random and observed while working in his normal test environment. The tester performed functional tests on a database search application. The tester was fully aware of the observations; at some points he guided the observer through his actions and helped the observer by 'thinking aloud'. The observer also asked questions during the test execution. Notes were taken during the observation.

5.5.1 Collection of data from observations

The members of the project group had previously carried out a brainstorming session where they agreed on which functions of the system and which data needed to be tested. The tester then derived test cases for each of these functions, and for each test case a script was created. Tester A had created his own test case specification template in the form of a spreadsheet table, which included a link to the test script. Thus, in this case, there was a well-defined test case specification. After the test cases had been run, the tester marked them with different colors in the spreadsheet table, indicating whether they had passed or failed. If a test failed, a report was sent via a web page to a server. The defect reports were then handled by developers and removed from the list of defects. Tester A expressed a wish to start preparing test cases earlier. At present, he started deriving test cases only at the time of the code unit delivery, and he was aware that he could have saved time if the test cases could have been prepared before the delivery. Tester A stated that the test cases were specified from the requirements specification. When asked about the quality of the requirements specification, the tester thought that it was adequate and that, although changes had to be made to it during the process, this had only caused minor changes in the test cases. In this case, however, the requirements specification had been inspected and approved by the customer before it was baselined. The tester had not been given directions on how to derive test cases. Before joining the company, tester A had been trained in some testing techniques and hence knew how to use them. In many cases, he used boundary value analysis (see section 2.8.2).
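As an illustration of the boundary value analysis that tester A used, the sketch below derives test inputs just below, on, and just above the boundaries of an assumed valid range. The page-size rule and the class name are hypothetical examples, not the actual application logic; JUnit 4 is again assumed.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class PageSizeBoundaryTest {

    // Hypothetical validation rule: a result page size is valid if it lies within [1, 100].
    static boolean isValidPageSize(int size) {
        return size >= 1 && size <= 100;
    }

    // Boundary value analysis: exercise the values just below, on, and just above each boundary.
    @Test
    public void valuesAroundTheLowerBoundary() {
        assertFalse(isValidPageSize(0));   // just below the lower boundary
        assertTrue(isValidPageSize(1));    // on the lower boundary
        assertTrue(isValidPageSize(2));    // just above the lower boundary
    }

    @Test
    public void valuesAroundTheUpperBoundary() {
        assertTrue(isValidPageSize(99));    // just below the upper boundary
        assertTrue(isValidPageSize(100));   // on the upper boundary
        assertFalse(isValidPageSize(101));  // just above the upper boundary
    }
}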

5.5.2 Analysis of data from observations

As tester A pointed out, the specification of test cases could have been done as a parallel activity to coding. However, this would have required access to the test basis, such as requirements and design specifications. The spreadsheet test specification that tester A had developed was a good example of a test element that could be distributed to other testers. However, he was the only tester in the project using the spreadsheet. The test process would most probably benefit from a common test reporting system.

5.6 Meeting with the test manager

Halfway into the thesis work, a test manager was employed by the company. This had a great impact on the assessment of the test process. The test manager introduced a test management tool called TestLink, which gave both the company and the author of this thesis an overview of the current test process in the Product Department. Up until this moment, the improvement work had focused on finding common weak parts and creating a test process generic enough to use in every project in the company. After gaining these new insights, it was evident that testing is performed in very different ways in the Product Department and in the Solution Department, and that the deficiencies of the testing were very different in the two departments. Therefore, two different approaches could be taken in the two departments.

Several conversations took place between the author and the test manager. During these conversations, it was revealed that the largest problem with the testing in the Product Department was that, in most cases, the test basis was either lacking or of inferior quality. Most of the time, the development work was focused on moving forward and starting implementation instead of documenting requirements. Often, it was discovered during implementation that functionality had to be added or changed, which caused a lot of rework. Only after all of the desired functionality had been expressed and/or implemented were requirements and manuals written down. Thus, the testers did not have access to material to base the tests on until this point, and valuable time then had to be spent reviewing requirements and specifying test cases. If the test basis had been provided earlier, test case specification could have been finished before code delivery: instead of waiting for material to begin test case specification, these activities would have been run in parallel, and test cases could have been run as soon as testable code units were delivered. The developers were aware of the fact that a requirements specification was essential, but they could not find the time to write it down. Also, the test manager believes that the entire department works according to a 'hands-on mentality', implying that they need to be given practical, concrete examples in order to understand what needs to be done and how to do it.

5.7 Existing terms, concepts and practices

To help the new test process gain acceptance in the organization, it is important to use terms and concepts that are already employed. To gain an understanding of which terms to use, a list of terms was given to one of the project leaders. The project leader was then asked to pick out the terms that she felt were employed and understood in the projects. The chosen terms were 'performance test', 'functional test', 'non-functional test', 'stress test', 'developer', 'test manager', 'project manager', 'product manager', 'project planning', 'risk analysis', 'requirements specification', 'error correction', 'operation', 'maintenance', 'unit test', 'integration test', 'system test', 'acceptance test', and 'regression test'. An interesting observation was that 'test planning' was not one of the picked-out terms.


It is also important to point out the practices that are functioning well in some projects and that can be transferred, in their existing or a somewhat modified form, to other projects. Examples of working practices are parts of individual testers' practices (see section 5.5.1) and projects where use cases are defined from which test cases can be derived. The current test process and its relationship with the development activities are shown in a diagram in appendix III.

5.8 Pre-improvement questionnaire

In order to assess the test practices within the specific project, questionnaires were handed out to four developers involved in the project. The purpose of the questionnaire was to gather information about which test practices the developers thought were functioning well or not, and to find out what other tools they would consider helpful when performing tests. The questionnaire, which was administered in Swedish, can be found in appendix II. The reason for using questionnaires for this part of the assessment was that the developers would be available at different times, and the intention of the author was to ask all developers the same questions so as to avoid bias as far as possible. Rubin (1994, p. 199) states that written questionnaires are used to collect general information across the entire population of participants, since the same questions are asked of each individual in exactly the same way. Hence, precision and unambiguousness in the questionnaire are of extreme importance.

5.8.1 Collection of data from the pre-improvement questionnaire

All of the participants were positive towards structuring the test process. They appreciated that the test process improvement had started, and mentioned the introduced tools, especially the unit test automation tool, as the best improvements so far. Other comments were that the communication with the test department worked better than before. Parts that were still not functioning well were that it was difficult to know what output to expect from the test cases, and that the test process was still poorly structured. Only one of the developers had had access to the test plan, and he had not read it. All of the developers wanted better support for specifying test cases, including which technique to use. A majority of the participants wanted support in test planning and in knowing when to start tests.

5.8.2 Analysis of data from the pre-improvement questionnaire

The introduction of testing tools had been successful and the testers felt that the tools had supported them to a large extent. Therefore, the use of tools should continue in the next release. The testers, however, wanted more support in test case specification and test planning. For that purpose, guidelines for choosing and using test techniques and guidelines for test planning should be provided.


5.9 Post-improvement questionnaires

After the improvement suggestions had been incorporated into a chosen project, a post-improvement assessment was done. As the time awarded to the thesis work was limited, a complete evaluation of the improved test process was infeasible; the true impact of the process improvement on the organization will probably not be visible until at least a couple of months have passed, and the project in which the improved test process was implemented was to be finished a few weeks after the deadline of the thesis. Therefore, the improved test process was evaluated by asking the participants, in questionnaires, for their views and thoughts about the improvement steps. The post-improvement questionnaires (in Swedish) can be found in appendix IX. The first part of the post-improvement questionnaire contained assessment questions about the test planning guide, the test evaluation guide and the requirements checklist, documents which were first explained and presented at a project group meeting. This part was given to a test manager, two project managers, and a product manager; these participants were chosen because they were the most likely users of the particular documents in a real project. The second part of the questionnaire addressed the guide to test techniques, and it was handed out to the developers in the chosen project, i.e. the participants in the pre-improvement questionnaire, after a test-technique workshop. The reason for choosing these participants was that the developers perform most of the testing and would thus benefit the most from using the guide to test techniques.

5.9.1 Collection of data from the post-improvement questionnaires

In the questionnaire, the participants were asked to rate their answers to the questions on a scale of '1' to '5', meaning:

1: Definitely not
2: Probably not
3: Neutral
4: Yes, probably
5: Definitely yes

The results are shown in table 3. The numbers indicate the mean score for each question on each document. The numbers within parentheses show the lowest and the highest score.


1. Is it easy to understand how the document is to be used?
   Test planning guide: 4.50 (4.00, 5.00)
   Requirements inspection checklist: 4.25 (3.00, 5.00)
   Test evaluation guide: 4.75 (4.00, 5.00)
   A guide to test techniques: 4.00 (3.00, 5.00)

2. Considering Your current knowledge and experience, would You be able to use the document?
   Test planning guide: 4.75 (4.00, 5.00)
   Requirements inspection checklist: 4.25 (3.00, 5.00)
   Test evaluation guide: 4.75 (4.00, 5.00)
   A guide to test techniques: 4.25 (3.00, 5.00)

3. Would You benefit from using the document in Your work?
   Test planning guide: 4.00 (3.00, 5.00)
   Requirements inspection checklist: 3.75 (3.00, 5.00)
   Test evaluation guide: 4.25 (4.00, 5.00)
   A guide to test techniques: 4.00 (4.00, 4.00)

4. Would You consider using the document in a future project?
   Test planning guide: 4.25 (3.00, 5.00)
   Requirements inspection checklist: 4.25 (3.00, 5.00)
   Test evaluation guide: 4.25 (4.00, 5.00)
   A guide to test techniques: 4.00 (4.00, 4.00)

5. Would You say that the document solves any of the problems in the current test process?
   Test planning guide: 4.50 (4.00, 5.00)
   Requirements inspection checklist: 3.75 (3.00, 5.00)
   Test evaluation guide: 4.25 (4.00, 5.00)
   A guide to test techniques: 3.50 (3.00, 4.00)

Score summary per document:
   Test planning guide: 4.40
   Requirements inspection checklist: 4.05
   Test evaluation guide: 4.45
   A guide to test techniques: 3.95

Table 3 - Results from the post-improvement questionnaire

Apart from answering the questions, the participants were also given the chance to comment on the documents. Some comments were:
◊ The test planning guide should be better suited to the specific projects at the company, so that a test plan can be produced as quickly as possible. Otherwise, there is a risk that the development activities have come too far by the time the test plan is finished.
◊ The test planning guide should contain more techniques for developing test plans.
◊ In the test planning guide, there should also be an activity where test cases are prioritized.
◊ The requirements that are to be reviewed sometimes have to be kept abstract and not too technical, so that the customer can understand them, and this has to be considered when reviewing them.
◊ In the test evaluation guide, the type of release (alpha, beta, or gold) should be mentioned.

5.9.2 Analysis of data from the post-improvement questionnaire

The scores on the post-improvement questionnaire are high (no mean score is below 3.50), which implies that the proposed improvements will be helpful in structuring the test process at the company and that they will probably be used in future projects. On the question "Would You consider using the document in a future project?", the mean score was 4 (yes, probably) or above for each document, which shows that the goal of establishing the improvement actions is most likely to be met. The mean score on the questions concerning the ease of use of the documents was also above 4, suggesting that they will be useful to the users without special training. There were also some comments pointing out deficiencies in the documents and suggesting improvements to them. The comments were considered before the final improvement round, described in section 6.2.2.


6 Results

6.1 Results from the research

Presently, in many projects the developers test their own code. This is positive for unit tests, but when it is time for system test, a separate test team is desirable so that the testers are not biased. The lack of testing knowledge is an apparent problem; thus, all improvement actions must be on a level suitable for beginners, and terms familiar in the organization must be used. After the evaluation of the current test process, and during the analysis, a test manager was hired. This should mean that there is a good chance that some of the organizational issues will be solved and that the test process will become more cohesive.

Since a very large part of the problems in the current test process stem from poor planning practices, these need to be structured. Providing a test planning guide could help solve the problem. The company also needs help in determining when to start testing; one way to address this could be to create a flowchart showing the testing activities in relation to the development process. In projects where testers are separated from developers, the test activities presently start after the development activities. If the two could be run in parallel instead, a lot of time could be saved and the testing activities would spend a shorter time on the critical path. This would require better planning, and also that testers can access the test basis (requirements, functional and technical design documents) early on.

Currently, with few exceptions, no specific test techniques are used. The developers are not trained as testers and thus might not know how to specify test cases according to test techniques. A guide on how to test different features could be of assistance here. The main problem with the current test documentation is that it is inconsistent; this, too, could be solved by using templates. The templates could, for example, be based on the IEEE standard or on elements of the TMap documentation. Guidelines should be provided along with the documents, so that they can be used without too much training.

The most important document to start with is the test plan. The test manager is currently working on a test plan template. As a first step, the test plan should be simple and answer the questions needed for organizing and performing test activities. These questions are:
◊ Why should we test? – What are the goals of performing tests?
◊ What should we test? – What functions, units, objects, features, systems, etc. should we test?
◊ Who should test? – Who has what role and which responsibilities?
◊ How should we test? – What techniques, tools, methods and environment are needed?
◊ When should we test? – How do we schedule testing and which milestones and deliverables are defined? What prerequisites are there for starting tests?
◊ When should we stop testing? – What are the pass/fail criteria? What decides when an activity or the test phase as a whole ends?

6.2 Improvement measures

The interviews, observations and conversations all showed that the largest problem is that testing, if planned at all, is planned too late. This results in time on the critical path being wasted on test preparations. The problem stems both from not recognizing and prioritizing testing as its own manageable process, and from the test basis not being available on time. Test and development activities cannot be run in parallel, and test cases are not specified and ready by the time the code is delivered. This is one of the first problems that needs to be addressed. Also, based on the low-level improvements suggested by the framework models (see section 3), the following steps are recommended as first steps for improving the current test process:
◊ Provide support for test planning, for example by creating a test planning guide that helps answer the questions stated in section 6.1. Show the advantages of early test planning by using practical examples or metrics.
◊ Provide support for knowing when to start testing (section 2.11).
◊ Provide support for test reporting, possibly with the help of a template (section 2.9.3). Koomen and Pol (1999, p. 143) consider test reporting to be the most important product of the test process, since it offers insight into the quality of the system and also the quality of the development process.
◊ Provide support for problem reporting and defect management (a minimal sketch of a defect report follows this list).
◊ Provide support for analyzing test results.
◊ Visualize the relationship between the test process and the development process, for example by creating a flowchart.
◊ Ensure that the needed test basis is present and review it for testability. For testability in requirements, see section 2.5.
◊ Introduce the new test elements gradually, in small steps, since this will probably face less resistance and reluctance (see section 2.7).
◊ Use terms and concepts that are familiar and used within the organization. Examples of such terms are presented in section 5.7.
◊ Use examples of customs and activities that are already well performed within the organization.
◊ Introduce testing techniques. A description of some basic techniques can be found in section 2.8. Koomen and Pol (1999, p. 105) state that a test technique requires at least a description of the starting situation, the test actions to be performed, and the expected end result.
◊ Make the new process as generic as possible, so that it will be both repeatable and transferable.
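As a complement to the problem reporting and defect management item above, the sketch below shows one possible fixed structure for a defect report. The field set loosely follows the attributes asked about in the assessment interview (ID, date, person responsible, location, description, cause, severity, type); the class and field names are illustrative assumptions, not the company's actual template.

public class DefectReport {

    enum Severity { LOW, MEDIUM, HIGH, CRITICAL }

    // A fixed set of fields means that every report carries the same information,
    // regardless of who writes it. All names below are illustrative assumptions.
    final String id;            // unique identifier, e.g. "DEF-042"
    final String reportedDate;  // date the defect was found, e.g. "2008-05-20"
    final String reportedBy;    // tester who found the defect
    final String location;      // item or module where the defect was observed
    final String description;   // what happened, including the input used and the observed output
    final String suspectedCause;
    final Severity severity;    // impact on the system
    final String testCaseId;    // the test case that revealed the defect, if any

    public DefectReport(String id, String reportedDate, String reportedBy, String location,
                        String description, String suspectedCause, Severity severity,
                        String testCaseId) {
        this.id = id;
        this.reportedDate = reportedDate;
        this.reportedBy = reportedBy;
        this.location = location;
        this.description = description;
        this.suspectedCause = suspectedCause;
        this.severity = severity;
        this.testCaseId = testCaseId;
    }
}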

6.2.1 Specific improvements in the chosen project

The chosen project is a development project that is divided into three releases. After the first release, a pre-improvement questionnaire was handed out (see section 5.8). With information gathered from the general assessment and the questionnaire (the specific assessment), improvement actions for testing in the second and third releases of the project were proposed. Since the project was iterative and some planning and a lot of the development were already in place, some of the improvement actions mentioned in section 6.2 could not be implemented in full at this stage. Also, the project was so small that it was considered infeasible to implement too many elements. Since the second release iteration required some revision of the test planning, there was an opportunity to try this element of the improved test process. An informal review of the requirements to ensure testability could be introduced. From the questionnaires it was also deduced that the testers wanted better support in specifying test cases according to techniques and in documenting test cases and test results. The last element of improvement was to introduce an analysis of the test results. This analysis would, however, take place after the end of the thesis work, and thus a full evaluation of it could not be completed. The proposed improvement actions were:
◊ Perform risk analysis – risk analysis was introduced as an activity within test planning (see below for test planning support).
◊ Earlier test planning – a project group meeting was held in the project start-up phase. At the meeting, the documents described below were presented, providing the project group members with the tools to start test planning early.

◊ Support for test planning – the test manager had created a test plan template. The improvement here included a description of the activities carried out in test planning, the questions to be answered by the test plan, and support in accomplishing these activities and completing the test plan. The guide included reasons for introducing the activities, so as to motivate the testers. The result was a presentation of test planning activities along with a test planning guide that was handed to the testers.

◊ Support for requirements reviews – a checklist, in the form of a spreadsheet document, for reviewing the testability of the requirements was provided, along with questions to answer for each requirement.

◊ Support for choosing test techniques – a guide showing appropriate test techniques for different types of data was handed to the testers. The guide also included support for choosing appropriate test case parameters. In addition, a workshop on test techniques was carried out.

◊ Support for evaluating test results – a set of questions to answer when analysing the outcome of the test was provided in the form of a test evaluation guide.
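A minimal sketch of the kind of summary such an evaluation builds on is shown below: counting passed and failed test cases and computing a pass rate that can be compared against the completion criteria in the test plan. The outcome records and figures are invented for illustration and are not taken from the project.

import java.util.Arrays;
import java.util.List;

public class TestResultSummary {

    // Hypothetical outcome record: a test case identifier plus whether the test passed.
    static class Outcome {
        final String testCaseId;
        final boolean passed;
        Outcome(String testCaseId, boolean passed) {
            this.testCaseId = testCaseId;
            this.passed = passed;
        }
    }

    public static void main(String[] args) {
        List<Outcome> outcomes = Arrays.asList(
                new Outcome("TC-01", true),
                new Outcome("TC-02", false),
                new Outcome("TC-03", true),
                new Outcome("TC-04", true));

        long passed = outcomes.stream().filter(o -> o.passed).count();
        double passRate = 100.0 * passed / outcomes.size();

        // Summarize the outcome so it can be held against the completion criteria in the test plan.
        System.out.printf("%d of %d test cases passed (%.1f%%)%n", passed, outcomes.size(), passRate);
        outcomes.stream()
                .filter(o -> !o.passed)
                .forEach(o -> System.out.println("Failed: " + o.testCaseId));
    }
}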

6.2.2 Results from the specific improvements in the chosen project

Based on the results from the post-improvement questionnaires, some changes were made to the first drafts of the guide documents. These changes included replacing some terms that were somewhat ambiguous, taking the current release into consideration in the test evaluation, and adding test case prioritization. One participant pointed out that the test planning guide should be more adapted to the particular situation at Apptus. However, since the goal of the other participating company (Sogeti) was to develop a generic test process improvement model, this suggestion was difficult to accommodate. These were two contradicting viewpoints, and it was decided that the best thing to do was to keep the guide as small, simple and flexible as possible, so that it could be suited to many organizations. Also, the problem of test planning being delayed, so that development has gone too far before a test plan can be produced, should be solved with better project planning and thereby earlier test planning. The resulting test process improvement documents can be found in appendices V-VIII.


7 Conclusions

7.1 Conclusions

In many immature organizations, the existing test process focuses almost entirely on test execution. The assessment of the current test process at the company studied in this thesis revealed that this was the case here as well. Therefore, the aim of the improvement actions was to shift the focus to test planning and test evaluation. With the help of existing test process improvement models and testing theory, practical improvement guidelines were presented. A guide to test planning was developed, along with a checklist for reviewing requirements for testability, a guide to choosing the right test techniques, and a guide to test evaluation. The four documents were introduced in a chosen project. After introducing the documents, an assessment of the improvement actions was done. The results of the assessment were quite satisfactory. However, further improvements based on comments from the assessment participants were made to the documents.

During the test process evolution and the development of the guides, the author noticed a raised awareness and discussion of test-related activities and their importance within the company. Also, with the help of the test manager, test-related activities earned a position on the development project agenda.

The goal of this thesis was to provide suggestions for the introduction of structured testing and at the same time develop a generic test process improvement model. The proposed improvement actions fulfil the basic elements of the test process models described in chapter 3. However, the single most important factor that will ultimately determine the success of the improvement actions is the attitude of the people who will introduce and use them. The results from the evaluation of the proposed improvement actions were satisfactory, showing that the guides were easy to understand and that the participants would most likely use them in their work. The guides are small and simple and could fit in many projects. Based on this, the author's opinion is that the goals have been met.

Focusing on the next steps in further improving the test process, the company was given the following advice, based on the findings from the research (see section 6.2) and the test process improvement models:
◊ Generalize and use the improvement steps from the chosen project to suit other projects. Agree on common templates so that test planning and reporting are performed consistently throughout different projects.
◊ Keep measurements of tests so that test results can be analyzed.
◊ Focus on the communication and quality of the bug reports and test reports. Better reports help in the monitoring and analysis of tests, and support more efficient bug repairs.

A factor that influenced the research and the results was that the number of respondents in the interviews and questionnaires was very small. Since the company was not that large, only a limited number of people were available who were in a position to influence the test process and benefit from the guide documents. Therefore, it was decided that who evaluated the documents was more important than how many.


8 References

8.1 Books
<Author. Year published, Title, Publisher.>
Burnstein, I. 2003, Practical Software Testing, Springer.
Graham, D., Herzlich, P. and Morelli, C. 1995, CAST Report, Cambridge Market Intelligence Limited.
Highsmith, J. 2002, Agile Software Development Ecosystems, Addison-Wesley.
Höst, M., Regnell, B. and Runeson, P. 2006, Att Genomföra Examensarbete, Studentlitteratur.
Kit, E. 1995, Software Testing in the Real World: Improving the Process, Addison-Wesley.
Koomen, T. and Baarda, R. (ed.) 2005, TMap Test Topics, UTN Publishers.
Koomen, T. and Pol, M. 1999, Test Process Improvement – A Practical Step-by-step Guide to Structured Testing, Addison-Wesley.
Koomen, T., van der Aalst, L., Broekman, B. and Vroon, M. 2006, TMap Next – for Result-driven Testing, UTN Publishers.
Lauesen, S. 2002, Software Requirements – Styles and Techniques, Addison-Wesley.
Pham, H. 2007, System Software Reliability, Springer.
Rubin, J. 1994, Handbook of Usability Testing – How to Plan, Design, and Conduct Effective Tests, John Wiley & Sons.
Ryber, T. 2006, Testdesign för Programvara – Så Tar Du Fram Bra Testfall, No Digit Media.
Sommerville, I. 2001, Software Engineering, 6th ed., Addison-Wesley.
Wohlin, C. 2005, Introduktion till Programvaruutveckling, Studentlitteratur.

8.2 Journal articles

<Author. Year published, Title, Journal name, Volume, Issue, Page numbers.>
Boehm, B.W. 1976, Software Engineering, IEEE Transactions on Computers, Vol. C-25, No. 12, pp. 1226-1241.
Boehm, B.W. 2000, Software Estimation Perspectives, IEEE Software, Vol. 17, No. 6, pp. 22-26.
Burnstein, I., Homyen, A., Suwanassart, T., Saxena, G. and Grom, R. 1999, A Testing Maturity Model for Software Test Process Assessment and Improvement, Software Quality Professional, Vol. 1, No. 4, <http://www.asq.org/pub/sqp/past/vol1_issue4/burnstein.html>.
Denger, C. and Shull, F. 2007, A Practical Approach for Quality-Driven Inspections, IEEE Software, Vol. 24, No. 2, pp. 79-86.
Ericson, T., Subotic, A. and Ursing, S. 1997, TIM – A Test Improvement Model, Software Testing, Verification & Reliability (STVR), Vol. 7, No. 4, pp. 229-246.
Gondra, I. 2008, Applying Machine Learning to Software Fault-Proneness Prediction, Journal of Systems and Software, Vol. 81, No. 2, pp. 186-195.


IEEE Std 829-1998, 1998, IEEE Standard for Software Test Documentation, Software Engineering Technical Committee of the IEEE Computer Society.
Johnson, M.J., Ho, C., Maximilien, E.M. and Williams, L. 2007, Incorporating Performance Testing in Test-driven Development, IEEE Software, Vol. 24, No. 3, pp. 67-73.
Karlström, D., Runeson, P. and Nordén, S. 2005, A Minimal Test Practice Framework for Emerging Software Organizations, Software Testing, Verification and Reliability, Vol. 15, No. 3, pp. 145-166.
Lauesen, S. and Vinter, O. 2001, Preventing Requirement Defects: An Experiment in Process Improvement, Requirements Engineering, Vol. 6, No. 1, pp. 37-50.
Samson, D. 1993, Knowledge-Based Test Planning: Framework for a Knowledge-Based System to Prepare a System Test Plan from System Requirements, The Journal of Systems and Software, Vol. 20, No. 2, pp. 115-124.

8.3 Conference papers

<Author. Year published, Title, Conference name[, Organizer][, Page numbers].>
Bai, X., Dai, G., Xu, D. and Tsai, W. 2006, A Multi-Agent Based Framework for Collaborative Testing on Web Services, Second International Workshop on Collaborative Computing, Integration and Assurance, IEEE, pp. 205-210.
Bate, R.R. and Ligler, G.T. 1978, An Approach to Software Testing: Methodology and Tools, The IEEE Computer Society's Second International Computer Software and Applications Conference (COMPSAC '78), IEEE, pp. 476-480.
Bergner, K., Lötzbeyer, H., Rausch, A., Sihling, M. and Vilbig, A. 2000, A Formally Founded Componentware Testing Methodology, Proceedings of the First International Workshop on Automated Program Analysis, Testing and Verification, ICSE 22.
Berling, T. and Thelin, T. 2003, An Industrial Case Study of the Verification and Validation Activities, Proceedings of the Ninth International Software Metrics Symposium (METRICS '03), pp. 226-238.
Bertolino, A., Corradini, F., Inverardi, P. and Muccini, H. 2000, Deriving Test Plans From Architectural Descriptions, Proceedings of the 2000 International Conference on Software Engineering, ACM, pp. 220-229.
Gokhale, S.S. 2005, Variance Expressions for Software Reliability Growth Models, Annual Reliability and Maintainability Symposium 2005, Proceedings, IEEE, pp. 628-633.
Gupta, S.C. and Sinha, M.K. 1994, Impact of Software Testability Considerations on Software Development Life-cycle, First International Conference on Software Testing, Reliability and Quality Assurance, IEEE, pp. 105-110.
Itkonen, J., Mantyla, M.V. and Lassenius, C. 2007, Defect Detection Efficiency: Test Case Based vs. Exploratory Testing, First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007), IEEE, pp. 61-70.
Le Traon, Y. and Baudry, B. 2004, Optimal Allocation of Testing Resources, First International Workshop on Model, Design and Validation, IEEE, pp. 9-17.
Paige, M.R. 1978, An Analytical Approach to Software Testing, The IEEE Computer Society's Second International Computer Software and Applications Conference (COMPSAC '78), IEEE, pp. 527-532.
Sibisi, M. and van Waveren, C.C. 2007, A Process Framework for Customising Software Quality Models, AFRICON 2007, pp. 1-8.


Souter, A.L. 2003, Context-driven Testing of Object-oriented Systems, International Conference on Software Maintenance (ICSM 2003), pp. 281-284.
Vienneau, R.L. 1991, The Cost of Testing Software, Proceedings of the Annual Reliability and Maintainability Symposium 1991, pp. 423-427.
Yutao, H., Hecht, H. and Paul, R.A. 2000, Measuring and Assessing Software Test Processes Using Test Data, Fifth IEEE International Symposium on High Assurance Systems Engineering (HASE 2000), pp. 259-264.


Appendix I – The Assessment Interview

1. Purpose
This interview is part of a Master's thesis project whose purpose is to evaluate the existing test process at Apptus so that improvements to it can subsequently be proposed and introduced. The answers given in the interview will form the basis for the evaluation of the existing test process.

2. Selection:
A project manager with responsibility for testing in the project
A member of management
A person with good knowledge of the company's working methods and products

3. Background:

What is your position at the company?
In what way and to what degree are you involved in the testing activities at the company?

4. Information about the company:
How many people work with development and/or maintenance of software?
How many people work with test-related activities?
How is testing at the company organized (the developers take care of the testing, there is a test group within the development group under a project manager, there is a separate test group under a test manager, there is a Software Quality Assurance group…)?
How structured would you say the testing at the company is (completely ad hoc, informal, somewhat structured, well structured)?
Approximately what percentage of the project time and of the project cost is spent on testing? Do you think that this is too much/too little/about right?
What would you spontaneously say are the strongest and the weakest parts of the current test process? Which parts are most in need of change?

5. Questions by test process element:

Test strategy and methodology
Is there an established test strategy or policy? What is specified in it? Who draws up the test strategy?
Describe the test methodology. Is it clearly formulated? Are different methodologies used for different projects, or is the methodology generic? Which parts does it contain? Is the methodology followed?
Is any risk analysis for testing performed in the projects, so that it is known how much and what to test in order to optimize the distribution of resources based on what needs to be tested most thoroughly? Is the risk analysis used to plan the thoroughness with which different parts are tested, depending on how critical the parts are to the project? Are different parts tested with different test techniques depending on how critical they are?
Is there a test strategy for testing non-functional properties and requirements? How are these prioritized, and where is the prioritization specified?


What determines when it is time to stop testing (some/all test cases have passed, time or resources run out)?

Time/resource estimation and planning

What is the time and resource estimation in a project based on? Do the estimates usually turn out to be accurate? Is there data from earlier projects that can be used for estimation, and if so, is it used? During the course of the project, is there any check of how well the time and resource estimates match the actual need? If not, is the estimation checked after the project has ended? Is there room to adjust and adapt the time and resource allocation if needed after the planning phase has ended?

Attitude

What is the general attitude towards testing among the employees? Is it seen as a 'necessary evil' or as something very important? What is the attitude towards introducing a new test process? Do the test manager/testers have the opportunity to influence the planning and resource allocation in the project?

The test environment

What does the test environment look like? Who is responsible for setting it up and maintaining it? Can test cases easily be saved and recreated in the test environment? Does the test environment simulate the environment in which the product will actually be used? Can different testers use the test environment without affecting each other's tests?

Reporting and defect management

How are the test results reported? What do the reports contain? When are the test results reported (after a certain time, after all test cases have been run, when a critical defect has been found…)? How is a defect reported and described (ID, date, person responsible, location, defect description, cause, severity, defect type…)? Is there a template for the defect reports? Is there a standard for classifying defect type and severity?

Communication

How does communication take place within the test group? How does communication take place between the test group and other groups (management, developers…)? Are there any follow-ups on how the testers' work is progressing? How? What is followed up?

Test staff and training

How many testers are usually part of a test group? Does each test group have a formal test manager? Have all testers had their areas of responsibility and tasks defined? Where are these defined? Are those who test employed as testers, or is testing not really their main task? Have testers/test managers received any training in testing?


Do those who work with testing at the company generally have good knowledge of and extensive experience in testing?

Test specification techniques

Are the test cases specified according to one or more defined test techniques? If not, how are the test cases specified? Which techniques are used (both white-box and black-box)? Where is it stated which test technique is to be used (in the test strategy, the test plan, the test specification)? Who decides which technique is to be used (the test manager, or up to each tester)? How are the test cases described? What is included in the description (input, system state, expected output)? Are the test cases documented so that someone other than the person who wrote them can execute them? Can the test cases be reused? Are test cases specified for all test levels (unit, integration, system, acceptance tests…)? How do the test techniques differ between the levels?

Start of the test phase

At what point in a development project (at the start of the project, in connection with the requirements specification, during design, during development, at delivery of the subsystems…) is the testing, including resource allocation, planned? When are the test cases specified? When are the testers involved? Do the test manager/testers have the opportunity to review the requirements specification before development starts? How is this review carried out (formal/informal, checklists…)? How are the results reported? Do the test manager/testers have the opportunity to suggest changes to the requirements with respect to, for example, their testability? Is there any review and evaluation of the other documents (e.g. design documents) that make up the test basis before the test cases are specified? How is this review carried out (formal/informal, checklists…)? How are the results reported? Are there any techniques specified for this type of review/evaluation?

The process model

Describe which phases are defined in the test process (planning, preparation, specification, execution, completion…). What do these phases consist of? Is there a template for what should be included in each phase? Who is responsible for each phase? What documentation is produced in the test process (test plan, test specification, test case design, defect report…)? Is the documentation easily accessible to the testers? How are the documents version-managed and administered? Are there test plans in every project? What information do they contain? Are metrics used to measure the test process itself? If so, which ones, and what is measured? What input (time spent, system size…) and output (executed test cases, number of defects…) are used in the measurements? How are the results evaluated and reported? How are the results used?

6. Any other comments?


Appendix II – The Pre-Improvement Questionnaire

Introduction
The purpose of this questionnaire is to evaluate how developers/testers in a specific project currently view and perform testing. An important part of the evaluation is to examine how the current processes and tools support the test work. Parts of the questionnaire use a seven-point scale, on which you mark the degree to which you prefer one of two alternatives. Circling '1' thus means that you fully agree with the alternative to the left, and '7' that you fully agree with the alternative to the right. Alternative '4' represents a middle value or that you are neutral on the question.

Background
B1. Personal details:

Name:

E-mail:

Work tasks:

Experience in the software industry (number of years):

Experience in testing (number of years):

B2. Which test-related tasks do you currently have or have you had previously (check all that apply)?

Test planning                                      Review of documents (requirements, design)

Specification of test cases                        Review of code (not your own code)

Documentation of test outcomes (defect reports)    Execution of test cases for unit test

Test management                                    Execution of test cases for system test

Collection of test-related statistics              Debugging

Other:

B3. Approximately what proportion of the total time you spend on software development do you devote to the following test-related activities?

a) Planning and specification of test cases: ____%
b) Documentation: ____%
c) Execution of test cases: ____%
d) Debugging: ____%
e) Other: ____%


The test process
TP1. How would you describe the test process at present (circle your answer)?

Ad hoc   1 2 3 4 5 6 7   Well structured

TP2. How important do you think it is to have a structured test process?

Not at all important   1 2 3 4 5 6 7   Very important

TP3. Do you think a suitable amount of time has been spent on testing in the project?

Too little time has been spent on testing.   1 2 3 4 5 6 7   Too much time has been spent on testing.

TP4. Could you consider spending more time on planning and specifying tests than in this project?

No, not at all   1 2 3 4 5 6 7   Yes, absolutely

TP5. What do you think have been the good and the less good parts of the testing during this project?

Good:

Less good:

TP6. Have you had access to the test plan in this project?

No

Yes


TP6a: If yes, have you read it?

No

Yes

TP6b: If yes, has it supported you in your test work?

No

Yes

TP7: What did you base your test cases on?

The requirements specification            Design documents

Use cases                                 Code

Other:

TP8: When did you start planning and specifying your test cases?

When the requirements specification was finished    During the programming of the unit to be tested

When the design was finished                        When the programming of the unit in question was finished

When the use cases were finished                    I did not specify any test cases

Other:

TP8a: Was that a good time to start planning and specifying test cases?

It was too early   1 2 3 4 5 6 7   It was too late

TP9: How did you choose the parameters (input data) you used during the test execution?


TP10: Which of these activities would you need or like better support for, or help with, in your work?

Better test planning                                     Support for reviewing documents (requirements, design)

Better overview of the test process                      Support for reviewing code (not your own code)

Knowing the right time to start testing                  Support for writing test scripts

Knowing which techniques should/can be used for specifying test cases    Support for executing test cases for unit test (black-box)

Support for documenting test cases                       Support for executing test cases for system test (white-box)

Support for documenting test outcomes (defect reports)   Support for debugging

Other:

Tools
V1: Have you used any tools for test-related activities in this project? If yes, which ones?

No

Yes:

If you answered no to the question above, you have now finished the questionnaire.

V2: Was this the first time you used this tool?

No

Yes

V3: Do you generally feel that the tool has made your work easier?

No

Yes


V4: How well do you agree with the following statements about the tool (check the appropriate box)?

Response alternatives: Do not agree at all / Do not agree / Neutral / Agree to some extent / Agree completely

The tool was easy to understand and use

I used the tool the way it was meant to be used

The tool helped me plan my work

The tool helped me communicate my progress better

The tool helped me carry out my work faster

The tool enabled me to carry out my work more efficiently

The tool helped me see the results of my work

The tool gave me a better overview of my work

I would like to use the tool in later projects as well

Thank you for taking the time to answer the questionnaire!


Appendix III – The Current Test Process Model

[Figure A I is a flowchart in the original document; only its text labels survive the text extraction. The recoverable labels show swimlanes for client/initiator, product manager, project manager, test manager and developer; activities and phases such as project planning, design, unit coding/integration, unit test, unit test execution, system test, system test execution, functional acceptance test (FAT) execution, error correction, operation and maintenance; artefacts such as requirements/use cases, the project plan and the test plan/specification; and a legend distinguishing development phases, test process activities, test artefacts, phase documents/deliverables, the development flow (critical path), alternative paths, test levels and the division of responsibility.]

Figure A I - The current test process. The horizontal axis represents the timeline, and the vertical axis the division of responsibility.


Appendix IV – The Improved Test Process Model

Figure A II - The improved test process model. The horizontal axis represents the timeline and the vertical axis the division of responsibility.

[Figure A II is a flowchart in the original document; only its text labels survive the text extraction. It repeats the elements of Figure A I and additionally marks improvement steps: recognize testing as a separate, manageable process; specify requirements/use cases already in the test planning phase; review requirements for testability; produce a test plan according to a set standard; specify test cases according to stated techniques; script test cases to simplify test execution and regression testing; set up the test environment; run development and test activities in parallel; create problem reports according to a set standard; and produce a test report.]


Appendix V – A guide to test planning

Regardless of what test plan template is used, the information in this guide applies to any test planning phase. This document serves as a complement to a test plan template, and can also serve as a checklist for ensuring that the necessary parts are included in the test plan.

a) The goals of test planning

The test planning should take place as early as possible. To enable as efficient use of resources as possible, test planning should be performed before development begins. The goals of the test planning are primarily to:
◊ Agree on a test approach
◊ Agree on the purpose and goals of testing
◊ Gain an understanding and overview of the product and the test process
◊ Acquire a common means of communication – the test plan
◊ Perform risk analysis
◊ Ensure optimal distribution of resources
◊ Enable an earlier start of the test phase
◊ Aim at performing tests as cost-efficiently as possible while still ensuring quality in the system

b) The purpose of the test plan

The test planning phase should result in a test plan. The test plan serves as a means of communication not only between developers and testers, but also as a means of communicating the quality of the test process, and ultimately the quality of the product, to the customer. The purpose of the test plan is to answer the following questions:
◊ Why should we test?
◊ What should we test?
◊ Who should test?
◊ How should we test?
◊ When should we test?
◊ When should we stop testing?

c) Test planning activities

The following activities are vital to the test planning process. The activities do not have to be performed in the order stated below; however, it is recommended to address general issues such as strategy and purpose first, and technical details such as test techniques and tools later. The list below can be used as a checklist for ensuring the presence of the test planning activities within the test process.

i) Perform risk analysis

Risks that can have a negative impact on the goals of the project should be identified and evaluated. An example of a risk is that software or hardware is not available on time for tests. It is necessary to consider the probability that each particular risk will occur, and the severity, that is, the degree of impact the risk will have on the system or on the development process if it occurs. Next, strategies to avoid, minimize or mitigate the effects of the risks need to be drawn up. The reason for performing risk analysis is to be prepared for risks and to strive to avoid them.
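As a sketch of how the outcome of such an analysis can be organized, the snippet below ranks identified risks by a simple exposure measure (probability times impact, both scored 1–5). The listed risks, the scales and the exposure formula are illustrative assumptions, not data from the studied project.

    import java.util.*;

    public class RiskAnalysis {

        static class Risk {
            final String description;
            final int probability;  // 1 (unlikely) .. 5 (almost certain)
            final int impact;       // 1 (minor) .. 5 (blocks testing or release)

            Risk(String description, int probability, int impact) {
                this.description = description;
                this.probability = probability;
                this.impact = impact;
            }

            int exposure() {
                return probability * impact;  // simple risk exposure measure
            }
        }

        public static void main(String[] args) {
            List<Risk> risks = new ArrayList<Risk>();
            risks.add(new Risk("Test hardware not available on time", 3, 5));
            risks.add(new Risk("Requirements change late in the project", 4, 3));
            risks.add(new Risk("Key tester unavailable during system test", 2, 4));

            // Sort by exposure so mitigation is planned for the worst risks first.
            Collections.sort(risks, new Comparator<Risk>() {
                public int compare(Risk a, Risk b) {
                    return b.exposure() - a.exposure();
                }
            });

            for (Risk r : risks) {
                System.out.println("Exposure " + r.exposure() + ": " + r.description);
            }
        }
    }

Ranking the risks in this way makes it easier to decide which avoidance or mitigation strategies deserve attention in the test plan.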

ii) Define goals and purpose

The goals and purpose of testing the system answer the question ‘why should we test?’. Testers need to know what goals are to be achieved by the tests. Goals can be either qualitative or quantitative, and meeting them demonstrates a certain degree of quality, which can also be used as a sales argument.

iii) Decide upon what to test and what not to test

The reason for defining what items are to be tested is to ensure that all parts that need to be tested are considered and covered by tests. It is equally important to define what items not to test, since this eliminates the risk of items being tested in vain or tested twice. The items include classes, modules, libraries, systems, subsystems, etc. If possible, it is advisable to include the version numbers of the items and references to them. Also include the properties or quality characteristics that need to be tested. These features are related to the quality and non-functional requirements; examples of features are performance, reliability and maintainability. The approach that will be taken to ensure that the features are tested must be stated, as should the techniques for testing items and features. Test cases should also be prioritized, so that the most important test cases are sure to be run and less important test cases can be omitted if time runs out.

iv) Define roles and responsibilities

The staff, their roles and their responsibilities within the test process should be identified. The reason is that it should be clear to each and every participant what his or her responsibilities are, and also to ensure that all responsibilities have been delegated to at least one person.

v) Determine preconditions for starting tests

Define what needs to be in place before testing can start. This includes determining what resources, documents, hardware, software, settings, test data, versions and environment are needed for the tests. The tools to be used should also be selected. Acquiring tools is not a goal in itself, since choosing the wrong tools can be counterproductive; tools must be carefully selected and adapted to the maturity of the organization. Tools that are suited to the test process can be very beneficial and can aid in test planning, test process control, test specification, test execution and test evaluation. The reason for including preconditions in the test planning is to consider the resources needed and to have them ready when the testable code units are delivered. In this way, time that should be allocated to test execution is not wasted on producing those items.

vi) Define criteria for stopping tests

Completion criteria are criteria for ending the test iteration round or the test execution phase as a whole. The reason for stating completion criteria is for testers to know when they have tested enough, and when the product can be released or moved on to the next iteration of development. Completion criteria can be, for example, a code coverage goal or an error detection rate, such as mean time between failures (MTBF).
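A minimal sketch of how two such criteria could be checked at the end of a test period is given below; the coverage figure, the MTBF threshold and the observed failure count are illustrative assumptions only.

    public class CompletionCriteria {

        public static void main(String[] args) {
            // Criterion 1: at least 80 % statement coverage (as reported by a coverage tool).
            double measuredCoverage = 0.84;
            boolean coverageMet = measuredCoverage >= 0.80;

            // Criterion 2: MTBF of at least 100 hours during the final test period.
            double operatingHours = 520;   // total test operating time
            int failuresObserved = 4;      // failures logged in the period
            double mtbf = operatingHours / failuresObserved;   // 130 hours
            boolean mtbfMet = mtbf >= 100;

            System.out.println("Coverage goal (>= 80%) met: " + coverageMet);
            System.out.println("MTBF goal (>= 100 h) met: " + mtbfMet + " (MTBF = " + mtbf + " h)");
        }
    }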

vii) Determine pass/fail criteria

Pass/fail criteria are needed to know whether a test has passed or whether the item or document tested needs reworking. Pass/fail criteria govern the completion of separate test cases or groups of test cases. Here, it is necessary to define the criteria for passing a test. For example, a test case can be labelled "passed" if its output matches the expected output, while a test case whose output does not match the expected output is labelled "failed". It is also important that the desired accuracy of the results, especially for non-functional tests, is stated in the requirements, for example ‘the response time of a function call must be below 1000 ms 98% of the time’.
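To illustrate, the JUnit sketch below turns that example requirement into an automated pass/fail check. The number of samples and the measureResponseTimeMillis() helper are hypothetical placeholders for a call to the real system under test.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class ResponseTimeCriterionTest {

        private static final int SAMPLES = 200;            // number of measured calls
        private static final long LIMIT_MS = 1000;          // required upper bound
        private static final double REQUIRED_RATIO = 0.98;  // fraction of calls that must meet it

        @Test
        public void responseTimeMeetsRequirement() {
            int withinLimit = 0;
            for (int i = 0; i < SAMPLES; i++) {
                if (measureResponseTimeMillis() < LIMIT_MS) {
                    withinLimit++;
                }
            }
            double ratio = (double) withinLimit / SAMPLES;
            // The test case is labelled "passed" only if the stated accuracy is reached.
            assertTrue("Only " + (100 * ratio) + "% of calls were below " + LIMIT_MS + " ms",
                       ratio >= REQUIRED_RATIO);
        }

        // Hypothetical placeholder: time one call to the system under test.
        private long measureResponseTimeMillis() {
            long start = System.currentTimeMillis();
            // callSystemUnderTest();   // assumption: the real function call goes here
            return System.currentTimeMillis() - start;
        }
    }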

viii) Schedule testing

The testing can be broken down into testing tasks, which could be related to the development tasks defined in the project plan. The estimated time needed for each task should be stated and scheduled. The reasons for scheduling tests are to ensure that adequate time is allocated to testing and to track whether testing progresses as planned by comparing the schedule to the actual situation.


Appendix VI – A requirements review checklist

The requirements should be checked for testability. The term "testability" encompasses completeness, consistency, unambiguousness and verifiability. The checklist below can be used to review the requirements specification for testability. The severity of each defect is graded on a scale of 1 to 5, where 1 is a minor defect (with little or no impact on test execution) and 5 a major defect (hindering test execution). Record the defects found in the table below.

Question – Type of testability defect
1. Are all necessary requirements included, as described in the problem statement? – Incompleteness
2. Do any requirements contradict each other or conflict? – Inconsistency
3. Are the requirements clear and easy to understand? Do all parties agree on the meaning of each requirement? – Ambiguousness
4. Can each requirement be covered with one or more test cases? Can these test cases be run in an economically feasible way? Can tests determine whether the requirement has been satisfied? – Unverifiability

Defects found are recorded in a table with the columns: Position, Requirement, Type of testability defect (chosen from the list above), Severity (1–5), and Description.


Appendix VII – A guide to test techniques

Choosing test techniques can be done in four steps, starting from the requirements (or use cases), the executable code, and statistics and experience from earlier testing.

Step 1: Think about what tests you need to perform to verify the requirements.

Step 2: Think about what types of test you should perform:
◊ Data testing – you want to test the output value from a known input value.
◊ Data- or control flow testing – you want to test the flow of data.
◊ Logic, rules and formula testing – you want to test logical rules and mathematical formulas.
◊ Statistics and experience testing – you want to target areas known to have many errors.

Step 3: Choose test techniques to cover the tests:
◊ Data testing: Equivalence Partitioning (EP) for groups of values, Boundary Value Analysis (BVA) for intervals of values, and Domain Testing or Pair-wise Combination Testing (see also decision tables) for combinations of data.
◊ Data- or control flow testing: Use Case Model Testing or Coverage/Control Flow Graphing.
◊ Logic, rules and formula testing: Decision Trees or Decision Tables.
◊ Statistics and experience testing: Risk Lists or Error Guessing.

Step 4: Choose appropriate test parameters and data:
◊ Equivalence Partitioning: Divide data into groups, in which all input should produce the same type of output. Also divide into valid/invalid input. Choose a number of parameters from each group as test data.
◊ Boundary Value Analysis: Divide data into intervals, in which all data should produce the same type of output, along with invalid intervals. Choose parameters below, on and above the boundaries of the intervals.
◊ Domain Testing or Pair-wise Combination Testing: Divide each separate input type into domains/intervals. Combine the domains/intervals of the different input types to produce test cases.
◊ Use Case Model Testing or Coverage/Control Flow Graphing: Identify the parameters that control the flow of data. Choose input parameters that ensure that the entire use case/flow and the alternative paths are run through.
◊ Decision Trees or Decision Tables: Define the parameters that make up the rules, and divide them into domains. Choose combinations of parameters from each domain. Draw a decision tree to ensure that all combinations have been covered.
◊ Risk Lists or Error Guessing: Define the areas that have many identified risks or are prone to errors, and prioritize the testing of these areas. Use your experience to choose parameters and techniques that reveal defects.
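As a concrete illustration of Step 4, the JUnit sketch below applies equivalence partitioning and boundary value analysis to a hypothetical validation rule (‘the age field accepts values from 18 to 65’). Both the rule and the validateAge() method are assumptions made for the example, not part of the studied system.

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class AgeValidationTest {

        // Hypothetical unit under test: an age field is valid if 18 <= age <= 65.
        private boolean validateAge(int age) {
            return age >= 18 && age <= 65;
        }

        @Test
        public void equivalencePartitions() {
            // One representative value from each partition of the input domain.
            assertFalse(validateAge(10));   // invalid partition: below the valid range
            assertTrue(validateAge(40));    // valid partition: inside the range
            assertFalse(validateAge(80));   // invalid partition: above the valid range
        }

        @Test
        public void boundaryValues() {
            // Values below, on and above each boundary of the valid interval.
            assertFalse(validateAge(17));
            assertTrue(validateAge(18));
            assertTrue(validateAge(19));
            assertTrue(validateAge(64));
            assertTrue(validateAge(65));
            assertFalse(validateAge(66));
        }
    }

Once the partitions and boundaries have been chosen in this way, the resulting scripted test cases can also be reused for regression testing.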


Appendix VIII – A guide to test evaluation

Regardless of whether a test report is written or not, the testing effort should be evaluated. The following guide aims to aid in evaluating the testing effort.

a) The purpose of test evaluation

The purpose of evaluating the test effort is to be able to decide whether the product is ready for release (or for the next iteration of development). Thus, the test evaluation should verify that the goals set in the test plan have been met. Also, evaluating the test effort can show whether the test planning was adequate, and thus serve as a basis for improving test planning in later projects.

b) Test evaluation activities

The following questions should be answered when evaluating the test effort:

i) Are the test objectives in the test plan fulfilled?

Have the test activities fulfilled the test objectives stated in the test plan for the particular release? Are the goals met? If not, why not? Should more tests be run to better meet the testing goals?

ii) Are the test completion criteria in the test plan fulfilled?

The completion criteria stated in the test plan (for example a coverage goal or an error detection rate) should have been met for the testing to be considered complete in the current release. If the completion criteria are not met, why is that? Is it worth releasing the product anyway?

iii) Have all specified test cases been executed and passed?

If not all test cases for this release have been executed, why is that? If not all executed test cases have passed, do they need to be run again? If not, why not?

iv) Are all critical parts of the system tested?

If there are parts of the system that are stated as critical, are those parts thoroughly tested? If not, are there maintenance/contingency plans if those parts fail in operation?

v) Can the product be released?

After answering the questions above, can the product be considered ready for release?


Appendix IX – The improved test process assessment questionnaire

a) Guide to test planning

The purpose of the test planning guide is to support the test planning and to ensure that the necessary activities are covered. The test planning guide can also be used to create a test plan. While reading through the guide, consider and answer the following questions.

Scale: 1) No, definitely not  2) No, I doubt it  3) Neutral  4) Yes, probably  5) Yes, definitely

For each question, mark a rating from 1 to 5 and add any comments:
◊ Is it easy to understand how the guide is to be used?
◊ With the knowledge you have, would you be able to use the guide?
◊ Would the guide be useful to you if you used it in your work?
◊ Would you consider using the guide in a future project?
◊ Do you think the guide solves some of the problems in the current test process?

Other comments:


b) Requirements review checklist

The purpose of the requirements review checklist is to ensure that the requirements are testable, so that testing of them can then be planned. While reading through the checklist, consider and answer the following questions.

Scale: 1) No, definitely not  2) No, I doubt it  3) Neutral  4) Yes, probably  5) Yes, definitely

For each question, mark a rating from 1 to 5 and add any comments:
◊ Is it easy to understand how the checklist is to be used?
◊ With the knowledge you have, would you be able to use the checklist?
◊ Would the checklist be useful to you if you used it in your work?
◊ Would you consider using the checklist in a future project?
◊ Do you think the checklist solves some of the problems in the current test process?

Other comments:


c) Guide to test evaluation

The purpose of the test evaluation guide is to support the test evaluation when deciding whether the product is ready for release. While reading through the guide, consider and answer the following questions.

Scale: 1) No, definitely not  2) No, I doubt it  3) Neutral  4) Yes, probably  5) Yes, definitely

For each question, mark a rating from 1 to 5 and add any comments:
◊ Is it easy to understand how the guide is to be used?
◊ With the knowledge you have, would you be able to use the guide?
◊ Would the guide be useful to you if you used it in your work?
◊ Would you consider using the guide in a future project?
◊ Do you think the guide solves some of the problems in the current test process?

Other comments:


d) Guide to test techniques

The purpose of the test technique guide is to help in choosing the right test technique for each test situation. While reading through the guide, consider and answer the following questions.

Scale: 1) No, definitely not  2) No, I doubt it  3) Neutral  4) Yes, probably  5) Yes, definitely

For each question, mark a rating from 1 to 5 and add any comments:
◊ Is it easy to understand how the guide is to be used?
◊ With the knowledge you have, would you be able to use the guide?
◊ Would the guide be useful to you if you used it in your work?
◊ Would you consider using the guide in a future project?
◊ Do you think the guide solves some of the problems in the current test process?

Other comments: