
1

Testing

• Testing is a verification technique, critically important for quality software.

• Industry averages:
  – 30-85 errors per 1000 lines of code;
  – 0.5-3 errors per 1000 lines of code not detected before delivery.

• The ability to test a system depends on a thorough, competent requirements document.

2

Testing

• One of the practical methods commonly used to detect the presence of errors (failures) in a computer program is to test it for a set of inputs.

[Diagram: inputs I1, I2, I3, …, In, … are fed to our program; for each input the obtained results are compared against the expected results to decide whether the output is correct.]

3

LOC  Code
 1   program double ();
 2   var x,y: integer;
 3   begin
 4     read(x);
 5     y := x * x;
 6     write(y)
 7   end

Failure: for x = 3 the program outputs y = 9. This is a failure of the system, since the correct output would be 6.

Fault: The fault that causes the failure is in line 5. The * operator is used instead of +.

Error: The error that leads to this fault may be:
• a typing error (the developer has written * instead of +)
• a conceptual error (e.g., the developer doesn't know what it means to double a number)

Example …

4

Software Testing in RUP

[RUP lifecycle diagram: the process workflows (Business Modeling, Requirements, Architecture & Design, Implementation, Test, Deployment) and the supporting workflows (Configuration Mgmt, Management, Environment) run across the Inception, Elaboration, Construction, and Transition phases, from the preliminary iteration(s) through iterations #1 … #m+1.]

5

Goals of Testing

• Goal: show a program meets its specification
  – But: testing can never be complete for non-trivial programs

• What is a successful test?
  – One in which no errors were found?
  – One in which one or more errors were found?

© 2001, Steve Easterbrook

6

Goals of Testing - 2

• Testing should be:
  – repeatable
    • if you find an error, you want to repeat the test to show others
    • if you correct an error, you want to repeat the test to check that you fixed it
  – systematic
    • random testing is not enough
    • select test sets that are representative of real uses
    • select test sets that cover the range of behaviors of the program
  – documented
    • keep track of what tests were performed, and what the results were

7

Goals of Testing - 3

• Therefore you need a way to document test cases showing:
  – the input
  – the expected output
  – the actual result

• These test plans/scripts are critical to project success!

8

Errors/Bugs

• Errors of all kinds are known as "bugs".
• Bugs come in two main types:
  – compile-time (e.g., syntax errors), which are cheap to fix
  – run-time (usually logical errors), which are expensive to fix

9

Relative cost of bugs: "bugs found later cost more to fix"

The cost to fix a bug increases roughly tenfold (10x) at each later stage of development.

E.g., a bug found during specification costs $1 to fix.
… if found in design, the cost is $10
… if found in code, the cost is $100
… if found in released software, the cost is $1000

11

Create a Test Plan

• What are you going to test?
  – functions, features, subsystems, the entire system

• What approach are you going to use?
  – white box, black box, in-house, outsourced, inspections, walkthroughs, etc.
  – what variant of each will you employ?

• When will you test?
  – after a new module is added? after a change is made? nightly? …
  – what is your testing schedule?

12

Create a Test Plan - 2

• What pass/fail criteria will you use?
  – Provide some sample test cases.

• How will you report bugs?
  – what form/system will you use?
  – how will you track them and resolve them?

• What are the team roles/responsibilities?
  – who will do what?

13

Testing Strategies

• It is never possible for the designer to anticipate every possible use of the system. Systematic testing is therefore essential.

• Offline strategies:
  1. syntax checking & "lint" testers
  2. walkthroughs ("dry runs")
  3. inspections

• Online strategies:
  1. black box testing
  2. white box testing

14

Syntax Checking

• Detecting errors at compile time is preferable to having them occur at run time!

• Syntax checking will simply determine whether a program "looks" acceptable.

• "lint" programs try to do deeper tests on code; they will detect, e.g.:
  – "this line will never be executed"
  – "this variable may not be initialized"

• Compilers do a lot of this in the form of "warnings". Remember?

  error_reporting(E_ALL);
  ini_set("display_errors", 1);

15

Inspections

• A formal procedure, where a team of programmers reads through code, explaining what it does.
• Inspectors play "devil's advocate", trying to find bugs.
• Time-consuming process!
• Can be divisive/lead to interpersonal problems.
• Often used only for critical code.

16

Walkthroughs

• Similar to inspections, except that inspectors "mentally execute" the code using simple test data.
• Expensive in terms of human resources.
• Impossible for many systems.
• Usually used as a discussion aid.

• Inspections/walkthroughs usually take 90-120 minutes.
• Can find 30%-70% of errors.

17

Black Box Testing

• Generate test cases from the specification, i.e. don't look at the code.

• Advantages:
  – avoids making the same assumptions as the programmer
  – test data is independent of the implementation
  – results can be interpreted without knowing implementation details

18

Consider this Function

function max_element ($in_array)
{
    /* Returns the largest element in $in_array */
    …
    return $max_elem;
}

19

A Test Set

• Is this enough testing?

input                      output   OK?
3 16 4 32 9                32       yes
9 32 4 16 3                32       yes
22 32 59 17 88 1           88       yes
1 88 17 59 32 22           88       yes
1 3 5 7 9 1 3 5 7          9        yes
7 5 3 1 9 7 5 3 1          9        yes
9 6 7 11 5                 11       yes
5 11 7 6 9                 11       yes
561 13 1024 79 86 222 97   1024     yes
97 222 86 79 1024 13 561   1024     yes
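A table like this can be turned into an automated test driver. A minimal Java sketch, where maxElement is a hypothetical implementation of the max_element function from the previous slide (not code from the slides):

```java
import java.util.Arrays;

public class MaxElementTest {
    // Hypothetical implementation: returns the largest element of the array.
    static int maxElement(int[] in) {
        int max = in[0];
        for (int x : in) {
            if (x > max) max = x;
        }
        return max;
    }

    public static void main(String[] args) {
        // A few rows from the test set above: input array and expected output.
        int[][] inputs = {
            {3, 16, 4, 32, 9},
            {22, 32, 59, 17, 88, 1},
            {9, 6, 7, 11, 5},
            {561, 13, 1024, 79, 86, 222, 97},
        };
        int[] expected = {32, 88, 11, 1024};
        for (int i = 0; i < inputs.length; i++) {
            int got = maxElement(inputs[i]);
            System.out.println(Arrays.toString(inputs[i]) + " -> " + got
                    + (got == expected[i] ? "  OK" : "  FAIL"));
        }
    }
}
```

Each run compares the obtained result against the expected result, exactly as in the input/output diagram on slide 2.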

20

“Black Box” Testing

• In black box testing, we ignore the internals of the system, and focus on the relationship between inputs and outputs.

• Exhaustive testing would mean examining system output for every conceivable input.– Clearly not practical for any real system!

• Instead, we use equivalence partitioning and boundary analysis to identify characteristic inputs.

21

Black Box Testing

• Three ways of selecting test cases:
  – Paths through the specification
    • e.g. choose test cases that cover each part of the preconditions and postconditions
  – Boundary conditions
    • choose test cases that are at or close to boundaries for ranges of inputs
  – Off-nominal cases
    • choose test cases that try out every type of invalid input (the program should degrade gracefully, without loss of data)

22

Equivalence Partitioning

• Suppose the system asks for "a number between 100 and 999 inclusive".

• This gives three equivalence classes of input:
  – less than 100
  – 100 to 999
  – greater than 999

• We thus test the system against characteristic values from each equivalence class.

• Example: 50 (invalid), 500 (valid), 1500 (invalid).
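A sketch of how the three classes translate into test cases, in Java. Here isValid is a hypothetical validator for the stated rule, written only for illustration:

```java
public class EquivalencePartitioning {
    // Hypothetical validator for "a number between 100 and 999 inclusive".
    static boolean isValid(int n) {
        return n >= 100 && n <= 999;
    }

    static void check(boolean cond, String msg) {
        if (!cond) throw new AssertionError(msg);
    }

    public static void main(String[] args) {
        // One characteristic value from each equivalence class:
        check(!isValid(50),   "50 is below 100, must be invalid");
        check(isValid(500),   "500 is in [100, 999], must be valid");
        check(!isValid(1500), "1500 is above 999, must be invalid");
        System.out.println("one representative per class: all classes behave as specified");
    }
}
```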

23

Boundary Analysis

• Arises from the observation that most programs fail at input boundaries.

• Suppose the system asks for "a number between 100 and 999 inclusive".

• The boundaries are 100 and 999.

• We therefore test for the values:

  99  100  101        998  999  1000
  (lower boundary)    (upper boundary)
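These six values can be checked directly. A minimal Java sketch, where isValid is a hypothetical validator for the "100 to 999 inclusive" rule (not code from the slides):

```java
public class BoundaryAnalysis {
    // Hypothetical validator for "a number between 100 and 999 inclusive".
    static boolean isValid(int n) {
        return n >= 100 && n <= 999;
    }

    public static void main(String[] args) {
        // The six boundary values from the slide and their expected validity.
        int[] values =       {   99,  100,  101,  998,  999,  1000};
        boolean[] expected = {false, true, true, true, true, false};
        for (int i = 0; i < values.length; i++) {
            if (isValid(values[i]) != expected[i])
                throw new AssertionError("boundary failure at " + values[i]);
            System.out.println(values[i] + " -> " + (expected[i] ? "valid" : "invalid"));
        }
    }
}
```

An off-by-one error in either comparison (e.g. `n > 100` instead of `n >= 100`) would be caught by exactly these values, which is the point of the heuristic.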

24

White (Clear) Box Testing

• In white box testing, we use knowledge of the internal structure to guide development of tests.

• The ideal: examine every possible run of a system.
  – Not possible in practice!

• Instead: aim to test every statement at least once

25

White Box Testing

• Examine the code and test all paths
  … because black box testing can never guarantee we exercised all the code.

• Path completeness:
  – A test set is path complete if each path through the code is exercised by at least one case in the test set.

26

Code Coverage

Statement coverage

Elementary statements: assignment, I/O, call.

Select a test set T such that, by executing P for all cases in T, each statement of P is executed at least once.

  read(x);
  read(y);
  if x > 0 then write("1");
  else write("2");
  if y > 0 then write("3");
  else write("4");

T: {<x = -13, y = 51>, <x = 2, y = -3>}
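The same fragment can be sketched in Java to show how T achieves statement coverage. Here p is a hypothetical wrapper that returns what the fragment would write:

```java
public class StatementCoverage {
    // Java translation of the slide's fragment: two independent if/else
    // statements, so there are four write statements to cover.
    static String p(int x, int y) {
        StringBuilder out = new StringBuilder();
        if (x > 0) out.append("1"); else out.append("2");
        if (y > 0) out.append("3"); else out.append("4");
        return out.toString();
    }

    public static void main(String[] args) {
        // T = {<x = -13, y = 51>, <x = 2, y = -3>} executes every write:
        System.out.println(p(-13, 51)); // takes the "2" and "3" branches
        System.out.println(p(2, -3));   // takes the "1" and "4" branches
    }
}
```

Two cases suffice for statement coverage here, even though the fragment has four paths ("13", "14", "23", "24"), which is why statement coverage is weaker than path coverage.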

27

White-box Testing: Determining the Paths

FindMean (FILE ScoreFile)
{
    float SumOfScores = 0.0;
    int NumberOfScores = 0;
    float Mean = 0.0;
    float Score;

    Read(ScoreFile, Score);
    while (!EOF(ScoreFile)) {
        if (Score > 0.0) {
            SumOfScores = SumOfScores + Score;
            NumberOfScores++;
        }
        Read(ScoreFile, Score);
    }

    /* Compute the mean and print the result */
    if (NumberOfScores > 0) {
        Mean = SumOfScores / NumberOfScores;
        printf("The mean score is %f\n", Mean);
    } else
        printf("No scores found in file\n");
}

[Statement labels 1-9 annotate the code above; they are the node numbers used in the flow diagram on the next slide.]

28

Constructing the Logic Flow Diagram

[Logic flow diagram: Start → node 1 → … → Exit, with nodes 2-9 and T/F branch labels on the while-loop and if decisions of the FindMean code.]

29

Weaknesses of Path Completeness

• Path completeness is usually infeasible
  – e.g., there are 2^100 paths through this program segment (!)

  … and even if you test every path, it doesn't mean your program is correct

  for ($j=0, $i=0; $i<100; $i++)
      if ($a[$i] == True) $j = $j + 1;

30

Loop Testing

Another kind of boundary analysis:

1. skip the loop entirely

2. only one iteration through the loop

3. two iterations through the loop

4. m iterations through the loop (m < n)

5. (n-1), n, and (n+1) iterations

where n is the maximum number of iterations through the loop
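The heuristic above can be exercised against a simple loop. A sketch where sumFirst and the bound N are hypothetical, chosen only to illustrate the iteration counts:

```java
public class LoopTesting {
    static final int N = 10; // assumed maximum number of loop iterations

    // Hypothetical loop under test: sums the first k elements of a,
    // but never iterates more than N times or past the end of the array.
    static int sumFirst(int[] a, int k) {
        int sum = 0;
        for (int i = 0; i < k && i < N && i < a.length; i++) {
            sum += a[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] a = new int[12];
        java.util.Arrays.fill(a, 1); // each iteration adds exactly 1

        // Iteration counts from the heuristic: 0, 1, 2, m (m < n), n-1, n, n+1.
        int[] requested = {0, 1, 2, 5, N - 1, N, N + 1};
        for (int k : requested) {
            // Requesting N+1 iterations still executes only N of them,
            // which is exactly the kind of cap this heuristic probes.
            System.out.println(k + " requested -> sum " + sumFirst(a, k));
        }
    }
}
```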

31

Test Planning

• Testing must be taken seriously, and rigorous test plans and test scripts developed.

• These are generated from the requirements analysis document (for black box) and the program code (for white box).

• Distinguish between:
  1. unit tests
  2. integration tests
  3. system tests
  4. regression tests

32

Integration Testing

Objectives:
  To expose problems arising from the combination.
  To quickly obtain a working solution from components.

Problem areas:
  Internal: between components
    Invocation: call/message passing/…
    Parameters: type, number, order, value
    Invocation return: identity (who?), type, sequence
  External:
    Interrupts (wrong handler?)
    I/O timing
  Interaction

33

Unit Testing

Unit tests are tests written by the developers to test functionality as they write it.

Each unit test typically tests only a single class, or a small cluster of classes.

Unit tests are typically written using a unit testing framework, such as JUnit (automatic unit tests).

Target errors not found by unit testing:
  - Requirements are misinterpreted by the developer.
  - Modules don't integrate with each other.

34

Unit testing: a white-box approach

Testing based on the coverage of the executed program (source) code.

Different coverage criteria:
  • statement coverage
  • path coverage
  • condition coverage
  • definition-use coverage
  • …

It is often not possible to cover all code, for instance:
  - because of dead code (non-executable code)
  - because of infeasible paths in the CFG
  - etc.
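Dead code is one reason full statement coverage can be unreachable. A minimal Java sketch (the abs function is hypothetical, written only to show the effect):

```java
public class DeadCode {
    // The final return can never execute: one of the two branches above
    // always returns first. The compiler cannot prove this, so the
    // statement stays in the code -- and no test set can cover it.
    static int abs(int x) {
        if (x >= 0) return x;
        if (x < 0) return -x;
        return 0; // dead code: unreachable for every input
    }

    public static void main(String[] args) {
        System.out.println(abs(-5)); // covers the negative branch
        System.out.println(abs(5));  // covers the non-negative branch
        // Statement coverage is now as high as it can get, yet below 100%.
    }
}
```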

35

Integration Testing(Behavioral: Path-Based)

[Diagram: three modules A, B, C exchanging messages.]

MM-path: an interleaved sequence of module execution paths and messages.
Module execution path: an entry-exit path within the same module.

Atomic System Function (ASF): port input, … {MM-paths}, … port output.

Test cases: exercise ASFs.

36

System Testing

Concerned with the application's externals. Much more than functional testing:
  Load/stress testing
  Usability testing
  Performance testing
  Resource testing

37

System Testing

Performance testing
  Performance as seen by users: delay, throughput.
  Performance as seen by the system owner: memory, CPU, communication.
  Performance is either explicitly specified or the system is expected to do well; if unspecified, find the limit.

Usability testing
  The human element in system operation: GUI, messages, reports, …

38

Regression Testing

Whenever a system is modified (fixing a bug, adding functionality, etc.), the entire test suite needs to be rerun:
  Make sure that features that already worked are not affected by the change.
  Automatic re-testing before checking changes into a code repository.
  Incremental testing strategies for big systems.

39

Alpha and Beta Testing

• In-house testing is usually called alpha testing.

• For software products, there is usually an additional stage of testing, called beta testing.

• Involves distributing tested code to “beta test sites” (usually prospective/friendly customers) for evaluation and use.

• Typically involves a formal procedure for reporting bugs.

Stress Testing

40

Stress testing means testing beyond the specified performance targets. For example, if a website is designed for 100 simultaneous hits, a stress test would apply 120 simultaneous hits. We then check that: 1. the system can recover; 2. there is no obvious performance degradation during the stress period.

41

Documenting Test Cases

• Describe how to test a system/module/function.

• The description must identify:
  – a short description (optional)
  – the system state before executing the test
  – the function to be tested
  – the input (parameter) values for the test
  – the expected outcome of the test

42

Test Automation

• Testing is time consuming and repetitive

• Software testing has to be repeated after every change (regression testing)

• Write test drivers that can run automatically and produce a test report

43

JUnit

The purpose of JUnit is to support unit testing.

Just put the junit.*.jar file from the downloaded archive on your classpath.

Why use JUnit?

It's free!
It is simple and elegant to use.
It is easy and inexpensive to write tests using the JUnit testing framework.
JUnit tests check their own results and provide quick visual feedback.
Tests can be composed into TestSuites.
It is integrated into IDEs like Eclipse and NetBeans.

Assertions for JUnit

JUnit uses assertion methods to test conditions:
  assertEquals(a, b) – a and b must be primitives or have an equals method for comparison
  assertFalse(a) – a must be a boolean that is false
  assertNotNull(a) – a must not be null
  assertNotSame(a, b) – a and b must not refer to the same object
  assertNull(a) – a must be null
  assertSame(a, b) – a and b must refer to the same object
  assertTrue(a) – a must be a boolean that is true

46

public class SampleCalculator
{
    public int add(int augend, int addend)
    {
        return augend + addend;
    }

    public int subtration(int minuend, int subtrahend)
    {
        return minuend - subtrahend;
    }
}

47

import junit.framework.TestCase;

public class TestSample extends TestCase
{
    public void testAdd()
    {
        SampleCalculator calculator = new SampleCalculator();
        int result = calculator.add(50, 20);
        assertEquals(70, result);
    }

    public void testSubtration()
    {
        SampleCalculator calculator = new SampleCalculator();
        int result = calculator.subtration(50, 20);
        assertEquals(30, result);
    }
}

48

After compiling, run the tests with:

  java org.junit.runner.JUnitCore TestSample

49

JUnit in Eclipse

If you write your method stubs first (as on the previous slide), Eclipse will generate test method stubs for you.

To add JUnit 4 to your project:
  1. Select a class in Eclipse.
  2. Go to File → New… → JUnit Test Case.
  3. Make sure "New JUnit 4 test" is selected.
  4. Click where it says "Click here to add JUnit 4…".
  5. Close the window that appears.

To create a JUnit test class:
  1. Do steps 1 and 2 above, if you haven't already.
  2. Click Next>.
  3. Use the checkboxes to decide which methods you want test cases for; don't select Object or anything under it. I like to check "create tasks," but that's up to you.
  4. Click Finish.

To run the tests:
  Choose Run → Run As → JUnit Test.