ISTQB Ravinder Document


Ravinder Puranam

[email protected]


Simplified document to prepare for the ISTQB Certified Tester (Foundation Level) exam

– August 2011

******************************************************************************

K1: remember; K2: understand; K3: apply; K4: analyze

Chapter and Number of Questions Expected from each chapter

Chapter 1. The principles of testing -- 7

Chapter 2. Testing throughout the life-cycle -- 6

Chapter 3. Static testing -- 3

Chapter 4. Test design techniques -- 12

Chapter 5. Test management -- 8

Chapter 6. Tool support for testing -- 4

******************************************************************************

Chapter 1. The principles / fundamentals of testing -- 7

******************************************************************************

1.1. Why is Testing necessary? ---

Terms: Bug, defect, error, failure, fault, mistake, quality, risk

The generally accepted set of testing definitions / terminology used worldwide comes from the BS 7925-1 standards document.

Sub-Topics:

Software Systems Context ---

Software has become an integral part of our lives... Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even lead to serious consequences...

Cause of Software Defects ---

A human being can make an error (mistake), which produces a defect (fault, bug) in the program code or in a document. If a defect in code is executed, the system may fail (the output is not the expected result), causing a failure.

Defects occur because humans make errors, for many reasons such as time pressure, complex code or new technology.

Failures can also be caused by environmental conditions.

Role of Testing in S/W Development, Maintenance and Operations ---

Testing contributes to a quality software system by finding defects so they can be corrected, hence reducing the risks before the system is released to production / operation.

Testing and Quality ---

Testing can give confidence in the quality of the software if it finds few or no defects. A properly

designed test that passes reduces the overall level of risk in a system.

When testing does find defects, the quality of the software system increases when those defects

are fixed.


It is possible to measure the quality of the software in terms of defects found, for both functional

and non-functional software requirements and characteristics like

reliability/usability/efficiency/maintainability/portability.

The process can be improved when lessons learned from earlier projects feed into root cause analysis, thus improving the quality of future systems.

How much testing is enough?

Deciding how much testing is enough takes into account the level of risk, including technical, safety and business risks (the risk of missing important faults, incurring failure costs, releasing untested or under-tested software, losing credibility and market share, missing a market window, over-testing, or ineffective testing), and project constraints such as time and budget.

*** Prioritize tests so that, whenever you stop testing, you have done the best testing in the time

available.

How to prioritize? Possible ranking criteria (all risk based): test where a failure would be most severe, test where failures would be most visible, test where failures are most likely, ask the customer to prioritize the requirements, test what is most critical to the customer's business, areas changed most often, areas with most problems in the past, the most complex areas, or technically critical areas.

********************************************

1.2. What is Testing? ---

Terms: Debugging, requirement, review, test case, testing, test objective

Testing is not only executing test cases / test scenarios; it includes many other testing activities...

Testing has objectives like: Finding defects, Gaining confidence about the level of quality,

providing information for decision-making and preventing defects.

Debugging (Development activity) and testing are different.

********************************************

1.3. Seven Testing Principles ---

Terms: Exhaustive Testing

Many testing principles are suggested and offer common guidelines for all testing.

Principle 1: Testing shows presence of defects, but cannot prove that there are no defects. It only

reduces the probability of undiscovered defects remaining in the software.

Principle 2: Exhaustive testing (All combinations) is impossible except for trivial cases, instead

risk analysis and priorities should be used to focus testing efforts.

Principle 3: To find defects early, start testing EARLY in the SDLC and focus on defined objectives.

Principle 4: Defect clustering: a small number of modules usually contains most of the defects. Testing effort shall be focused proportionally to the expected and later observed defect density of modules.

Page 3: ISTQB Ravinder Document

Ravinder Puranam

[email protected]

Page 3 of 27

Principle 5: Pesticide paradox: test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software / system.

Principle 6: Testing is context dependent, i.e. each domain / application calls for a different testing approach.

Principle 7: Absence-of-errors fallacy, i.e. finding and fixing defects does not help if the system built is unusable and does not meet user needs / expectations.

********************************************

1.4. Fundamental Test Process ---

Terms: Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test suite, test summary report, testware.

The Fundamental Test Process consists of

1.4.1. Test Planning and Control --- Test planning is the activity of defining the objectives / specifications of the testing activities; test control is the ongoing activity of comparing actual progress against the plan, drawing on various sources of information to reach conclusions.

1.4.2. Test analysis and design --- The activity during which general testing objectives are transformed into test conditions and test cases. Its major tasks include writing / reviewing the test artifacts, prioritizing test conditions and test cases, establishing traceability, etc.

1.4.3. Test implementation and execution --- The activity where the designed test artifacts are combined with concrete inputs and other information, and the test environment is set up. Test execution, defect logging and reporting are also part of this stage.

1.4.4. Evaluating exit criteria and reporting --- Test execution is assessed against the defined test objectives... Major tasks include checking test logs against the exit criteria, assessing whether more tests are needed or whether the specified exit criteria should be changed, and publishing a test summary report.

1.4.5. Test closure activities --- Test closure activities collect data from completed test activities to consolidate information about the major tasks done in the test process; this marks a project milestone such as a release, test project completion or a maintenance release.

********************************************

1.5. The Psychology of Testing ---

Terms: Error guessing, independence

The mindset to be used while testing and reviewing is different from that used while developing

software. Independent testing may be carried out at any level of testing or stage of SDLC.

A degree of independence makes testing more effective; effective testing also requires clearly stated test objectives.

Testing is often seen as a destructive activity, even though it is very constructive in the

management of product risks.

Communication problems may occur, particularly if testers are seen only as messengers of

unwanted news about defects.


Effective communication and a constructive environment help remove the differences between the testing team and other teams.

********************************************

1.6. Code of Ethics ---

Certified testers shall act in the best interests of their clients and employers, consistent with the public interest.

Professional standards, integrity, independence of judgment, reputation, cooperation, supportiveness and continuous learning are the ethics / code of conduct as per the ISTQB.


******************************************************************************

Chapter 2. Testing throughout the Software life-cycle -- 6

******************************************************************************

2.1. Software Development models

Terms: Commercial Off-The-Shelf (COTS), iterative-incremental development model,

validation, verification, V-model

Different testing approaches / activities apply depending on the SDLC model used.

2.1.1. Relationship between testing and development life cycles.

Although variants of the V-model exist, a common type of V-model uses four test levels:

-- Component (unit) testing

-- Integration testing

-- System testing

-- Acceptance testing

With the various development and test levels in the V-model or any other SDLC model, the basis of testing is the set of product documents (like use cases, requirement specs, code, design docs, etc.)

2.1.2. SDLC based on project and product characteristics.

Consider the iterative-incremental development model, where the SDLC / system has short development cycles.

E.g.: Prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and

agile development models.

A system that is produced using these models may be tested at several test levels during each

iteration.

Regression testing is increasingly important on all iterations after the first one. Verification and

Validation can be carried out on each increment.

2.1.3. Recall characteristics of good testing that are applicable to any life cycle.

Testing within any Life Cycle Model has several characteristics of good testing like:

--- For each development activity there is a corresponding testing activity

--- Each test level has test objectives specified to that level

--- Analysis and design for a given test level should begin during the corresponding development

activity

--- Testers should be involved in reviewing documents as soon as drafts are available in the

Development life cycle.


Test levels can be combined or reorganized depending on the nature of the project or the system

architecture.

E.g.: For integration of a Commercial Off-The-Shelf (COTS) software product into a system, the

purchaser may perform integration testing at the system level (Integration to infrastructure and

other systems, or system deployment) and acceptance testing.

********************************************

2.2. Test Levels

Terms: Alpha testing, beta testing, component testing, driver, field testing, functional

requirement, integration, integration testing, non-functional requirement, robustness testing, stub,

system testing, test environment, test level, test driven development, user acceptance testing.

Testing a system's configuration data shall be considered during test planning.

2.2.1. Compare different levels / stages of testing

Some of the levels of testing are:

--- Component Testing (Unit, module or program testing)

Test basis: Component requirements, Detailed design, and Code

Typical test objects: Components, Programs, Data conversion / migration programs, Database modules

Stubs, drivers and simulators may be used in Component Testing. It may include functional,

specific non-functional (such as resource-behavior like memory leaks or robustness testing), as

well as structural testing (e.g. decision coverage)

Generally, Component Testing is done with the involvement of the developer who wrote the code, and defects are fixed without formally managing them.

One approach to component testing is to prepare and automate test cases before coding (test-first approach / test-driven development), which is highly iterative (see the sketch below).
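
To illustrate the stubs and drivers mentioned above, here is a minimal Python sketch (the names are hypothetical, invented for illustration, not from the syllabus): the stub stands in for a dependency that is not yet available, and the driver exercises the component under test.

def payment_gateway_stub(amount):
    # Stub: replaces the real (unavailable) payment service with a canned response.
    return {"status": "approved", "amount": amount}

def checkout(amount, gateway):
    # Component under test: depends on a gateway that it calls.
    response = gateway(amount)
    return response["status"] == "approved"

# Driver: code that calls the component under test, with the stub wired in.
if __name__ == "__main__":
    assert checkout(100, payment_gateway_stub)
    print("component test passed")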

--- Integration Testing

Test basis: Software and system design, Architecture, Workflows, use cases

Typical test objects: Subsystems, Database implementation, Infrastructure, Interfaces, System

configuration and configuration data

Integration testing tests interfaces between components, interactions with different parts of a

system, such as operating system, file system and hardware, and interfaces between systems.

Systematic integration strategies may be based on the system architecture (Top-down and

bottom-up). Integration should normally be incremental rather than "big bang" (not all modules

integration at a time).


Non-functional testing (like performance) may also be included in integration testing with

functional testing.

Functional incremental and Big Bang are both integration approaches; the difference is that in functional incremental integration we add functions one by one and test after each addition, whereas in big bang we add all functions at once and then test. Driver: also known as test harness and scaffolding.

--- System Testing

Test basis: System and software requirement specification, Use cases, Functional specification,

Risk analysis reports.

Typical test objects: System, user and operation manuals; System configuration and

configuration data.

System testing is concerned with the behavior of a whole system/product.

The test environment should correspond to the final target or production environment as much as

possible to minimize the risk of environment-specific failures not being found in testing.

This is performed to investigate functional and non-functional requirements of the system, and

data quality characteristics.

An independent test team often carries out system testing.

--- Acceptance Testing

Test basis: User requirements, System requirements, Use cases, Business processes, Risk

analysis reports.

Typical test objects: Business processes on a fully integrated system, Operational and maintenance processes, User procedures, Forms, Reports, Configuration data.

Acceptance testing is often the responsibility of the customers or the users of a system; other

stakeholders may be involved as well.

The goal is to establish confidence in the system, parts of the system or specific non-functional

characteristics of the system.

To assess the system's readiness for deployment and use.

Operational (acceptance) testing: (Done by system administrators) Testing of backup/restore,

disaster recovery, user management, maintenance tasks, data load and migration tasks, periodic

check of security vulnerabilities.

Contract and regulation acceptance testing: performed against a contract's acceptance criteria for producing custom-developed (OEM) software, and by adhering to legal or safety regulations.

Alpha and Beta (or field) testing:

Alpha testing is performed at the developing organization's site but not by the developing team.

Beta testing, or field-testing, is performed by customers or potential customers at their own

locations.


Organizations may use terms such as factory acceptance testing and site acceptance testing

before/after moving to a customer's site.

********************************************

2.3. Test Types

Terms: Black-box testing, code coverage, functional testing, interoperability testing, load testing,

maintainability testing, performance testing, portability testing, reliability testing, security

testing, stress testing, structural testing, usability testing, white-box testing

Objectives of the test types always focus on functional, non-functional (e.g. reliability or usability), structural, or change-related testing.

2.3.1. Compare functional / non-functional / structural / change-related software test types

2.3.2. Recognize functional and structural tests occur at any level

2.3.3. Identify and describe non-functional test types based on non-functional

requirements

2.3.4. Identify and describe test types based on analysis of software architecture

2.3.5. Describe purpose of confirmation testing and regression testing

Functional tests are based on functions and features and their interoperability with specific

systems, and may be performed at all test levels.

Non-functional tests include, but are not limited to: performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of HOW WELL the system works.

Structural testing (white-box) may be performed at all test levels, but especially in component

testing and component integration testing.

Structural testing may be based on the architecture of the system, such as a calling hierarchy.

Structural testing approaches can also be applied at system, system integration or acceptance

testing levels.

When a defect is detected and fixed, the software needs to be re-tested to confirm that the original defect has been successfully removed. This is called confirmation testing.

Debugging is a development activity, not a testing activity.

Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in one or more modules of the software.

Regression testing can be performed at all test levels, and includes functional, non-functional and

structural testing.

Since the software needs to be stable and regression test suites are run many times after small changes, regression testing is a strong candidate for test automation.

********************************************


2.4. Maintenance Testing

2.4.1. Compare maintenance testing with normal new software testing

2.4.2. Recognize indicators for maintenance testing

2.4.3. Describe role of regression testing and impact analysis in maintenance

Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment is changed or extended.

Planning of any releases in advance is crucial for successful maintenance testing.

Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.

Modifications include planned enhancement changes, corrective and emergency changes, and

changes in environment, such as planned operating system or database upgrades, planned

upgrades of Commercial-Off-The-Shelf software, or patches to correct newly exposed or

discovered vulnerabilities of the operating system.

Determining how the existing system may be affected by changes is called impact analysis, and it is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.

Maintenance testing can be difficult if specifications are out of date or missing, or testers with

domain knowledge are not available.


******************************************************************************

Chapter 3: Static testing -- 3

******************************************************************************

3.1 Static Techniques and the test process

Terms: Dynamic Testing, Static Testing

3.1.1. Recognize software work products that can be examined by the different static techniques

3.1.2. Importance and value of considering static techniques for the assessment of software work

products

3.1.3. Difference between static and dynamic techniques, considering objectives, types of defects

to be identified, and role of these techniques within the SDLC.

Static testing relies on manual methods. Dynamic testing requires execution of software.

Both have same objective, to find defects. Additionally, static testing is doing bug RCA.

Static testing can be used by the developer who wrote the code, in isolation. Code reviews,

inspections and walkthroughs are also used.

Dynamic testing is done by the testing team....Some of the methodologies include unit testing,

integration testing, system testing and acceptance testing.

Static testing is generally not detailed testing; it mainly checks the sanity of the code, algorithm, or document. It primarily involves syntax checking of the code and manual reading of the code or document to find errors.

Dynamic analysis refers to the examination of the system's response to variables that are not constant and change with time.

Static testing is the verification portion of Verification and Validation; dynamic testing is the validation portion of Verification and Validation.

********************************************

3.2. Review Process

3.2.1. Recall activities, roles and responsibilities of a typical formal review.

3.2.2. Differences between informal review, technical review, walkthrough and inspection.

3.2.3. Explain factors for successful performance of reviews.

Terms: Entry criteria, formal review, informal review, inspection, metric, moderator, peer

review, reviewer, scribe, technical review, walkthrough

A review can be done at any stage; how it is done depends on the need for the review, the maturity of the development process, and the agreed objectives.


Activities of a Formal Review:

Planning: defining review criteria, selecting the personnel, allocating roles, defining entry & exit criteria for more formal review types (e.g. inspections), selecting which parts of the document to review, and checking the entry criteria.

Kick-off: distributing documents and explaining the objectives, process and documents to the

participants.

Individual preparation: Preparing for the review meetings by reviewing the documents and

noting potential defects, questions and comments.

Examination/evaluation/recording of results (review meeting): discussing or logging, with documented results; noting defects; making recommendations/decisions regarding handling of the defects; and examining/evaluating and recording issues during any physical meetings.

Rework: Fixing defects found and recording updated status of defects.

Follow-up: Checking defects fixed, gathering metrics and checking on exit criteria.

Roles and responsibilities:

Manager: Decides on execution of reviews, allocates time in project schedules and determines if

review objectives have been met.

Moderator: the person who leads the review of the document(s), including planning the review, running the review meeting and following up after the meeting.

Author: Person responsible for document to be reviewed.

Reviewers: persons with a specific technical / business background who review the product and participate in the review meetings.

Scribe: Documents issues, problems and open points that were identified during meetings.

Types of Reviews:

Apart from the formal review there are several other review types, listed below, any one of which can be used depending on the organization's needs.

--- Informal Review: No formal process, any technical lead can review, results may be

documented, varies in usefulness depending on the reviewers, Main purpose: inexpensive way to

get some benefit

--- Walkthrough: Meeting led by author, may take the form of scenarios, dry runs, peer group

participation, open-ended (optional pre/post reviews) sessions, optional scribe, may become a bit

formal, Main purpose: learning, gaining understanding, finding defects.

--- Technical Review: similar to formal review activities, ideally led by a trained moderator, optional use of checklists. Main purpose: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems, and checking conformance to specifications, plans, regulations and standards.

--- Inspection: led by a trained moderator, a formal process based on rules and checklists. Main purpose: finding defects.


Success factors for reviews:

Predefined objectives; the right people for the review objectives are involved; testers are valued reviewers who review documents and prepare tests early; defects found are welcomed and expressed objectively; checklists or roles are used to increase the effectiveness of defect identification; the review is conducted in an atmosphere of trust; training is given, especially for more formal techniques like inspection; management supports a good review process; and there is an emphasis on learning and process improvement.

********************************************

3.3. Static Analysis by Tools

3.3.1. Defects and errors identified by static analysis and compare them with dynamic

testing

3.3.2. Typical benefits of static analysis

3.3.3. Code and design defects that may be identified by static analysis tools.

Terms: Compiler, complexity, control flow, data flow, static analysis.

The objective of static analysis is to find defects in software source code and software models; unlike dynamic testing, it does not execute the software.

Static analysis can locate defects that are hard to find in dynamic testing.

Static analysis tools are typically used by developers before and during component and

integration testing or when checking-in code to configuration management tools, and by

designers during software modeling.

The tools produce many warning messages, which need to be well managed to allow the most effective use of the tool.

Compilers may offer some support for static analysis, including calculation of metrics.

Value of static analysis: early detection of defects before test execution, including defects that are not easily found by dynamic testing; early warnings about suspicious aspects of the code or design; detection of dependencies and inconsistencies in software models, such as links; improved maintainability; and prevention of defects.

Typical defects discovered by static analysis tools: referencing a variable with an undefined value, inconsistent interfaces, unused variables, unreachable (dead) code, missing and erroneous logic, overly complicated constructs, programming standards violations, security vulnerabilities, and syntax violations of code and software models.


******************************************************************************

Chapter 4: Test design techniques -- 12

******************************************************************************

4.1. The Test Development Process

Terms: Test case specification, test design, test execution schedule, test procedure specification,

test script, traceability

The test development process can be carried out in different ways, from very informal with little or no documentation, to very formal.

The level of formality depends on the context of the testing, including the maturity of testing and

development processes, time constraints, safety or regulatory requirements, and the people

involved.

4.1.1. Differentiate between a test design specification, test case specification and test

procedure specification

4.1.2. Compare the terms test condition, test case and test procedure

4.1.3. Evaluate the quality of test cases in terms of clear traceability to the requirements

and expected results

4.1.4. Translate test cases into a well-structured test procedure specification at a level of

detail relevant to knowledge of the testers

During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e. to identify the test conditions.

A Test condition (test scenario) is defined as an item or event that could be verified by one or

more test cases.

A test case consists of a set of input values, execution preconditions, expected results and execution postconditions, defined to cover certain test objectives / test conditions.

A test procedure specifies the sequence of actions for the execution of a test, whether manual or automated. Test procedures are formed into a test execution schedule that defines the order of execution.

During test implementation the test cases are developed, implemented, prioritized and organized

in test procedures.

Establishing traceability from test conditions back to the specifications and requirements enables both effective impact analysis when requirements change and determination of requirements coverage for a set of tests.

During test analysis the test approach is applied to select the test design techniques to use, based on risks and other criteria.

During test design the test cases and test data are created and specified.

The IEEE Std 829-1998 document describes the content of test design specifications and test case specifications.

********************************************


4.2. Categories of Test Design Techniques

Terms: Black-box test design technique, experience-based test design technique, test design

technique, white-box test design technique

4.2.1. Recall reasons that both specification-based (black-box) and structure-based

(white-box) test design techniques are useful and list common techniques for each

4.2.2. Explain the characteristics, commonalities, and differences between specification-

based testing, structure-based testing and experience-based testing

The purpose of Test Design Techniques is to identify test conditions, test cases, and test data

Test design techniques are generally classified as black-box or white-box.

Black-box testing (specification-based test design) does not use any information about the internal structure of the system under test; test conditions / test cases / test data are derived from the test basis documentation. It covers both functional and non-functional testing.

--- Models, either formal or informal, are used for the specification of the problem to be solved,

the software or its components.

--- Test cases can be derived systematically from these Models.

White-box testing (structure-based or structural test design) is based on analysis of the structure

of the component or system.

--- Information about how the software is constructed is used to derive the test cases

--- Extent of software covered is measured and further test cases are derived systematically to

increase coverage.

Black-box and White-box test designs can be combined with experience-based techniques to

leverage the experience of developers, testers and users to determine what should be tested.

Some techniques fall clearly into one of the two categories (black-box / white-box); others have elements of more than one.

Experience-based test design techniques include:

--- Knowledge and experience of people are used to derive test cases.

--- Knowledge of testers, developers, users and other stakeholders about the

software/usage/environment is one source of information

--- Knowledge about likely defects and their distribution is another source of information.

Black box                        White box
Equivalence partitioning         Statement testing
Boundary value analysis          Branch / decision testing
State transition testing         Data flow testing
Cause-effect graphing            Branch condition testing
Decision table testing           Branch condition combination testing
Use case testing                 Modified condition decision testing
                                 LCSAJ testing


4.3. Specification-based or Black-box Techniques

Terms: Boundary value analysis, decision table testing, equivalence partitioning, state transition

testing, use case testing

4.3.1. Write test cases from given software model using equivalence partitioning,

boundary value analysis, decision tables and state transition diagrams/tables

4.3.2. Explain main purpose of each of the four testing techniques, what level and type of

testing could use the techniques, and how coverage may be measured.

4.3.3. Explain the concepts of use case testing and its benefits

Many Black-box techniques are present, some of them are:

First method: Test one right and one wrong value. The "one right" values are positive data, like a correct username and password, proper navigation, etc.; the "one wrong" values are negative data, like a wrong username and password, blank data, etc. Alternately pick one right and one wrong value; this is called the "test one right and one wrong value" method.

Second method: Equivalence partitioning --- A large input space is partitioned into groups that are expected to exhibit similar behavior, so as to cover the maximum range of inputs.

E.g. a) Say the input range is 10 to 100; then the equivalence partitions are less than 10, 10 to 100, and above 100, i.e. 3 values to test.

b) Say the input ranges are 10; 11-50; 51-75; 76-100; then the equivalence partitions are below 10, 11-50, 51-75, 76-100 and above 100, i.e. 5 values to test.

Third method: Logical combination testing --- Combination testing with discrete values

Fourth method: State-transition testing --- State-transition testing tests only the state machine

logic in itself
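
A minimal Python sketch of the idea (hypothetical ATM-card states, invented for illustration, not from the syllabus): the transition table defines the state machine; valid transitions are tested for the resulting state, and invalid events are tested for rejection.

# (current state, event) -> next state
transitions = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "pin_ok"): "authenticated",
    ("card_inserted", "pin_bad"): "idle",          # card returned
    ("authenticated", "eject_card"): "idle",
}

def next_state(state, event):
    # Pairs not in the table are invalid transitions, also worth testing.
    return transitions.get((state, event), "INVALID")

assert next_state("idle", "insert_card") == "card_inserted"   # valid transition
assert next_state("idle", "eject_card") == "INVALID"          # negative test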

Fifth method: Boundary value and domain analysis --- The boundaries are the values on and around the beginning and end of a partition.

Boundary value analysis can very easily be combined with equivalence partitioning.

E.g. Say the input range is between X and Y; then the values to test are below X, between X and Y (sometimes including X and Y themselves) and above Y, i.e. 3 values. Any more complicated range follows the same logic.
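
A minimal Python sketch combining the two techniques for example (a) above, where the valid range is 10 to 100 (the variable names are invented for illustration):

VALID_MIN, VALID_MAX = 10, 100

# Equivalence partitioning: one representative value per partition.
partition_values = {
    "below range (invalid)": VALID_MIN - 5,                 # e.g. 5
    "inside range (valid)": (VALID_MIN + VALID_MAX) // 2,   # e.g. 55
    "above range (invalid)": VALID_MAX + 5,                 # e.g. 105
}

# Boundary value analysis: values on and around each edge of the valid partition.
boundary_values = [VALID_MIN - 1, VALID_MIN, VALID_MAX, VALID_MAX + 1]  # 9, 10, 100, 101

print(partition_values)
print(boundary_values)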

Sixth method: Decision Table Testing --- All combinations of triggering conditions are used as inputs and all logical decisions are covered, which is the strength of this test design technique.
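
As a sketch, here is a small decision table for a hypothetical login rule (invented for illustration), expressed in Python so that every combination of conditions becomes a test case:

# (valid_user, valid_password) -> expected action; one rule per combination.
decision_table = {
    (True, True): "grant access",
    (True, False): "show password error",
    (False, True): "show unknown-user error",
    (False, False): "show unknown-user error",
}

def expected_action(valid_user, valid_password):
    return decision_table[(valid_user, valid_password)]

# One test case per rule (column) of the decision table.
for conditions, action in decision_table.items():
    assert expected_action(*conditions) == action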

Seventh method: Use case Testing --- Test cases are derived from use cases, which describe the interactions with the system, including pre-conditions and post-conditions.

This is very useful for effective integration / system testing where use cases are available. It can be combined with other specification-based test techniques.


Out of many techniques, some of the most useful or most often discussed ones are:

- Random and statistical testing --- Tested by generating random inputs, and checking the output

- Exhaustive testing --- Every possible combination of input values is tested. (Almost impossible)

- Error guessing --- Concentrating on typical problems like low memory, too much input, etc. (corner test cases)

- White-box testing (explained in section 4.4 below) --- Typically used for unit or component testing where there is access to the program code.

- The test oracle --- A mechanism that decides whether a result is right or wrong, e.g. using spreadsheet calculations or other tools... without an oracle, the only thing you can check is that the program does not crash.

********************************************

4.4. Structure based or white-box Techniques

Terms: Code coverage, decision coverage, statement coverage, structure-based testing

4.4.1. Describe the concept and value of code coverage

4.4.2. Explain the concepts of statement and decision coverage, and give reasons why

these concepts can also be used at test levels other than component testing (E.g.. On

business procedures at system level)

4.4.3. Write test cases from given control flows using statement and decision test design

techniques

4.4.4. Assess statement and decision coverage for completeness with respect to defined

exit criteria

Structure-based or white-box testing is based on an identified structure of the software or the system: at component level (statements, decisions, branches or even distinct paths), at integration level (e.g. a call tree) and at system level (e.g. menu structure or business process).

There are mainly 3 code-related structural test design techniques:

--- Statement Testing and Coverage: coverage is determined by the number of executable statements covered by test cases divided by the number of all executable statements.

--- Decision Testing and Coverage: a control flow diagram may be used to visualize the alternative outcomes of each decision (e.g. True / False).

Coverage is determined by the number of decision outcomes covered by test cases divided by the number of all possible decision outcomes in the code under test.

Decision coverage is stronger than statement coverage.

100% LCSAJ coverage will imply 100% Branch/Decision coverage

100% branch coverage guarantees 100% decision coverage.

100% decision/branch coverage guarantees 100% statement coverage.

*LCSAJ = Linear Code Sequence and Jump.
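
A minimal Python sketch of the difference between statement and decision coverage (a toy function, invented for illustration):

def classify(x):
    result = "low"
    if x > 10:          # one decision with two outcomes: True / False
        result = "high"
    return result

# classify(20) alone executes every statement (100% statement coverage)
# but covers only the True outcome of the decision: 1/2 = 50% decision coverage.
assert classify(20) == "high"

# Adding classify(5) covers the False outcome too: 100% decision coverage.
assert classify(5) == "low"

This also shows why decision coverage is stronger: the single test classify(20) satisfies statement coverage but not decision coverage.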

--- Other structure-based techniques... There are stronger levels of structural coverage beyond decision coverage, e.g. condition coverage and multiple condition coverage.

********************************************


4.5. Experience-based Techniques

Terms: Exploratory testing, (fault) attack

4.5.1. Recall reasons for writing test cases based on intuition, experience and knowledge

about common defects.

4.5.2. Compare experience-based techniques with specification-based techniques

Tests are derived from the tester's skill and intuition and their experience with similar applications and technologies. Special tests are identified which are not easily captured by formal approaches.

A common experience-based technique is error guessing, which is also called a fault attack.

These defects and failure lists can be built based on experience, available defect and failure data,

and from common knowledge about why software fails.

More formally, Exploratory testing is concurrent test design, test execution, test logging and

learning, based on a test charter containing test objectives, and carried out within time-boxes. It

can serve as a check on test process, to help ensure that the most serious defects are found.

********************************************

4.6. Choosing Test Techniques

4.6.1. Classify test design techniques according to their fitness to a given context, for the

test basis, respective models and software characteristics.

The choice of which test techniques to use depends on a number of factors, including the type of system, regulatory standards, customer or contractual requirements, level and type of risk, test objectives, available documentation, knowledge level of the testers, time and budget, the SDLC model, available use case models and previous experience with the types of defects found.

Some techniques are more applicable to certain situations and test levels, others are applicable to

all test levels.

When creating test cases, testers generally use a combination of test techniques including

process, rule and data-driven techniques to ensure adequate coverage of object under test.

Exploratory testing builds on normal testing by adding tests based on several factors, including past experience on similar projects, specific skills and detailed domain/project knowledge, and perhaps also aspects of personality such as learning style. When the tester explores the scenarios in this way, guided by experience rather than by documented specifications, it is called exploratory testing.


******************************************************************************

Chapter 5. Test management -- 8

******************************************************************************

Learning objectives for Test management

The Objectives identify what you will be able to do following the completion of each module.

5.1. Test Organization

Terms: Tester, test leader, test manager

5.1.1. Recognize the importance of independent testing

5.1.2. Explain the benefits and drawbacks of independent testing within an organization - K2

5.1.3. Recognize the different team members to be considered for the creation of a test team

-- Options include forming test groups from developers, outsourced testers, independent testers, etc. For large / critical projects, since multiple levels of testing are involved, an independent testing team is suggested, though drawbacks may occur, such as developers losing their focus on quality and the testers becoming isolated from the development team.

5.1.4. Recall the tasks of a typical test leader and tester

Sometimes the test leader is called a test manager or test coordinator. The role of the test leader may be performed by a project manager, a development manager, a QA manager or the manager of a test group.

In large projects both a test leader and a test manager may exist. Typically the test leader plans, monitors and controls the testing activities and tasks as defined in section 1.4.

-- Coordinates the test strategy and plan with the Project Manager and others; writes/reviews the test strategy and the organization's test policy; plans the tests and the project activities and resources; defines test levels and cycles; adapts planning based on test results and test/project progress; defines metrics to measure the team; decides whether or not to go for automation; selects tools for testing; oversees implementation of the test environment; and writes test summary reports based on information gathered during testing.

The tester performs the tasks assigned by the Test Leader / Manager.

-- The tester reviews and contributes to test plans; assesses the SRS and reviews other specifications/documents; prepares test data and test cases/scripts; executes tests; automates tests; tracks bugs/defects; measures the performance of components and systems; reviews tests developed by others; etc.

At some test levels the testers are developers (component and integration level), business experts (user acceptance level) or operators (operational acceptance testing).

********************************


5.2. Test Planning and Estimation - K3

Terms: Test approach, test strategy

IEEE 829 - Standard for Software Test Documentation

5.2.1. Recognize the different levels and objectives of test planning

5.2.2. Summarize the purpose and content of the test plan, test design specification and test

procedure documents according to the 'Standard for Software Test Documentation' - IEEE 829-

1998 - K2

5.2.3. Difference b/w conceptually different test approaches, such as analytical, model-based,

methodical, process/standard compliant, dynamic/heuristic, consultative and regression-averse -

K2

5.2.4. Differentiate between the subject of test planning for a system and schedule test execution

- K2

5.2.5. Write a test execution schedule for a given set of test cases, considering prioritization, and

technical and logical dependencies - K3

5.2.6. List test preparation and execution activities that should be considered during test planning

5.2.7. Recall typical factors that influence the effort related to testing

5.2.8. Differentiate b/w two conceptually different estimation approaches: the metrics-based and

expert-based approaches - K2

5.2.9. Recognize/justify adequate entry and exit criteria for specific test levels and groups of test

cases i.e. integration/acceptance/usability/any other

Test planning is a continuous activity and is performed in all life cycle processes and activities.

Feedback from test activities is used to recognize changing risks so that planning can be

adjusted.

Entry and exit criteria are also defined during test planning.

Test estimation: the metrics-based approach (based on former or similar projects) and the expert-based approach.

Test approach is the implementation of the test strategy for a specific project, which is defined in

the test plans and test designs.

The test approach includes decisions made based on the project's goals and the risk assessment. It is the starting point for selecting test design techniques and test types, and for defining entry and exit criteria.

Commercial Off-The-Shelf (COTS) software product vs Custom built products.

Typical approaches include: analytical approaches (such as risk-based testing); model-based approaches (such as stochastic testing using statistical information about failure rates or usage); methodical approaches (such as failure-based, experience-based, checklist-based and quality-characteristic-based); process- or standard-compliant approaches (such as those specified by industry standards or the various agile methodologies); dynamic and heuristic approaches (such as exploratory testing); consultative approaches (driven by domain experts); and regression-averse approaches (reuse of existing material and standard test suites).

Different Test approaches may be combined, e.g. risk-based dynamic approach.

********************************


5.3. Test Progress monitoring and Control - K2

Terms: Defect density, failure rate, test control, test monitoring, test summary report

5.3.1. Recall common metrics used for monitoring test preparation and execution

5.3.2. Explain and compare test metrics for test reporting and test control (defect found/fixed &

test pass/fail) related to purpose and use - K2

5.3.3. Summarize the purpose and content of the test summary report document according to the

'Standard for Software Test Documentation' - IEEE 829-1998 - K2

The purpose of test monitoring is to provide feedback and visibility about test activities.

Metrics may be used to assess progress against the planned schedule and budget.

Common test metrics include: percentage of work done in test case preparation, percentage of work done in test environment preparation, test case execution progress, defect information, test coverage, dates of test milestones, and testing costs.

Test reporting is concerned with summarizing information about the testing endeavor, including

what happened during a period of testing, such as dates when exit criteria were met; analyzed

information and metrics to support recommendations and decisions about future actions, such as

assessment of defects remaining, outstanding risks, economic benefit of continued testing, and

level of confidence in tested software.

Outline of test summary report is given in 'Standard for Software Test Documentation - IEEE

829-1998'

Metrics should be collected during and at the end of a test level to assess: the adequacy of the test objectives for that test level, the adequacy of the test approaches taken, and the effectiveness of the testing with respect to the objectives.

Test control describes any guiding or corrective actions taken as a result of the information and metrics gathered and reported. E.g.: making decisions based on information from test monitoring; re-prioritizing tests when an identified risk occurs; changing the test schedule due to availability or unavailability of a test environment; setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build.

********************************

5.4. Configuration Management - K2

Terms: Configuration management, version control

5.4.1. Summarize how configuration management supports testing - K2

The purpose of configuration management is to establish and maintain the integrity of the

products (components, data and documentation) of the software or system through the project

and product life cycle.


Configuration management may involve ensuring:

-- All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items, so that traceability can be maintained throughout the test process

-- All identified documents and software items are referenced unambiguously in test

documentation.

For testers, configuration management helps to uniquely identify the tested items, test

documents, tests and test harness.

During test planning the configuration management procedures and infrastructure should be

chosen, documented and implemented.

********************************

5.5. Risks and Testing - K2

Terms: Product risk, project risk, risk, risk-based testing

Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences, i.e. a potential problem. The level of risk is determined by the likelihood of the adverse event happening and its impact.
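
E.g. (illustrative figures, not from the syllabus): if the likelihood of a particular failure is estimated at 20% and its impact at a cost of 50,000, the risk exposure is 0.2 x 50,000 = 10,000; areas with higher exposure are tested first and most thoroughly.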

5.5.1. Describe a risk which can be a threat to one/more project objectives - K2

5.5.2. Remember that the level of risks is determined by likelihood and impact

5.5.3. Distinguish between project risks and product risks - K2

5.5.4. Recognize typical product and project risks

5.5.5. Describe, using examples, how risk analysis and risk management may be used for test

planning - K2

Project Risks: Project risks are the risks that surround the project's capability to deliver its

objectives, such as:

Organizational factors: skill, training and staff shortages; personnel issues; political issues (communication problems); improper attitude toward testing.

Technical issues: problems in defining the right requirements, the extent to which requirements cannot be met, the test environment not being ready in time, late data conversion, migration planning and development, and conversion/migration tools.

Supplier issues: Failure of a third party, contractual issues.

Product Risks: Potential failure areas in the software or system are known as product risks, as

they are a risk to the quality of the product

Failure-prone software delivered; the potential that the software/hardware could cause harm to an individual or company; poor software characteristics; poor data integrity and quality; software that does not perform its intended functions.

Risks are used to decide where to start testing and where to test more; testing is used to reduce

the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.

Product risks are a special type of risk to the success of a project; testing as a risk-control activity provides feedback on the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.


A risk-based approach to testing provides proactive opportunities to reduce the level of product

risk, starting in the initial stages of a project.

Risk-based testing draws on the collective knowledge and insight of the project stakeholders to

determine the risks and the level of testing required to address those risks.

To ensure the chance of product failure is minimized, risk management activities provide an

approach to: Assess what can go wrong, determine what risks are important to deal with,

implement actions to deal with those risks.

In addition, testing may support the identification of new risks, may help to determine what risks

should be reduced, may lower uncertainty about risks.

********************************

5.6. Incident Management - K3

Terms: Incident logging, incident management, incident report.

5.6.1. Recognize the content of an incident report according to the 'Standard for Software Test

Documentation' - IEEE 829-1998

5.6.2. Write an incident report covering the observation of a failure during testing - K3

Since one of the objectives of testing is to find defects, the discrepancies between actual and

expected outcomes need to be logged as incidents.

An incident must be investigated and may turn out to be a defect.

Incidents and defects should be tracked from discovery and classification to correction and

confirmation of the solution.

In order to manage all incidents to completion, an organization should establish an incident

management process and rules for classification.

Incident reports have the following objectives: provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary; provide test leaders a means of tracking the quality of the system under test and the progress of the testing; and provide ideas for test process improvement.

Details of an incident report may include: date of issue, issuing organization, author; expected and actual results; identification of the test item and environment; software or system life cycle phase in which the incident was observed; description of the incident to enable reproduction and resolution, including logs, database dumps or screenshots; scope or degree of impact on stakeholders' interests; severity; urgency/priority to fix; status of the incident; conclusions, recommendations and approvals; global issues; change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed; and references, including the identity of the test case specification that revealed the problem.

The structure of an incident report is also covered in the 'Standard for Software Test Documentation' (IEEE Std 829-1998).
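
An illustrative (entirely invented) incident report along the lines of IEEE 829:

Incident ID: INC-0042
Date / Author / Organization: 2011-08-15, Tester A, QA team
Test item & environment: Login module v1.2, Windows test lab
Life cycle phase observed: System testing
Expected result: Valid user is taken to the home page
Actual result: A blank page is shown (screenshot and server log attached)
Steps to reproduce: 1) Open login page 2) Enter valid credentials 3) Click Login
Severity / Priority: High / Urgent
Status: New
References: Test case specification TC-LOGIN-007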


******************************************************************************

Chapter 6. Tool support for testing -- 4

******************************************************************************

6.1. Types of Test Tools - K2

Terms: Configuration management tool, coverage tool, debugging tool, dynamic analysis tool,

incident management tool, load testing tool, modeling tool, monitoring tool, performance testing

tool, probe effect, requirements management tool, review tool, security tool, static analysis tool,

stress testing tool, test comparator, test data preparation tool, test design tool, test harness, test

execution tool, test management tool, unit test framework tool.

6.1.1. Classify different types of test tools according to their purpose and to the activities of the

fundamental test process and the software life cycle - K2

6.1.2. Explain the term test tool and the purpose of tool support for testing - K2

A tool is something that makes the process and testing easier at each stage of the SDLC.

There are different test tools that help in testing a system faster, as parts of the process are automated.

Requirements Testing tools:

Automated support for verification and validation of requirements models

-- Consistency checking

-- Animation

Static Analysis tools:

Provide information about the quality of the software... the code is examined, not executed.

Objective measures:

-- Cyclomatic complexity: L - N + 2P, where L = number of edges, N = number of nodes and P = number of disconnected parts (connected components) of the control flow graph

-- Others: nesting levels, size
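
E.g. (illustrative): a control flow graph with 7 edges (L = 7), 6 nodes (N = 6) and 1 connected part (P = 1) has cyclomatic complexity 7 - 6 + 2*1 = 3, i.e. 3 independent paths through the code should be tested.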

Test Design Tools:

Generate test inputs

-- From a formal specification or CASE repository

-- From code (e.g. code not covered yet)

Test data preparation tools:

Data manipulation

-- selected from existing databases or files

-- created according to some rules

-- edited from other sources
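A minimal Python sketch of rule-based test data creation (all rules, field names and the output file are invented for illustration):

import csv
import random
import string

# Create test data "according to some rules" and write it to a CSV file
# that manual or automated tests could consume.
def random_name(length=8):
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))

rows = [
    {"user": random_name(), "age": random.randint(18, 99), "country": "IN"}
    for _ in range(10)
]

with open("test_users.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["user", "age", "country"])
    writer.writeheader()
    writer.writerows(rows)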

Test running tools 1:

-- Interface to the software under test and run tests as though executed by a human tester

-- Test scripts in a programmable language

-- Data, test inputs and expected results held in test repositories

-- Most often used to automate regression testing


Test running tools 2:

Character-based

-- simulates user interaction from dumb terminals

-- capture keystrokes and screen responses

GUI (Graphical User Interface)

-- simulates user interaction for WIMP applications (Windows, Icons, Menus, Pointer)

-- capture mouse movement, button clicks, and keyboard inputs

-- Capture screens, bitmaps, characters, object states

Comparison tools:

Detect differences between actual test results and expected results

-- Screens, characters, bitmaps

-- Masking and filtering

Test running tools normally include comparison capability

Stand-alone comparison tools for files or databases
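A minimal Python sketch of masking before comparison (the timestamp rule and sample outputs are invented):

import re

# Volatile fields (here, timestamps) are masked out so that only
# meaningful differences between actual and expected output remain.
TIMESTAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def masked(text):
    return TIMESTAMP.sub("<TIME>", text)

expected = "Login OK at 2011-08-01 10:00:00"
actual = "Login OK at 2011-08-02 17:42:13"
print(masked(expected) == masked(actual))  # True: only the masked field differed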

Test harness and drivers:

-- Used to exercise software which does not have a user interface (yet)

-- Used to run groups of automated tests or comparisons

-- Often custom-built

-- Simulators (where testing in real environment would be too costly or dangerous)
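A tiny Python sketch of a driver plus stub: the component under test has no UI, so a driver calls it directly and a stub simulates an external service (process_order and PaymentStub are invented names):

# Stub: simulator standing in for a real service that would be too
# costly or dangerous to call during testing.
class PaymentStub:
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

# Component under test (no user interface).
def process_order(amount, payment_service):
    result = payment_service.charge(amount)
    return result["status"] == "approved"

# Driver: exercises the component directly and checks the outcome.
assert process_order(100, PaymentStub()) is True
print("harness run passed")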

Performance testing tools:

Load generation

-- drive application via user interface or test harness

-- simulates realistic load on the system & logs the number of transactions

Transaction measurement

-- Response times for selected transactions via user interface

Reports based on logs, graphs of load versus response times
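A minimal Python sketch of load generation and transaction measurement (the sleep stands in for a real request; all numbers are invented):

import threading
import time

response_times = []
lock = threading.Lock()

def transaction():
    # One virtual-user transaction; sleep simulates server work.
    start = time.perf_counter()
    time.sleep(0.01)
    with lock:
        response_times.append(time.perf_counter() - start)

# Simulate 50 concurrent virtual users, one transaction each.
threads = [threading.Thread(target=transaction) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(response_times)} transactions, "
      f"avg response {sum(response_times) / len(response_times):.3f}s")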

Dynamic analysis tools:

Provide run-time information on software (while tests are run)

-- Allocation, use and de-allocation of resources, e.g. memory leaks

-- flag unassigned pointers or pointer arithmetic faults
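As a rough illustration in Python, the standard tracemalloc module reports memory use while code runs; real dynamic analysis tools for languages like C work at a much lower level (the leaky function is invented):

import tracemalloc

def leaky():
    # Simulated leak: objects accumulate on a function attribute.
    leaky.cache = getattr(leaky, "cache", [])
    leaky.cache.append("x" * 10_000)

tracemalloc.start()
for _ in range(100):
    leaky()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current={current} bytes, peak={peak} bytes")  # steady growth hints at a leak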

Debugging tools:

Used by programmers when investigating, fixing and testing faults

Used to reproduce faults and examine program execution in detail

-- Single-stepping

-- Breakpoints or watch points at any statement

-- examine contents of variables and other data
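For example, Python's standard debugger pdb offers exactly these facilities; a minimal interactive session sketch (divide is an invented function):

import pdb

def divide(a, b):
    result = a / b
    return result

# Execution stops at the breakpoint below; at the (Pdb) prompt you can
# single-step with 'n' or 's', examine variables with 'p a', set further
# breakpoints with 'b', and continue with 'c'.
pdb.set_trace()
print(divide(10, 2))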

Coverage measurement tools:

-- Objective measure of what parts of the software structure were executed by tests

-- Code is instrumented in a static analysis pass

-- Tests are run through the instrumented code


-- Tool reports what has and has not been covered by those tests, line by line and as summary

statistics

-- Different types of coverage: statement, branch, condition, LCSAJ, etc.
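A toy Python illustration of the instrument-then-run idea using the built-in tracing hook; real coverage tools are far more sophisticated (classify and the single 'test' call are invented):

import sys

executed = set()

def tracer(frame, event, arg):
    # Record every source line executed in classify() while tests run.
    if event == "line" and frame.f_code.co_name == "classify":
        executed.add(frame.f_lineno)
    return tracer

def classify(x):
    if x >= 0:
        return "non-negative"
    return "negative"

sys.settrace(tracer)
classify(5)            # a "test" that exercises only one branch
sys.settrace(None)

print(sorted(executed))  # covered line numbers; the 'negative' branch was missed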

********************************

6.2. Effective use of tools: Potential benefits and Risks - K2

6.2.1. Summarize the potential benefits and risks of test automation and tool support for testing -

K2

6.2.2. Remember special considerations for test execution tools, static analysis, and test

management tools - K1

Benefits: Repetitive work is reduced, greater consistency and repeatability, objective assessment,

ease of access to information about tests or testing

Risks: Unrealistic expectations for the tool; underestimating the time, cost and effort; over-

reliance on the tool; Neglecting version control of test assets within the tool; Neglecting

relationships and interoperability issues between critical tools; Risk of tool vendor going out of

business; poor support for the tool; unforeseen issues (e.g. inability to support a new platform).

Static analysis tools: refer to Section 3.3

Test management tools:

-- Management of testware: test plans, specifications, results

-- Project management of the test process, e.g. estimation, schedule tests, log results

-- Incident management tools (may include workflow facilities to track allocation, correction and

retesting)

-- Traceability (of tests to requirements, designs)

Test Execution Tools: Test execution tools execute test objects using automated test scripts. This

type of tool often requires significant effort in order to achieve significant benefits.

Data-driven and keyword-driven scripting are common approaches used with test execution tools
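A minimal keyword-driven sketch in Python: test steps are data (keyword plus arguments) and a small runner dispatches each keyword to an action; all keywords and actions are invented:

# Each test step is data; the runner dispatches on the keyword, so
# non-programmers can write new tests by composing existing keywords.
def open_app(name):
    print(f"opening {name}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def check_text(field, expected):
    print(f"verifying {field} shows '{expected}'")

KEYWORDS = {"open": open_app, "enter": enter_text, "check": check_text}

test_steps = [
    ("open", "login screen"),
    ("enter", "username", "alice"),
    ("check", "greeting", "Welcome alice"),
]

for keyword, *args in test_steps:
    KEYWORDS[keyword](*args)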

Advantages of recording manual tests:

Documents what the tester actually did

-- Useful for capturing ad hoc tests (e.g. end users)

-- May enable software failures to be reproduced

Produces a detailed “script”

-- Records actual inputs

-- Can be used by a technical person to implement a more maintainable automated test

Ideal for one-off tasks

-- Such as long or complicated data entry

Capture test scripts:

Will not be very understandable

-- It is a programming language after all!

-- During maintenance you will need to know more than can ever be 'automatically commented'


Will not be resilient to many software changes

-- A simple interface change can impact many scripts

Do not include verification

-- May be easy to add a few simple screen-based comparisons

Automation Verification:

There are many choices to be made

-- Dynamic / post-execution; compare lots / compare little; resilience to change / bug finding

Effective scripts can soon become very complex

-- More susceptible to change, harder to maintain

There is a lot of work involved

-- Speed and accuracy of tool use is very important

Usually there is more verification that can (and perhaps should) be done

-- Automation can lead to better testing (not guaranteed!)

Effort to automate:

The effort required to automate any one test varies greatly

-- Typically between 2 and 10 times the manual test effort

And depends on:

-- Tool, skills, environment and software under test

-- Existing manual test process which may be:

-- Unscripted manual testing

-- scripted (vague) manual testing

-- scripted (detailed) manual testing

*** Don't automate too much; think long term
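A rough break-even sketch in Python for the 2x-10x rule of thumb above (all numbers are illustrative assumptions, not measurements):

manual_minutes = 30          # assumed effort to run the test manually once
automation_factor = 5        # assume automating costs 5x one manual run
automation_cost = manual_minutes * automation_factor  # 150 minutes to build
maintenance_per_run = 2      # assumed upkeep per automated run

# Automation pays off once saved manual effort exceeds build + upkeep cost.
runs = 1
while runs * manual_minutes <= automation_cost + runs * maintenance_per_run:
    runs += 1
print(f"break-even after {runs} runs")  # 6 runs with these numbers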

********************************

6.3. Introducing a tool into an organization - K1

6.3.1. State the main principles of introducing a tool into an organization - K1

6.3.2. State the goals of a proof-of-concept for tool evaluation and a piloting phase for tool

implementation - K1

6.3.3. Recognize that factors other than simply acquiring a tool are required for good tool support - K1

Main considerations in selecting a tool for an organization include:

-- Assessment of organizational maturity, evaluation against clear requirements and objectives,

proof-of-concept, evaluation of the vendor, identification of internal resources / training needs,

estimation of cost-benefit ratio based on concrete business case.

-- Introducing the selected tool into an organization starts with a pilot project, whose objectives

include learning about the tool, evaluating how it fits existing processes, and deciding on

standard usage before wider deployment.

-- Success factors for deployment of the tool within an organization include: rolling out the tool,

adapting and improving, providing training, defining usage guidelines, monitoring tool use and

benefits, providing support for test team for a given tool, gathering lessons learned from all

teams.


******************************************************************************

Important extracts from the Glossary: Standards

******************************************************************************

- BS 7925-2:1998. Software Component Testing / Cause-Effect graphing

- DO-178B:1992. Software Considerations in Airborne Systems and Equipment

Certification, Requirements and Technical Concepts for Aviation (RTCA SC167)

- IEEE 610.12:1990. Standard Glossary of Software Engineering Terminology

- IEEE 829:1998. Standard for Software Test Documentation

- IEEE 1008:1993. Standard for Software Unit Testing

- IEEE 1012:2004. Standard for Software Verification and Validation

- IEEE 1028:1997. Standard for Software Reviews and Audits

- IEEE 1044:1993. Standard Classification for Software Anomalies

- IEEE 1219:1998. Standard for Software Maintenance

- ISO/IEC 2382-1:1993. Data processing - Vocabulary

- ISO 9000:2005. Quality Management Systems – Fundamentals and Vocabulary

- ISO/IEC 9126-1:2001. Software Product Quality

- ISO/IEC 12207:1995. Software Life Cycle Processes.

- ISO/IEC 14598-1:1999. Software Product Evaluation

Trademarks:

- CMM and CMMI are registered trademarks of Carnegie Mellon University

- TMap, TPA and TPI are registered trademarks of Sogeti Nederland BV

- TMM is a registered service mark of Illinois Institute of Technology

- TMMi is a registered trademark of the TMMi Foundation