
    Testing Manual

    CONTENTS


    1. The Software Life Cycle
    1.1 Feasibility Study
    1.2 Systems Analysis
    1.3 Systems Design
    1.4 Coding
    1.5 Testing
    1.6 Installation & Maintenance

    2. SDLC Models
    2.1 Code-and-fix model
    2.2 Waterfall model
    2.3 Prototyping model
    2.4 Incremental model
    2.5 V-Model
    2.6 Spiral model
    2.7 RAD model

    3. Testing Life Cycle
    3.1 System Study
    3.2 Scope/Approach/Estimation
    3.3 Test Plan Design
    3.4 Test Cases Design
    3.5 Test Case Review
    3.6 Test Case Execution
    3.7 Defect Handling
    3.8 Gap Analysis
    3.9 Deliverables

    4 Testing Phases (The V Model)
    4.1 Requirement Analysis Testing
    4.2 Design Testing
    4.3 Unit Testing
    4.4 Integration Testing
    4.5 System Testing
    4.6 Acceptance Testing


    5 Testing Methods (FURRPSC Model)
    5.1 Functionality Testing
    5.2 Usability Testing
    5.3 Reliability Testing
    5.4 Regression Testing
    5.5 Performance Testing
    5.6 Scalability Testing
    5.7 Compatibility Testing
    5.8 Security Testing
    5.9 Installation Testing
    5.10 Adhoc Testing
    5.11 Exhaustive Testing

    6 Performance Life Cycle
    6.1 What is Performance Testing
    6.2 Why Performance Testing
    6.3 Performance Testing
    6.4 Load Testing
    6.5 Stress Testing
    6.6 When should we start Performance Testing
    6.7 Popular tools used to conduct Performance Testing
    6.8 Performance Test Process

    7 Life Cycle of Automation
    7.1 What is Automation?
    7.2 Benefits of Test Automation
    7.3 False Benefits
    7.4 What are the different tools available in the market?

    8 Testing
    8.1 Test Strategy
    8.2 Testing Approach
    8.3 Test Environment
    8.4 Risk Analysis
    8.5 Testing Limitations
    8.6 Testing Objectives
    8.7 Testing Metrics
    8.7 Test Stop Criteria
    8.8 Six Essentials of Software Testing
    8.9 Five common problems in the s/w development process
    8.10 Solutions for common problems that occur during software development


    8.11 What should be done when there is not enough time for testing
    8.12 How do you know when to stop testing?
    8.14 Why does the software have bugs?

    9 Tester Responsibilities
    9.1 Test Manager
    9.2 Test Lead
    9.3 Test Engineer
    9.4 How to Prioritize Tests

    10 How can we improve the efficiency in testing?

    11 CMM and ISO Standards

    12 Bug Report Template

    13 Testing Glossary

    14 Interview Questions


    [Figure: The software life cycle: Feasibility Study, Analysis, Design, Coding, Testing, Installation & Maintenance]

    What is Manual Testing?

    Testing activities performed by people without the help of Software Testing Tools.

    What is Software Quality?

    It is software that is reasonably bug free, delivered on time and within budget, meets all requirements and is maintainable.

    1. The Software life cycle

    All the stages from start to finish that take place when developing new software.

    The software life cycle is a description of the events that occur between the birth and death of a software project, inclusive.

    SDLC is separated into phases (steps, stages).

    SDLC also determines the order of the phases, and the criteria for transitioning from phase to phase.


    Feasibility Study: What exactly is this system supposed to do?

    Analysis: Determine and list out the details of the problem.

    Design: How will the system solve the problem?

    Coding: Translating the design into the actual system.

    Testing: Does the system solve the problem? Have the requirements been satisfied? Does the system work properly in all situations?

    Maintenance: Bug fixes.


    Change Requests on Requirement Specifications

    Why do customers ask for change requests?

    Different users/customers have different requirements.

    Requirements get clarified/ known at a later date

    Changes to business environment

    Technology changes

    Misunderstanding of the stated requirements due to lack of domain knowledge

    How to communicate the change requests to the team

    Formal indication of Changes to requirements

    Joint Review Meetings

    Regular Daily Communication

    Queries

    Defects reported by Clients during testing.

    Client Reviews of SRS, SDD, Test plans etc.

    Across the corridor/desk (Internal Projects)

    Presentations/ Demonstrations

    Analyzing the Changes

    Classification

    Specific

    Generic

    Categorization

    Bug

    Enhancement

    Clarification etc.

    Impact Analysis

    Identify the items that will be affected

    Time estimations

    Any other clashes / open issues raised due to this?

    [Figure: Feasibility Study. The systems analyst conducts an initial study of the problem and asks: is the solution technologically possible? Economically possible? Legally possible? Operationally possible? Possible in the scheduled timescale?]


    Benefits of accepting Change Requests

    1. Direct Benefits

    Facilitates Proper Control and Monitoring

    Metrics Speak for themselves

    You can buy more time.

    You may be able to bill more.

    2. Indirect Benefits:

    Builds Customer Confidence.

    What can be done if the requirements are changing continuously?

    Work with project stakeholders early on to understand how the requirements might change, so that alternative test plans and strategies can be worked out in advance.

    It is helpful if the application's initial design has some adaptability, so that later changes do not require redoing the application from scratch.

    If the code is well commented and well documented, it is easier for developers to make changes.

    Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.

    Negotiate to allow only easily implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.


    1.1 Feasibility Study:

    The feasibility report

    BRS (Business Requirement Specification)

    Application areas to be considered (stock control, banking, accounts, etc.)

    System investigations and system requirements for each application

    Cost estimates

    Timescale for implementation

    Expected benefits


    1.2 Systems Analysis:


    The process of identifying problems, resources, opportunities, requirements and constraints.

    Systems analysis is the process of investigating a business with a view to determining how best to manage the various procedures and information-processing tasks that it involves.

    1.2.1 The Systems Analyst

    Performs the investigation and might recommend the use of a computer to improve the efficiency of the information system.

    1.2.2 Systems Analysis

    The intention is to determine how well a business copes with its current information processing needs, and whether it is possible to improve the procedures in order to make it more efficient or profitable.

    The BRS, FRS and SRS documents bridge the communication gap.

    The System Analysis Report

    SRS(Software Requirement Specification)

    Use Cases (user action and system response)

    FRS (Functional Requirement Specification), or functional specifications

    [These 3 are the Base documents for writing Test Cases]

    Documenting the results

    Systems flow charts

    Data flow diagrams

    Organization charts

    Report

    Note: The FRS contains input, output and process, but no fixed format.

    Use Cases contain user action and system response in a fixed format.


    1.3 Systems Design:

    The business of finding a way to meet the functional requirements within the specified constraints using the available technology

    Planning the structure of the information system to be implemented.

    Systems analysis determines what the system should do

    Design determines how it should be done.

    The System Design Report consists of

    Architectural Design

    Database Design

    Interface Design

    Design Phases

    High Level Design

    Low Level Design


    User interface design
    Design of output reports
    Input screens
    Data storage, i.e. files, database tables
    System security: backups, validation, passwords
    Test plan


    High Level Design

    1. List of modules and a brief description of each

    2. Brief functionality of each module

    3. Interface relationship among modules

    4. Dependencies between modules

    5. Database tables identified with key elements

    6. Overall architecture diagrams along with technology details

    Low Level Design

    1. Detailed functional logic of the module, in pseudo code

    2. Database tables, with all elements, including their type and size

    3. All interface details

    4. All dependency issues

    5. Error MSG listing

    6. Complete input and output format of a module

    Note: The HLD and LLD phases put together are called the Design phase.

    1.4 Coding:

    Translating the design into the actual system:

    Program development

    Unit testing by the development team (dev team)


    Coding Report: All the programs, functions and reports related to the system.


    1.5 Testing:

    IEEE Definitions

    Failure: External behavior is incorrect.

    Fault: Discrepancy in code that causes a failure.

    Error: Human mistake that caused the fault.

    Note:

    Error is terminology of Developer

    Bug is terminology of Tester

    Why Software Testing?

    1. To discover defects.

    2. To avoid user detecting problems

    3. To prove that the software has no faults

    4. To learn about the reliability of the software.

    5. To avoid being sued by customers

    6. To ensure that product works as user expected.

    7. To stay in business

    8. To detect defects early, which helps in reducing the cost of defect fixing.


    1.5.1 What Is Software Testing?

    IEEE terminology: An examination of the behavior of the program by executing it on sample data sets.

    Testing is executing a program with the intention of finding defects.

    Testing is executing a program with the intent of finding an Error/Fault and a Failure.

    Fault: A condition that causes the software to fail to perform its required function.

    Error: Refers to the difference between the actual output and the expected output.

    Failure: The inability of a system or component to perform a required function according to its specification.


    Cost of Defect Repair

    Phase            % Cost
    Requirements     0
    Design           10
    Coding           20
    Testing          50
    Customer Site    100

    How exactly is Testing different from QA/QC?

    Testing is often confused with the processes of quality control and quality assurance.

    Testing

    It is the process of creating, implementing and evaluating tests.

    Testing measures software quality.

    Testing can find faults. When they are removed, software quality is improved.

    Quality Control (QC)

    It is the process of inspections, walk-throughs and reviews.

    Measures the quality of the product.

    It is a Detection process.


    Quality Assurance (QA)

    Monitoring and improving the entire SDLC process

    Make sure that all agreed-upon standards and procedures are followed

    Ensuring that problems are found and addressed.

    Measures the quality of the process used to create good quality Product

    It is a Prevention Process

    Do we need an approach for testing?

    Yes, we definitely need an approach for testing.

    To overcome the following problems, we need a formal approach to testing.

    Incomplete functional coverage: Completeness of testing is a difficult task for the testing team without a formal approach. The team will not be in a position to announce the percentage of testing completed.

    No risk management: There is no way to measure overall risk issues regarding code coverage and quality metrics. Effective quality assurance measures quality over time, starting from a known base of evaluation.

    Too little emphasis on user tasks: Testers will focus on ideal paths instead of real paths. With no time to prepare, ideal paths are defined according to best guesses or developer feedback rather than by careful consideration of how users will understand the system or how users understand real-world analogues to the application tasks. With no time to prepare, testers will be using a very restricted set of input data, rather than using real data (from user activity logs, from logical scenarios, from careful consideration of the concept domain).

    Inefficient over the long term: Quality assurance involves a range of tasks. Effective quality assurance programs expand their base of documentation on the product and on the testing process over time, increasing the coverage and granularity of tests. Great testing requires good test setup and preparation, but success with a test-plan-less approach may reinforce bad project and test methodologies. A continued pattern of quick-and-dirty testing like this is a sign that the product or application is unsustainable in the long run.

    Areas of Testing:

    Black Box Testing

    White Box Testing

    Gray Box Testing


    Black Box Testing

    Test the correctness of the functionality with the help of inputs and outputs.

    The user doesn't require knowledge of the software code.

    Black box testing is also called Functionality Testing.

    It attempts to find errors in the following categories:

    Incorrect or missing functions.

    Interface errors.

    Errors in data structures or external data base access.

    Behavior or performance based errors.

    Initialization or termination errors.

    Approach:

    Equivalence Class:

    For each piece of the specification, generate one or more equivalence classes.

    Label the classes as Valid or Invalid.

    Generate one test case for each Invalid equivalence class.

    Generate a test case that covers as many Valid equivalence classes as possible.

    An input condition for Equivalence Class

    A specific numeric value

    A range of values

    A set of related values


    A Boolean condition


    Equivalence classes can be defined using the following guidelines:

    If an input condition specifies a range, one valid and two invalid equivalence classes are defined.

    If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.

    If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.

    If an input condition is Boolean, one valid and one invalid class are defined.
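    A minimal sketch of equivalence partitioning, assuming a hypothetical "age" field whose specification says it must accept values from 18 to 60; the field name, range and accepts_age stand-in are illustrative, not taken from this manual.

        # Equivalence classes for an age field specified as 18..60 (illustrative).
        VALID_CLASS = range(18, 61)     # one valid class: 18 to 60
        INVALID_LOW = range(0, 18)      # invalid class: below the range
        INVALID_HIGH = range(61, 150)   # invalid class: above the range

        def accepts_age(age: int) -> bool:
            """Stand-in for the system under test: accept only ages 18..60."""
            return 18 <= age <= 60

        # One representative value per partition, per the guidelines above.
        test_values = {"valid": 30, "invalid_low": 10, "invalid_high": 75}

        for name, value in test_values.items():
            expected = (name == "valid")
            actual = accepts_age(value)
            print(name, value, "PASS" if actual == expected else "FAIL")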

    Boundary Value Analysis

    Generate test cases for the boundary values.

    Minimum Value, Minimum Value + 1, Minimum Value -1

    Maximum Value, Maximum Value + 1, Maximum Value - 1
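    The same hypothetical 18 to 60 age field can be used to sketch boundary value analysis; the values follow the minimum/maximum plus-and-minus-one rule listed above, and the accepts_age function is again an invented stand-in.

        # Boundary value analysis for an 18..60 range (illustrative field).
        minimum, maximum = 18, 60
        boundary_values = [
            minimum - 1, minimum, minimum + 1,   # around the lower boundary
            maximum - 1, maximum, maximum + 1,   # around the upper boundary
        ]

        def accepts_age(age: int) -> bool:
            """Stand-in for the system under test."""
            return minimum <= age <= maximum

        for value in boundary_values:
            expected = minimum <= value <= maximum
            print(value, "PASS" if accepts_age(value) == expected else "FAIL")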

    Error Guessing

    Generating test cases against the specification, based on the tester's experience and intuition.

    White Box Testing

    Testing the Internal program logic

    White box testing is also called as Structural testing.

    User does require the knowledge of software code.

    Purpose

    Testing all loops

    Testing Basis paths

    Testing conditional statements

    Testing data structures

    Testing Logic Errors

    Testing Incorrect assumptions

    Structure = 1 Entry + 1 Exit with certain Constraints, Conditions and Loops.


    Logic errors and incorrect assumptions are most likely to be made while coding for special cases. We need to ensure that these execution paths are tested.


    Approach

    Basis Path Testing (Cyclomatic Complexity, McCabe method)

    Measures the logical complexity of a procedural design.

    Provides flow-graph notation to identify independent paths of processing.

    Once paths are identified, tests can be developed for loops and conditions.

    The process guarantees that every statement will get executed at least once.
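    A small sketch of basis path testing on an invented function (not from this manual); cyclomatic complexity V(G) equals the number of decisions plus one, which is the number of independent paths to exercise.

        def classify(order_total: float, is_member: bool) -> str:
            """Illustrative unit under test with two decision points."""
            if order_total <= 0:                 # decision 1
                return "invalid"
            if is_member:                        # decision 2
                return "discount"
            return "full-price"

        # V(G) = decisions + 1 = 2 + 1 = 3, so three independent paths:
        #   path 1: order_total <= 0                -> "invalid"
        #   path 2: order_total > 0 and is_member   -> "discount"
        #   path 3: order_total > 0 and not member  -> "full-price"
        basis_path_cases = [
            ((-5.0, False), "invalid"),
            ((100.0, True), "discount"),
            ((100.0, False), "full-price"),
        ]

        for (total, member), expected in basis_path_cases:
            assert classify(total, member) == expected
        print("all basis paths exercised")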

    Structure Testing:

    Condition Testing

    All logical conditions contained in the program module should be tested.

    Data Flow Testing

    Selects test paths according to the location of definitions and use of variables.

    Loop Testing

    Simple Loops

    Nested Loops

    Concatenated Loops

    Unstructured Loops
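    A minimal sketch of simple-loop testing; the total_price function and its MAX_ITEMS bound of 10 are invented for illustration, and the pass counts follow the usual loop-testing heuristic of zero, one, two, a typical number, and the values around the maximum.

        MAX_ITEMS = 10   # illustrative upper bound on loop passes

        def total_price(prices: list) -> float:
            """Illustrative unit under test: a simple loop over at most MAX_ITEMS prices."""
            if len(prices) > MAX_ITEMS:
                raise ValueError("too many items")
            total = 0.0
            for p in prices:          # the simple loop being exercised
                total += p
            return total

        # Classic simple-loop cases: skip the loop, one pass, two passes,
        # a typical count, max - 1, max, and max + 1 passes.
        for n in [0, 1, 2, 5, MAX_ITEMS - 1, MAX_ITEMS, MAX_ITEMS + 1]:
            try:
                print(n, "passes ->", total_price([1.0] * n))
            except ValueError as err:
                print(n, "passes -> rejected:", err)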

    Gray Box Testing.

    It is just a combination of both Black box & white box testing.

    It is platform independent and language independent.

    Used to test embedded systems.

    Functionality and behavioral parts are tested.

    The tester should have knowledge of both the internals and externals of the function.

    If you know something about how the product works on the inside, you can test it better from the outside.


    Gray box testing is especially important with Web and Internet applications, because the Internet is built around loosely integrated components that connect via relatively well-defined interfaces. Unless you understand the architecture of the Net, your testing will be skin deep.


    1.6 Installation & Maintenance

    Installation

    File conversion

    New system becomes operational

    Staff training

    Maintenance

    Corrective maintenance: A type of maintenance performed to correct a defect.

    Perfective maintenance: Re-engineering, including enhancements.

    Adaptive maintenance: To change software so that it will work in an altered environment, such as when an operating system, hardware platform, compiler, software library or database structure changes.

    Table format of all the phases in SDLC:

    PHASE      INPUT                 OUTPUT
    Analysis   BRS                   FRS and SRS
    Design     FRS and SRS           Design Doc
    Coding     Design Doc            .exe file / Application / Website
    Testing    All the above docs    Defect Report


    2. Software Development Life Cycles

    Life cycle: Entire duration of a project, from inception to termination

    Different life cycle models

    2.1. Code-and-fix model:

    Earliest software development approach (1950s)

    Iterative, programmers' approach

    Two phases: 1. coding, 2. fixing the code

    No provision for:

    Project planning

    Analysis

    Design

    Testing

    Maintenance

    Problems with code-and-fix model:

    1. After several iterations, code became very poorly structured; subsequent fixes became very expensive.

    2. Even well-designed software often matched users' requirements very poorly: it was rejected or needed to be redeveloped (expensively!).

    3. Changes to code were expensive, because of poor testing and maintenance practices.

    Solutions:

    1. Design before coding

    2. Requirements analysis before design

    3. Separate testing and maintenance phases after coding

    2.2. Waterfall model:

    Also called the classic life cycle

    Introduced in 1956 to overcome limitations of the code-and-fix model

    Very structured, organized approach and suitable for planning

    The waterfall model is a linear approach, quite inflexible


    At each phase, feedback to previous phases is possible (but is discouraged in practice)

    Still is the most widespread model today

    Main phases:

    1. Requirements

    2. Analysis

    3. Design (overall design & detailed design)

    4. Coding

    5. Testing (unit test, integration test, acceptance test)

    6. Maintenance


    Approaches

    The standard waterfall model for systems development is an approach that goes through the following steps:

    1. Document System Concept

    2. Identify System Requirements and Analyze them

    3. Break the System into Pieces (Architectural Design)

    4. Design Each Piece (Detailed Design)

    5. Code the System Components and Test Them Individually (Coding, Debugging, and Unit Testing)

    6. Integrate the Pieces and Test the System (System Testing)

    7. Deploy the System and Operate It

    Waterfall Model Assumption

    The requirements are knowable in advance of implementation.

    The requirements have no unresolved, high-risk implications

    -- e.g., risks due to COTS choices, cost, schedule, performance, safety, security, user interface, organizational impacts

    The nature of the requirements is compatible with all the key system stakeholders' expectations

    -- e.g., users, customer, developers, maintainers, investors

    The right architecture for implementing the requirements is well understood.

    There is enough calendar time to proceed sequentially.

    Advantages of Waterfall Model

    Conversion of existing projects into new projects.

    For proven platforms and technologies, it works fine.

    Suitable for short duration projects.

    The waterfall model is effective when there is no change in the requirements and the requirements are fully known. If there is no rework, this model builds a high-quality product.

    The stages are clear-cut.

    All R&D is done before coding starts, which implies better-quality program design.


    Disadvantages with Waterfall Model:

    Testing is postponed to a later stage, until coding completes.

    Not suitable for large projects

    It assumes uniform and orderly sequence of steps.

    Risk in certain project where technology itself is a risk.

    Corrections at the end of a phase need corrections to the previous phase, so rework is more.

    Real projects rarely flow in a sequential process.

    It is difficult to define all requirements at the beginning of a project.

    The model has problems adapting to change.

    A working version of the system is not seen until late in the project's life.

    Errors are discovered later (repairing problems further along the lifecycle becomes progressively more expensive).

    Maintenance cost can be as much as 70% of system costs.

    Delivery only at the end (long wait)

    2.3. Prototyping model:

    Introduced to overcome shortcomings of waterfall model

    Suitable to overcome the problem of requirements definition

    Prototyping builds an operational model of the planned system, which the customer can evaluate

    Main phases:

    1. Requirements gathering

    2. Quick design

    3. Build prototype

    4. Customer evaluation of prototype

    5. Refine prototype

    6. Iterate steps 4 and 5 to "tune" the prototype

    7. Engineer product


    Note: Mostly, the prototype is discarded after step 5 and the actual system is built from scratch in step 6 (throw-away prototyping).

    Possible problems:

    The customer may object to the prototype being thrown away and may demand "a few changes" to make it work (results in poor software quality and maintainability).

    Inferior, temporary design solutions may become permanent after a while, when the developer has forgotten that they were only intended to be temporary (results in poor software quality).

    Advantages

    Helps counter the limitations of waterfall model

    After the prototype is developed, the end user and the client are permitted to use the application, and further modifications are made based on their feedback.

    User oriented

    What the user sees

    Not enigmatic diagrams


    Quicker error feedback

    Earlier training

    Possibility of developing a system that closely addresses users' needs and expectations

    Disadvantages

    Development costs are high.

    User expectations

    Bypass analysis

    Documentation

    Never ending

    Managing the prototyping process is difficult because of its rapid, iterative nature

    Requires feedback on the prototype

    Incomplete prototypes may be regarded as complete systems


    2.4 Incremental:

    During the first one-month phase, the development team worked from static visual designs to code a prototype. In focus group meetings, the team discussed users' needs and the potential features of the product, and then showed a demonstration of its prototype. The excellent feedback from these focus groups had a large impact on the quality of the product.

    Main phases:

    1. Define outline requirements
    2. Assign requirements to increments
    3. Design system architecture
    4. Develop
    5. Integrate
    6. Validate

    After the second group of focus groups, the feature set was frozen and the product definition complete. Implementation consisted of four-to-six-week cycles, with software delivered for beta use at the end of each cycle. The entire release took 10 months from definition to manufacturing release. Implementation lasted 4.5 months. The result was a world-class product that has won many awards and has been easy to support.

    [Figure: Incremental model: define outline requirements, assign requirements to increments, design system architecture, then develop, validate and integrate each increment; validate the system and deliver the final system, repeating while the system is incomplete]

    2.5 V-Model:

    Verification (static; doing the job right): To test the system's correctness, i.e. whether the system is functioning as per specifications.

    Typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications.

    This can be done with checklists, issue lists, walkthroughs and inspection meetings.

    Validation (dynamic; doing the right job): Testing the system in a real environment, i.e. whether the software is catering to the customer's requirements.

    Typically involves actual testing, and takes place after verification is completed.

    Advantages

    Reduces the cost of defect repair (every document is verified by a tester)

    No idle time for testers

    Efficiency of the V-model is higher when compared to the Waterfall model

    Change management can be effected in the V-model

    Disadvantages

    Risk management is not possible


    Applicable only to medium-sized projects

    2.6 Spiral model:

    Objective: overcome problems of other models, while combining their advantages

    Key component: risk management (because traditional models often fail when risk is neglected)

    Development is done incrementally, in several cycles; cycle as often as necessary to finish

    Main phases:

    1. Determine objectives, alternatives for development, and constraints for the portion of the whole system to be developed in the current cycle

    2. Evaluate alternatives, considering objectives and constraints;identify and resolve risks

    3. Develop the current cycle's part of the system, using evolutionary or conventional development methods (depending on remaining risks); perform validation at the end

    4. Prepare plans for subsequent phases


    Spiral Model

    This model is very appropriate for large software projects. The model consists of four main parts, or blocks, and the process is shown by a continuous loop going from the outside towards the inside. This shows the progress of the project.

    Planning

    This phase is where the objectives, alternatives, and constraints are determined.

    Risk Analysis

    Alternative solutions and constraints are defined, and risks are identified and analyzed. If risk analysis indicates uncertainty in the requirements, the prototyping model might be used to assist the situation.

    Engineering

    Here the customer decides when the next phase of planning and risk analysis occurs. If it is determined that the risks are too high, the project can be terminated.

    Customer Evaluation

    In this phase, the customer will assess the engineering results and make changes if necessary.


    Spiral model flexibility

    Well-understood systems (low technical risk): Waterfall model; the risk analysis phase is relatively cheap

    Stable requirements and formal specification, safety criticality: formal transformation model

    High UI risk, incomplete specification: prototyping model

    Hybrid models can be accommodated for different parts of the project

    Advantages of spiral model:

    Good for large and complex projects

    Customer Evaluation allows for any changes deemed necessary, or would allow for new technological advances to be used

    Allows customer and developer to determine and to react to risks at each evolutionary level

    Direct consideration of risks at all levels greatly reduces problems

    Problems with spiral model:

    Difficult to convince customer that this approach is controllable

    Requires significant risk assessment expertise to succeed

    Not yet widely used; efficacy not yet proven

    If a risk is not discovered, problems will surely occur

    2.7 RAD Model

    RAD refers to a development life cycle designed to give much faster development and higher-quality systems than the traditional life cycle.

    It is designed to take advantage of powerful development software like CASE tools, prototyping tools and code generators.

    The key objectives of RAD are: High Speed, High Quality and Low Cost.

    RAD is a people-centered and incremental development approach.

    Active user involvement, as well as collaboration and co-operation between all stakeholders are imperative.

    Testing is integrated throughout the development life cycle, so that the system is tested and reviewed by both developers and users incrementally.


    Problem Addressed By RAD

    With conventional methods, there is a long delay before the customer gets to see any results.

    With conventional methods, development can take so long that the customer's business has fundamentally changed by the time the system is ready for use.

    With conventional methods, there is nothing until 100% of the process is finished, and then 100% of the software is delivered.

    Bad Reasons For Using RAD

    To prevent cost overruns (RAD needs a team already disciplined in cost management)

    To prevent runaway schedules (RAD needs a team already disciplined in time management)

    Good Reasons for using RAD

    To converge early toward a design acceptable to the customer and feasible for the developers

    To limit a project's exposure to the forces of change

    To save development time, possibly at the expense of economy or product quality

    RAD in SDLC

    Mapping between the System Development Life Cycle (SDLC) of ITSD and the RAD stages is depicted as follows.

    SDLC                                         RAD
    Project Request                              Requirements Planning
    System Analysis & Design (SA&D)              User Design
    Implementation                               RAD Construction
    Post Implementation Review & Maintenance     Transition


    Essential Ingredients of RAD

    RAD has four essential ingredients:

    Tools

    Methodology

    People

    Management.

    The following benefits can be realized in using RAD:

    A high-quality system will be delivered because of the methodology, tools and user involvement;

    Business benefits can be realized earlier;

    Capacity will be utilized to meet a specific and urgent business need;

    Standards and consistency can be enforced through the use of CASE tools.

    In the long run, we will also achieve that:

    Time required to get system developed will be reduced;

    Productivity of developers will be increased.


    Advantages of RAD

    Buying may save money compared to building

    Deliverables sometimes easier to port

    Early visibility

    Greater flexibility (because developers can redesign almost at will)

    Greatly reduced manual coding (because of wizards, code generators, code reuse)

    Increased user involvement (because they are represented on the team at all times)

    Possibly reduced cost (because time is money, also because of reuse)

    Disadvantages of RAD

    Buying may not save money compared to building

    Cost of integrated toolset and hardware to run it

    Harder to gauge progress (because there are no classic milestones)

    Less efficient (because code isn't hand crafted)

    More defects

    Reduced features

    Requirements may not converge

    Standardized look and feel (undistinguished, lackluster appearance)

    Successful efforts difficult to repeat

    Unwanted features


    [Figure: The testing process: design test cases, prepare test data, run the program with test data, compare results to test cases; the artifacts produced are test cases, test data, test results and test reports]


    [Figure: Testing life cycle: System Study, Scope/Approach/Estimation, Test Plan Design, Test Case Design, Test Case Review, Test Case Execution, Defect Handling, Gap Analysis, Deliverables]

    3. Testing Life Cycle

    A systematic approach for testing


    3.1 System Study

    1. Domain Knowledge: Used to know about the client's business
       Banking / Finance / Insurance / Real estate / ERP / CRM / Others

    2. Software:
       Front End (GUI): VB / Java / Forms / Browser
       Process: the language in which we want to write programs
       Back End: database such as Oracle, SQL Server, etc.

    3. Hardware: Internet / Intranet / Servers which you want to install.

    4. Function Points: Ten lines of code (LOC) = 1 function point.

    5. Number of Pages: The documents which you want to prepare.

    6. Number of Resources: Programmers, designers, and managers.

    7. Number of Days: For actual completion of the project.

    8. Number of Modules

    9. Priority: High / Medium / Low importance for modules

    3.2 Scope/ Approach/ Estimation:

    Scope

    What to test

    What not to test

    Approach

    Methods, tools and techniques used to accomplish test objectives.

    Estimation

    Estimation should be done based on LOC / FP / Resources.

    1000 LOC = 100 FP (considering 10 LOC = 1 FP)
    100 FP x 3 techniques = 300 test cases
    The 3 techniques are Equivalence Class, Boundary Value Analysis and Error Guessing.


    30 test cases per day => 300/30 = 10 days to design test cases
    Test case review => half of test case design (5 days)
    Test case execution => 1.5 times test case design (15 days)
    Defect handling => half of test case design (5 days)
    Test plan = 5 days (1 week)
    Buffer time = 25% of the estimation
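    The same arithmetic as a small sketch; the 10 LOC per FP ratio, the 3 techniques, the 30 test cases per day rate and the 25% buffer are the figures quoted above, while the code itself is only illustrative.

        # Test-effort estimation using the figures quoted above (illustrative only).
        loc = 1000
        fp = loc / 10                     # 10 LOC = 1 function point
        test_cases = fp * 3               # 3 techniques per FP: EC, BVA, error guessing
        design_days = test_cases / 30     # 30 test cases designed per day

        review_days = design_days * 0.5
        execution_days = design_days * 1.5
        defect_handling_days = design_days * 0.5
        test_plan_days = 5

        subtotal = (design_days + review_days + execution_days
                    + defect_handling_days + test_plan_days)
        total = subtotal * 1.25           # add 25% buffer time
        print(f"design: {design_days} days, total with buffer: {total} days")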

    3.3 Test Plan Design:

    A test plan prescribes the scope, approach, resources, and schedule of testing activities.

    Why Test Plan?

    1. Repeatable

    2. To Control

    3. Adequate Coverage

    Importance of Test Plan

    The test planning process is a critical step in the testing process. Without a documented test plan, the test itself cannot be verified, coverage cannot be analyzed and the test is not repeatable.

    The test plan document helps in test execution; it contains:

    1. About the client and company

    2. Reference document (BRS, FRS and UI etc.)

    3. Scope (What to be tested and what not to be )

    4. Overview of Application

    5. Testing approach (Testing strategy)

    6. For each testing

    Definition

    Technique

    Start criteria

    Stop criteria

    7. Resources and their Roles and Responsibilities

    8. Defect definition

    9. Risk / Contingency / Mitigation Plan

    10. Training Required

    11. Schedules


    12. Deliverables

    To support testing, a plan should be there, which specifies

    What to Do?

    How to Do?

    When to Do?


    3.4 Test Cases Design:

    What is a test case?

    A test case is a description of what is to be tested, what data is to be given and what actions are to be done to check the actual result against the expected result.

    What are the items of test case?

    1. Test Case Number

    2. Pre-Condition

    3. Description

    4. Expected Result

    5. Actual Result

    6. Status (Pass/Fail)

    7. Remarks.

    Test Case Template

    TC ID:           Unique test case number (e.g. Yahoo-001)
    Pre-Condition:   Condition to be satisfied (e.g. the Yahoo web page should be displayed)
    Description:     1. What is to be tested  2. What data to provide  3. What action to be done
                     (e.g. 1. Check the inbox is displayed  2. Enter User ID/Password  3. Click on Submit)
    Expected Result: As per FRS (e.g. the system should display the mailbox)
    Actual Result:   System response
    Status:          Pass or Fail
    Remarks:         If any


    Testcase Development process

    Identify all potential test cases needed to fully test the business and technical requirements

    Document Test Procedures and Test Data requirements

    Prioritize test cases

    Identify Test Automation Candidates

    Automate designated test cases

    Types of Test Cases

    Type                  Source
    1. Requirement-based  Specifications
    2. Design-based       Logical system
    3. Code-based         Code
    4. Extracted          Existing files or test cases
    5. Extreme            Limits and boundary conditions

    Can these test cases be reused?

    Test cases developed for functionality testing can be reused, with few modifications, for:

    Integration

    System

    Regression

    Performance

    What are the characteristics of good test case?

    A good test case should have the following:

    TC should start with what you are testing.

    TC should be independent.

    TC should not contain If statements.

    TC should be uniform (the same convention should be followed across the project, e.g. for links).



    The following issues should be considered while writing the test cases:

    All the TCs should be traceable.

    There should not be any duplicate test cases.

    Outdated test cases should be cleared off.

    All the test cases should be executable.

    Test case Guidelines

    Developed to verify that specific requirements or design are satisfied

    Each component must be tested with at least two test cases:Positive and Negative

    Real data should be used to reality-test the modules after test data has been used successfully.

    3.5 Test Case Review:

    1. Peer-to-peer reviews

    2. Team Lead review

    3. Team Manager review

    Review Process

    Take a demo of the functionality

    Take the checklist

    Go through the Use Cases & Functional Spec

    Try to find the gap between the test cases and the Use Cases

    Submit the Review Report

    3.6 Test Case Execution:


    Test execution is the completion of testing activities; it involves executing the planned test cases and conducting the tests.

    The test execution phase broadly involves execution and reporting.

    Execution and execution results play a vital role in testing.


    Test execution consists of the following activities to be performed:

    1. Creation of test setup or Test bed

    2. Execution of test cases on the setup

    3. Test methodology used

    4. Collection of Metrics

    5. Defect Tracking and Reporting

    6. Regression Testing

    The following should be taken care of:

    1. Number of test cases executed.

    2. Number of defects found

    3. Screen shots of failed executions should be captured in a Word document.

    4. Time taken to execute.

    5. Time wasted due to the unavailability of the system.

    Test Case Execution Process:

    Take the Test Case document

    Check the availability of the application

    Implement the test cases (with the test data)

    Raise the defects (with screen shots)

    3.7 Defect Handling


    What is Defect?

    A defect is a coding error in a computer program. A software error is present when the program does not do what its end user expects it to do.

    Who can report a Defect?

    Anyone who is involved in the software development life cycle or who is using the software can report a defect. In most cases, defects are reported by the testing team.

    A short list of people expected to report bugs:

    1. Testers / QA Engineers
    2. Developers
    3. Technical Support
    4. End Users
    5. Sales and Marketing Engineers

    Defect Reporting

    A Defect or Bug Report is the medium of communication between the tester and the programmer.

    Provides clarity to the management, particularly at the summary level.

    A Defect Report should be an accurate, concise, thoroughly edited, well-conceived, high-quality technical document.

    The problem should be described in a way that maximizes the probability that it will be fixed.

    A Defect Report should be non-judgmental and should not point fingers at the programmer.

    A crisp defect reporting process improves the test team's communication with senior and peer management.
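    A minimal sketch of the kind of record a defect report might carry; the field names mirror the Defect Tracking Sheet columns used later in this manual, and the sample values are invented.

        from dataclasses import dataclass

        @dataclass
        class DefectReport:
            """Illustrative defect record; fields mirror the tracking sheet columns."""
            defect_no: str      # unique number
            description: str    # what went wrong, with steps to reproduce
            origin: str         # birthplace of the bug (phase or module)
            severity: str       # Critical / Major / Medium / Minor / Cosmetic
            priority: str       # High / Medium / Low
            status: str         # Submitted / Accepted / Fixed / Rejected / Postponed / Closed

        # Sample (invented) defect entry
        d = DefectReport(
            defect_no="DEF-001",
            description="Login page hangs after clicking Submit with valid credentials",
            origin="System Testing",
            severity="Critical",
            priority="High",
            status="Submitted",
        )
        print(d)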


    Defect Life Cycle

    The Defect Life Cycle helps in handling defects efficiently.

    This DLC will help the users know the status of the defect.

    [Figure: Defect life cycle: Defect Raised, then Internal Defect Review; if not valid, Close the Defect; if valid, Defect Submitted to Dev Team; if not valid there, Defect Rejected or Defect Postponed; if valid, Defect Accepted, then Defect Fixed, then Close the Defect]
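    The same life cycle can be sketched as a state transition table; the states come from the figure above, while the dictionary encoding and the assumption that postponed defects can be resubmitted are mine.

        # Allowed status transitions in the defect life cycle sketched above.
        DEFECT_TRANSITIONS = {
            "Raised": ["Internal Review"],
            "Internal Review": ["Closed", "Submitted to Dev Team"],  # invalid -> Closed
            "Submitted to Dev Team": ["Rejected", "Postponed", "Accepted"],
            "Accepted": ["Fixed"],
            "Fixed": ["Closed"],
            "Rejected": [],
            "Postponed": ["Submitted to Dev Team"],  # assumption: postponed defects can be resubmitted
            "Closed": [],
        }

        def can_move(current: str, new: str) -> bool:
            """Check whether a status change is allowed by the life cycle."""
            return new in DEFECT_TRANSITIONS.get(current, [])

        assert can_move("Raised", "Internal Review")
        assert not can_move("Fixed", "Rejected")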


    Types of Defects

    1. Cosmetic flaw

    2. Data corruption

    3. Data loss

    4. Documentation Issue

    5. Incorrect Operation

    6. Installation Problem

    7. Missing Feature

    8. Slow Performance

    9. System Crash

    10. Unexpected Behavior

    11. Unfriendly behavior

    How do you decide the severity of a defect?

    Severity Level: High
    Description: A defect occurred due to the inability of a key function to perform. This problem causes the system to hang, it halts (crashes), or the user is dropped out of the system. An immediate fix or work-around is needed from development so that testing can continue.
    Response Time or Turn-around Time: The defect should be responded to within 24 hours and the situation should be resolved before test exit.

    Severity Level: Medium
    Description: A defect occurred which severely restricts the system, such as the inability to use a major function of the system. There is no acceptable work-around, but the problem does not inhibit the testing of other functions.
    Response Time or Turn-around Time: A response or action plan should be provided within 3 working days and the situation should be resolved before test exit.


    Severity Level: Low
    Description: A defect occurred which places a minor restriction on a function that is not critical. There is an acceptable work-around for the defect.
    Response Time or Turn-around Time: A response or action plan should be provided within 5 working days and the situation should be resolved before test exit.

    Severity Level: Others
    Description: An incident occurred which places no restrictions on any function of the system. There is no immediate impact on testing. A design issue, or requirements not definitively detailed in the project.
    Response Time or Turn-around Time: The fix dates are subject to negotiation. An action plan should be provided for the next release or a future enhancement.

    Defect Severity VS Defect Priority

    The general rule is that the fixing of defects will depend on the severity.

    All the high-severity defects should be fixed first.

    This may not be the same in all cases: sometimes, even though the severity of a bug is high, it may not be taken as high priority.

    At the same time, a low-severity bug may be considered as high priority.

    Defect Tracking Sheet

    Defect No:    Unique number
    Description:  Description of the bug
    Origin:       Birthplace of the bug
    Severity:     Critical / Major / Medium / Minor / Cosmetic
    Priority:     High / Medium / Low
    Status:       Submitted / Accepted / Fixed / Rejected / Postponed / Closed


    Defect Tracking Tools

    Bug Tracker -- BSL Proprietary Tools

    Rational Clear Quest

    Test Director

    3.8 Gap Analysis:

    1. BRS vs SRS

       BRS01 -> SRS01, SRS02, SRS03

    2. SRS vs TC

       SRS01 -> TC01, TC02, TC03

    3. TC vs Defects

       TC01 -> Defect01, Defect02
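    A minimal sketch of how this traceability could be recorded, reusing the IDs from the mapping above; finding requirements with no test cases is the kind of gap the analysis looks for, and the dictionary layout is just one possible encoding.

        # Traceability from business requirements down to defects (IDs from above).
        brs_to_srs = {"BRS001": ["SRS001", "SRS002", "SRS003"]}
        srs_to_tc = {"SRS001": ["TC001", "TC002", "TC003"], "SRS002": [], "SRS003": []}
        tc_to_defects = {"TC001": ["Defect001", "Defect002"], "TC002": [], "TC003": []}

        # Gap analysis: SRS items that have no test cases yet.
        uncovered = [srs for srs, tcs in srs_to_tc.items() if not tcs]
        print("SRS items without test cases:", uncovered)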

    3.9 Deliverables:

    All the documents which are prepared in each and every stage:

    FRS

    SRS

    Use Cases

    Test Plan

    Defect Report

    Review Report, etc.

    [Traceability matrix example: BRS001 maps to SRS001, SRS002, SRS003; SRS001 maps to TC001, TC002, TC003; TC001 maps to Defect001 and Defect002]


    4 Testing Phases

    Requirement Analysis Testing

    Design Testing

    Unit Testing

    Integration Testing

    System Testing

    Acceptance Testing

    4.1 Requirement Analysis Testing

    Objective

    The objective of Requirement Analysis Testing is to ensure software quality by eradicating errors as early as possible in the development process.

    Errors noticed at the end of the software life cycle are more costly compared to early ones; hence each of the outputs is validated.

    The objective can be achieved through three basic issues:

    1. Correctness

    2. Completeness

    3. Consistency

    Types of requirements

    Functional Requirements

    Data Requirements

    Look and Feel requirements

    Usability requirements

    Performance Requirements

    Operational requirements

    Maintainability requirements

    Security requirements

    Scalability requirements


    Difficulties in conducting requirements analysis:

    Analyst not prepared

    Customer has no time/interest

    Incorrect customer personnel involved

    Insufficient time allotted in project schedule

    What constitutes good requirements?

    Clear: Unambiguous terminology

    Concise: No unnecessary narrative or non-relevant facts

    Consistent: Requirements that are similar are stated in similar terms; requirements do not conflict with each other.

    Complete: All functionality needed to satisfy the goals of the system is specified to a level of detail sufficient for design to take place.

    Testing related activities during Requirement phase

    1. Creation and finalization of testing templates

    2. Creation of over-all Test Plan and Test Strategy

    3. Capturing Acceptance criteria and preparation of Acceptance TestPlan

    4. Capturing Performance criteria of the software requirements

    4.2 Design Testing

    Objective

    The objective of the design phase testing is to generate complete specifications for implementing a system using a set of tools and languages.

    Design objective is fulfilled by five issues

    1. Consistency

    2. Completeness

    3. Correctness

    4. Feasibility

    5. Traceability


    Testing activities in Design phase

    1. Develop test cases to ensure that the product is on par with the Requirement Specification document.

    2. Verify Test Cases & test scripts by peer reviews.

    3. Preparation of traceability matrix from system requirements

    4.3 Unit Testing

    Objective

    In unit testing the user is supposed to check each and every micro function.

    All field-level validations are expected to be tested at this stage of testing.

    In most cases the developer will do this.

    The objective can be achieved by the following issues

    1. Correctness

    2. Completeness

    3. Early Testing

    4. Debugging
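    A minimal unit-test sketch using Python's unittest module; the apply_discount function and its rules are invented for illustration and are not taken from this manual.

        import unittest

        def apply_discount(price: float, percent: float) -> float:
            """Illustrative unit under test: apply a percentage discount."""
            if price < 0 or not (0 <= percent <= 100):
                raise ValueError("invalid price or percent")
            return round(price * (1 - percent / 100), 2)

        class ApplyDiscountTest(unittest.TestCase):
            def test_typical_value(self):
                self.assertEqual(apply_discount(200.0, 10), 180.0)

            def test_zero_discount(self):
                self.assertEqual(apply_discount(99.99, 0), 99.99)

            def test_invalid_percent_rejected(self):
                with self.assertRaises(ValueError):
                    apply_discount(100.0, 150)

        if __name__ == "__main__":
            unittest.main()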

    4.4 Integration Testing:

    Objective

    The primary objective of integration testing is to discover errors in the interfaces between modules/sub-systems (host and client interfaces).

    Minimizing the errors, which include internal and external interface errors.

    Approach:

    Top-Down Approach


    The integration process is performed in a series of 5 steps:

    1. The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.

    2. Depending on the integration approach selected (depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.

    3. Tests are conducted as each module is integrated.

    4. On completion of each set of tests, another stub is replaced with the real module.

    5. Regression testing may be conducted to ensure that new errors have not been introduced.

    Advantages

    We can verify the major controls early in the testing process.

    Disadvantage:

    Stubs are required, and it is very difficult to develop stubs.

    Bottom-Up Approach


    A bottom-up integration strategy may be implemented with the following steps:

    1. Low-level modules are combined into clusters (sometimes called builds) that perform a specific software sub-function.

    2. A driver (a control program for testing) is written to coordinate test case input and output.

    3. The cluster is tested.

    4. Drivers are removed and clusters are combined upward in the program structure.

    Advantages

    It is easier to develop drivers than stubs.

    Disadvantages:

    The need for test drivers; late detection of interface problems. (A small stub and driver sketch follows below.)
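    A minimal sketch of a stub (as used in top-down integration) and a driver (as used in bottom-up integration); the payment gateway and order module are invented for illustration.

        # Top-down: the real payment gateway is not ready yet, so the order
        # module is integrated against a stub with the same interface.
        class PaymentGatewayStub:
            def charge(self, amount: float) -> str:
                # canned answer instead of real behavior
                return "APPROVED"

        class OrderModule:
            def __init__(self, gateway):
                self.gateway = gateway

            def place_order(self, amount: float) -> bool:
                return self.gateway.charge(amount) == "APPROVED"

        # Bottom-up: a driver exercises the lower-level module directly,
        # feeding test input and checking output in place of the real caller.
        def order_module_driver():
            order = OrderModule(PaymentGatewayStub())
            assert order.place_order(250.0) is True
            print("integration check passed with stubbed gateway")

        order_module_driver()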

    When integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics:

    1. Addresses several software requirements.

    2. Has a high level of control (resides relatively high in the program structure).

    3. Complex and error-prone.

    4. Has definite performance requirements.

    Testing activities in Integration Testing Phase

    1. This testing is conducted in parallel with the integration of the various applications (or components).


    2. Testing the product with its external and internal interfaces without using drivers and stubs.

    3. Incremental approach while integrating the interfaces.

    4.5 System Testing:

    The primary objective of system testing is to discover errors when the system is tested as a whole.

    System testing is also called End-to-End Testing.

    The user is expected to test from login to logout, covering various business functionalities.

    The following tests will be conducted in System testing:

    Recovery Testing.

    Security Testing.

    Load & Stress Testing.

    Functional Testing

    Testing activities in System Testing phase

1. System test is done for validating the product with respect to client requirements.

2. Testing can be done in multiple rounds.

3. Defects found during system test should be logged into the Defect Tracking System for the purpose of tracking.

4. Test logs and defects are captured and maintained.

5. Review of all the test documents.

Approach: IDO Model

Identify the End-to-End/Business life cycles.
Design the tests and data.
Optimize the End-to-End/Business life cycles.

    4.6 Acceptance Testing:

    The primary objective of acceptance testing is to get the

acceptance from the client.

Testing the system behavior against the customer's requirements.

Customers undertake typical tasks to check their requirements.

Done at the customer's premises, in the user's environment.

    Acceptance Testing Types

    Alpha Testing


Testing the application on the developer's premises itself, in a controlled environment.

Generally, the Quality Assurance cell is the body that is responsible for conducting the test.

On successful completion of this phase, the software is ready to migrate outside the developer's premises.

    Beta Testing

It is carried out at one or more users' premises, using their infrastructure, in an uncontrolled manner.

It is the customer or his representative that conducts the test, with or without the developer around. As and when bugs are uncovered, the developer is notified about them.

This phase enables the developer to modify the code so as to alleviate any remaining bugs before the final official release.

Approach: BE

Building a team with real-time users, functional users and developers.

Execution of business Test Cases.


When should we start writing Test Cases / conducting Testing?

The V Model is the most suitable way to decide when to start writing Test Cases and conducting Testing.

Once the requirements are frozen (Requirements Freeze), each work product on the build side has a corresponding set of test cases and a testing phase:

SDLC Phase / Build              Test Cases to Write       Testing Phase
Business Requirements Docs      Acceptance Test Cases     Acceptance Testing
Software Requirements Docs      System Test Cases         System Testing
Design Requirements Docs        Integration Test Cases    Integration Testing
Code                            Unit Test Cases           Unit Testing


    5 Testing Methods

    5.1 Functionality Testing:

    Objective:

Testing the functionality of the application with the help of input and output.

    Test against system requirements.

    To confirm all the requirements are covered.

Approach (a small example follows the list):

    1. Equivalence Class

    2. Boundary Value Analysis

    3. Error Guessing.
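For instance, the equivalence class and boundary value analysis techniques listed above could be turned into concrete test data as in the sketch below. The quantity field and its 1-100 limits are hypothetical, chosen only to illustrate the idea.

# Hypothetical requirement: "quantity" must be an integer between 1 and 100.
LOWER, UPPER = 1, 100

# Equivalence classes: one representative value per class is usually enough.
equivalence_classes = {
    "valid (1-100)":         50,
    "invalid (below 1)":     -5,
    "invalid (above 100)":   150,
    "invalid (non-numeric)": "abc",
}

# Boundary value analysis: test at and around each boundary.
boundary_values = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]

def is_valid_quantity(value):
    return isinstance(value, int) and LOWER <= value <= UPPER

for name, value in equivalence_classes.items():
    print(name, value, "->", is_valid_quantity(value))

for value in boundary_values:
    print("boundary", value, "->", is_valid_quantity(value))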

    5.2 Usability Testing:

    To test the Easiness and User-friendliness of the system.

    Approach:

1. Qualitative
2. Quantitative

Qualitative Approach:

Each and every function should be available from all the pages of the site.

The user should be able to submit each and every request within 4-5 actions.

A confirmation message should be displayed for each and every submission.

Quantitative Approach:

A heuristic checklist should be prepared with all the general test cases that fall under the classification of checking.

These generic test cases should be given to 10 different people, who are asked to execute them against the system and mark the pass/fail status.

The average of the 10 people's results should be considered as the final result (a small calculation sketch follows).

Example: Some people may feel the system is more user-friendly if the submit button is on the left side of the screen. At the same time, others may feel it is better if the submit button is placed on the right side.
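A minimal sketch of the quantitative approach: ten evaluators run the same heuristic checklist and their pass rates are averaged. The checklist size and pass counts below are invented purely for illustration.

# Pass counts from 10 hypothetical evaluators, each running a 20-item checklist.
checklist_items = 20
passes_per_evaluator = [18, 17, 19, 16, 18, 20, 17, 18, 19, 17]

average_pass_rate = sum(passes_per_evaluator) / (len(passes_per_evaluator) * checklist_items)
print(f"Average pass rate: {average_pass_rate:.0%}")  # the figure reported as the final result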


    Classification of Checking:

    Clarity of communication.

    Accessibility

    Consistency

    Navigation

    Design & Maintenance

    Visual Representation.

    5.3 Reliability Testing:

    Objective

Reliability is considered as the probability of failure-free operation for a specified time, in a specified environment, for a given purpose.

To find the Mean Time Between Failures (the time available under a specific load pattern) and the Mean Time To Recovery.

    Approach

By performing continuous hours of operation.

More than 85% stability is a must.

    Reliability Testing helps you to confirm:

    Business logic performs as expected

    Active buttons are really active

    Correct menu options are available

Reliable hyperlinks

    Note: This should be done by using performance testing tools
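As a worked illustration of the reliability measures mentioned above (the figures are invented): if a 72-hour continuous run produces 3 failures and each failure takes half an hour to recover from, the mean time between failures and the resulting availability can be computed as below.

# Hypothetical figures from a 72-hour continuous-operation run.
total_hours = 72.0
failures = 3
repair_hours_per_failure = 0.5

mtbf = (total_hours - failures * repair_hours_per_failure) / failures  # mean time between failures
mttr = repair_hours_per_failure                                        # mean time to recovery
availability = mtbf / (mtbf + mttr)                                    # steady-state availability

print(f"MTBF: {mtbf:.1f} h, MTTR: {mttr:.1f} h, availability: {availability:.1%}")
# Prints roughly 23.5 h, 0.5 h and 97.9%, which would satisfy the 85% stability criterion above.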

    5.4 Regression Testing:

The objective is to check that new functionality has been incorporated correctly without breaking the existing functionality.

In the case of Rapid Application Development (RAD), regression testing plays a vital role, as the total development happens in bits and pieces.

Testing whether reported code problems have been fixed correctly.

    Approach

    Manual Testing (By using impact Analysis)


    Automation tools


    5.5 Performance Testing:

The primary objective of performance testing is to demonstrate that the system works functionally as per specifications within a given response time on a production-sized database.

Objectives

Assessing the system capacity for growth.

    Identifying weak points in the architecture

    Detect obscure bugs in software

    Tuning the system

    Verify resilience & reliability

    Performance Parameters

    Request-Response Time

    Transactions per Second

Turnaround time

Page download time

Throughput

    Approach

    Usage of Automation Tools
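These parameters are normally collected by a performance testing tool, but a rough feel for request-response time and throughput can be obtained even with a small script. The sketch below times repeated HTTP requests against a placeholder URL using only Python's standard library; the URL and request count are illustrative.

import time
import urllib.request

URL = "http://example.com/"   # placeholder for the application under test
REQUESTS = 20

timings = []
start = time.time()
for _ in range(REQUESTS):
    t0 = time.time()
    with urllib.request.urlopen(URL) as response:
        response.read()
    timings.append(time.time() - t0)          # request-response time for one request
elapsed = time.time() - start

print(f"Average response time: {sum(timings) / len(timings):.3f} s")
print(f"Throughput: {REQUESTS / elapsed:.1f} requests/second")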

    Classification of Performance Testing:

    Load Test

    Volume Test

    Stress Test

    Load Testing

Estimating the design capacity of the system within the resource limits.

    Approach is Load Profile

    Volume Testing

The process of feeding a program with a heavy volume of data.

    Approach is data profile

    Stress Testing

Estimating the breakdown point of the system beyond the resource limits.


Repeatedly working on the same functionality.

Critical query execution (join queries) to emulate peak load.

    Load Vs Stress:

With a simple scenario (functional query), any number of people working on it will not put stress on the server.

A complex scenario, even with a smaller number of users, will stress the server.

    5.6 Scalability Testing:

The objective is to find the maximum number of users the system can handle.

    Classification:

    Network Scalability

    Server Scalability

    Application Scalability

    Approach

    Performance Tools

    5.7 Compatibility Testing:

Compatibility testing provides a basic understanding of how a product will perform over a wide range of hardware, software and network configurations, and helps to isolate the specific problems.

Approach

    Environment Selection.

    Understanding the end users

    Importance of selecting both old browser and new browsers

    Selection of the Operating System

    Test Bed Creation

    Partition of the hard disk.

    Creation of Base Image


    5.8 Security Testing:

Testing how well the system protects against unauthorized internal or external access.

Verify how easily the system is subject to security violations under different conditions and environments.

During security testing, password cracking, unauthorized entry into the software and network security are all taken into consideration.

5.9 Installation Testing:

Installation testing is performed to ensure that all install features and options function properly, and to verify that all necessary components of the application are installed.

The uninstallation of the product also needs to be tested, to ensure that all data, executables and .DLLs are removed.

The uninstallation of the application is tested using the DOS command line, Add/Remove Programs, and manual deletion of files.

5.10 Adhoc Testing

Testing carried out using no recognized test case design technique.

5.11 Exhaustive Testing

Testing the application with all possible combinations of values for program variables.

Feasible only for small, simple programs; a quick calculation below shows why.
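For a hypothetical form with just three fields, each accepting 100 distinct values, every combination already amounts to 100 to the power of 3, i.e. 1,000,000 test cases, and a single 32-bit integer input alone has over 4 billion possible values. The two-line sketch below makes the same point.

# Combinations for a hypothetical 3-field form with 100 values per field.
fields, values_per_field = 3, 100
print(values_per_field ** fields)   # 1,000,000 combinations
print(2 ** 32)                      # 4,294,967,296 possible values of one 32-bit integer input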


    6 Performance Life Cycle

    6.1 What is Performance Testing?

The primary objective of performance testing is to demonstrate that the system works functionally as per specifications within a given response time on a production-sized database.

6.2 Why Performance Testing:

    To assess the system capacity for growth

The load and response data gained from the tests can be used to validate the capacity planning model and assist decision making.

    To identify weak points in the architecture

The controlled load can be increased to extreme levels to stress the architecture and break it; bottlenecks and weak components can then be fixed or replaced.

    To detect obscure bugs in software

Tests executed for extended periods can cause failures due to memory leaks and reveal obscure contention problems or conflicts.

    To tune the system

Repeat runs of tests can be performed to verify that tuning activities are having the desired effect of improving performance.

    To verify resilience & reliability

Executing tests at production loads for extended periods is the only way to assess the system's resilience and reliability and to ensure that the required service levels are likely to be met.

    6.3 Performance-Tests:

Used to test each part of the web application to find out which parts of the website are slow and how we can make them faster.

6.4 Load-Tests:

This type of test is done to test the website using the load that the customer expects to have on his site. This is something like a real-world test of the website.

First we have to define the maximum request times we want the customers to experience. This is done from the business and usability point of view, not from a technical point of view.


At this point we also need to calculate the impact of a slow website on the company's sales and support costs.

Then we have to calculate the anticipated load and load pattern for the website (refer to Annexure I for details on load calculation), which we then simulate using the tool.

At the end we compare the test results with the request times we wanted to achieve.
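The load calculation itself is detailed in Annexure I and is not reproduced here, but it can be approximated with simple arithmetic. The sketch below uses invented figures: 50,000 visits per day, 20% of which arrive in the peak hour, with 10 page requests per visit.

# Invented figures for illustrating an anticipated-load calculation.
visits_per_day = 50_000
peak_hour_share = 0.20        # 20% of daily traffic arrives in the busiest hour
requests_per_visit = 10

peak_hour_visits = visits_per_day * peak_hour_share
peak_requests_per_second = peak_hour_visits * requests_per_visit / 3600

print(f"Peak load to simulate: {peak_requests_per_second:.1f} requests/second")  # about 27.8 req/s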

    6.5 Stress-Tests:

Stress tests simulate brute-force attacks with an excessive load on the web server. In the real world, situations like this can be created by a massive spike of users far above the normal usage, e.g. caused by a large referrer (imagine the website being mentioned on national TV).

The goals of stress tests are to learn under what load the server generates errors, whether it will come back online at all after such a massive spike or crash, and when it will come back online.

    6.6 When should we start Performance Testing:

It is even a good idea to start performance testing before a line of code is written at all! Testing the base technology (network, load balancer, application, database and web servers) early for the expected load levels can save a lot of money, because you may already discover at this point that your hardware is too slow. The first stress tests can also be a good idea at this stage.

The cost of correcting a performance problem rises steeply from the start of development until the website goes into production, and can be unbelievably high for a website that is already online.

As soon as several web pages are working, the first load tests should be conducted, and from then on they should be part of the regular testing routine each day or week, or for each build of the software.

    6.7 Popular tools used to conduct Performance Testing:

    LoadRunner from Mercury Interactive

AstraLoad from Mercury Interactive

Silk Performer from Segue

    Rational Suite Test Studio from Rational

    Rational Site Load from Rational

    Webload from Radview


    RSW eSuite from Empirix

    MS Stress tool from Microsoft

    6.8 Performance Test Process:

This is a general process for performance testing, and it can be customized according to the project needs. A few more process steps can be added to the existing process, but deleting any of the steps may result in an incomplete process. If the client is using one of the tools, one can simply follow the respective process demonstrated by the tool.

    General Process Steps:

1. Setting up of the environment
2. Record & playback in the standby mode
3. Enhancement of the script to support multiple users
4. Configure the scripts
5. Execution for fixed users and reporting the status to the developers
6. Re-execution of the scenarios after the developers fine-tune the code


    Setting up of the test environment

Installation of the tool and agents.

Directory structure creation for the storage of the scripts and results.

Installation of additional software, if essential, to collect the server statistics.

It is also essential to ensure the correctness of the environment by performing a dry run.

    Record & playback in the stand by mode

The scripts are generated using the script generator and played back to ensure that there are no errors in the script.

    Enhancement of the script to support multiple users

Variables like logins, user inputs etc. should be parameterised to simulate the live environment.

This is also essential since, in some applications, no two users can log in with the same id.
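In commercial tools this is done by replacing recorded values with parameters; the idea can be sketched in plain Python as below, where each virtual user picks a different login. The user names and the small in-line credential table are invented so that the sketch stays self-contained (a real script would normally read them from a data file).

# Invented credential table; a real tool would read this from a data file.
credentials = [
    {"username": "vuser01", "password": "pass01"},
    {"username": "vuser02", "password": "pass02"},
    {"username": "vuser03", "password": "pass03"},
]

def run_virtual_user(vuser_id):
    # Each virtual user takes a different row, so no two users log in with the same id.
    cred = credentials[vuser_id % len(credentials)]
    print(f"VU{vuser_id}: logging in as {cred['username']}")
    # ... the recorded login and business steps would be replayed here ...

for vu in range(3):
    run_virtual_user(vu)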

    Configuration of the scenarios

Scenarios should be configured to run the scripts on different agents and to schedule the scenarios.

Distribute the users onto different scripts, collect the data related to the database, etc.

    Hosts

The next important step in the testing approach is to run the virtual users on different host machines, to reduce the load on the client machine by sharing the resources of the other machines.

    Users

The number of users who need to be activated during the execution of the scenario.

    Scenarios

A scenario might comprise either a single script or multiple scripts. The main intention of creating a scenario is to simulate load on the server similar to the live/production environment.


    Ramping

In the live environment, not all the users log in to the application simultaneously. At this stage we can simulate the virtual users similar to the live environment by deciding:

1. How many users should be activated at a particular point of time as a batch?

2. What should be the time interval between every batch of users? (A small ramp-up sketch follows.)
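For example (figures invented), ramping up 100 virtual users in batches of 10 every 30 seconds could be scheduled as in the sketch below.

# Hypothetical ramp-up: 100 virtual users, 10 per batch, 30 seconds between batches.
total_users = 100
batch_size = 10
interval_seconds = 30

for batch, first_user in enumerate(range(0, total_users, batch_size), start=1):
    start_at = (batch - 1) * interval_seconds
    users = list(range(first_user + 1, first_user + batch_size + 1))
    print(f"t={start_at:4d}s  batch {batch}: activate users {users[0]}-{users[-1]}")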

Execution for fixed users and reporting the status to the developers

The script should initially be executed for one user, and the results should be verified to check whether the server response time for a transaction is less than or equal to the acceptable limit (benchmark).

If the results are found adequate, the execution should be continued for different sets of users. At the end of every execution the results should be analysed.

If a stage is reached where the time taken for the server to respond to a transaction is above the acceptable limit, then the inputs should be given to the developers.

Re-execution of the scenarios after the developers fine-tune the code

After the fine-tuning, the scenarios should be re-executed for the specific set of users for which the response was inadequate.

If found satisfactory, the execution should be continued up to the decided load.

Final report

At the end of the performance testing, a final report should be generated, comprising the following:

    Introduction about the application.

    Objectives set / specified in the test plan.

Approach: a summary of the steps followed in conducting the test.

Analysis & Results: a brief explanation of the results and the analysis of the report.

Conclusion: the report should be concluded by stating whether the objectives set before the test were met or not.


Annexure: can consist of a graphical representation of the data with a brief description, comparison statistics if any, etc.


    7 Life Cycle of Automation

    7.1 What is Automation?

Automation life cycle (flow):

Analyze the Application -> Select the Tool -> Identify the Scenarios -> Design / Record Test Scripts -> Modify the Test Scripts -> Run the Test Scripts -> Find & Report the Defects


Using a software program to test another software program is referred to as automated software testing.

    7.2 Why Automation

Avoid the errors that humans make when they get tired after multiple repetitions.

The test program won't skip any test by mistake.

Each future test cycle will take less time and require less human intervention.

    Required for regression testing.

    7.3 Benefits of Test Automation:

    Allows more testing to happen

    Tightens / Strengthen Test Cycle

    Testing is consistent, repeatable

    Useful when new patches released

    Makes configuration testing easier

    Test battery can be continuously improved.

    7.4 False Benefits:

    Fewer tests will be needed

It will be easier if it is automated

    Compensate for poor design

    No more manual testing.

    7.5 What are the different tools available in the market?

    Rational Robot

    WinRunner

    SilkTest

    QA Run

    WebFT

    8 Testing


    8.1 Test Strategy

Test strategy is a statement of the overall approach to testing, intended to meet the business and test objectives.

It is a plan-level document and has to be prepared in the requirements stage of the project.

It identifies the methods, techniques and tools to be used for testing.

It can be project specific or organization specific.

Developing a test strategy which effectively meets the needs of the organization/project is critical to the success of the software development.

An effective strategy has to meet the project and business objectives.

Defining the strategy upfront, before the actual testing, helps in planning the test activities.

    A test strategy will typically cover the following aspects

    Definition of test objective

    Strategy to meet the specified objective

    Overall testing approach

    Test Environment

Test Automation requirements

Metric Plan

    Risk Identification, Mitigation and Contingency plan

    Details of Tools usage

    Specific Document templates used in testing

    8.2 Testing Approach

The test approach will be based on the objectives set for testing.

The test approach will detail the way the testing is to be carried out.

Types of testing to be done, viz. unit, integration and system testing

The method of testing, viz. black-box, white-box etc.

    Details of any automated testing to be done


    8.3 Test Environment

All the hardware and software requirements for carrying out testing shall be identified in detail.

    Any specific tools required for testing will also be identified

If the testing is going to be done remotely, then this has to be considered during estimation.

    8.4 Risk Analysis

Risk analysis should be carried out for the testing phase.

Risk identification will be accomplished by identifying causes-and-effects or effects-and-causes.

The identified risks are classified into Internal and External risks.

Internal risks are things that the test team can control or influence.

External risks are things beyond the control or influence of the test team.

Once risks are identified and classified, the following activities will be carried out:

Identify the probability of occurrence.

Identify the impact areas if the risk were to occur.

Risk mitigation plan: how do we avoid this risk?

Risk contingency plan: if the risk were to occur, what do we do? (A small risk-ranking sketch follows.)
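One common way to act on the probability and impact identified above is to rank risks by exposure (probability multiplied by impact) and to plan mitigation and contingency for the highest-ranked risks first. The risks and scores below are invented for illustration.

# Invented risks with probability (0-1) and impact (1-5) scores.
risks = [
    {"risk": "Test environment not available on time", "probability": 0.4, "impact": 4, "type": "external"},
    {"risk": "Requirements change late in the cycle",  "probability": 0.3, "impact": 5, "type": "external"},
    {"risk": "Key tester unavailable",                 "probability": 0.2, "impact": 3, "type": "internal"},
]

for r in risks:
    r["exposure"] = r["probability"] * r["impact"]   # simple risk exposure score

for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["exposure"]:.1f}  ({r["type"]})  {r["risk"]}')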

    8.5 Testing Limitations

    You cannot test a program completely

    We can only test against system requirements

    May not detect errors in the requirements.

Incomplete or ambiguous requirements may lead to inadequate or incorrect testing.

Exhaustive (total) testing is impossible in the present scenario.

Time and budget constraints normally require very careful planning of the testing effort.

    Compromise between thoroughness and budget.


Test results are used to make business decisions for release dates.

Even if you do find the last bug, you'll never know it.

    You will run out of time before you run out of test cases

    You cannot test every path

    You cannot test every valid input

    You cannot test every invalid input

    8.6 Testing Objectives

You cannot prove a program correct (because it isn't!)

    The purpose of testing is to find problems

    The purpose of finding problems is to get them corrected

8.7 Testing Metrics

Time
  Time per test case
  Time per test script
  Time per unit test
  Time per system test

Sizing
  Function points
  Lines of code

Defects
  Number of defects
  Defects per sizing measure
  Defects per phase of testing
  Defect origin
  Defect removal efficiency

Defect Removal Efficiency = Number of defects found in producer testing / Number of defects found during the life of the product

Size Variance = (Actual size - Planned size) / Planned size

Delivery Variance = (Actual end date - Planned end date) / (Planned end date - Planned start date)

Effort Variance = (Actual effort - Planned effort) / Planned effort

Productivity = Effort / Size

Review Efficiency = Number of defects found during review / Effort

(A small worked example of these metrics follows.)
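A small sketch applying some of the formulas above to invented project figures:

# Invented project figures for illustrating the metrics above.
planned_size, actual_size = 100, 112            # e.g. function points
planned_effort, actual_effort = 500, 560        # person-hours
defects_in_testing, defects_in_life = 90, 100   # producer testing vs. whole product life

size_variance = (actual_size - planned_size) / planned_size
effort_variance = (actual_effort - planned_effort) / planned_effort
defect_removal_efficiency = defects_in_testing / defects_in_life

print(f"Size variance: {size_variance:.0%}")                           # 12%
print(f"Effort variance: {effort_variance:.0%}")                       # 12%
print(f"Defect removal efficiency: {defect_removal_efficiency:.0%}")   # 90%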

    8.7 Test Stop Criteria:

    Minimum number of test cases successfully executed.

Uncover a minimum number of defects (e.g. 16 defects per 1000 statements)

    Statement coverage

    Testing uneconomical

    Reliability model

    8.8 Six Essentials of Software Testing

1. The quality of the test process determines the success of the test effort.

2. Prevent defect migration by using early life-cycle testing techniques.

    3. The time for software testing tools is now.

4. A real person must take responsibility for improving the testing process.

5. Testing is a professional discipline requiring trained, skilled people.

    6. Cultivate a positive team attitude of creative destruction.

8.9 What are the five common problems in the software development process?

Poor requirements: If the requirements are unclear, incomplete, too general or not testable, there will be problems.

Unrealistic schedules: If too much work is crammed into too little time, problems are inevitable.

Inadequate testing: No one will know whether the system is good or not until the customers complain or the system crashes.

Featuritis: Requests to pile on new features after development is underway; extremely common.

Miscommunication: If the developers don't know what is needed, or customers have erroneous expectations, problems are guaranteed.


8.10 Solutions for the common problems that occur during software development

Solid requirements: Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable.

Realistic schedules: Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.

Adequate testing: Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.

Firm requirements: Avoid new features. Stick to the initial requirements as much as possible.

Communication: Communicate. Require walkthroughs and inspections when appropriate.

8.11 What should be done when there is not enough time for testing?

Use risk analysis to determine where testing should be focused:

Which functionality is most important to the project's intended purpose?

    Which functionality is most visible to the user?

    Which functionality has the largest safety impact?

    Which functionality has the largest financial impact on users?

Which aspects of the application are most important to the customer?

Which aspects of the application can be tested early in the development cycle?

Which parts of the code are most complex and thus most subject to errors?

Which parts of the application were developed in rush or panic mode?

Which aspects of similar/related previous projects caused problems?

Which aspects of similar/related previous projects had large maintenance expenses?

Which parts of the requirements and design are unclear or poorly thought out?


What do the developers think are the highest-risk aspects of the application?

What kinds of problems would cause the worst publicity?

What kinds of problems would cause the most customer service complaints?

What kinds of tests could easily cover multiple functionalities?

Which tests will have the best high-risk-coverage to time-required ratio?

    8.12 How do you know when to stop testing?

    Common factors in deciding when to stop are...

    Deadlines, e.g. release deadlines, testing deadlines;

    Test cases completed with certain percentage passed;

    Test budget has been depleted;

Coverage of code, functionality, or requirements reaches a specified point;

    Bug rate falls below a certain level; or

    Beta or alpha testing period ends.

    8.14 Why does the software have Bugs?

    Miscommunication or No communication

    Software Complexity

    Programming Errors

    Changing Requirements

    Time Pressures

    Poorly Documented Code

    8.15 Different Type of Errors in Software

    User Interface Errors

    Error Handling

    Boundary related errors

    Calculation errors

    Initial and Later states

    Control flow errors

    Errors in Handling or Interpreting Data


    Race Conditions

    Load Conditions

    Hardware

    Source, Version and ID Control

    9 Roles and Responsibilities

    9.1 Test Manager

    Single point contact between Wipro onsite and offshore team

    Prepare the project plan

    Test Management

    Test Planning

Interact with Wipro onsite lead, Client QA manager

Team management

    Work allocation to the team

    Test coverage analysis

    Co-ordination with onsite for issue resolution.

    Monitoring the deliverables

Verify readiness of the product for release through release review

    Obtain customer acceptance on the deliverables

    Performing risk analysis when required

    Reviews and status reporting

Authorize intermediate deliverables and patch releases to the customer.

    9.2 Test Lead

    Resolves technical issues for the product group

    Provides direction to the team members

    Performs activities for the respective product group

    Review and Approve of Test Plan / Test cases

    Review Test Script / Code

    Approve completion of Integration testing

    Conduct System / Regression tests


    Ensure tests are conducted as per plan

Reports status