Performance Testing Guide V6


Table of Contents - Performance Testing Guide

Author: Sravan Tadakamalla

1. INTRODUCTION TO PERFORMANCE TESTING
   1.1 Objective
   1.2 Types of Performance Testing
   1.3 Risks Addressed via Performance Testing
   1.4 Understanding Volume
   1.5 Test Tools
   1.6 Tool Identification & Evaluation
   1.7 What Kind of Skills Are Required for Performance Testing

2. PERFORMANCE TEST STRATEGY
   2.1 Evaluate Systems to Improve Performance Testing Effectiveness
   2.2 Determine Performance Testing Objectives, Building the Targets
   2.3 Quantify End-User Response Time Goals
   2.4 Baseline Application Performance
   2.5 Conduct a Performance Testing Risk Assessment

3. ENVIRONMENT - PERFORMANCE TEST BEDS
   3.1 Creating
   3.2 Troubleshooting
   3.3 Real-World Simulation
   3.4 Server Configurations
   3.5 Client Configurations
   3.6 Scalable Test Environments

4. TEST EXECUTION
   4.1 Test Design and Execution
   4.2 LoadRunner Executions
   4.3 Reporting and Analysis
   4.4 Data Presentations
       4.4.1 Analysis Summary
       4.4.2 Statistics Summary
       4.4.3 Transaction Summary
       4.4.4 HTTP Responses Summary
   4.5 Standard Graphs (Provided by Default)

5. DEFINING THE PERFORMANCE AND STRESS TESTING
   5.1 Modeling the User Experience
       5.1.1 Simulate Realistic User Delays
       5.1.2 Model Representative User Groups
       5.1.3 Simulate Realistic User Patterns
       5.1.4 How to Use Transaction Markers to Evaluate Performance
       5.1.5 How to Handle Hidden Fields
       5.1.6 How to Handle Script and Test Failures
       5.1.7 Create Tests to Identify Points of Failure and Bottlenecks
       5.1.8 Create Tests to Optimize Critical User Actions
       5.1.9 Performance Testing Across Load Balancers (Clusters)
       5.1.10 Load Balancer - Why Do I Need Load Balancers?
       5.1.11 Scheduling and Balancing Methods
       5.1.12 Execute Tests to Tune Specific Components Across Network Tiers

6. HOW TO AVOID MISTAKES IN PERFORMANCE TESTING
   6.1 Unsystematic Approach
   6.2 Performance Testing Beyond Test Execution
   6.3 Translate Stakeholders' Language into Real Performance Goals and Requirements
   6.4 Emulating Production Environments
   6.5 Handle Outliers in Performance Test Reports
   6.6 Handle Performance Data Correctly, Avoiding Over-Averaging

7. STRESS TESTING WITH LOAD RUNNER
   7.1 Performance Testing as Part of the SDLC
   7.2 Installing and Setting Up LoadRunner / Basics
   7.3 Core Concepts
   7.4 Benchmarking Run / Execution
   7.5 Test Design
   7.6 Running the Test
   7.7 Hardware Setup
   7.8 Performance Analysis Report
   7.9 Performance Counters
   7.10 Performance Metrics

8. DATA PRESENTATION
   8.1 Data Presentation at Different Levels in the Organization
   8.2 How to Organize Efficient Data Graphs
   8.3 Summarize Results Across Test Runs Efficiently
   8.4 Use Degradation Curves in Reports
   8.5 Report Abandonment and Other Performance Problems
   8.6 LoadRunner Solution Aiding with Reporting Analysis

9. PERFORMANCE TESTING FOR CAPACITY PLANNING AND SCALABILITY
   9.1 Understanding the Environment
   9.2 Performance Testing to Aid in Checking Availability and Sustainability
   9.3 Evaluate the Testing Infrastructure Against the Footprint
   9.4 Manual Testing Is Problematic

10. BEST PRACTICES - PERFORMANCE TESTING
   10.1 Performance Testing Activities
   10.2 Identify the Test Environment
   10.3 Identify Performance Acceptance Criteria
   10.4 Plan and Design Tests
   10.5 Configure the Test Environment
   10.6 Implement the Test Design
   10.7 Execute the Test
   10.8 Analyze Results, Report, and Retest


1. INTRODUCTION TO PERFORMANCE TESTING

Performance testing measures the performance characteristics of an application. Its main objective is to demonstrate that the system functions to specification, with acceptable response times, while processing the required transaction volumes against a production-sized database. It is defined as the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance-related activities, such as testing and tuning, are concerned with achieving the response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test.

1.1 Objective

The objective of a performance test is to demonstrate that the system meets requirements for transaction throughput and response times simultaneously.

The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods. This infrastructure is an asset, and an expensive one, so it pays to make as much use of it as possible. Fortunately, this infrastructure is a test bed that can be re-used for other tests with broader objectives. A comprehensive test strategy would define a test infrastructure that enables all of these objectives to be met.

The performance testing goals are:

• End-to-end transaction response time measurements
• Measure application server component performance under various loads
• Measure database component performance under various loads
• Monitor system resources under various loads
• Measure the network delay between the server and clients

1.2 Types of Performance Testing

Performance Testing, Load Testing, Stress Testing, Spike Testing and Endurance Testing (Soak Testing)

Performance Testing is the process of determining the speed or effectiveness of a computer, network, software program, or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability, and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.

Load Testing is conducted to understand the behavior of the application under a specific expected load, for example the expected number of concurrent users performing a defined set of transactions.

Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. It refers to tests that put a greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. In particular, the goals of such tests may be to ensure the software doesn't crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial of service attacks.

Spike testing is done by suddenly increasing (spiking) the number of users and observing the behavior of the application: does it go down, or is it able to handle dramatic changes in load?

Endurance Testing (Soak Testing) is usually done to determine whether the application can sustain the continuous expected load. Generally, this test is done to determine whether there are any memory leaks in the application.


1.3 Risks Addressed via Performance Testing

Speed, Scalability, Availability and Recoverability, as they relate to Performance Testing

Scalability Testing measures a software application's capability to scale up or scale out in terms of any of its non-functional capabilities, be it the user load supported, the number of transactions, the data volume, etc.

Availability Testing checks that the application is available 24/7, at any point in time. Generally this is done in the production environment using monitoring tools such as SiteScope, BAC, and Tivoli.

Recoverability Testing is disaster-recovery testing, done mainly from a database perspective and also for load balancing. Recoverability means that committed transactions have not read data written by aborted transactions (whose effects do not exist in the resulting database state).

1.4 Understanding Volume

This section discusses user sessions, session duration, transactions, and user abandonment as they relate to volume in performance testing.

User Session: for example, a login session is the period of activity between a user logging in and logging out of a (multi-user) system.

Session Duration is the average amount of time that visitors spend on the site each time they visit its pages.

A Transaction is an interaction carried out between separate entities, objects, or functionalities, often involving the exchange of items of value such as information or services.

1.5 Test tools

• LoadRunner
• Performance Center
• WAPT
• VSTS
• OpenSTA
• Rational Performance Tester

We have various tools available in the industry to find the causes of slow performance in the following areas:

• Application
• Database
• Network
• Client-side processing
• Load balancer

1.6 Tool Identification & Evaluation

The performance testing tool identification and evaluation process is carried out based on the performance test requirements, protocols, and the software and hardware used in the application. A POC (Proof of Concept) is carried out if required. The objective of this activity is to ensure that the identified tools support all the applications used in the solution and help in measuring the performance test goals.

The three major components of functionality needed to execute a test are:

• Running the client application
• Actual load generation
• Resource monitoring


Summary of performance test tool identification and evaluation:

Task:                 Tools Identification
Input:                Test requirements; test environment; application's scripting; tool vendor's information
Tools & Techniques:   Proof of Concept (POC)
Output:               Tools identified

1.7 What Kind of Skills Are Required for Performance Testing

Performance testing requires more skills than just knowledge about how to create a script for a particular load testing tool.

• What is going on within the system? Monitoring and performance analysis.
• You got results; what if? Modeling and capacity planning.
• We see the bottleneck; what to do? Tuning and system performance engineering.
• And writing, presenting, communicating, and organizing all the time.

2. PERFORMANCE TEST STRATEGY

2.1 Evaluate Systems to Improve Performance Testing Effectiveness

While System Evaluation is a continual process throughout the performance testing effort, the bulk of the evaluation process is most valuable if conducted very early in the effort. The evaluation can be thought of as the evaluation of the project and system context. The intent is to collect information about the project as a whole, the functions of the system, the expected user activities, the system architecture, and any other details that are helpful in creating a Performance Testing Strategy specific to the needs of the particular project. Starting with this information, the performance goals and requirements can be more efficiently collected and/or determined, then validated with project stakeholders. The information collected via System Evaluation is additionally invaluable when characterizing the workload and assessing project and system risks. At the same time, we need techniques to effectively and efficiently determine and document the system's functions, the expected user activities, and the system's logical and physical architecture.

2.2 Determine Performance Testing Objectives, Building the Targets

Performance testing objectives are fairly easy to capture. The easiest way to capture performance testing objectives is simply to ask each and every member of the team what value you can add for him or her while you are doing performance testing. That value may be providing resource utilization data under load, generating specific loads to assist with tuning an application server, or providing a report of the number of objects requested by each web page. While collecting performance testing objectives early in the project is a good habit to get into, so is periodically revisiting them and checking in with members of the team to see if there are any new objectives that they would like to see added. The major performance testing objectives are as follows:

• Establish valuable performance testing objectives at any point in the development lifecycle
• Communicate those objectives and the value they add to both team members and executives
• Establish technical, performance-related targets (sometimes called performance budgets) that can be validated independently from end-user goals and requirements
• Communicate those targets and the value that testing against them provides to both team members and executives


2.3 Quantify End-User Response Time Goals

Determining and quantifying application performance requirements and goals accurately is a critical component of valuable performance testing. Successfully verbalizing our application's performance requirements and goals is the first and most important step in this process. Remember that when all is said and done, there is only one performance requirement that really matters: that application users are not annoyed or frustrated by poor performance. The users of your application don't know or care what the results of your performance tests are, how many seconds past their threshold for "too long" it takes something to display on the screen, or what your throughput is. The only thing application users know is that they either notice that the application seems slow or they don't, and they will notice (or not) based on anything from their mood to what they have become accustomed to. This How-To discusses methods for converting these feelings into numbers, but never forget to validate your quantification by putting the application in front of real users. The major objectives for end-user response time goals are as follows:

• Identify the difference between performance requirements and performance goals.
• Capture subjective performance requirements and goals.
• Quantify subjective performance requirements and goals.

2.4 Baseline Application Performance

Baselining the application is where test execution actually begins. The intent is twofold. First, all scripts need to be executed, validated, and debugged (if necessary). Second, various single- and multi-user tests are executed and recorded to provide a basis of comparison for all future testing. Initial baselines are typically taken as soon as the test environment is available. Re-baselining occurs at each new release. The following techniques take baseline testing forward:

• Design and execute single-user baseline scenarios
• Design and execute multi-user baseline scenarios
• Report test results against baseline scenarios
• Determine when new baselines need to be taken

2.5 Conduct a Performance Testing Risk Assessment

The performance tester is the only person on the project who has the information and experience to assess project and business risks related to performance. Typical areas of risk include: unrealistic acceptance criteria, unsuitable test environment, test schedule, test resources, limited access to real users and a team-wide lack of understanding about when and how performance testing can add value to the project. Identifying these risks early significantly increases the odds of success for the project by allowing time for risk mitigation activities.

3. ENVIRONMENT - PERFORMANCE TEST BEDS

3.1 Creating

A Test Bed is an execution environment configured for software testing. It consists of specific client/server hardware, the network topology, client/server operating systems, a deployment of the application under test with all tiers, an installation of the performance test tool (with a valid license), and other applications and machines (if required).

3.2 Troubleshooting

Isolate the source of a problem and fix it, typically through a process of elimination whereby possible sources of the problem are investigated and eliminated.

Troubleshooting involves the following:

1. Identify the symptom(s). Is an error message displayed on the screen or written to a log? Is the behavior abnormal? Is the symptom transient or persistent, and can you reproduce it?

2. Locate the problem. Is it hardware or software? Can the problem source be identified easily? Can you bypass the problem?

3. Find the cause. Has the system recently been re-configured? Have you installed a new component (hardware or software)? Has the problem just occurred, or has it existed for some time?

4. Prepare and apply a fix. Have you prepared the fix? Have you tested the fix? Can you roll back the fix?

3.3 Real world simulation

Simulate the test bed to match the real-world environment.

It includes the following: 

• Network bandwidth usage (customized / maximum bandwidth)
• Normal / peak user load
• Which business transactions need to be included
• Workload models

3.4 Server configurations

This Section includes the Hardware Configuration of Application Server / Web Server / Database Server.  

Examples: processor, virtual memory, physical memory, LAN card.

3.5 Client Configurations

This Section includes the Hardware Configuration of Client machine where VUsers will be emulated (Load Generator/ Controller)

3.6 Scalable test environments

Once we've decided what servers and clients are needed on the test network, the next step is to decide how many physical machines the test lab requires. We can save money by creating multiple servers on one physical machine, using virtualization software such as Microsoft Virtual PC/Virtual Server. This is an especially scalable solution because it allows us to spend less money on hardware, and we can add additional virtual servers by upgrading disk space and RAM instead of buying complete machines to emulate each new server added to the production network.

4. TEST EXECUTION

4.1 Test Design and Execution

Based on the test strategy detailed test scenarios would be prepared. During the test design period the following activities will be carried out:

• Scenario design
• Detailed test execution plan
• Dedicated test environment setup
• Script recording / programming
• Script customization (delays, checkpoints, synchronization points)
• Data generation
• Parameterization / data pooling

4.2 LoadRunner Executions

When a scenario is designed that includes the business transaction scripts, virtual user load, load generators (if any), ramp-up/ramp-down, test duration, and client/server resource measurements (objects and counters), we can plan for test execution. During test execution, monitor the essential online graphs: Transaction Response Time, Hits per Second, Throughput, Passed/Failed Transactions, and error messages (if any).

4.3 Reporting and Analysis

After completion of test execution, the LoadRunner Analysis tool can display data points from a run as a time series graph, for example Average Transaction Response Time. These graphs are one of the methods the Analysis tool uses to summarize the vast amount of data collected during a test run. Creating a summary in a standard form, the graph, removes the need to plough through all of the raw data. The first step when the Analysis tool prepares a time series graph is to generate the values seen in the Graph Data sheet. This is done by dividing the graph time span into slots and taking the mean of the raw values falling within each slot as the Graph Data value for that slot. The duration of the slots in the Graph Data is referred to as the granularity of the graph. This can be set to a number of seconds, minutes, or hours. Shorter periods provide more detail; longer ones provide more of an overview. The sketch below illustrates the idea.
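As an illustration only (a minimal sketch of the slot-averaging idea, not the Analysis tool's actual implementation; the timestamps and response times are hypothetical):

    /* Reduce a raw time series to Graph Data: divide the run into
       fixed-duration slots and take the mean of the samples in each. */
    #include <stdio.h>

    #define GRANULARITY 5.0  /* slot duration (granularity) in seconds */

    typedef struct { double t, value; } Sample;  /* timestamp, response time */

    int main(void) {
        Sample raw[] = { {0.4, 1.2}, {2.1, 1.5}, {4.9, 1.1},
                         {5.3, 2.0}, {7.8, 2.4}, {12.0, 1.9} };
        int n = sizeof raw / sizeof raw[0], i = 0, slot = 0;

        while (i < n) {
            double sum = 0.0;
            int count = 0;
            /* accumulate every sample whose timestamp falls in this slot */
            while (i < n && raw[i].t < (slot + 1) * GRANULARITY) {
                sum += raw[i].value;
                count++;
                i++;
            }
            if (count > 0)
                printf("slot %d (%.0f-%.0fs): mean %.2f s over %d samples\n",
                       slot, slot * GRANULARITY, (slot + 1) * GRANULARITY,
                       sum / count, count);
            slot++;
        }
        return 0;
    }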

4.4 Data Presentations

Present the final report (Word / HTML) with the following:

4.4.1 Analysis Summary

4.4.1.1 Header Time Range

This date is, by default, in the European day/month/year format (30/8/2004 for August 30, 2004).

4.4.1.2 Scenario Name:

The file path to the .lrs file

4.4.1.3 Results in Session:

The file path to the .lrr file


4.4.2 Statistics Summary

4.4.2.1 Maximum Running Vusers

This number is usually smaller than the number of VUsers specified in run-time parameters because of ramp-up time and processing delays.

4.4.2.2 Total Throughput (bytes):

Dividing this by the amount of time during the test run yields the next number:

4.4.2.3 Average Throughput (bytes/second):

This could be shown as a straight horizontal line in the Throughput graph.

4.4.2.4 Total Hits:

Dividing this by the amount of time during the test run yields the next number:

4.4.2.5 Average Hits per Second:

This could be shown as a straight horizontal line in the Hits per Second graph.  
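As a worked example with hypothetical numbers: a 600-second run with a Total Throughput of 180,000,000 bytes and 120,000 Total Hits gives

    Average Throughput = 180,000,000 bytes / 600 s = 300,000 bytes/second
    Average Hits per Second = 120,000 hits / 600 s = 200 hits/second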

4.4.3 Transaction Summary

4.4.3.1 Transactions

Total Passed is the total of the Pass column. The number of transactions passed and failed is the total count of every transaction defined in the script, multiplied by the number of Vusers and by the number of iterations (including any repetitions within an iteration).

Transactions        Min                Avg                    Max
Transaction Name    The fastest time   The arithmetic mean    The slowest time

4.4.4 HTTP Responses Summary

HTTP 200 ("OK") is considered successful. HTTP 302 highlights a redirection ("Moved Temporarily"), a normal event. HTTP 404 is a "resource not found" error. HTTP 500 is a "server busy" error HTTP 503 is an authentication denial

4.5 Standard Graphs (Provided by Default)

These graphs are sequenced according to the order in the Controller menu option Graph > Add Graph... During analysis, this order is automatically applied from our custom template.

• Vusers
• Transactions
• Web Resources
• System Resources

5. DEFINING THE PERFORMANCE AND STRESS TESTING

5.1 Modeling the User experience

5.1.1 Simulate Realistic User Delays

Application users think, read, and type at different speeds, and it's the performance tester's job to figure out how to model and script those varying speeds as part of the testing process. This How-To explains the necessary theory for determining and scripting realistic user delays.


• Realistic user delays are important to test results
• Determine realistic durations and distribution patterns for user delay times
• Incorporate realistic user delays into test designs and test scripts (see the sketch below)
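A short illustrative VuGen sketch follows (this is a sketch, not project code: the URLs are hypothetical, and the 5-15 second range is an assumed placeholder to be replaced by observed user behavior; run-time settings can also randomize recorded think times):

    /* Sketch: randomized think time between two steps of a Vuser script. */
    Action()
    {
        web_url("home", "URL=http://myapp.example.com/home", LAST);

        /* Think time drawn uniformly from 5-15 seconds, so every Vuser
           does not pause for an identical, unrealistic interval. */
        lr_think_time(5 + rand() % 11);

        web_url("search", "URL=http://myapp.example.com/search", LAST);

        return 0;
    }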

5.1.2 Model Representative User Groups

Modeling a user community has some special considerations in addition to those for modeling individual users. This How-To demonstrates how to develop user community models that realistically represent the usage of the application, by focusing on groups of users and how they interact from the perspective of the application.

• Recognize and select representative groups of users
• Model groups of users with appropriate variance
• Consolidate groups of users into one or more production simulation models
• Identify and model special considerations when blending groups of users into single models

5.1.3 Simulate Realistic User Patterns

Applications typically allow for many ways to accomplish a task or many methods to navigate the application. While users think this is a good thing (at least most of the time), this complicates matters for testers, requiring that we look, not only at what users do within an application but also at how they do it. To effectively predict performance in production, these variants of individual user patterns must be accounted for. This How-To discusses how to apply these individual user patterns to groups or communities of users.

• Identify individual usage scenarios
• Account for variances in individual usage scenarios
• Incorporate individual usage scenarios and their variances into user groups
• Script individual usage scenarios with their variances

5.1.4 How to use the transaction markers to evaluate performance

The transactions inserted into the script, and the think times (timers) and delays between steps, are very important. We should mimic the think times and delays that match real end-user behavior and the application's usage pattern. Once the executions are complete, we need to exclude the think and delay times from the respective response times.

• Identify individual transactions and use a proper naming convention (see the sketch below)
• Identify the think times and delay times between the transactions and iterations
• Incorporate all the think times and delays in individual usage scenarios, and their variances, into user groups
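As an illustrative VuGen sketch (the URL, transaction name, and parameters are hypothetical; {pUser}/{pPassword} would be defined in VuGen's parameter list):

    /* Sketch: wrapping one business step in transaction markers. Think
       time issued via lr_think_time() is tracked separately from the
       transaction timing, so the Analysis tool can report response
       times with think time excluded. */
    Action()
    {
        lr_start_transaction("01_Login");

        web_submit_data("login",
            "Action=http://myapp.example.com/login",
            "Method=POST",
            ITEMDATA,
            "Name=username", "Value={pUser}", ENDITEM,
            "Name=password", "Value={pPassword}", ENDITEM,
            LAST);

        lr_end_transaction("01_Login", LR_AUTO);

        lr_think_time(8);  /* the user reads the landing page */

        return 0;
    }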

5.1.5 How to handle hidden fields

Generally, hidden values are generated in every application, and sometimes it is very important to handle these hidden and dynamic values: for example, "__VIEWSTATE" values in .NET applications and "JSESSIONID" values in Java applications, and so on.

• Identify the hidden values that may cause errors in scripts and executions
• Incorporate the proper functions, as the tool requires, to handle the hidden values (see the sketch below)
• If required, create re-usable actions or classes in Virtual User Generator scripts
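A hedged VuGen sketch of handling one such hidden value (the boundaries are illustrative; take the exact left/right boundaries from your own recorded server response, and the URLs are hypothetical):

    /* Sketch: capture a hidden dynamic value (here ASP.NET __VIEWSTATE)
       from the response of one request and replay it in the next POST. */
    web_reg_save_param("pViewState",
        "LB=name=\"__VIEWSTATE\" id=\"__VIEWSTATE\" value=\"",
        "RB=\"",
        LAST);

    web_url("form_page", "URL=http://myapp.example.com/form.aspx", LAST);

    web_submit_data("submit_form",
        "Action=http://myapp.example.com/form.aspx",
        "Method=POST",
        ITEMDATA,
        "Name=__VIEWSTATE", "Value={pViewState}", ENDITEM,
        "Name=amount", "Value=1000", ENDITEM,
        LAST);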

5.1.6 How to handle script and test failures

Even after all required enhancements there is still a chance of errors in scripts, so we need to add error handling to every individual script. This is the best practice for troubleshooting and for nailing down issues in the environment and the application, as in the sketch below.
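A minimal VuGen error-handling sketch (the check text, URL, and transaction name are hypothetical):

    /* Sketch: verify expected content and end the iteration cleanly on
       failure, so one bad response does not cascade into misleading
       downstream errors. */
    int status;

    /* Register a check for text expected on a successful page; SaveCount
       stores how many times it was found instead of failing the step. */
    web_reg_find("Text=Account Summary", "SaveCount=pFoundCount", LAST);

    lr_start_transaction("02_ViewAccount");
    status = web_url("account", "URL=http://myapp.example.com/account", LAST);

    if (status != LR_PASS || atoi(lr_eval_string("{pFoundCount}")) == 0) {
        lr_error_message("Account page failed validation");
        lr_end_transaction("02_ViewAccount", LR_FAIL);
        lr_exit(LR_EXIT_ITERATION_AND_CONTINUE, LR_FAIL);
    }

    lr_end_transaction("02_ViewAccount", LR_AUTO);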


5.1.7 Create Tests to Identify Points of Failure and Bottlenecks

There are at least as many ways of identifying bottleneck suspects as there are people who’ve ever observed something being slow when working on a computer. It’s our job as performance testers to identify as many of those suspects as possible and then sort, categorize, verify, test, exploit, and potentially help resolve them. This How-To demonstrates some common methods to create tests that identify bottlenecks.

• Design bottleneck-exploiting tests by eliminating ancillary activities
• Design bottleneck-exploiting tests by modifying test data
• Design bottleneck-exploiting tests by adding related activities
• Design bottleneck-exploiting tests by changing navigation paths

5.1.8 Create Tests to Optimize Critical User Actions

We use tools to determine things like "the Submit Loan page takes a REALLY long time to load when we do more than 500 submits in an hour." While that is extremely valuable information, it is only mildly useful to the person who needs to resolve the issue. The person responsible for providing a resolution needs to know what part of that page, or what process on a back-end server, or which table in the database is the cause of the issue before they can start trying to find a resolution.

• Capture metrics by tier
• Design tests to find breakpoints
• Design tests to find resource constraints
• Employ unit tests for performance optimization

5.1.9 Performance Testing Across Load Balancers (clusters).

This is failover testing across the load balancers in a production-like environment: all expected load is generated through the load balancer. Load balancers and networks shouldn't actually be causing performance problems or bottlenecks, but if they are, some configuration changes will usually remedy the problem. Load balancers are conceptually quite simple: they take the incoming load of client requests and distribute that load across multiple server resources. When configured correctly, a load balancer rarely causes a performance problem. The only way to ensure that a load balancer is configured properly is to test it under load before it is put into use in the production system. The bottom line is that if the load balancer isn't speeding up our site or increasing the volume it can handle, it's not doing its job properly and needs to be reconfigured.


Figure: Physical architecture with load balancer

5.1.10 Load Balancer - Why do I need load balancers?

We need load balancers for availability as well as performance: we want to achieve high availability. Maintenance is then no longer a nightmare and does not require downtime.

5.1.11 Scheduling and Balancing Methods

Round Robin

Weighted Round Robin

Least Connection

Weighted Least Connection

Agent-based Adaptive

Chained Failover (Fixed Weighting)

Layer 7 Content Switching
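To make the first two methods concrete, here is a small illustrative sketch (plain C, not vendor code; real load balancers also add health checks and connection tracking) of how round robin and weighted round robin pick the next backend server:

    #include <stdio.h>

    static const char *servers[] = { "app1", "app2", "app3" };
    static const int weights[] = { 3, 1, 1 };  /* app1 serves 3 of every 5 requests */

    /* plain round robin: rotate through the servers in order */
    static const char *round_robin(void) {
        static int next = 0;
        const char *s = servers[next];
        next = (next + 1) % 3;
        return s;
    }

    /* weighted round robin: repeat each server according to its weight */
    static const char *weighted_round_robin(void) {
        static int server = 0, used = 0;
        const char *s = servers[server];
        if (++used >= weights[server]) { used = 0; server = (server + 1) % 3; }
        return s;
    }

    int main(void) {
        int i;
        for (i = 0; i < 6; i++)  printf("rr  -> %s\n", round_robin());
        for (i = 0; i < 10; i++) printf("wrr -> %s\n", weighted_round_robin());
        return 0;
    }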

5.1.12 Execute Tests to Tune Specific Components Across Network Tiers

Use third-party tools to do network simulation while performance testing. We can test performance across the network tiers using the various tools available in the industry.

6. HOW TO AVOID MISTAKES IN PERFORMANCE TESTING

6.1 Unsystematic approach

• Decisions made without running the client application
• Unacceptable load generation
• Not using proper monitoring and comparisons
• No goals: no general-purpose model exists; goals determine techniques, metrics, and workload, and setting them is not trivial
• Biased goals ("to show that OUR system is better than THEIRS")
• Treating analysts as a jury
• Unsystematic approach
• Analysis without understanding the problem
• Incorrect performance metrics
• Unrepresentative workload
• Wrong evaluation technique
• Overlooking important parameters
• Ignoring significant factors
• Inappropriate experimental design
• Inappropriate level of detail
• No analysis
• Erroneous analysis
• No sensitivity analysis
• Ignoring errors in input
• Improper treatment of outliers
• Assuming no change in the future
• Ignoring variability
• Overly complex analysis
• Improper presentation of results
• Ignoring social aspects
• Omitting assumptions and limitations

6.2 Performance testing beyond test execution

We probably need to know about tuning and have an understanding of how applications should be designed to perform well (Software Performance Engineering). We don't need to be experts in, say, database tuning (most companies have DBAs for that), but we need to speak enough of a DBA's language to coordinate efforts effectively, or to raise concerns about the performance consequences of the current application design. Unfortunately, this is not easy: you need to know enough to understand what is going on and to communicate effectively.

6.3 Translate Stakeholders Language into Real Performance Goals and Requirements

Translate the stakeholders' requirements into SLAs; for example, response times.

6.4 Emulating production environments

Create pilot environments that mirror the production environment. Performance testing in the pilot environment will give results and behavior representative of production.

6.5 Handle Outliers in Performance Test Reports

An outlier is an observation that is numerically distant from the rest of the data. Statistics derived from data sets that include outliers may be misleading in terms of response times or the 90th percentile. For example, if we have failed transactions due to timeout errors, we need to find out how the timeouts occurred during the execution: was it processor queue length, or running out of memory on the web or application server? We should include graphs in our reports for these kinds of environment issues. Sometimes a network outage may cause a spike while executing the tests.

6.6 Handle Performance Data Correctly Avoiding Over-Averaging

Baselining the application is where test execution actually begins. First, all scripts need to be executed, validated, and debugged (if necessary). Create a baseline for a single-user scenario and for backend/batch process scenarios. While executing tests, evaluate results, identify the baseline, and expose potential vulnerabilities and performance threats. Perform an application performance walkthrough. Analyze first-page load time to evaluate client-side performance. Perform network analysis for a single user and tune the configuration against goals and constraints. When reporting, avoid over-averaging: report percentiles (for example the 90th) alongside the mean, since a single average across a whole run can hide spikes and outliers, as the sketch below illustrates.
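A small sketch of why averages alone mislead (plain C with hypothetical response times; the nearest-rank method is used for the percentile):

    #include <stdio.h>
    #include <stdlib.h>

    /* Nine fast responses plus one slow outlier: the mean alone hides
       the shape of the distribution, so report percentiles as well. */
    static int cmp(const void *a, const void *b) {
        double d = *(const double *)a - *(const double *)b;
        return (d > 0) - (d < 0);
    }

    int main(void) {
        double rt[] = { 1.0, 1.1, 1.0, 1.2, 1.1, 1.0, 1.3, 1.2, 1.1, 9.0 };
        int n = 10, i;
        double sum = 0.0;
        for (i = 0; i < n; i++) sum += rt[i];
        qsort(rt, n, sizeof rt[0], cmp);
        /* nearest-rank 90th percentile: the ceil(0.9 * n)-th smallest value */
        printf("mean %.2f s, 90th percentile %.2f s\n", sum / n, rt[9 * n / 10 - 1]);
        return 0;
    }

Here the single outlier drags the mean (1.90 s) above the 90th percentile (1.30 s); neither number alone describes what most users experienced.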


7. STRESS TESTING WITH LOAD RUNNER

7.1 Performance Testing as Part of the SDLC


7.2 Installing and Setting Up LoadRunner / Basics

7.3 Core Concepts

Virtual User Generator (VuGen): records Vuser scripts that emulate the steps of real users using the application.

Parameterization: also known as application data; the data is resident in the application's database. Examples: ID numbers and passwords.

Correlation: also known as user-generated data or dynamic values; it originates with the user. Examples: a new unique ID, an email address, or session IDs.

Controller: an administrative center for creating, maintaining, and executing scenarios. The Controller assigns Vusers and load generators to scenarios, starts and stops load tests, and performs other administrative tasks.

Load generators (also known as hosts): machines used to run the Vusers that generate load on the application under test.

LoadRunner Analysis: uses the load test results to create graphs and reports that are used to correlate system information and identify bottlenecks and performance issues.

Monitors: track client, server, and network resources online during scenario execution.


7.4 Benchmarking Run / Execution

To validate that there is enough test hardware available in the test environment, benchmark the business processes against the testing hardware. Take a business process and execute a small number of users against the application.

• Validates the functionality of the business process
• Potentially exposes unknown data dependencies

7.5 Test design

Based on the test strategy detailed test scenarios would be prepared. During the test design period the following activities will be carried out:

• Scenario design
• Detailed test execution plan
• Dedicated test environment setup
• Script recording / programming
• Script customization (delays, checkpoints, synchronization points)
• Data generation
• Parameterization / data pooling

7.6 Running Test

The test execution will follow the various types of test as identified in the test plan. All the scenarios identified will be executed. Virtual user loads are simulated based on the usage pattern and load levels applied as stated in the performance test strategy.

The following artifacts will be produced during test execution period:

• Test logs
• Test results


7.7 Hardware Setup

As a rule of thumb, the minimum requirement is a PIII-class machine, and every virtual user will utilize roughly 1 MB of memory.
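Using that rule of thumb (an assumption to validate against your own tool, protocol, and script complexity), a hypothetical 2,000-Vuser scenario would need roughly 2,000 x 1 MB = 2 GB of memory across the load generators, before allowing headroom for the operating system and monitoring agents.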

7.8 Performance Analysis Report

The test logs and results generated are analyzed based on performance under various loads, transactions per second, database throughput, network throughput, think time, network delay, resource usage, transaction distribution, and data handling. Manual and automated analysis methods can be used for performance results analysis.

The following performance test reports/ graphs can be generated as part of performance testing:-

• Transaction response time
• Transactions per second
• Transaction summary graph
• Transaction performance summary graph
• Transaction response under load graph
• Virtual user summary graph
• Error statistics graph
• Hits per second graph
• Throughput graph
• Downloads per second graph

Based on the Performance report analysis, suggestions on improvement or tuning will be provided to the design team:

• Performance improvements to application software, middleware, and database organization
• Changes to server system parameters
• Upgrades to client or server hardware, network capacity, or routing

7.9 Performance counters

The following measurements are most commonly used when monitoring the Oracle server:

CPU used by this session: The amount of CPU time (in tens of milliseconds) used by a session between the time a user call started and ended. Some user calls can be completed within 10 milliseconds and, as a result, the start and end user-call time can be the same; in this case, 0 milliseconds are added to the statistic. A similar problem can exist in operating system reporting, especially on systems that suffer from many context switches.

Bytes received via SQL*Net from client: The total number of bytes received from the client over Net8.

Logons current: The total number of current logons.

Opens of replaced files: The total number of files that needed to be reopened because they were no longer in the process file cache.

User calls: Oracle allocates resources (Call State Objects) to keep track of relevant user call data structures every time you log in, parse, or execute. When determining activity, the ratio of user calls to RPI calls gives an indication of how much internal work is generated as a result of the type of requests the user is sending to Oracle.

SQL*Net roundtrips to/from client: The total number of Net8 messages sent to, and received from, the client.

Bytes sent via SQL*Net to client: The total number of bytes sent to the client from the foreground process(es).

Opened cursors current: The total number of currently open cursors.

DB block changes: Closely related to consistent changes, this statistic counts the total number of changes made to all blocks in the SGA that were part of an update or delete operation. These changes generate redo log entries and hence cause permanent changes to the database if the transaction is committed. This statistic is a rough indication of total database work and indicates (possibly on a per-transaction level) the rate at which buffers are being dirtied.

Total file opens: The total number of file opens being performed by the instance. Each process needs a number of files (control file, log file, database file) in order to work against the database.

Web Server Metrics

1. Request Execution Time (ASP.NET)
2. Request Wait Time (ASP.NET)
3. Requests Current (ASP.NET)
4. Requests Queued (ASP.NET)
5. Request Bytes In Total (ASP.NET Applications __Total__)
6. Request Bytes Out Total (ASP.NET Applications __Total__)
7. Requests Executing (ASP.NET Applications __Total__)
8. Requests In Application Queue (ASP.NET Applications __Total__)
9. Available MBytes (Memory)
10. Page Faults/sec (Memory)
11. Pages/sec (Memory)
12. Pool Nonpaged Bytes (Memory)
13. Threads (Objects)
14. % Disk Time (PhysicalDisk _Total)
15. Page File Bytes (Process _Total)
16. Private Bytes (Process _Total)
17. Virtual Bytes (Process _Total)
18. % Interrupt Time (Processor _Total)
19. % Privileged Time (Processor _Total)
20. % Processor Time (Processor _Total)
21. Interrupts/sec (Processor _Total)
22. Pool Nonpaged Failures (Server)
23. File Data Operations/sec (System)
24. Processor Queue Length (System)
25. Context Switches/sec (Thread _Total)

Database Server Metrics

1. Concurrency Wait Time
2. Consistent Gets
3. Commit SCN Cached
4. CPU Used By This Session (Sesstat)
5. CPU Used When Call Started
6. Db Block Changes
7. Db Block Gets
8. Db Block Gets From Cache
9. Dirty Buffers Inspected
10. Logons Current
11. Opened Cursors Cumulative
12. Opened Cursors Current
13. Background Timeouts
14. Buffer Is Pinned Count
15. Buffer Is Not Pinned Count
16. Parse Count (Hard)
17. Parse Count (Total)
18. Parse Time Cpu
19. Parse Time Elapsed
20. Physical Reads
21. Physical Read IO Requests
22. Physical Writes
23. Physical Write IO Requests
24. Queries Parallelized
25. Redo Size
26. Sorts (Memory)
27. Sorts (Rows)
28. Table Fetch By Rowid
29. Table Fetch Continued Row
30. User Calls
31. User Commits
32. Work Area Memory Allocated
33. Available MBytes (Memory)

7.10 Performance Metrics

The common metrics selected/used during performance testing are listed below.

Response time: the interval between a user's request and the system's response.

Turnaround time: the time between the submission of a batch job and the completion of its output.

Stretch factor: the ratio of the response time with a single user to that with concurrent users.

Throughput: the rate at which requests are serviced (requests per unit of time). Examples:

• Jobs per second
• Requests per second
• Millions of Instructions Per Second (MIPS)
• Millions of Floating Point Operations Per Second (MFLOPS)
• Packets Per Second (PPS)
• Bits per second (bps)
• Transactions Per Second (TPS)

Capacity:

Nominal capacity: the maximum achievable throughput under ideal workload conditions, e.g., bandwidth in bits per second. The response time at maximum throughput is typically too high to be acceptable.

Usable capacity: the maximum throughput achievable without exceeding a pre-specified response-time limit.

Efficiency: the ratio of usable capacity to nominal capacity. Alternatively, the ratio of the performance of an n-processor system to that of a one-processor system is its efficiency.

Utilization: the fraction of time the resource is busy servicing requests; for memory, the average fraction used. A worked example follows.
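As a worked example with hypothetical numbers: a disk that is busy for 45 seconds during a 300-second measurement interval has a utilization of 45 / 300 = 15%. Likewise, if nominal capacity is 1,000 requests per second but only 750 requests per second can be sustained without breaching the response-time limit, usable capacity is 750 requests per second and efficiency is 750 / 1,000 = 75%.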

8. DATA PRESENTATION

8.1 Data Presentation at different levels in the organization

Data presentation differs depending on the audience, for example when participating in an execution review meeting. The execution and analysis data should be presented at low granularity and without think times or delay times. The data report should cover individual transaction counts and Windows resource utilization across the web and application tiers, plus data for any external services and shared services involved.

8.2 How to organize efficient data graphs

Organize the data so that it is easy to understand, and correlate the graphs properly with the required supporting information. Example: throughput, hits per second, and average transaction response times, together with any spikes observed in the environment.

8.3 Summarize Results Across Tests Runs efficiently

Provide a brief summary for every graph after each execution, so results can easily be compared with previous executions. The best practice is to create reports as templates and then apply those templates as required after every execution.

8.4 Use Degradation Curves in Reports

It is very important that our data graphs display degradation curves properly, with an exact summary for each spike. If the graphs show degradation, we need to do root-cause analysis and help the developers locate exactly where the problem occurs.

8.5 Report Abandonment and Other Performance Problems

Performance problems may differ from execution to execution, because there can be many issues across the environment: web server and application server configuration files, the network, virtual IPs, load balancers, external services, shared services, and firewalls.

8.6 LoadRunner Solution Aiding with Reporting Analysis

LoadRunner provides full-fledged reporting for analysis after every execution, in Crystal Reports, HTML, or Microsoft Word format.

9. PERFORMANCE TESTING FOR CAPACITY PLANNING AND SCALABILITY

9.1 Understanding the environment


9.2 Performance testing to aid in checking Availability and sustainability

• Allows data to be written, read, and destroyed without affecting production users
• Allows the test system to be rebooted during test runs without affecting production users
• Should mirror the production system
• Needs business processes that are functioning correctly
• Should include benchmark runs
• Must contain sufficient hardware to generate the test load


9.3 Evaluate the testing infrastructure against the footprint.

• Do I have enough hardware to generate the user load?
• Do I have enough memory?
• Do I have enough CPUs?

9.4 Manual Testing is Problematic

10. BEST PRACTICES - PERFORMANCE TESTING


Figure: Performance Best Practices

10.1 Performance Testing Activities

• Identify the test environment
• Identify performance acceptance criteria
• Plan and design tests
• Configure the test environment
• Implement the test design
• Execute the test
• Analyze results, report, and retest

10.2 Identify the test environment

Identify the physical test environment and the production environment as well as the tools and resources available to the test team.

• The physical environment includes hardware, software, and network configurations.
• Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning, and helps you identify testing challenges early in the project.
• In some situations, this process must be revisited periodically throughout the project's life cycle.

10.3 Identify Performance Acceptance Criteria

• Identify the response time, throughput, and resource utilization goals and constraints.
• In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern.
• Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which combination of configuration settings will result in the most desirable performance characteristics.


10.4 Plan and design tests

Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish metrics to be collected.

Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.

10.5 Configure the test environment

Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test.

Ensure that the test environment is instrumented for resource monitoring as necessary.

10.6 Implement the test design

Develop the performance tests in accordance with the test design.

10.7 Execute the Test

Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.

10.8 Analyze Results, Report, and Retest

Consolidate and share results data. Analyze the data both individually and as a cross-functional team. Reprioritize the remaining tests and re-execute them as needed. When all of the metric values are within accepted limits, none of the set thresholds have been violated, and all of the desired information has been collected, you have finished testing that particular scenario on that particular configuration.