Getting started with performance testing

Tom Miseur, Technical Specialist

Transcript of Getting started with performance testing



Alright, well thank you very much Elizabeth! And thank you all for joining us today. So, let's dig in. As is true with most complex undertakings, especially in an enterprise software setting, planning is king.

Proper Planning and Preparation Prevents Poor Performance

There's a famous quote here that I was admittedly somewhat hesitant to use... It is known as "the 7 P's"... but what you'll see in front of you is actually the condensed version featuring only 6 P's just so we don't get into any trouble here... you can probably guess what that 7th P might have been: Proper Planning and Preparation Prevents ... Poor Performance!

This is of course very appropriate given that we are dealing with performance testing! And it is true to a tee: there are so many variables involved in performance testing that it's important they are all accounted for. But before I delve into what all these variables are, I will point out that I probably won't be covering every single one of them; that might take a bit longer than we have time for today. Instead, I'll be talking about the bare necessities, the things you will always have to think about regardless of what type of software you are testing. With that said, let's begin our journey...

Performance Test Plan
First things first: Proper Planning
- Define requirements & objectives
- Establish testing scope
- Outline approach

So in the performance testing world, this plan is aptly called the Performance Test Plan, or PTP if you're a fan of acronyms. The purpose of the PTP is to encompass all of these different variables in a format that is easily digestible by its audience. And that audience can stretch all the way up to upper management, so it's important that the content is laid out in layman's terms as often as possible. At the very least, the PTP should define:

- what the testing requirements & objectives are
- the scope of the testing
- and details surrounding the overall testing approach itself


Defining requirements & objectives
- Which parts of the system should we focus our efforts on?
- What kind of user concurrency are we looking to achieve?
- Where are we most likely to experience bottlenecks?
- What kind of tests do we want to run? Baseline, soak, stress, volume, etc.

Gathering test requirements is tricky business in and of itself. It requires an understanding of the business as a whole, as well as the architecture that's going to be supporting it. On the business side, one needs to have some idea of what kind of load is to be expected when the software is put into production. This is where the term 'concurrency' comes into play; "how many 'concurrent users' are we likely to expect when we open our website to the public?" is a classic example. Working out what this figure should be is a bit of a science in itself. In most cases, the total user population is a known figure, but how that relates to actual concurrency at any given point in time might not be so obvious. Often, you will need to delve into log files to work out a more accurate figure. Data gleaned from tools like Google Analytics can really help build an accurate picture of how much your software is actually being used.
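To make that a bit more concrete, here is a rough back-of-the-envelope sketch in Python using Little's Law (concurrent users are roughly the session arrival rate multiplied by the average session duration). The figures are invented purely for illustration; in practice they would come from your server logs or analytics data.

```python
# Rough back-of-the-envelope concurrency estimate using Little's Law:
#   concurrent users ~= arrival rate (sessions per second) x average session duration (seconds)
# The figures below are invented for illustration; in practice they would come
# from server logs or a tool like Google Analytics.

sessions_per_hour = 6000        # hypothetical peak-hour sessions from analytics
avg_session_duration_s = 300    # hypothetical average visit length of 5 minutes

arrival_rate_per_s = sessions_per_hour / 3600.0
concurrent_users = arrival_rate_per_s * avg_session_duration_s

print(f"Estimated peak concurrency: ~{concurrent_users:.0f} users")  # ~500 users
```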

Another business-oriented question that may need answering would be "Which parts of the system should we focus our efforts on?". The reasons here are two-fold: First, this core functionality is likely the revenue-generating part of the software, or the stuff that would render your software useless if it wasn't working, and so it is important it is tested. Secondly, as we'll see later on, *time* is ultimately going to limit how much performance testing you can actually do, and so you will be required to prioritize testing of the parts of the system that are most important to the business.

Architectural questions might be, "which parts of the system are most likely to cause bottlenecks?". This is a bit more difficult to answer, because these parts of the system might not necessarily be the same as those considered core to the business. They might be things like batch processes running overnight, performing some clean-up routines - you wouldn't want those running into the morning when your users start coming back online!

The objectives should also state the types of tests that are going to be run. It usually won't be a case of running "a" performance test and that's that. Performance testing involves running all manner of different tests, each providing unique insight that combined will give the complete picture, and ultimately the confidence needed to release quality software.


Establishing testing scope
- What are our testing constraints?
- What can we realistically test in the given timeframe?
- Which parts of the system are outside of our control?

It's also worth mentioning what kind of constraints you may encounter. One of the harsher realities with performance testing is that there will only ever be so much you can test before you simply run out of time; eventually, it'll become difficult to justify delaying the release of software any further on the grounds that you haven't managed to get around to performance testing some really obscure part of the system.

Other constraints may be that your system relies on some 3rd party system that you have no control over, and may not even be *allowed* to performance test without some prior agreement that it would be OK to do so. It is important to make others aware of what is and isn't within the scope of the testing, along with the reasons why, and what potential performance issues those out of scope items may present to the system as a whole.

A good example is Salesforce. We hear a lot of customers wanting to test the performance of their Salesforce deployment, or more specifically the customizations that have been made to Salesforce to have it interact with all of the various systems it pushes & pulls data to & from in your specific environment. But this presents a number of interesting challenges: Which environment are you going to run the test on? If it's a sandbox environment, how will you know how it compares to the Production environment? If you're going to test on the Production environment, you'll inevitably be putting load on Salesforce's own servers and they might not be too happy with that. Should you include modules such as Chatter, the functionality that gives you the ability to chat to other Salesforce users? Probably not; it is fairly safe to assume that Salesforce have done their own performance testing, and have ensured that the chat functionality is given lower priority than other more important functionality on a given page. Or perhaps that's not the case...? Lots of questions there.

I hope you're starting to appreciate the need for Proper Planning, and for there to be agreement on what the eventual testing is going to consist of.

Defining the test approach
- Which software are we going to use?
- Do we need to reset anything between tests, in order to re-use data?
- What kind of test data do we need?
- Server monitoring?

The PTP will also allow you to lay out your intended approach. That is to say, which software you are going to use to conduct the testing, what kind of test data your tests will need, and how you're going to go about generating that data. Sometimes you may even need to reset the environment you are testing against somehow; this is especially true if you are *creating* data. What about testing on an environment that features lots of already-generated data, rather than testing against an empty database?

Eventually, the PTP will contain everything that's needed in order to perform the testing, it'll have been signed off by stakeholders, and you'll be ready to implement it. This is finally where the software steps in!

Studio

In order to be able to conduct these kinds of tests, you need the ability to automate all manner of different protocols. But just as important is having a tool with the flexibility to support all of the different variables we've just been talking about.

Let's see how this plays out with eggPlant Performance. What you are looking at now is the Studio component in use, which is where all of the design aspects of performance testing take place. The first step to developing your performance test is creating the scripts that replace the need for real people sitting in front of computers 'doing stuff' at the same time; that's just not a very scientific or accurate way to test.

The nature of these scripts will depend on what it is you are testing, and how they are created will also depend on it. For example, creating scripts that simulate Web traffic, namely the sending and receiving of HTTP messages (which is essentially what sits behind the internet), involves using a browser together with a proxy recorder that listens in on that HTTP conversation and captures it into a script. This is quite important, because it could take a long time if you were to have to write this kind of script from scratch; some web sites can be very complex behind the scenes, especially if they've been generated by some kind of framework. WordPress is an example that springs to mind here.
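To give a feel for what a recording like that boils down to, here is a minimal, hypothetical sketch in Python using the 'requests' library. It is not the code eggPlant Performance generates, just an illustration of a recorded HTTP conversation turned into replayable steps; the URLs and form fields are invented.

```python
# A minimal, hypothetical sketch of the kind of HTTP conversation a proxy recorder
# captures and turns into a replayable script. This is NOT the code eggPlant
# Performance generates; the URLs and form fields are invented for illustration.
import requests

session = requests.Session()

# Recorded step 1: load the home page
session.get("https://www.example.com/")

# Recorded step 2: submit the login form with the credentials used during recording
session.post("https://www.example.com/login",
             data={"username": "recorded_user", "password": "recorded_password"})

# Recorded step 3: view a page that requires the logged-in session
session.get("https://www.example.com/account")
```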

The result of this recording is a script that will usually need some kind of tweaking in order to get it to do what you want. For example, you might have logged on to the website using a set of credentials. Those credentials would have made their way into the script, so you would want to be able to modify this script so that you are instead feeding in login credentials from a file, from which each Virtual User can pick its own credentials.
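As an illustration of that idea, the sketch below shows one way per-Virtual-User credentials might be read from a CSV file. The file name, column names, and vu_index are assumptions made for this example; eggPlant Performance has its own data-binding features for this task.

```python
# A minimal sketch of data-driven credentials: each Virtual User picks its own row
# from a CSV file instead of replaying the credentials captured at record time.
# The file name, column names, and vu_index are assumptions made for this example.
import csv

def load_credentials(path="credentials.csv"):
    # Expects a header row with columns: username,password
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def credentials_for(vu_index, credentials):
    # Sequential allocation: VU 0 gets row 0, VU 1 gets row 1, wrapping around if needed
    row = credentials[vu_index % len(credentials)]
    return row["username"], row["password"]

credentials = load_credentials()
username, password = credentials_for(vu_index=0, credentials=credentials)
```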

One of the benefits of using eggPlant Performance is you will have the ability to define this kind of task using Generation Rules rather than actually modifying the script code itself. We built Generation Rules to make the computer do all of these script edits rather than the end user, which ultimately gets you up and running with a script faster than if you were to do those manual edits yourself. Another common situation where this applies is the process of "correlation". This is the task of taking some dynamic value that the server generates and sends to the client, and ensuring that the script uses this value rather than whatever was planted into the script during the recording. Automating this process allows you to get going quicker and in a much less error-prone way.
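Here is a generic sketch of what correlation amounts to, assuming a hypothetical hidden form field called session_token. Again, this illustrates the concept rather than how Generation Rules express it.

```python
# A generic sketch of "correlation": capture a server-generated value from one
# response and feed it into the next request, instead of replaying the stale value
# baked into the recording. The field name "session_token" and the URLs are
# hypothetical, invented purely for this example.
import re
import requests

session = requests.Session()
response = session.get("https://www.example.com/start")

# Extract the dynamic value from the response body (e.g. a hidden form field)
match = re.search(r'name="session_token" value="([^"]+)"', response.text)
token = match.group(1) if match else ""

# Re-use the freshly captured value in the follow-on request
session.post("https://www.example.com/submit", data={"session_token": token})
```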

The recording and script generation "phase", if you like, can take a significant amount of time in the overall performance testing process, especially if there are a lot of business processes that need scripting. But once done, you'll be able to use them for all manner of different tests, and sometimes even outside of performance testing. A classic example is a script that creates a new user on your system. This kind of script will likely come in handy for your functional testing team too. It's usually a laborious task, and you've just automated it... So put people out of their misery and tell them you can do it at the press of a single button, and they will absolutely love you for it!

Test Design
- How many Virtual Users?
- How long should the test run for?
- How quickly should the VUs ramp up to full concurrency?
- What test data should they use?
- Connection/response timeout?
- Pick data sequentially/randomly?
- Group-specific start times?
- Max number of errors before terminating?
- Think-time between VU actions?
- Logging level?
- How many injectors to use?

Let's move on to test design. You've got your tests laid out in the PTP, and so now it's time to translate those into the tool. With eggPlant Performance, you've got all the flexibility you need to design all manner of tests. The basics are having full control over things like:

- the number of Virtual Users you want to run,
- how long they should run for,
- how quickly they should ramp up to the desired concurrency,
- and what data they should use, like which credentials to log in as
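As a rough illustration only, those settings (plus a few from the slide) might be pinned down along these lines. The names below are invented for the example and do not reflect eggPlant Performance's actual configuration format.

```python
# A rough illustration of the settings a test design typically pins down. The names
# are invented for this example, not eggPlant Performance's configuration format.
test_design = {
    "virtual_users": 2000,           # target concurrency
    "duration_minutes": 60,          # how long to hold full load
    "ramp_up_minutes": 15,           # how quickly VUs reach full concurrency
    "data_file": "credentials.csv",  # per-VU test data
    "data_selection": "sequential",  # or "random"
    "think_time_seconds": (5, 15),   # pause between VU actions
    "max_errors": 100,               # terminate the run beyond this many errors
    "logging_level": "standard",
    "injectors": 4,                  # load-generating machines
}
```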

There are lots of other settings that I won't delve into here, but be assured we've got functionality to cover them all. Suffice to say that all of the tests mentioned on the first slide are possible to implement in the tool, be it a soak test running for several hours, or a stress test aiming to establish at what point the servers are no longer able to cope with the load being generated.

Test Controller

This is the fun bit. Now you're ready to actually run your tests! This happens in the second eggPlant Performance component, namely Test Controller. Here, you'll want to be able to look at what your Virtual Users are getting up to. Each Virtual User will be maintaining its own Event Log, where you can view things like verification messages that you might have planted into your scripts during key transactions, as well as any errors they might be encountering. Useful message output could be things like server-generated identifiers for when something new has been created, like an "order ID".

Along with these event logs, you'll also want to be able to get a feel for the overall health of the system you are trying to test. Hopefully you've configured server Monitoring so that you can have a look at what's going on with the servers you are loading as the test runs. This is where you'll start to derive value out of running these tests; you will not only be able to say "my average response times were 5 seconds at 2000 concurrent users", you'll also be able to say "the web server was seeing CPU usage averaging around the 80% mark". Imagine then, there was a requirement for the application to be able to support 3000 users. You know that's going to cause some problems!
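As a quick back-of-the-envelope illustration of that worry, a naive linear extrapolation of the example figures looks like this. Real systems rarely scale linearly, so treat it only as an early warning sign, not a prediction.

```python
# A naive linear extrapolation of the example figures: if CPU averages 80% at
# 2000 concurrent users, what might it look like at the required 3000? Real
# systems rarely scale linearly, so treat this only as an early warning sign.
measured_users, measured_cpu = 2000, 80.0
required_users = 3000

projected_cpu = measured_cpu * (required_users / measured_users)
print(f"Projected CPU at {required_users} users: ~{projected_cpu:.0f}%")  # ~120%, i.e. saturated
```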

Analyzer

Eventually, your test will complete, making it available to the third and final component of eggPlant Performance, namely Analyzer.

Here you'll have a wealth of information at your disposal, based on what was collected during the test. From charts depicting the change in response times for all of your transactions over the course of the test, to tables listing their overall statistics at the end of the test. Examples include the ever-so-important average response times, minimum and maximum response times, how many times each transaction was executed, percentiles, standard deviation,... All of these figures help build the complete picture of the performance of your application.
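For illustration, the sketch below computes those kinds of summary statistics from an invented list of response times. Analyzer produces these figures for you; this just shows what they mean.

```python
# A sketch of the summary statistics Analyzer reports per transaction, computed
# here from an invented list of response times (in seconds) for illustration.
import statistics

response_times = [1.2, 1.4, 1.1, 2.3, 1.8, 5.0, 1.6, 1.9, 2.1, 1.5]

summary = {
    "count": len(response_times),
    "average": statistics.mean(response_times),
    "minimum": min(response_times),
    "maximum": max(response_times),
    "std_dev": statistics.stdev(response_times),
    # quantiles(n=10) returns the nine decile cut points; index 8 is the 90th percentile
    "90th_percentile": statistics.quantiles(response_times, n=10)[8],
}

for name, value in summary.items():
    print(name, round(value, 2) if isinstance(value, float) else value)
```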

If you monitored any servers during your test, these figures and the corresponding charts will appear in Analyzer too, allowing you to make correlations like the one mentioned earlier.

Once you've exposed all of the useful information there is to glean from the results, it is time to compile that information into a report. Let's call it the PTR so we start and finish on an acronym. The Performance Test Report condenses the information into a format that is easy to interpret for almost anyone in the organization, much like the Performance Test Plan. This is important, because chances are that the information will be shared with stakeholders who likely aren't as technical as the person actually running the tests.