Automated Performance Testing with HP LoadRunner


Best Practices for Tableau Server

Contents

OVERVIEW

SCOPE

INTRODUCTION TO TABLEAU SERVER

THE PERFORMANCE TESTING CHALLENGE

GENERAL PERFORMANCE TESTING BEST PRACTICES

1. Get Stakeholder Agreement on Required Performance Levels
2. The Required Performance Levels Should Align with Business Goals
3. Defining Required Performance Levels Can Bring Certainty to Test Evaluation
4. Performance Testing Is a Collaborative Effort
5. An Accurate Baseline Test Is Needed for Ongoing Performance Testing and Monitoring
6. Don’t Mix Functional Testing and Performance Testing
7. Early and Frequent Performance Test Reporting is Good
8. Visual Performance Test Reporting Helps Reveal Important Trends

GETTING STARTED IN PERFORMANCE TESTING OF TABLEAU SERVER

TEST PLANNING

1. Environmental Considerations
2. Test Scheduling
3. Test Objectives

CONFIGURATION OF TABLEAU SERVER AND LOADRUNNER SERVER

1. Setting Up JavaScript
2. Defining LoadRunner Settings
3. Setting Up Microsoft Windows Servers (LoadRunner Server)

CONFIGURING TRUCLIENT GENERAL SETTINGS

TABLEAU-SPECIFIC RECOMMENDATIONS

1. Preferred Recording Protocol for Creating Test Scripts
2. Approaches for Authentication
3. Handling Session IDs
4. Server vs. Browser Rendering
5. Timing Considerations
6. Varying Filter Selections
7. Scripting Advanced Mouse-driven Actions
8. Testing Initial Platform Configurations
9. Object Parameterization in TruClient

DEBUGGING TRUCLIENT SCRIPTS

1. Automatic Leveling During Replay
2. How to Resolve Object Identification Issues

TEST EXECUTION

TEST ANALYSIS

SUMMARY

APPENDICES

A. SECURITY SETTINGS

B. ADDITIONAL INSTRUCTIONS AND RESOURCES

C. REFERENCE TEST


Overview

This guide covers Tableau’s recommended approach for implementing performance tests for Tableau Server. The primary audiences for this document are:

• The Tableau administrators in your organization who are responsible for defining the goals and requirements for Tableau performance testing.

• The testing team responsible for designing and implementing performance tests, then communicating results back to the Tableau administrators and other internal stakeholders.

This guide describes some of the specific considerations that the testing team will want to incorporate into their performance testing procedures.

Scope

Although this paper focuses on HP’s LoadRunner™, some of the general principles may also be applicable to other performance-test tools. However, many of the techniques described are specific to the use of HP’s TruClient protocol.

Introduction to Tableau Server

Tableau Server is an enterprise analytics platform for sharing interactive dashboards and visualizations built in Tableau Desktop. Tableau Server provides a highly scalable, governed collaboration platform.

Anyone in an organization can access Tableau Server using web browsers, mobile devices, or Tableau Desktop on Windows or Mac. When deploying Tableau Server, enterprises have flexibility in the sizing and configuration of server resources (processing, memory, IO configurations, etc.). The data sources and resulting interactive dashboard are flexible and can have varying levels of complexity.


The Performance Testing Challenge

IT organizations face several challenges when sizing an initial deployment and planning for server capacity. They must consider both the potential number of users and the desired flexibility of content.

The IT organization responsible for configuring and deploying the Tableau Server environment must ensure that the environment will meet the needs of the business while minimizing costs and risk for application failures. Enterprises will often require performance testing as part of the sizing and provisioning activity for server resources.

In this guide, we will not cover Tableau Desktop or Tableau Mobile performance testing. Desktop load on Tableau Server is usually minimal, and mobile performance can be inferred from web-browser testing, as mobile devices generally place a similar load on Tableau Server as desktop browsers.

You may find it helpful to utilize HP’s new Native Mobile protocol, which provides a way to record and replay native mobile applications on both Android and iOS devices. This protocol enables you to record mobile interactions accessing a mobile web site through a device’s browser and create a TruClient script. Like other TruClient scripts, you can enhance test scripts of mobile devices by utilizing standard TruClient functionality including parameterization, transactions, and JavaScript coding.

The Native Mobile protocol uses actual mobile devices in a centralized mobile-device lab hosted by HP instead of simulated devices.

In the context of performance testing, customers are mostly concerned with the single-user page load time as well as interaction response times for users accessing Tableau Server from browsers like Internet Explorer (IE), Chrome, or Firefox.

Tableau Server is implemented as a highly interactive client-server application using HTML5 canvas and asynchronous JavaScript. The user experience is that of a highly graphical, interactive, and responsive desktop application, all in a web browser.

HP’s LoadRunner supports several protocols for web applications like Tableau Server, including HP’s TruClient protocol. Ajax TruClient is an advanced protocol developed for LoadRunner that supports modern JavaScript-based applications, including AJAX.

The TruClient protocol emulates user activity within a web browser. Scripts are developed interactively in IE or Firefox.

Describing all the features and functionality of TruClient is beyond the scope of this guide. However, the Appendix includes helpful references on TruClient.


Before we discuss performance testing practices specific to HP’s TruClient and Tableau Server, there are some general principles that are helpful to understand regardless of the toolset or application under test.

General Performance Testing Best Practices

While the purpose of this guide is to outline Tableau’s recommended approach for implementing performance tests for Tableau Server, there are some general recommended practices for performance testing that deserve mention.

1. Get Stakeholder Agreement on Required Performance Levels

This may be the most challenging task in performance testing. Stakeholders can include end users, business management, IT management, and others who are impacted by Tableau’s performance. It is not uncommon for various stakeholders to have different expectations of adequate performance.

The challenge is to reach consensus while staying within technical constraints. For example, achieving some stakeholders’ performance expectations might cost more in terms of environment upgrades than the organization is willing to pay.

2. The Required Performance Levels Should Align with Business Goals

The return on investment of performance testing pays out in the form of user productivity. While performance testing requires investments in tools, environments and people, the costs are outweighed by reductions in delays and gains in productivity.

For example, for highly-interactive applications such as Tableau, a response time of 0.1 to 0.2 seconds, considered instantaneous, boosts user productivity. On the other hand, response times of 1 to 2 seconds may seem slow to some users.

Faster application performance and higher user efficiency yield tangible economic value for the organization.

3. Defining Required Performance Levels Can Bring Certainty to Test Evaluation

Since performance levels can be subjective, it is helpful to define them in measurable ways. This includes defining required response times in given usage scenarios, load levels, and processing timeframes. It also includes testing for peak load, average load, and maximum load levels. The clearer the definition of expected performance levels, the easier test design and evaluation becomes.


4. Performance Testing Is a Collaborative Effort

In addition to specialists responsible for the planning, execution and evaluation of performance tests, people in other roles have important points of involvement. These include:

• Product vendors (this may include vendors in addition to Tableau)

• Architects

• Testers (both performance testers and functional testers)

• Database administrators

• Security administrators

• System administrators

• Network administrators

5. An Accurate Baseline Test Is Needed for Ongoing Performance Testing and Monitoring

A baseline of performance test results, along with a defined associated system configuration, will reveal whether application performance is improving or declining. If system performance starts to degrade, a common question asked is, “What changed to cause the slower performance?” A baseline provides a way to compare two performance tests to help determine what is contributing to the slower performance.

It is crucial to have stable, controllable configurations and environments for a baseline. Once variations are introduced (through a new version of the application, an updated operating system, or new hardware), a new baseline is generated. This is common as baselines will evolve over time.

It is also important to have the current release of Tableau Server installed.

6. Don’t Mix Functional Testing and Performance Testing

The goal of functional testing is to find defects and failures revealed by test conditions that exercise the norms and extremes of an application. For example, boundary values and state transitions are often tested by functional tests.

Performance testing has a different goal: to measure application performance under a variety of conditions (load, data, and environmental) while performing normal tasks.


Typical user scenarios provide sufficient basis for performance testing in most cases. Complex functional conditions and processes often introduce additional and unrealistic factors to a performance test.

The following types of scenarios are often seen as useful in performance testing:

• Common and frequently-used scenarios

• Business-critical scenarios

• Performance-intensive scenarios (such as those that require intensive computations)

• Scenarios that validate contractual obligations and service level agreements

7. Early and Frequent Performance Test Reporting is Good

When discovered early, performance issues can be addressed at lower cost and lower risk than at a later stage. With late-stage performance problems, the root causes may be so deeply ingrained in the system and application architecture that solutions may be limited.

8. Visual Performance Test Reporting Helps Reveal Important Trends

Having a robust performance test dashboard helps everyone involved understand whether performance goals are being met. LoadRunner has a dashboard, and as we will see in the upcoming section on Test Analysis, Tableau itself can be a very effective tool for displaying performance test results.


Getting Started in Performance Testing of Tableau Server

A performance test of Tableau Server should be treated as a project with its own set of objectives, resources, and requirements.

Five major activities are addressed in this guide and should be reflected in your own performance test planning for Tableau Server:

• Test planning

• Configuration of Tableau Server and LoadRunner

• Scripting of tests

• Test execution

• Test analysis

Test Planning

As with any type of testing, performance-test planning requires allotting enough time to obtain the resources needed to perform the desired level of testing. Resources include both human and technical elements, including:

• Performance testers

• Stakeholders

• Performance test environment, which includes tools and test data

1. Environmental Considerations

The test environment should be representative of the target implementation environment in terms of CPU type and speed, hard drive capacity, speed and usage, memory capacity, and platform diversity (operating system and browser combinations). Judge environment configuration in terms of general sizes (small, medium and large), memory size, number of CPUs, and processor speeds.


While it may be impractical to achieve an exact duplicate image of the production environment, the more closely the performance test environment can mirror production, the better.

There are other environmental considerations:

1. When planning performance tests, consider co-existing applications on the same server, as they might consume computing resources and compete with Tableau Server. Generally, you do not want security settings, anti-virus software, or any background services with disk- or CPU-intensive operations to interfere with the testing application and server performance.

2. To maintain consistency, tests should be rerun on the same environment.

3. Environment settings should distinguish between physical and virtual machines. In the case of VMs, the servers need to be dedicated to the load test and not shared with other applications, users, etc.

4. The client machine plays an important role in load testing. Saturating resources on a client machine gives the false impression that the server is not fully utilized. Take advantage of LoadRunner’s distributed clients feature. More details follow in the section titled “Tableau-Specific Recommendations” under “Server vs. Browser Rendering.”

2. Test Scheduling

Part of the test plan is the test execution schedule that defines when the tests will be performed and how long the tests will take. The time required for testing will depend on:

• The number of tests performed

• The expected duration of each test

• Time needed to resolve any issues

• Time needed for re-testing

• Scheduling tests to run in off-hours to ensure the system is not accessible by users or other systems (like web apps with embedded applications, tabcmd or tabadmin scripts, scheduled jobs, etc.) that could interfere and yield inaccurate results.


3. Test Objectives

The test plan also includes test objectives, which define what the performance test is intended to accomplish.

Here are a few examples of test objectives:

• Confirm that Tableau Server response times are within pre-defined acceptable limits given N concurrent users.

• Confirm that Tableau Server users can complete tasks within pre-defined times given N concurrent users using a variety of workbooks.

In addition, performance profiles must specify:

• The target performance levels expected to be seen by users

• The expected load levels of concurrent users at various times, which often include:

• Peak load times (such as a certain time of day, or events that occur on a monthly or weekly basis)

• Maximum load levels (the highest level of concurrent user load expected at any given time)

• The workbook mix used for different load-testing scenarios
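To make a performance profile concrete, the items above can be captured as simple structured data. A minimal sketch in JavaScript follows; every number and workbook name is a made-up placeholder, not a recommendation.

```javascript
// Hypothetical performance profile; all values below are placeholders.
const performanceProfile = {
  targetResponseSeconds: 2,   // target level expected to be seen by users
  peakConcurrentUsers: 200,   // e.g. a recurring weekly peak
  maxConcurrentUsers: 350,    // highest concurrent load expected at any time
  // Workbook mix: the share of virtual users assigned to each workbook scenario.
  workbookMix: { "Sales Overview": 0.6, "Shipping Detail": 0.4 },
};

// Sanity check: the scenario weights should sum to 1.
const totalWeight = Object.values(performanceProfile.workbookMix)
  .reduce((sum, w) => sum + w, 0);
console.log(totalWeight); // -> 1
```

Writing the profile down as data like this makes it easy to review with stakeholders and to drive the load-test configuration from a single source.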

Configuration of Tableau Server and LoadRunner Server

This section shows how to configure Tableau Server and LoadRunner Server to achieve an accurate performance test with minimal troubleshooting effort.

1. Setting Up JavaScript

Before recording Tableau transactions, JavaScript should be enabled on the LoadRunner server. To enable JavaScript, run the regsvr32 jscript.dll command from a command prompt.

2. Defining LoadRunner Settings

There are two tasks:

1. Launch VuGen and go to Tools → Options → Scripting → Script Management

2. Make sure that .js and .java are allowed file extensions


3. Setting Up Microsoft Windows Servers (LoadRunner Server)

In addition to Firefox settings, Internet Explorer settings should also be configured properly even if you are using TruClient Firefox, as Firefox inherits some of the settings from IE.

However, Firefox doesn’t inherit all the IE settings. You need Firefox security and other settings properly configured for Firefox TruClient protocol. JavaScript, for instance, will have to be enabled in Firefox settings.

To cover your bases, some of the settings need to be correct in IE as well.

For example, raising the security level of the Internet Zone (Tools → Internet Options → Security tab) in Internet Explorer from the default level ("medium-high") to "high" causes a problem for Firefox. Below are the correct procedure and settings for Windows Server 2012, 2008, and 2003:

1. Launch IE.

2. Go to Tools → Internet Options.

3. Go to Security.

4. Uncheck “Enable Protected Mode.”

5. Click on “Custom Level” and follow the settings from the screenshots found in Appendix A – Security Settings.

Fig. 1 – IE Security Level

Fig. 2 – IE Security Settings


Configuring TruClient General Settings

As we saw in the previous section, it is important to have the correct browser security settings configured to allow successful completion of performance tests. Here are screenshots from TruClient’s Browser Settings under TruClient General Settings.

You can set the parameters according to your needs, but this series of screenshots shows how we configured TruClient when recording our set of sample scripts.

Fig. 3 – TruClient’s Browser Setting icon

Fig. 4 – Proxy Settings


Fig. 5 – Advanced Settings

Fig. 6 – Interactive options


Tableau-Specific Recommendations

In planning your performance test of Tableau Server, there are key test conditions to be considered. These considerations are vital to the accuracy of your tests.

1. Preferred Recording Protocol for Creating Test Scripts

The way to find the best protocol is to run the “Protocol Advisor” in VuGen. The Protocol Advisor suggests the TruClient protocols (IE and Firefox), Click and Script, and Web HTTP/HTML:

• TruClient AJAX – IE

• TruClient AJAX – Firefox

• Ajax - Click and Script

• Web HTTP/HTML

TruClient protocols, along with Click and Script, are preferable because Tableau is a graphically-rich application and TruClient protocols are able to handle it best. TruClient has distinct advantages in testing AJAX applications like Tableau Server, such as:

• Faster and easier scripting as compared to traditional approaches

• AJAX support (TruClient is asynchronous by design)

• Scripts that are easy to read and understand

• Ease of maintenance

2. Approaches for Authentication

There are three possible options for authentication. These are not mutually exclusive in the context of Tableau. You will have either local authentication or Active Directory authentication, but there is also the option of using guest accounts. There are clear benefits and downsides to each approach:


Guest Account

Using guest accounts is the simplest and most recommended approach. Each guest account will act as a new user. This method also skips the security impact (which could skew test results) and focuses on rendering the results from Tableau Server. This is also the best balance in minimizing administrative overhead and getting a realistic test. Even though you are using the same guest account, each session ID is unique.

Local

This method is less effective than using guest accounts, but may be a possibility for some organizations.

Active Directory

This would involve creating hundreds of dummy test accounts and is not recommended for automated performance testing. If true authentication testing is desired, it should be handled with live testers.

3. Handling Session IDs

LoadRunner can handle session ID information returned from the server in different ways, such as ignoring session IDs or using them. Performance engineers often correlate session IDs to avoid script failure in load mode.
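As a rough illustration of what correlation means, the sketch below captures a session token from a server response so that it could be reused in later steps. The header name and response text are invented for the example; Tableau Server’s actual session fields differ.

```javascript
// Hypothetical response text; "X-Session-Id" is an invented header name.
const response = "X-Session-Id: 3F2A9C\r\nContent-Type: text/html";

// Capture the session token from the response so later steps can reuse it.
function extractSessionId(text) {
  const match = text.match(/X-Session-Id:\s*([0-9A-Za-z]+)/);
  return match ? match[1] : null;
}

console.log(extractSessionId(response)); // -> "3F2A9C"
```

In LoadRunner itself, this capture-and-reuse pattern is what the correlation features automate for you.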

LoadRunner can’t determine how many session IDs are generated or what impact the session IDs may have on the application. This is a server-side issue and can only be addressed by the system engineering team in your organization.

Another session-related issue has to do with session aging. This is a server setting that defines the timeout threshold for when activity has stopped in a given session. Your server administrator can adjust the timeout value. By default, most organizations have this set at 30 minutes.

4. Server vs. Browser Rendering

Tableau dynamic optimizations determine which actions occur on the server versus browser. This is a dynamic, automatic decision. The flow of the test may differ due to variances in user actions, such as filtering. It is possible to create two very similar tests with one test rendered by the server and the other test rendered by the browser.

Server rendering logic is decided at the server level and largely depends on the workbook itself and the number of marks.

It is possible to disable browser rendering if you want to ensure that server rendering is the only method used (to ensure test consistency).


Adding the parameter ?:render=false to the URL immediately after the view name will force the view to be rendered by the server. Browser rendering is used to increase view performance, so this should only be used for comparison purposes.

We have additional resources on the rendering process and threshold calculations.
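For example, a small helper can append the parameter to a view URL for the server-rendered variant of a test; the host and view path below are hypothetical.

```javascript
// Append ?:render=false to force server-side rendering of a Tableau view.
// The base URL here is illustrative, not a real deployment.
function forceServerRendering(viewUrl) {
  // Use "&" if the URL already carries a query string, "?" otherwise.
  const separator = viewUrl.includes("?") ? "&" : "?";
  return viewUrl + separator + ":render=false";
}

console.log(forceServerRendering("http://tableau.example.com/views/Sales/Overview"));
// -> http://tableau.example.com/views/Sales/Overview?:render=false
```

Scripting both URL variants makes it straightforward to compare server-rendered and browser-rendered timings for the same view.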

5. Timing Considerations

Traditionally, “wait time” or “think time” is defined in a performance test script to account for user delays or to ensure that a response is seen before the next action is invoked. However, with HP’s TruClient and its ways of handling asynchronous communication, the handling of timing events is important to understand.

For example, an assumed response time from a test might be N seconds, but due to complex asynchronous AJAX calls, the actual response time is longer than expected and impacts the rest of the test.

The method used in the sample scripts created for use alongside this guide is the wait step, with a specified period of wait time to allow for response times before proceeding to the next step.

A consideration in using the wait step is that the timings will typically require adjustments to achieve your desired level of test execution speed. For example, as shown in Figure 7, you may initially script a test to have 10-second wait times after each action just to ensure the test script runs to completion.

However, the 10-second wait times may cause the script to run longer than actually expected. After observing performance results, you may choose to decrease the wait times depending on the actions performed and the load to be tested.

Fig. 7 – Static Wait Times
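One way to keep such wait times adjustable is to centralize them so a single change retimes every step at once. This is a plain-JavaScript sketch of the idea only; in an actual TruClient script the waits are Wait steps, not code.

```javascript
// Single tunable constant instead of a hard-coded wait after every action.
const WAIT_SECONDS = 10; // start generous, then reduce after observing results

// Resolves after the given number of seconds (defaults to WAIT_SECONDS).
function waitAfterAction(seconds = WAIT_SECONDS) {
  return new Promise((resolve) => setTimeout(resolve, seconds * 1000));
}

// Usage sketch: await waitAfterAction() between scripted actions; lowering
// WAIT_SECONDS then shortens the whole script run without editing each step.
```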


6. Varying Filter Selections

The goal in varying filter selections is to balance the effort of the test with the benefit of accurate test results.

The main question is, “How much should filter selections be varied to make them realistic?” The answer will differ from organization to organization as each will perform filtering in different ways at varying times.

Scripting specific filters based on typical user selections can reduce variation in test results and increase test repeatability. We will cover randomized filters in the upcoming section on parameterization.

7. Scripting Advanced Mouse-driven Actions

Earlier, we discussed scripting basic point-and-click mouse actions. Tableau includes many mouse gestures like click and drag, lasso selections, and right-click for analytics.

In Figure 8, we see two means of specifying drag actions.

In step 12, the path is used to direct the drag action.

In step 15, pixels are used to specify how far to the right and how far up to perform the mouse drag action.

Fig. 8 – Ways to Specify Mouse Drag Actions


8. Testing Initial Platform Configurations

Initial performance testing is harder due to the lack of a baseline and the need to determine the initial platform configuration. Often, there may not be any existing performance metrics, dashboards, or representative usage patterns.

The reference tests included in this guide (see Appendix C) can be valuable in helping define a baseline test.

9. Object Parameterization in TruClient

Parameterization provides input to TruClient test scripts. Certain actions such as user input or selection that are performed on static options like textbox entry, checkbox selection, radio-button selection and dropdown-menu selections can be parameterized.

In addition, a virtual user can be scripted to randomly click on radio buttons, check boxes, and so forth, if needed.

Parameterization Based on Business Process Need

When designing tests based on business scenarios and business processes, there are times when the business process being recorded requires a specific or random object selection from a list of similar objects.

For example, the business process you wish to test might involve selecting options from a list of radio buttons. You have two options at this point: 1) script each VUser to select one option randomly, or 2) script each VUser to select an option from a table to ensure that all options you wish to test are covered in the test.

It is important to understand that in performance testing, you often do not need to test all possible option combinations—unless you want to test the actual performance impact of choosing an option. Testing all options and combinations of options is more often in the realm of functional testing.
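The two scripting options above can be sketched in plain JavaScript; the region names are illustrative and not tied to a real workbook.

```javascript
// Illustrative option list (placeholder values, not from a real workbook).
const regions = ["Central", "East", "South", "West"];

// Option 1: each virtual user picks one option at random.
function pickRandom(options) {
  return options[Math.floor(Math.random() * options.length)];
}

// Option 2: each virtual user is assigned an option from a table, cycling
// through the list so every option is covered across the test run.
function pickByVuserId(options, vuserId) {
  return options[vuserId % options.length];
}

console.log(pickRandom(regions));          // any one of the four regions
console.log(pickByVuserId(regions, 5));    // -> "East"
```

Option 2 trades realism for repeatability: results vary less between runs, which makes comparisons against a baseline easier.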

As examples, the following are some of the things that can be parameterized in a Tableau workbook example.

Fig. 9 - Example of a dropdown menu from Tableau’s Overview tab.


Fig. 10 - Example of a radio button selection from Product tab.

Fig. 11 - Example of multiple sets of check boxes and a drop down menu from Customers tab.

Fig. 12 - Multiple groups of radio buttons and a set of check boxes from Shipping tab.

Fig. 13 - Example of radio buttons and legends from Performance tab.


Random Parameterization

It is possible to randomize parameterization, if desired. In this example, we select random regions from radio-button options.

In TruClient, change the "ID Method" to XPath. Click on "Central," then on "West" or any other option, to find out which value changes. In this case, you will see that the last value (div[2]) changes.

evalXPath("/html/body/div[5]/div[2]/div[2]/div/div[2]/div["+options.valueOf()+"]");

With the XPath ID Method selected, enter the following code:

// Pick a random radio-button index between 2 and 6.
var options;
options = (Math.floor(Math.random() * 5) + 2);
evalXPath("/html/body/div[5]/div[2]/div[2]/div/div[2]/div[" + options.valueOf() + "]");

Fig. 14 – Using the XPath ID Method
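The index arithmetic above can be generalized into a small helper. The sketch below is plain JavaScript with TruClient's evalXPath stubbed out so the logic can run standalone; in a real TruClient script the stub would be removed and the call would evaluate against the page. The XPath and the index range (2 through 6) are taken from the example above.

```javascript
// Stub of TruClient's evalXPath so this sketch runs standalone;
// remove it in an actual TruClient script.
function evalXPath(xpath) { return xpath; }

// Return a random integer in [min, max] inclusive. In the example above,
// the selectable radio-button divs span indexes 2 through 6.
function randomIndex(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

var options = randomIndex(2, 6);
evalXPath("/html/body/div[5]/div[2]/div[2]/div/div[2]/div[" + options + "]");
```

Pulling the range into named arguments makes it easier to adjust the script when the number of radio buttons in the dashboard changes.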

Debugging TruClient Scripts

As part of recording a business process, some steps performed by the user during recording are not required during replay. TruClient removes steps it deems unnecessary and places them in higher script levels.

For example, a click step that occurs in an area of the application that has no effect is placed in level 2. TruClient assumes that this step is not significant and will not help the user to emulate a business process on the application. The default view only displays level 1 steps.

In certain cases, you may want to override TruClient's assessment and manually change the level of a step. For example, a mouse-over is usually deemed an unnecessary step and is placed at level 3. However, sometimes mouse-over steps are needed during replay.

You can manually modify higher-level steps to level 1. To move a step, open the step and click on the step section. Move the slider to the desired level. If the step is part of a group step, both the group step and the individual step must be modified.

1. Automatic Leveling During Replay

The level of a step is normally set during recording according to the importance of the events in the business process. It is possible that an important step will look unimportant and will be placed in a higher script level. This may cause the replay to fail, generating an "object not found" error.

During replay, TruClient checks whether there are steps at a higher level that can affect the outcome of the current step. If found, the meaningful step is moved to level 1 so that it replays.

Figure 15 displays a small script. Note that the step numbers skip from 1 to 3 and from 3 to 5. Step 2 and Step 4 are hidden at a different level.

Fig. 15 – Modifying the Script Replay Level

After changing the display settings by using the slide bar, all steps are now displayed and will run if replayed in interactive mode.

2. How to Resolve Object Identification Issues

Object identification presents one of the biggest challenges with recording and replaying scripts for Tableau applications. That’s because objects that have been recorded cannot be located solely based on how they appear on the dashboard.

Also, when recorded objects change dynamically during replay, Ajax TruClient can lose the ability to automatically locate the object.

Fig. 16 – Hidden Steps Not Shown

Fig. 17 – Hidden Steps Shown

TruClient includes some very sophisticated mechanisms to overcome this challenge including the Highlight, Improve Object Identification, Replace Object and Related Object, and Object Identification Assistant options. (Refer to the TruClient guide or the tips-and-tricks document referenced in the Appendix).

If none of these methods successfully locates the object, HP recommends using X and Y coordinates. When you record a transaction, note the X and Y coordinates (found under Arguments; see Figure 18), remove the recorded coordinates, and enter them manually in the script.

While the coordinate method will always work for object recognition, it has downsides: the scripts may become fragile, which hurts test repeatability.

If the object changes location in any way (even due to window size), the script may fail because the coordinates are no longer correct. Many times, UI changes are not under your control as a tester, which means you may experience unexpected test failures using this approach.

Also, since the coordinates depend on the local machine's screen size, pixel density, and other variable factors, make sure that you DO NOT resize the window or scroll it up/down or left/right before replaying the script.
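As a hedged illustration of why coordinate-based recognition is fragile, the sketch below resolves a click target from X/Y coordinates the way a browser would (document.elementFromPoint is a standard DOM API). The document object is stubbed here so the logic can run standalone, and the coordinates and element name are hypothetical.

```javascript
// Stub standing in for the browser's document object (illustration only).
var doc = {
  elementFromPoint: function (x, y) {
    // Pretend the target radio button occupies the rectangle (100,200)-(180,230).
    if (x >= 100 && x <= 180 && y >= 200 && y <= 230) {
      return { id: "regionWest" };
    }
    return null; // any layout shift and the recorded coordinates miss the element
  }
};

function clickAt(x, y) {
  var el = doc.elementFromPoint(x, y);
  if (el === null) {
    throw new Error("object not found at (" + x + ", " + y + ")");
  }
  return el.id;
}

// The recorded coordinates hit the button...
console.log(clickAt(120, 210)); // "regionWest"
// ...but a 50-pixel layout shift makes the same script fail:
// clickAt(120, 260) throws "object not found at (120, 260)".
```

This is exactly the failure mode described above: the script is correct until the window is resized, scrolled, or the dashboard layout changes.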

Fig. 18 – X and Y Coordinates

Test Execution

Here are some script views from testing the Shipping tab. These views show all levels of script actions, including mouse-over actions and parameterization.

This script performs a test of the shipping tab, which has filters set for all shipping modes, order year 2014, Q4, and all regions.

In the script below, the First Class and Second Class filters are unchecked. As in the previous script, all levels of actions are shown.

Fig. 19 – Script view with parameters selected

Fig. 20 – Script view with parameters de-selected

If we look at the enlarged view of the scripting window, we can see how the VUser actions are parameterized. You will see object names, such as "element (9)," and checkboxes with options such as "First Class" and "Second Class."

Test Analysis

In this activity, the goal is to measure test results against expected response times to assess the capability of Tableau Server to support your usage and load levels.

Test results captured in LoadRunner will inform this analysis.

You can also export the performance test results and view them in Tableau. To visualize your results, export your raw data to an Excel spreadsheet with the following column headers (in order):

• Line Item: A column that numbers each row. (If you are exporting from LoadRunner, you will need to add this column manually.)

• Transaction End Status

• Scenario Elapsed Time

• Transaction Response Time

• Transaction Names
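Adding the "Line Item" column need not be done by hand. The sketch below is plain JavaScript operating on in-memory CSV text; the column headers match the list above, but the file handling and data values are hypothetical.

```javascript
// Prepend a "Line Item" row number to each data row of a CSV export.
function addLineItemColumn(csv) {
  var lines = csv.trim().split("\n");
  var out = ["Line Item," + lines[0]]; // extend the header row
  for (var i = 1; i < lines.length; i++) {
    out.push(i + "," + lines[i]); // number data rows starting at 1
  }
  return out.join("\n");
}

// Hypothetical two-row export from LoadRunner's Transaction Summary report.
var raw =
  "Transaction End Status,Scenario Elapsed Time,Transaction Response Time,Transaction Names\n" +
  "Pass,12.4,1.8,Open Shipping Tab\n" +
  "Fail,45.0,3.6,Apply Region Filter";

console.log(addLineItemColumn(raw));
```

The resulting text can be pasted into Excel (or saved as a .csv) and connected to from Tableau as described below.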

Fig. 21 – Script view with object names

If you are exporting from LoadRunner, you can export the raw data from the "Transaction Summary" report and add the additional column. Once you have the data in Excel, connect to the data in Tableau. If you have existing data, select "Replace Data Source" in the Data menu to replace existing data with your new data. This will redraw your views with your new test data.

The following screenshots contain examples of how LoadRunner test results can be viewed in Tableau. You can find this workbook in the Tableau Community section of our website.

Figure 22 displays four different user scenarios along a timeline. The acceptable response time level is 3 seconds. You will notice that response times for two of the scenarios stay under 2 seconds. However, two of the scenarios exceed the 3-second response time.

By moving the mouse pointer over any data point in the chart, you can see details as shown in Figure 23.

Fig. 22 – Response times by scenario elapsed time shown in Tableau

Fig. 23 – Detailed data view
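The same pass/fail judgment against the 3-second target can be computed directly from the exported data before any visualization. This is a minimal sketch; the transaction names and response times are hypothetical.

```javascript
// Flag transactions whose response time exceeds the acceptable threshold.
var SLA_SECONDS = 3;

// Hypothetical results mirroring the scenario above: two transactions
// stay under 2 seconds, two exceed the 3-second target.
var results = [
  { name: "Open Overview Tab", responseTime: 1.7 },
  { name: "Apply Region Filter", responseTime: 3.4 },
  { name: "Open Shipping Tab", responseTime: 1.9 },
  { name: "Select Ship Mode", responseTime: 3.8 }
];

function overThreshold(rows, limit) {
  return rows
    .filter(function (row) { return row.responseTime > limit; })
    .map(function (row) { return row.name; });
}

console.log(overThreshold(results, SLA_SECONDS));
// ["Apply Region Filter", "Select Ship Mode"]
```

A quick check like this tells you which transactions need investigation before you spend time building out the Tableau views.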

Fig. 24 – Transactions exceeding the median response time. It is also possible to filter the view by selecting the transactions.

Fig. 25 – Average response time of each transaction shown in Tableau

Fig. 26 – Median response times for each transaction. You can enter an acceptable response time as a point of comparison.

Fig. 27 – Difference from the average response time for the scenario. This view shows the transactions in descending order from highest response times.

Fig. 28 – Transactions with response times lower than average. Scrollable view in Tableau

Summary

Tableau Server is an enterprise analytics platform for sharing interactive dashboards and visualizations. It provides a highly scalable collaboration and governance platform.

Tableau Server is implemented as a highly interactive client-server application that uses web technologies such as the HTML5 canvas and asynchronous JavaScript™. The user experience is that of a highly graphical, interactive, and responsive desktop application, all in a web browser.

HP’s LoadRunner supports web applications like Tableau Server with several protocols, including HP’s TruClient protocol. Ajax TruClient is an advanced protocol developed for LoadRunner that supports JavaScript-based applications, including those built with Ajax.

This guide outlines what you need to consider when planning and conducting accurate performance testing of Tableau Server using HP’s LoadRunner tool.

You can also perform rich analysis of LoadRunner test results using Tableau. Follow the recommendations in this guide to save time and effort, and benefit from lessons learned by other organizations. You can also leverage the outlined best practices for performance testing, which can be applied to other projects as well.

Appendices

A. Security Settings

B. Additional Instructions and Resources

The following resources provide additional performance-testing guidance but are beyond the scope of this document.

HP LoadRunner Ajax TruClient Tips and Tricks

(Note: You must have access to the HP customer support site to access this link.)

Introduction to LoadRunner’s new TruClient – Native Mobile protocol

C. Reference Tests

To provide working samples of the techniques discussed in this paper, a set of sample LoadRunner TruClient scripts is available from Tableau support by request.