
Updated: Using IBM Rational Robot to run your first performance test

Level: Introductory

Michael Kelly, Software Engineer, QA, Liberty Mutual

05 Aug 2004

This beginner-level article focuses on the process of designing tests, recording test scripts, creating automated or manual performance test suites, executing the test suites against various configurations, and then analyzing the reports.

Note: This article applies to IBM® Rational® Robot® v2003.

Want to know, before deployment, how your application and its operating environment will perform when many people try to use it at the same time? Do you wonder what kinds of response times users will experience in a real-world situation? Or do you just want to ensure that your code meets performance expectations? Performance testing will give you the answers you're seeking.

In this article I'll show you how to do some basic performance testing using the Rational® suite of tools. Specifically, I'll show you how to do load testing -- measuring an application and its operating environment against specific response criteria. The ultimate goal of performance testing and monitoring is to create an excellent end-user experience. To reach this goal you'll need to plan and design tests, record some test scripts, create automated or manual performance test suites, execute these test suites against various configurations, and then review and analyze the reports.

This article is intended for the novice and covers the topic from start to finish. If you're already familiar with performance testing and need more in-depth information on it, take a look at the "User Experience, Not Metrics" series and the "Beyond Performance Testing" series by Scott Barber.

First, some terminology I'll refer in this article to virtual testers and test suites, which may be new terms to you. To make sure we get off on the right foot, I'll define these terms as they're used by Rational and in this article. A virtual tester (also referred to as a virtual user) emulates traffic between a client and its servers. By running many virtual testers, you can emulate multiple users on a single computer simultaneously and add a workload to a client/server system. Use of virtual testers also lets you determine scalability and measure end-to-end response times by timing server and client responses. For example, you might have a timer associated with one virtual tester to find out how much time a query takes when 1000 other virtual testers are sending requests to the same server at the same time. This is more indicative of what a real user experiences when significant client-processing or screen-painting time is associated with the user activity you're measuring. A test suite (or a TestManager suite) is a Rational® TestManager® object that enables you to manage how test scripts are run and the computers that will be used for testing. In performance testing, you can also customize the number of virtual users, how the users and tests are distributed, and in what order the performance tests are executed.
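To make the virtual-tester idea concrete, here's a minimal Python sketch (not Rational's VU engine) that emulates ten concurrent testers and times a simulated server round trip. The `server_request` stand-in and its latencies are invented for illustration; a real VU script replays recorded client/server traffic instead.

```python
import threading
import time
import random

def server_request(query):
    """Stand-in for a client/server round trip; a real VU script
    would replay recorded protocol traffic instead."""
    time.sleep(random.uniform(0.01, 0.05))  # simulated server latency
    return f"results for {query!r}"

def virtual_tester(tester_id, timings):
    """One virtual tester: issue a request and record its response time."""
    start = time.perf_counter()
    server_request("automated testing")
    timings[tester_id] = time.perf_counter() - start

# Emulate 10 virtual testers sending requests concurrently
# from a single computer.
timings = {}
testers = [threading.Thread(target=virtual_tester, args=(i, timings))
           for i in range(10)]
for t in testers:
    t.start()
for t in testers:
    t.join()

print(f"{len(timings)} testers finished; "
      f"slowest response: {max(timings.values()):.3f}s")
```

Timing each tester individually is what lets a tool report per-user response times under load, rather than just aggregate throughput.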

Planning and designing your performance tests Planning your performance tests is a complex task during which you establish a test plan that answers these questions: 

What types of performance tests do I need to run to ensure the agreed-upon level of quality?

What performance benchmarks are we interested in, and how will I measure whether they've been met?

These questions get at the heart of performance test planning. The first directly addresses the quality of the product, while the second addresses how you'll know when you've met the quality guidelines and how you'll report your results. Your performance test plan should evolve slowly. Start with some simple objectives. Then as your application grows and you become more comfortable with your performance-testing tools, grow your suite of tests and extend your view into the application-under-test. This article on performance test planning will help you learn the process: "End-to-End Testing of IT Architecture and Applications." Once you've established your benchmarks and decided on the types of performance tests you need, the next step is to design your tests. You'll need to identify the user navigation paths you want to test, how you'll implement these scenarios, what setup will be involved in your testing, and what your acceptance criteria will be. Realize that designing performance tests is different from designing functional tests in that it's more of an art form. As much as I hate to repeat myself, the two series mentioned above by Scott Barber illustrate how to design excellent performance tests.

Creating a performance test script Now we'll create a very basic virtual user (VU) script. You can follow along and create it as you read this section, or if you're already familiar with this process you can just go ahead and download the script. Once you've downloaded it, you'll want to copy the script to \\<Repository Path>\<Project Name>\TestDatastore\DefaultTestScriptDatastore\TMS_Scripts.

To create a script, you record a session with the Rational® Robot software. The examples in this article were developed using Robot v2003.06 running on Windows XP Professional. Robot records all of a client's requests to the server and all of the server's responses from the time you begin recording until the time you stop recording. This traffic is the only activity that Robot records; it ignores GUI actions such as keystrokes and mouse clicks. After recording the session, Robot generates an appropriate test script. When you run the test script in TestManager, it plays back the requests you recorded, but the GUI actions you performed and the displays you saw at record time aren't played back. For our application-under-test we'll use the BookPool Web site (this is where I buy almost all of my technical books -- and no, I don't own stock). We'll create a VU script that will emulate a user performing a search on the home page. For this script we'll simply open the Web site, search for books on the topic of automated testing using the Search field in the top left corner of the page, and select a book to see its details. 1. Open Robot and choose File > Record Session from the menu bar. 2. In the Record Session -- Enter Session Name window, enter "BookPool -- Session One" as the name of the session.

3. Because your settings may have been changed from the default at some point, we'll verify them before we record. Click the Options button to open the Session Record Options window. 4. Click the Generator per Protocol tab and verify that the protocol is set to HTTP. (We'll be using this protocol because it supports transmission of text and records the actions that Web servers and browsers take in response to various commands.) Verify that the other options on this tab are set as shown below.

5. Click the Generator Filtering tab and verify that the Auto Filtering check box is selected and all of the protocols except DCOM are selected. (DCOM is exclusive -- it can't be selected in combination with any other protocol.)

6. Click the Generator tab and verify that the "Use datapools," "Verify playback return codes," and "Bind output parameters to VU variables" check boxes are selected and that timing is set to "per command."

7. Click OK. 8. Once returned to the Record Session -- Enter Session Name window, click OK again. This will start the Session Recorder (a background process similar to the GUI Recorder for functional testing) and open the Start Application window. 9. Enter the path to Internet Explorer as the executable and the address of the BookPool site as the argument.

10. Click OK. 11. Wait for Internet Explorer to open. (It should load the BookPool home page. This could take a minute or two, depending on the computer.)

12. Once the site opens, enter "Automated Testing" in the Search field and click Go.

13. Once the results finish loading, select the first returned result.

14. When the selection finishes loading, close the browser. 15. When the Stop Recording dialog box appears, click Yes.

16. In the Stop Recording window, enter "BookPool -- Search for a book" as the name of the just-recorded script.

17. Click OK. The Generating Scripts window will appear.

18. Wait for this process to complete. It may take a long time, depending on the speed of your computer. When the window displays the "Completed Successfully" message, click OK. The script you just recorded should open in Robot. If you've done GUI testing in the past, you may be used to just compiling and running to see if your script recorded correctly. But with the VU script, we'll wait to run it until after we set up a suite.
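Conceptually, the script Robot just generated boils down to the recorded HTTP requests replayed in order, with no GUI involved. Here's a rough Python sketch of that idea; the request list and the `fake_send` transport are hypothetical stand-ins, not the actual recording or Robot's VU language.

```python
# A recorded VU session boils down to an ordered list of HTTP requests;
# playback re-sends the requests and checks the responses, with no GUI.
# The paths below are illustrative placeholders, not the real recording.
recorded_session = [
    ("GET", "/",            None),
    ("GET", "/search",      {"q": "Automated Testing"}),
    ("GET", "/book/12345",  None),  # hypothetical book-detail page
]

def play_back(session, send):
    """Replay each recorded request via the supplied send() function
    and collect (path, status) pairs, as a playback engine would."""
    results = []
    for method, path, params in session:
        status = send(method, path, params)
        results.append((path, status))
    return results

# A stub transport so the sketch runs offline; real playback would use
# an HTTP client here and verify the return codes.
def fake_send(method, path, params):
    return 200

for path, status in play_back(recorded_session, fake_send):
    print(path, status)
```

This is why "Verify playback return codes" matters in the generator options: playback can only judge success by the server's responses, not by what was on the screen.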

Creating a performance test suite Now that we have a sample VU script, we'll create an automated test suite. There are a couple of different ways to do this. For the purpose of this example we'll use the simplest method, employing the Performance Testing Wizard. As you become familiar with test suites and how they're arranged and work, you can play around with some of the other methods of creation and customization. 1. Open TestManager and choose File > New Suite from the menu bar. This will open the New Suite window.

2. Choose the Performance Testing Wizard and click OK. This will open the Performance Testing Wizard -- Computers window.

3. Click "Local computer" in the top computer list box, and then click the Add to List button. You should see "Local computer" displayed in the bottom list box. 4. Click Next. This will open the Select Test Scripts window.

5. In the top list box, click the name of the script we just created ("BookPool -- Search for a book") and click the Add to List button. You should see "BookPool -- Search for a book" displayed in the bottom list box. 6. Click Finish. This will open a temporary test suite in the TestManager workspace called Suite 1.

7. Choose File > Save from the menu bar. Then enter a name and description as shown here, and click OK.

You now have your very first performance test suite. Before we run it, let's take a look at what we've created and what it all means.

What's in a suite? A performance test suite contains user groups and scenarios, and there are thousands of ways to configure these two elements. For in-depth coverage of the options available when performance testing, see the "User Experience, Not Metrics" series and the "Beyond Performance Testing" series by Scott Barber. Here I'll provide you with a quick tour of what's in a suite.

User groups

User groups are used to set run-time information for the scripts in the group. You can set the user count (which we'll do later for the suite we've created), and you can select the computers that will be used to run the scripts in order to set up distributed performance tests. (If you're not familiar with IBM® Rational® TestAgent, take some time to read about it in the online help. It's a great way to configure computers for distributed testing.) The user group is also a root node for all sorts of items. In Figure 1 you can see all of the types of artifacts you can add to a user group.

Figure 1: The types of artifacts you can add to a user group Here's a quick look at each of these types of artifacts: 

Test case -- A test case is a testable and verifiable behavior in a target test system. You can add test cases to a suite or edit the run properties of a test case that's already in a suite. You add test cases to suites so that you can run multiple test cases at one time and save them as a set. It's uncommon to use test cases in a performance test suite, but it is an option.

Test script -- You can add test scripts (any scripts from the project) to a suite or edit the run properties of a test script that's already in a suite (set the number of times to execute the test script, add delays between the executions of the test script, and set the scheduling method).

Suite -- You can add a suite that contains a computer group (but not one that contains a user group) to another suite. You can use suites as building blocks of tests just as you would any other suite item. A suite that's added to another suite must have been created with the "Prompt for computers before running suite" option selected for the computer group. (This can be found by right-clicking and selecting Run Properties on the computer group.) Adding suites to a suite enables you to maintain a hierarchy of suite items. You can also edit the run properties of a suite when it's already been added to another suite.

Delay -- You can add a delay to a suite, or you can change the run properties of a delay that's already in a suite. A delay can be set to allow a specified number of seconds to elapse before the start of a suite, or it can be set to start the suite at a particular time of day.

Scenario -- You can add a scenario to a suite or edit the run properties of a scenario that's already in a suite. You add a scenario to a suite when you want to reuse a series of events within a suite, and you want to be sure that any change made to that scenario filters to all instances of it within the suite. Scenarios aren't reusable among different suites.

Selector -- You can add a selector to a suite or edit the run properties of a selector that's already in a suite. A selector defines which items each virtual tester will execute and in what sequence.

Synchronization point -- You can add a synchronization point to a suite or edit the run properties of a synchronization point that's already in a suite. You use a synchronization point to coordinate the activities of a number of virtual testers by pausing the execution of each virtual tester at a particular point (the synchronization point) until a specified event occurs.

Transactor -- You can add a transactor to a suite or edit the run properties of a transactor that's already in a suite. You use a transactor to set the number of tasks that each virtual tester will run in a given time period.
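Of these items, the synchronization point is perhaps the least intuitive. The idea can be sketched with a thread barrier in Python: every virtual tester pauses at the barrier until all testers have arrived, then all proceed together. This is only an illustration of the concept, not TestManager's implementation.

```python
import threading
import time

# Sketch of a synchronization point: each virtual tester pauses at the
# barrier until all NUM_TESTERS have arrived, then all are released.
NUM_TESTERS = 5
sync_point = threading.Barrier(NUM_TESTERS)
release_times = []

def virtual_tester(tester_id):
    time.sleep(0.01 * tester_id)   # testers arrive at different times
    sync_point.wait()              # the synchronization point
    release_times.append(time.perf_counter())

threads = [threading.Thread(target=virtual_tester, args=(i,))
           for i in range(NUM_TESTERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All testers were released within a narrow window, even though they
# arrived at staggered times.
spread = max(release_times) - min(release_times)
print(f"release spread: {spread:.4f}s")
```

Synchronizing testers this way lets you measure how the server behaves when several requests genuinely land at the same moment.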


Scenarios enable you to reuse specific test configurations or (for lack of a better term) test scenarios. You can add the same types of items to a scenario as you can to a user group (with the exception of transactors). Scenarios are useful for suite maintenance but aren't required. For our example we won't be making one.

Running your suite Now that you have your very first performance test suite, let's go ahead and run it. If you're used to GUI testing, you'll probably see some stuff you haven't seen before, but just let it execute and we can go over what all of those charts and graphs mean afterward. Before running the suite we'll want to increase the maximum number of users to 10 from its current setting of 1. 1. Right-click VU User Group1 and choose Run Properties.

2. Set the user count number to 10 and click OK.

You should now see the number of users in the group displayed as 10.

It's important to note that 10 users signing on at the exact same millisecond (which is what we've set up to happen) is not realistic. To correct this we'll set up the suite to start the virtual users 2 at a time. 1. From the menu bar choose Suite > Edit Runtime. This will open the Runtime Settings window.

2. Select "Start testers in groups" and set "Number to start at a time" to 2.

3. Click OK and save the changes you've made to the suite. 4. To run the suite, right-click it and choose Run.

5. This will open the Run Suite window. Set the number of users to 5 (we'll try half of the maximum number of users to start with) and click OK.
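The "Start testers in groups" setting amounts to a staggered ramp-up rather than a single simultaneous spike. As a rough sketch (plain Python, with an invented `start_tester` callback standing in for launching a virtual tester):

```python
import time

def ramp_up(total_testers, group_size, delay, start_tester):
    """Start testers in groups of `group_size`, pausing `delay` seconds
    between groups -- the effect of TestManager's "Start testers in
    groups" runtime setting, sketched as a plain loop."""
    started = []
    for i in range(0, total_testers, group_size):
        group = list(range(i, min(i + group_size, total_testers)))
        for tester_id in group:
            start_tester(tester_id)   # launch one virtual tester
        started.append(group)
        if i + group_size < total_testers:
            time.sleep(delay)         # wait before the next group
    return started

# With 5 testers started 2 at a time, the groups are [0,1], [2,3], [4].
groups = ramp_up(5, 2, 0.0, lambda tid: None)
print(groups)  # [[0, 1], [2, 3], [4]]
```

Ramping up in small groups gives the server a more realistic arrival pattern and makes it easier to spot the load level at which response times start to degrade.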

Again, if you're used to running GUI scripts, when you run the suite you may encounter a couple of screens you've never seen before. What appears will vary based on your configuration, but it should be very similar to what I show below. If not, don't worry about it. Just use the F1 key liberally. Online help is ready and waiting to answer your questions. First you should see the Messages window.

This window is simply a compilation window for the suite. TestManager will check and compile all of the scripts that you attach to your suite. If any problems are encountered, they'll be logged here and TestManager will cease executing the suite. If everything works correctly you'll see this screen for only a few seconds; it will be minimized and closed when the suite finishes executing. After that all sorts of windows will open. These windows are there to assist you in monitoring the progress of the testing and the status of the scripts. I'll describe some of the windows that appear. Run Toolbar

You can use this toolbar to stop suite execution at any time. It's quite useful if you need to stop because of an error or you want to debug something right away. Progress Toolbar

This toolbar shows you the run time, the number of active testers, and the number of finished testers. In this instance a tester is a computer that's executing a script. You can use this toolbar to quickly see if a computer has stopped abnormally and to try to determine the reason. The buttons on the right all open different views and histograms. Click each one and see what it shows you, just so you know what kinds of views are available. You may never use them all, but some of them are handy in special circumstances. Overall Progress View

This view shows you script by script the progress of your testing. State Histogram

This histogram is a graphic representation of what's currently happening on all of the computers. We have only five VU testers, so our graph goes to 5 on the y-axis. This view is also handy when you're doing distributed testing with your suite. Computer View

This view lists each computer included in the run and describes its current state. You can see which script it's executing, what the state of that script is, and how long it's been running. That's all of the views we'll cover. If everything worked right, you should now be looking at a log file. If not, download my suite and copy it to the following directory in your repository: \\<Repository Path>\<Project Name>\TestDatastore\TMS_Suites. Eventually you'll learn to run your suite against various configurations, but that's a complex topic that's beyond the scope of this article.

Reviewing and analyzing the reports As I just mentioned, once execution is complete you should see a log file. As shown in Figure 2, there will be an entry for each virtual user. Just like with log files for GUI scripts, you can drill down on each of these entries to see more detailed information.

Figure 2: A sample log file If you click the Test Case Results tab on the bottom, you'll get an empty screen. This is because we didn't attach our performance test script to a test case in TestManager. While I won't cover how to do that in this article, I thought you should know why this tab is blank and that you haven't done anything wrong. For more information on how to configure your scripts to use this feature, you can look up "Test Cases" and "Test Case Results" in the TestManager help file. Two other windows should also have opened after execution completed. The first is the Command Status Report Output window, as shown in Figure 3.

Figure 3: The Command Status Report Output window The command status report shows the total number of times a command was executed and how many times the command passed and failed. This report reflects the overall health of a suite run. It's similar to the next report we'll look at but focuses on the number of commands run in the suite. The command status report is useful for debugging, because you can see which commands fail repeatedly and then examine the related test script. The final window that you should see is the Performance Report Output window, as shown in Figure 4.

Figure 4: The Performance Report Output window The performance report displays the response times recorded during the suite run for each command in terms of the mean, the standard deviation, and different percentiles. This report is the foundation of reporting performance-related results in TestManager. It provides benchmark information on an application's actions during a test and can show whether an application meets base criteria as defined in the test plan and/or the test case. Both of these reports become more useful as you refine your script-making ability. Right now, your command IDs are vague and not meaningful. Over time, you'll start to assign more meaningful names when you do the initial scripting, and this will allow you to quickly see specific results for specific actions.
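To see what the report is computing, here's a small Python sketch of the mean, standard deviation, and a nearest-rank percentile over invented response times; TestManager's exact percentile convention may differ.

```python
import statistics

# Hypothetical response times (seconds) for one command ID, as a
# performance report would summarize them.
response_times = [0.21, 0.25, 0.22, 0.40, 0.23, 0.27, 0.95, 0.24, 0.26, 0.22]

def percentile(data, pct):
    """Nearest-rank percentile, one common convention for report tables."""
    ordered = sorted(data)
    k = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[k]

mean = statistics.mean(response_times)
stdev = statistics.stdev(response_times)
p90 = percentile(response_times, 90)

print(f"mean={mean:.3f}s stdev={stdev:.3f}s 90th percentile={p90:.3f}s")
```

Note how the one slow outlier (0.95s) pulls the mean and standard deviation up while leaving the median largely untouched; this is why percentile columns are usually more informative than the mean alone when judging user experience.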

Wrap-up This article has given you a beginner's look at load testing with the Rational tools. You can do other kinds of performance testing with the Rational tools as well, but this should get you started. Once you get comfortable with the basics of performance testing, you can start mixing in computers and try some distributed testing. There are also a lot of good resources on IBM developerWorks related to performance testing. I would recommend taking a look at some more advanced articles and/or just browsing the Performance and VU Testing forum. Good luck!

References and related resources

"User Experience, Not Metrics" series by Scott Barber

"Beyond Performance Testing" series by Scott Barber

"End-to-End Testing of IT Architecture and Applications" by Jeff Bocarsly, Jonathan Harris, and Bill Hayduk

"User Community Modeling Language (UCML) for performance test workloads" by Scott Barber

"Quality Aspects of Performance" by Robert Michalsky

"Inserting comments, timers, block markers, and synchronization points in VU scripts" by Mike Kelly

"Using Test Agents" by Mike Kelly

"A library of custom VuC functions" by Scott Barber, Richard Leeke, Roland Stens, and Chris Walters

About the author Mike Kelly is currently a software testing consultant for the Computer Horizons Corporation in Indianapolis. He's had experience managing a software automation testing team and has been working with the Rational tools since 1999. His primary areas of interest are software development lifecycles, software test automation, and project management. Mike can be reached by e-mail.
