Gathering Performance Information While Executing Everyday Automated Tests
- Needed: Rapid Feedback on Environmental Performance
- Simple Solution to a Complex Problem
- Implementing the Solution in a Simple Automation Script
Have you ever worked on a project for which you needed to provide your management team with a constant stream of performance information about the application being tested? Or perhaps a project where you were unsure about the reliability of your performance tests, and you wanted some way to prove that those tests were actually doing what you hoped they were doing? Or maybe you've been troubleshooting an environmental performance problem and wished for some quick, current data you could use to compare multiple environments. A few months ago, I was facing all three of those problems. This article explains the solution that my team and I developed.
Needed: Rapid Feedback on Environmental Performance
To provide a little context, the project team was developing a web-enabled financial application written in Java. The testing team for the project was using IBM Rational for functional automated testing and Mercury Interactive for performance testing. During the first release of the application, we encountered many of the growing pains and uncertainty that most projects go through. Management was very concerned about performance; for various political reasons, application performance gained a lot of visibility early in the project. To compound this problem, we deployed the application on a regular basis to several different environments for testing. Supposedly, each environment had the same configuration, but each returned significantly different performance results. We needed a way to provide rapid feedback on environmental performance.
Because traditional performance tests typically require detailed setup and a good deal of time to execute, we decided to take a different approach. We were already developing a data-driven framework for regression testing in the Rational tool. We had a significant number of smoke tests, executed several times a day, and traditional functional tests, executed on a regular basis. We decided to build a simple timer mechanism into our automation framework to gather page-load times. We then wrote that information out to a spreadsheet, recording the page being loaded and the environment in which the script was executing.
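To make the idea concrete, here is a minimal sketch of what such a timer might look like in a Java-based harness. This is not our actual implementation (that comes later in the article); the class name, the CSV output format, and the method names are illustrative assumptions.

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.time.LocalDateTime;

// Illustrative sketch only: a minimal page timer that an automation
// framework could call before and after each page navigation, appending
// the elapsed time to a CSV file that opens cleanly in a spreadsheet.
public class PageTimer {

    private final String csvPath;
    private final String environment;
    private long startMillis;

    public PageTimer(String csvPath, String environment) {
        this.csvPath = csvPath;
        this.environment = environment;
    }

    // Call immediately before triggering the page load.
    public void start() {
        startMillis = System.currentTimeMillis();
    }

    // Call once the page has finished loading; appends one CSV row:
    // timestamp, environment, page name, elapsed milliseconds.
    public void stop(String pageName) {
        long elapsed = System.currentTimeMillis() - startMillis;
        try (PrintWriter out = new PrintWriter(new FileWriter(csvPath, true))) {
            out.printf("%s,%s,%s,%d%n",
                    LocalDateTime.now(), environment, pageName, elapsed);
        } catch (IOException e) {
            // A timing failure should not fail the functional test itself.
            System.err.println("Could not record timing: " + e.getMessage());
        }
    }
}
```

In use, a test would call start() just before navigating to a page and stop("LoginPage") once the page finishes loading, producing one row per page load that rolls up neatly in a spreadsheet.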
This decidedly simple solution solved all of our problems (well, all of those problems, anyway):
- We gained something we could send to management in its favorite format, a spreadsheet, with detailed timer information for every page and every call to every external web service in the application.
- Because this information was gathered every time we ran a smoke test or any other type of automated test, we always had up-to-date information to help us debug the differences in all of the deployment environments—a task that would have been almost impossible without this rapid feedback.
- This data also served to audit our existing performance test scripts (which turned out to be working just fine). It's worth noting that this type of audit would work for most single or low-volume performance test results, but might not be appropriate for larger tests.
That's the story; let's take a look at what we actually did.