- Overview: What's a Mix?
- Real-World Low-Level Technology Stack Test Input and Mixes
- Testing and Tuning for Daily System Loads
- Testing and Tuning for Business Peaks
- Identifying Key Transactions and Business Processes
- Real-World Access Method Limitations
- Best Practices for Assembling Test Packages
- SAP Component and Other Cross-Application Test Mix Challenges
- Tools and Approaches
9.7 Best Practices for Assembling Test Packages
With regard to breaking down and assembling good test packages, I've already mentioned a number of key approaches I often use. Four of these are worth more discussion, however, and are covered next.
9.7.1 User-Based Test Packages
A simple method of controlling the amount of "noise" or other functionally derived activity you introduce in a stress-test run is to directly control the number of users tossed into the mix. With this approach, you're not concerned about the load placed on the system per se; you're more concerned with the number of users hosted, and whether each user is executing consistent and repeatable work. I've mentioned elsewhere that I like to create packages of 100 users. One hundred is a nice number simply because it's a round number; its size is significant enough to allow me to build up a test run into a "large stress test" very quickly and in a highly controlled manner.
Load becomes an issue only if a group of 60 or 100 users (or whatever number you choose) overtaxes a system, invalidating the stress-test run. An obvious example might involve executing custom Z reports: if you toss 100 of these into a system configured for only half as many batch processes, you'll simply create a queue of work that monopolizes all batch work processes and overwhelms most database servers. A better number in this case might be 10 instead. Experiment to find the best number for you.
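To make the idea concrete, here is a minimal sketch, in Python, of how a target user count might be broken into fixed-size packages for controlled ramp-up. The UserPackage structure, the build_ramp_plan function, and the script names are hypothetical illustrations, not part of any particular test tool.

```python
# Illustrative sketch only -- the package structure, sizes, and script names
# are hypothetical and not tied to any specific SAP test tool.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class UserPackage:
    name: str
    users: int          # virtual users in this package
    script: str         # script each user executes


def build_ramp_plan(package_size: int, target_users: int, script: str) -> list[UserPackage]:
    """Break a target user count into equally sized packages so a run can be
    built up (or scaled back) one well-understood unit at a time."""
    plan = []
    remaining = target_users
    step = 1
    while remaining > 0:
        users = min(package_size, remaining)
        plan.append(UserPackage(name=f"pkg-{step:02d}", users=users, script=script))
        remaining -= users
        step += 1
    return plan


if __name__ == "__main__":
    # 100-user packages, as discussed above; smaller packages (e.g., 10) suit
    # batch-heavy workloads that would otherwise flood the batch work processes.
    for pkg in build_ramp_plan(package_size=100, target_users=450, script="SD_order_entry"):
        print(pkg)
```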
9.7.2 Functionally Focused Test Packages
Although controlling the pure number of users participating in a stress test makes sense, taking this to the next level and controlling a group of 100 SAP R/3 SD users, for instance, or 10 BW custom InfoCube reporting users, makes even more sense. Of course, you'll need to be careful to ensure that the mix of users (or batch processes, or reports) adheres to the mix you need to emulate in support of your test's specific success criteria. And your test tool needs to support both the high-water number of users you wish to simulate and the ability to create and control multiple packages.
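The following sketch illustrates one way to sanity-check a set of functionally focused packages against a target user mix. The functional areas, percentages, and package definitions are invented for illustration; substitute the mix dictated by your own success criteria.

```python
# Hypothetical package definitions and target mix -- adjust to your own
# success criteria; nothing here is tied to a particular test tool.
TARGET_MIX = {"SD": 0.50, "MM": 0.30, "FI": 0.15, "BW": 0.05}   # desired share of users

PACKAGES = [
    {"area": "SD", "users": 100, "script": "SD_order_to_cash"},
    {"area": "SD", "users": 100, "script": "SD_order_to_cash"},
    {"area": "MM", "users": 100, "script": "MM_goods_movement"},
    {"area": "FI", "users": 60,  "script": "FI_postings"},
    {"area": "BW", "users": 10,  "script": "BW_infocube_report"},
]


def check_mix(packages, target, tolerance=0.05):
    """Compare the planned per-area user share against the target mix."""
    total = sum(p["users"] for p in packages)
    for area, share in target.items():
        planned = sum(p["users"] for p in packages if p["area"] == area) / total
        flag = "OK" if abs(planned - share) <= tolerance else "OFF TARGET"
        print(f"{area}: planned {planned:.0%} vs. target {share:.0%}  [{flag}]")


check_mix(PACKAGES, TARGET_MIX)
```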
9.7.3 Another Approach: End-to-End Business Processes
Building on the previous approach, this next approach is both intuitive and in many cases simply necessary. Because a business process by definition feeds off the previous transaction's output data, goes through a processing phase, and then hands off newly created or processed data to the next transaction in line, the idea of bundling these transactions into a single package seems logical. Beyond this, though, it saves time and effort in scripting, too, because a common set of fewer variables can be leveraged. And a straightforward input-output approach to scripting makes even cross-component business processes more easily controlled than is otherwise possible. Finally, the granular control made possible through this method makes it easy to quickly ramp up the user count of a stress-test run while simultaneously ramping up the number of complete business processes to be executed.
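A rough sketch of this input-output chaining pattern follows. The three step functions merely simulate the hand-off of document numbers between transactions in an order-to-cash chain (for example, VA01, VL01N, and VF01); in a real test they would be replaced by your tool's scripted transaction steps.

```python
# Input-output chaining sketch. The three steps stand in for scripted SAP
# transactions; here they only simulate the hand-off of document numbers so
# the pattern itself is visible.
import itertools

_order_ids = itertools.count(4500000000)
_delivery_ids = itertools.count(8000000000)
_invoice_ids = itertools.count(9000000000)


def create_sales_order(customer: str, material: str) -> dict:
    return {"order": next(_order_ids), "customer": customer, "material": material}


def create_delivery(order_doc: dict) -> dict:
    # Consumes the sales order document produced by the previous step.
    return {"delivery": next(_delivery_ids), "order": order_doc["order"]}


def create_billing_doc(delivery_doc: dict) -> dict:
    return {"invoice": next(_invoice_ids), "delivery": delivery_doc["delivery"]}


def run_business_process(customer: str, material: str) -> dict:
    """One complete end-to-end iteration: each step feeds the next."""
    order = create_sales_order(customer, material)
    delivery = create_delivery(order)
    invoice = create_billing_doc(delivery)
    return invoice


# Ramping complete business processes is then as simple as repeating the chain.
for _ in range(3):
    print(run_business_process(customer="1000", material="M-01"))
```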
9.7.4 Tips and Tricks: Making Noise with Noise Scripts
One of my favorite approaches to SAP scripting and stress testing involves the creation and deployment of noise scripts. As I said earlier, noise scripts capture and help represent the background processing or "noise" common in all SAP production systems. I typically create a variety of noise packages, some focused on general functional areas (e.g., MM or FI, where many lightweight transactions are common), whereas others might be focused on SAP Basis activities (to represent the load that monitoring places on a system), specific batch or report jobs, and so on. The key is to create a consistent baseline of user- or batch-driven noise behind the scenes, and then quantify the per-package load to establish a tier of potential baselines, as depicted in Figure 9-3. Does this ancillary load represent 10% of the typical production workload? Or 20%? What is the impact of the load on your test hardware (that is, the HW hit)? Consider what many of my colleagues and I deem to be best practices:
- Baseline just your noise scripts, to ensure they do the job you envisioned for them. Baseline not only SAP application-layer performance but lower levels as well. Eventually, I recommend that you settle on a number of online users, batch processes, and so on that creates an easy-to-measure load on the system, like 10% CPU utilization or disk queue lengths of three per disk partition or drive letter (see the first sketch following this list).
- Keep the target baseline utilization numbers small, so that it's easy to add incremental, measurable load to a stress-test run: simply throw another package into the mix, for example, to add another 10% load on the CPU, or perhaps another 40 users or three concurrent processes (whatever measurement you judge most valuable).
- Ensure that your noise scripts are pseudorandom. As I mentioned earlier, they need to be repetitive enough to maintain a consistent load on the CPU, while random enough to encourage physical disk accesses. In other words, you don't want to create a noise script, or any script for that matter, that executes at different speeds every time it runs or processes significantly different data between test runs. Make it repeatable!
- Ensure that you track the number of iterations executed, along with the specific number of discrete noise transactions executed within your noise script or scripts. This is useful after the fact, when you're seeking to understand and analyze a stress-test run. I suggest leveraging a counter of sorts within the body of your scripts (e.g., publishing the counter's value to your output file), or simply dumping the script's output into your output file to be counted manually after the run (see the second sketch following this list).
- Finally, if sound test management dictates that you group your noise scripts together, you'll logically want to go to the trouble of creating one or more noise packages that either complement the core load being tested (you might create a noise script that effectively mirrors many of the transactions representing core activities) or act as a "gap filler" that rounds out a test load (e.g., adding batch noise to a primarily online-user-based stress test).
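As a rough illustration of baselining a noise package, the first sketch below samples average CPU utilization over a fixed window so the per-package hardware hit can be quantified. It assumes the psutil library is available and that the script runs on the server whose utilization you actually care about; the target threshold is only an example.

```python
# Baseline sketch: sample CPU utilization while a noise package is running so
# the per-package "HW hit" can be quantified. Assumes the psutil library is
# installed and that this runs on the server being measured; the threshold
# below is only an example.
import statistics
import psutil


def sample_cpu(samples: int = 30, interval_s: float = 2.0) -> float:
    """Average whole-system CPU utilization over samples * interval_s seconds."""
    readings = [psutil.cpu_percent(interval=interval_s) for _ in range(samples)]
    return statistics.mean(readings)


if __name__ == "__main__":
    avg = sample_cpu()   # roughly a one-minute sampling window by default
    print(f"Average CPU over sampling window: {avg:.1f}%")
    if avg > 12.0:       # example: noise package should land near 10% CPU
        print("Noise package is heavier than planned -- consider a smaller package.")
```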
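And here is a skeletal noise script illustrating both the pseudorandom-yet-repeatable behavior and the iteration counter discussed above. The fixed seed, the customer pool, the T-code, and the execute_noise_transaction placeholder are all assumptions standing in for whatever your scripting tool actually drives.

```python
# Noise-script skeleton: a fixed seed keeps the data selection repeatable
# between runs, while an iteration counter is written to the output file for
# after-the-fact analysis. execute_noise_transaction() is a stand-in for
# whatever call your scripting tool uses to drive the actual SAP transaction.
import random
import time

SEED = 42                      # same seed every run -> same "random" sequence
CUSTOMER_POOL = [f"{n:06d}" for n in range(100000, 100500)]


def execute_noise_transaction(tcode: str, customer: str) -> None:
    # Placeholder for the real scripted call (e.g., a lightweight FI display).
    time.sleep(0.01)


def run_noise_script(iterations: int, output_path: str = "noise_run.log") -> None:
    rng = random.Random(SEED)
    counter = 0
    with open(output_path, "w") as log:
        for _ in range(iterations):
            customer = rng.choice(CUSTOMER_POOL)   # varied enough to hit the disks
            execute_noise_transaction("FD33", customer)
            counter += 1
            log.write(f"iteration={counter} tcode=FD33 customer={customer}\n")
    print(f"Executed {counter} noise transactions; details in {output_path}")


run_noise_script(iterations=50)
```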
Figure 9-3 Noise scripts provide the fundamental nonprimary transaction load underlying all productive SAP systems.
One of the simplest methods of generating noise within a test run is to execute every core T-code twice: not the entire transaction, just the T-code associated with the first transaction in a business process. This kind of incremental and predictable load on the CPU is ideal when it comes time to measure overall performance, because the second execution is always served from cache. In this way, it never disturbs the system's buffer contents, and it is easily scripted or added at the last minute in an iterative fashion if you need to bump up the CPU hit on a particular stress-test run.
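A minimal sketch of this trick appears below. The business process is modeled simply as an ordered list of T-codes, and the invoke_tcode and run_transaction calls are placeholders for your scripting tool's replay steps; only the pattern of the doubled first T-code is the point.

```python
# Sketch of the "execute the first T-code twice" noise trick described above.
# Each run_transaction() call stands in for a fully scripted SAP transaction;
# the extra invoke_tcode() call only brings up the transaction's initial
# screen, so the repeated call is served from buffer and adds a small,
# predictable CPU load without disturbing buffer contents.
def invoke_tcode(tcode: str) -> None:
    print(f"invoking /n{tcode} (initial screen only)")   # stand-in for the scripted call


def run_transaction(tcode: str) -> None:
    print(f"running {tcode} end to end")                 # stand-in for the full scripted step


def run_with_tcode_noise(business_process: list) -> None:
    first = business_process[0]
    invoke_tcode(first)          # 1st call: may load program/screen buffers
    invoke_tcode(first)          # 2nd call: served from cache -- pure, predictable noise
    for tcode in business_process:
        run_transaction(tcode)   # the core business process itself is unchanged


# Example order-to-cash chain (VA01 -> VL01N -> VF01)
run_with_tcode_noise(["VA01", "VL01N", "VF01"])
```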