- 1.1 Overview
- 1.2 The Role of Performance Requirements in Performance Engineering
- 1.3 Examples of Issues Addressed by Performance Engineering Methods
- 1.4 Business and Process Aspects of Performance Engineering
- 1.5 Disciplines and Techniques Used in Performance Engineering
- 1.6 Performance Modeling, Measurement, and Testing
- 1.7 Roles and Activities of a Performance Engineer
- 1.8 Interactions and Dependencies between Performance Engineering and Other Activities
- 1.9 A Road Map through the Book
- 1.10 Summary
1.6 Performance Modeling, Measurement, and Testing
Performance modeling can be used to predict the performance of a system at various times during its lifecycle. It can be used to characterize capacity; to help understand the impact of proposed changes, such as changes to scheduling rules, deployment scenarios, technologies, and traffic characteristics; or to predict the effect of adding or removing workloads. Deviations from the qualitative behavior predicted by queueing models, such as slowly increasing response times or memory occupancy when the system load is constant or expected to be constant, can be regarded as indications of anomalous system behavior. Performance engineers have used their understanding of performance models to identify software flaws; software bottlenecks, especially those occurring in new technologies that may not yet be well understood [ReeserHariharan2000]; system malfunctions (including the occurrence of deadlocks); traffic surges; and security violations. This has been done by examining performance measurement data, the results of simulations, and/or queueing models [AvBonWey2005, AvTanJaCoWey2010]. Interestingly, the principles that were used to gain insights into performance in these cases were independent of the technologies used in the system under study.
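For instance, one simple way to flag the kind of anomaly described above is to test whether response times drift upward while the offered load is held constant. The sketch below, with purely illustrative data and an arbitrary threshold, fits a least-squares trend line to a series of response-time samples and warns when the slope is persistently positive, as might happen in the presence of a memory leak or a software bottleneck.

```python
# Minimal sketch (assumed data and threshold, not from the text): flag a steady
# upward drift in response times measured while the offered load is held constant.
# Under a queueing model, response times at constant load should fluctuate around
# a stable mean; a persistent positive trend suggests anomalous behavior.

def trend_slope(samples):
    """Least-squares slope of response-time samples taken at equal intervals."""
    n = len(samples)
    xs = range(n)
    mean_x = (n - 1) / 2.0
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

response_times_ms = [212, 215, 214, 219, 223, 228, 231, 237, 240, 246]  # constant load
slope = trend_slope(response_times_ms)
if slope > 1.0:  # ms per sampling interval; threshold chosen only for illustration
    print(f"Warning: response time drifting upward ({slope:.1f} ms/interval) at constant load")
```

In practice such a check would be combined with statistical tests and knowledge of the workload before declaring an anomaly.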
Performance models and statistical techniques for the design of experiments can also help us plan performance tests and interpret their results.
An understanding of rudimentary queueing models will help us determine whether the measurement instrumentation is yielding valid values of performance metrics.
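For example, the Utilization Law, U = XS, relates the utilization U of a resource to its throughput X and the average service time S per job. The following sketch, with assumed measurement values, applies it as a sanity check: if the reported utilization differs markedly from the product of the measured throughput and service time, either the instrumentation or the workload characterization is suspect.

```python
# Minimal sketch (illustrative numbers): a consistency check based on the
# Utilization Law, U = X * S, where X is the measured throughput and S is the
# average service time per job at a resource. A large discrepancy between the
# reported utilization and X * S suggests invalid measurements.

measured_throughput = 40.0        # jobs per second (assumed measurement)
measured_service_time = 0.018     # seconds of CPU per job (assumed measurement)
measured_utilization = 0.55       # reported CPU utilization (assumed measurement)

predicted_utilization = measured_throughput * measured_service_time
relative_error = abs(predicted_utilization - measured_utilization) / predicted_utilization
if relative_error > 0.10:  # tolerance chosen only for illustration
    print(f"Instrumentation check failed: U_measured={measured_utilization:.2f}, "
          f"X*S={predicted_utilization:.2f}")
```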
Pilot performance tests can be used to identify the ranges of transaction rates at which the system is likely to be lightly, moderately, or heavily loaded. Performance trends with respect to load are useful for predicting capacity and scalability. Performance tests at loads near or above the point at which any system resource saturates are of no value for predicting scalability or performance, though they can tell us whether the system is likely to crash when saturated, or whether it will recover gracefully once the load is withdrawn. An understanding of rudimentary performance models will help us design performance tests accordingly.
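As a rough illustration, the sketch below uses hypothetical service demands gathered in a pilot test to estimate the transaction rate at which the most heavily used resource saturates, and then bands the load into light, moderate, and heavy ranges for planning subsequent runs. The demands and the utilization thresholds are assumptions chosen only for illustration.

```python
# Minimal sketch (assumed demands and thresholds): estimate the transaction rate
# at which the busiest resource saturates, then band the load into light,
# moderate, and heavy ranges for planning performance test runs.

service_demands = {"cpu": 0.012, "disk": 0.020, "network": 0.004}  # sec per transaction (assumed)

bottleneck, d_max = max(service_demands.items(), key=lambda kv: kv[1])
saturation_rate = 1.0 / d_max  # transactions/sec at which the bottleneck reaches 100% utilization

print(f"Bottleneck: {bottleneck}, saturation at ~{saturation_rate:.0f} tps")
print(f"Light load:    < {0.5 * saturation_rate:.0f} tps  (<50% bottleneck utilization)")
print(f"Moderate load: {0.5 * saturation_rate:.0f}-{0.8 * saturation_rate:.0f} tps")
print(f"Heavy load:    > {0.8 * saturation_rate:.0f} tps  (approaching saturation)")
```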
Methodical planning of experiments entails identifying the factors to be varied from one test run to the next. Fractional replication methods help the performance engineer choose informative subsets of all possible combinations of parameter settings, minimizing the number of experiments that must be run to predict performance.
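To illustrate, the following sketch constructs a half-fraction (2^(3-1)) factorial design for three two-level factors, cutting the number of test runs from eight to four while still allowing main effects to be estimated. The factor names are invented for the example.

```python
# Minimal sketch (hypothetical factors): a 2^(3-1) half-fraction factorial design.
# Instead of running all 8 combinations of three two-level factors, we run the
# 4 combinations in which the third factor's level is the product of the first
# two (defining relation I = ABC), halving the number of test runs.

from itertools import product

factors = ["cache_size", "thread_pool", "db_connections"]  # hypothetical factor names

runs = []
for a, b in product((-1, +1), repeat=2):
    c = a * b  # generator C = AB
    runs.append({"cache_size": a, "thread_pool": b, "db_connections": c})

for i, run in enumerate(runs, 1):
    settings = ", ".join(f"{f}={'high' if run[f] > 0 else 'low'}" for f in factors)
    print(f"Run {i}: {settings}")
```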
Finally, the measurements obtained from performance tests can be used as input parameters to sizing tools (based on performance models) that help choose configurations able to carry the anticipated load and meet performance requirements cost-effectively.
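The sketch below shows, with assumed inputs, the flavor of calculation such a sizing tool might perform: given a per-transaction CPU demand measured in a performance test, an anticipated transaction rate, and a target maximum utilization per host, it estimates the number of hosts required.

```python
# Minimal sketch (assumed inputs): a toy sizing calculation of the kind a
# model-based sizing tool might perform. The demand, rate, and utilization
# target are illustrative values, not figures from the text.

import math

measured_cpu_demand = 0.015   # seconds of CPU per transaction, from a performance test (assumed)
anticipated_rate = 500.0      # transactions per second expected in production (assumed)
target_utilization = 0.6      # keep each host at or below 60% busy (assumed policy)

required_capacity = anticipated_rate * measured_cpu_demand        # total CPU-seconds per second
hosts_needed = math.ceil(required_capacity / target_utilization)  # hosts at the utilization target

print(f"Estimated hosts needed: {hosts_needed}")
```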