5.5 Workload Characterization
Workload characterization is the process of precisely describing the system's global workload in terms of its main components. Each workload component is further decomposed into basic components, as indicated in Fig. 5.4, which also shows specific examples of workload components and basic components. The basic components are then characterized by workload intensity parameters (e.g., transaction arrival rates) and by service demand parameters at each resource.
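To make this concrete, a basic component can be represented as a record holding its workload intensity parameters and one service demand per resource. The minimal Python sketch below shows one possible representation; the class name, field names, and numeric values are hypothetical and are not part of the methodology itself.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class BasicComponent:
    """One basic component of the workload model (illustrative structure only)."""
    name: str                          # e.g., "trivial query"
    arrival_rate: float                # workload intensity, in requests/sec
    service_demands: Dict[str, float] = field(default_factory=dict)  # resource -> sec per request

# Hypothetical example of a characterized basic component
trivial_query = BasicComponent(
    name="trivial query",
    arrival_rate=5.6,                                              # assumed value
    service_demands={"server CPU": 0.019, "server disk": 0.0045},  # assumed values
)
```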
The parameters for a basic component are seldom obtained directly from measurements. In most cases, they must be derived from other parameters that are measured directly. Table 5.2 shows an example of three basic components, along with examples of parameters that can be measured for each. The last column indicates the type of each parameter: workload intensity (WI) or service demand (SD). Values must be obtained or estimated for these parameters, preferably through measurements with performance monitors and accounting systems. Measurements must be made during peak workload periods and over an appropriate monitoring interval (e.g., 1 hour). For example, consider the "Mail Processing" basic component. Data would be collected for all messages sent during a 1-hour monitoring interval. Assume that 500 messages were sent during this interval. Measurements are obtained for the message size, mail server CPU time, and server I/O time for each of the 500 messages. The average arrival rate of send-mail requests equals the number of messages sent (500) divided by the measurement interval (3,600 sec), i.e., 500/3,600 ≈ 0.14 messages sent per second. Similar measurements must be obtained for all basic components.
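The arithmetic just described can be packaged as a small routine that turns raw per-request measurements into basic component parameters. The sketch below is a hedged illustration in Python; the function name and the per-message figures are invented, and only the 500 messages over 3,600 seconds reproduce numbers from the text.

```python
def characterize_basic_component(cpu_times_sec, io_times_sec, interval_sec):
    """Derive workload intensity and average service demands from per-request measurements.

    cpu_times_sec / io_times_sec hold one measured value per request observed in
    the monitoring interval; interval_sec is the length of that interval.
    """
    n = len(cpu_times_sec)
    arrival_rate = n / interval_sec              # workload intensity (requests/sec)
    avg_cpu_demand = sum(cpu_times_sec) / n      # service demand at the server CPU
    avg_io_demand = sum(io_times_sec) / n        # service demand at the server disk
    return arrival_rate, avg_cpu_demand, avg_io_demand

# Mail Processing example: 500 messages measured over a 1-hour (3,600 sec) interval.
# The per-message times below are placeholders standing in for real monitor output.
cpu_times = [0.020] * 500      # sec of mail server CPU per message (assumed)
io_times = [0.035] * 500       # sec of server I/O per message (assumed)
rate, cpu_d, io_d = characterize_basic_component(cpu_times, io_times, 3_600)
print(f"{rate:.2f} msg/sec, CPU demand {cpu_d * 1000:.0f} msec, I/O demand {io_d * 1000:.0f} msec")
# -> 0.14 msg/sec, matching the 500/3,600 calculation above
```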
Figure 5.4. Workload characterization process.
5.5.1 Breaking Down the Global Workload
When workload intensity is high, large collections of workload measures can be obtained. Dealing with such collections is seldom practical, especially if the workload characterization results are to be used for performance prediction through analytic models [8]. One should replace the collection of measured values for all basic components with a more compact representation, one per basic component. This representation is called a workload model; it is the end product of the workload characterization process.
Consider a C/S-based application that provides access to the corporate database, and assume that data collected during a peak period of 1 hour provides the CPU time and number of I/Os for each of the 20,000 transactions executed in that period. Some transactions are fairly simple and use very little CPU and I/O, whereas other, more complex ones may require more CPU and substantially more I/O. Figure 5.5 shows a graph depicting points of the type (number of I/Os, CPU time) for all transactions executed in the measurement interval. The picture shows three natural groupings of the points in the two-dimensional space shown in the graph. Each group is called a cluster and has a centroid (the larger circles in the figure), defined as the point whose coordinates are the averages of the coordinates of all points in the cluster. A point belongs to the cluster whose centroid is closest to it; in other words, the distance between a point and the centroid of its cluster is the smallest of its distances to all the centroids. The coordinates of the centroids of clusters 1, 2, and 3 are (4.5, 19), (38, 171), and (39, 22), respectively. A more compact representation of the resource consumption of the 20,000 transactions is given by the coordinates of centroids 1, 2, and 3. For instance, transactions of class 1 perform, on average, 4.5 I/Os and spend 19 msec of CPU time during their execution.
Figure 5.5. Space for workload characterization (no. of I/Os, CPU time).
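To illustrate the centroid and distance notions numerically, the short sketch below uses the three centroid coordinates quoted above and assigns a new measurement point to the cluster with the nearest centroid. The Euclidean distance metric and the sample point are assumptions made for illustration; in practice the coordinates are often normalized first, since I/O counts and CPU times are on different scales.

```python
import math

# Centroids (number of I/Os, CPU time in msec) of the three clusters described above
centroids = {1: (4.5, 19.0), 2: (38.0, 171.0), 3: (39.0, 22.0)}

def nearest_cluster(point, centroids):
    """Return the cluster whose centroid is closest to the point (Euclidean distance)."""
    return min(centroids,
               key=lambda c: math.hypot(point[0] - centroids[c][0],
                                        point[1] - centroids[c][1]))

# A hypothetical transaction that performed 35 I/Os and used 30 msec of CPU
print(nearest_cluster((35, 30), centroids))   # -> 3; the closest centroid is (39, 22)
```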
The graph in Fig. 5.5 also shows the point whose coordinates, (28, 68), are the average number of I/Os and the average CPU time over all points. It is clear that if we were to represent all the points by this single point (the single-cluster case), we would obtain a much less meaningful representation of the global workload than the one provided by the three clusters. Thus, the number of clusters chosen to represent the workload impacts the accuracy of the workload model.
Clustering algorithms can be used to compute an optimal number of basic components of a workload model, and the parameter values that represent each component. A discussion of clustering algorithms and their use in workload characterization is presented in Chap. 6.
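As a preview of that discussion, the sketch below applies a standard k-means algorithm (via scikit-learn) to (number of I/Os, CPU time) pairs. The choice of k-means, the use of three clusters, and the synthetic data are all assumptions made for illustration; Chap. 6 covers how to choose the algorithm and the number of clusters.

```python
import numpy as np
from sklearn.cluster import KMeans   # any standard clustering library would do

# Each row is one transaction: (number of I/Os, CPU time in msec). The values are
# synthetic, generated around the three centroids discussed in the text.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=(4.5, 19.0), scale=(2.0, 5.0), size=(100, 2)),    # simple transactions
    rng.normal(loc=(38.0, 171.0), scale=(5.0, 20.0), size=(100, 2)), # CPU- and I/O-heavy
    rng.normal(loc=(39.0, 22.0), scale=(5.0, 5.0), size=(100, 2)),   # I/O-heavy, little CPU
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
for c, (ios, cpu) in enumerate(kmeans.cluster_centers_, start=1):
    print(f"class {c}: {ios:.1f} I/Os and {cpu:.0f} msec of CPU on average")
```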
5.5.2 Data Collection Issues
In ideal situations, performance monitors and accounting systems are used to determine the parameter values for each basic component. In reality, the tool base required for integrated network and server data collection may not be available to the system administrators, or they may not have enough time to deploy and use a complete suite of monitoring tools. This is a chronic problem for many organizations. The problem is compounded by the fact that most monitoring tools provide aggregate measures at the resource level (e.g., total number of packets transmitted on a LAN segment or total server CPU utilization). These measurements must be apportioned to the basic workload components. Benchmarks and rules of thumb (ROTs) may be needed to apportion aggregate measures to basic components in lieu of real measurements. Figure 5.6 illustrates the range of data collection alternatives available to a capacity manager.
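One common ROT-style apportionment simply splits an aggregate measurement among the basic components in proportion to assumed weights, for example relative transaction counts or benchmark-derived per-transaction costs. The fragment below sketches this idea; the 65% total CPU utilization and the weights are invented numbers.

```python
def apportion(aggregate_value, weights):
    """Split an aggregate measurement among basic components in proportion to weights."""
    total = sum(weights.values())
    return {component: aggregate_value * w / total for component, w in weights.items()}

# Total server CPU utilization measured during the peak hour (assumed): 65%.
# Weights are rough per-class shares obtained from benchmarks or rules of thumb.
cpu_by_class = apportion(0.65, {"trivial query": 2, "complex query": 5, "report": 3})
print(cpu_by_class)   # -> roughly 0.13, 0.325, and 0.195, respectively
```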
In many cases, it is possible to detect a fairly limited number of applications that account for significant portions of resource usage. Workload measurements can be made for these applications in a controlled environment such as the one depicted in Fig. 5.7. These measurements must be made separately at the client and at the server for scripts representing typical users using each of the applications mentioned above. These measurements are aimed at obtaining service demands at the CPU and storage devices at the client and server as well as number of packets sent and received by the server and packet size distributions. The results thus obtained for a specific type of client and server must be translated to other types of clients and/or servers. For this purpose, we can use specific industry standard benchmarks, such as SPEC ratings, to scale resource usage figures up or down.
Example 5.1
Assume that the service demand at the server for a given application was 10 msec, obtained in a controlled environment with a server with a SPECint rating of 3.11. To find out what this service demand would be if the server used in the actual system were faster and had a SPECint rating of 10.4, we scale down the 10 msec measurement by dividing it by the ratio of the faster server's rating to the slower one's. Thus, the service demand at the faster server would be 10/(10.4/3.11) ≈ 3.0 msec. Of course, the choice of which benchmark to use to scale measurements taken in controlled environments down or up depends on the type of application. If the application in question is a scientific application that does mostly number-crunching on floating-point numbers, one should use SPECfp rather than SPECint ratings.
Figure 5.6. Data collection alternatives for workload characterization.
Figure 5.7. Controlled environment for workload component benchmarking.
In general, the actual service demand, ActualServiceDemand, is obtained from the demand measured in the controlled environment, MeasuredServiceDemand, by multiplying it by the ratio of the throughput rating (such as a SPEC rating) of the resource used in the controlled environment to that of the resource used in the actual environment:
ActualServiceDemand = MeasuredServiceDemand × (ControlledResourceThroughput / ActualResourceThroughput)
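In code form, this scaling rule is a one-line function. The sketch below reproduces Example 5.1 with it; the SPECint numbers come from that example, and the function name is purely illustrative.

```python
def scale_service_demand(measured_demand, controlled_throughput, actual_throughput):
    """Translate a service demand measured in the controlled environment to the actual one.

    The demand shrinks when the actual resource has a higher throughput rating
    (it is faster) and grows when the rating is lower.
    """
    return measured_demand * controlled_throughput / actual_throughput

# Example 5.1: 10 msec measured on a SPECint 3.11 server; actual server rated SPECint 10.4
print(scale_service_demand(10.0, 3.11, 10.4))   # -> about 3.0 msec
```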
Chapter 12 discusses in greater detail issues involved in data collection for client/server systems.
5.5.3 Validating Workload Models
In building any model, abstractions of the reality being modeled are made for simplicity, ease of data collection and use, and the computational efficiency of the modeling process. The abstractions compromise the accuracy of the model, so the model must be validated within an acceptable margin of error, a process called model validation. If a model is deemed invalid, it must be calibrated to render it valid. This is called model calibration.
Validating a workload model entails running a synthetic workload derived from the workload model and comparing the performance measures thus obtained with those obtained by running the actual workload. If the results match within a 10% to 30% margin of error, the workload model is considered valid. Otherwise, the model must be refined to represent the actual workload more accurately. This process is depicted in Fig. 5.8.
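The acceptance test at the end of this loop can be expressed as a small comparison routine: each performance measure obtained with the synthetic workload is checked against the corresponding measure from the actual workload, and the model is accepted when every relative error falls within the chosen tolerance. The sketch below assumes a 30% tolerance (the loose end of the range quoted above) and uses invented measurements.

```python
def workload_model_is_valid(actual, synthetic, tolerance=0.30):
    """Compare measures (e.g., response times, utilizations) from the actual and the
    synthetic workloads; the model is valid if every relative error is within tolerance."""
    errors = {m: abs(synthetic[m] - actual[m]) / actual[m] for m in actual}
    return all(e <= tolerance for e in errors.values()), errors

# Hypothetical measurements gathered during the validation runs
actual_measures = {"response time (sec)": 1.8, "server CPU utilization": 0.62}
synthetic_measures = {"response time (sec)": 2.1, "server CPU utilization": 0.55}
valid, errs = workload_model_is_valid(actual_measures, synthetic_measures)
print(valid, errs)   # -> True; relative errors of roughly 17% and 11%
```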