Traps and Pitfalls
Many patterns of error in performance work are so commonplace that they warrant a separate article on performance analysis traps and pitfalls. Some of the most common errors are discussed briefly here, with the moral that not everyone should aspire to be a doctor.
Statistics: Cache Hit Rate
Simple statistical findings often contradict instinct and intuition. Consider this simple question: "What is the practical difference between a 95 percent cache hit rate and a 96 percent cache hit rate?" I have asked this question in numerous meetings with customers and in front of multiple user-group audiences. Consistently, the correct answer never comes from the audiences. That is partly because it is a trick question. Intuitive answers like "one percent" and "not much" are wrong. The correct answer, at least in principle [6], is a whopping 20 percent!
Why? Mainly because of what statisticians call sample error. What one should care about most in cache behavior is the miss rate, not the hit rate. For example, in the case of database I/O, each database cache miss results in what the database will view as a physical I/O operation [7]. If all of the I/O is the result of a 5 percent miss rate when the hit rate is 95 percent, then lowering the miss rate from 5 percent to 4 percent lowers the demand for I/O (at least for the reads) by 20 percent.
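To make the arithmetic concrete, here is a minimal sketch in Python (the function name and numbers are illustrative, not from any tool):

```python
# Illustrative sketch: downstream work is proportional to the miss
# rate, not the hit rate. Names and numbers are hypothetical.
def miss_reduction(hit_before, hit_after):
    """Relative drop in misses (physical reads) after a hit-rate gain."""
    miss_before = 1.0 - hit_before
    miss_after = 1.0 - hit_after
    return (miss_before - miss_after) / miss_before

# 95% -> 96% hit rate: the miss rate falls from 5% to 4%,
# cutting demand for physical reads by roughly 20 percent.
print(miss_reduction(0.95, 0.96))  # ~0.20
```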
High cache hit rates might be the result of extensive unnecessary references involving the cache under consideration [8]. The best I/O, as they say, is one you never need to do.
Statistics: Averages and Percentages
"Did you hear the one about the statistician who drowned in a pool that was one inch deep, on average?"
Averages, in general, have limited analytical utility. Effects such as queuing for resources tend to be masked in proportion to the time interval over which the observations are averaged. For example, if ten operations started at the same moment and all completed within one second, one would report this as "ten per second." However, the same experiment would be reported as "one per second" if measured over a 10-second interval. Furthermore, if the ten operations had to queue for a resource, each operation's latency would include its time spent waiting in the queue; the same operations would each see lower latency if no queuing were required.
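The following small sketch (hypothetical Python, with made-up counts) shows how the same burst of work reports as either "ten per second" or "one per second" depending on the averaging interval:

```python
# Hypothetical per-second completion counts: ten operations finish in
# the first second of a ten-second measurement window.
completions = [10, 0, 0, 0, 0, 0, 0, 0, 0, 0]

peak_rate = max(completions)                        # 10 per second
average_rate = sum(completions) / len(completions)  # 1 per second

print(f"peak: {peak_rate}/s, 10-second average: {average_rate}/s")
# The averaged figure hides the burst -- and the queuing it caused.
```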
Even "100 percent busy" is not necessarily a useful metric. One can be 100 percent busy doing something useful, or 100 percent busy doing something of low strategic priority. Doctors can be 100 percent busy on the golf course. You can be 100 percent busy being efficient, or 100 percent busy being inefficient. Just as setting priorities and working efficiently are key to maximizing one's personal utility, they are also key concepts in managing computer resource utilization.
Statistics: "Type III" Error
Apart from the common notions of sample error, mathematical error, or logical error, statisticians commonly speak of two types of errors: Type I and Type II. A Type I error involves erroneously rejecting the correct answer, and a Type II error involves erroneously accepting the incorrect answer. In a 1957 article titled "Errors of the Third Kind in Statistical Consulting," Kimball introduced a third type of error (Type III) to describe the more common occurrence of producing the correct answer to the wrong question!
As discussed previously, the primary metrics of system performance are expressed in terms of business requirements. Solving any perceived "problem" that does not correspond to a business requirement usually results in wasted effort. In the abundance of statistics and raw data that flow from complex systems, there are many opportunities to stray from the path of useful analysis.
For example, some tools report a statistic called wait I/O (%wio), which is a peculiar attempt to characterize some of a system's idle CPU time as being attributable to delays from disk I/O. Apart from the obvious flaw of not including network I/O delays in the calculation, the engineering units of "average percent of a CPU" make very little sense. The method used to calculate this statistic has varied between Solaris OS releases, but none of the methods is backed by any concrete body of science. There is active discussion in the Sun engineering community that contemplates removing the wait I/O statistic entirely from the Solaris OS. The utility of this statistic is extremely limited. Mainly, though, analysts must know that %wio is idle time: whenever %wio is reported, actual idle CPU time must be calculated as either (%wio + %idle) or (100 - %usr - %sys).
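As a rough sketch of that bookkeeping (in Python; the field names are illustrative stand-ins, not any actual tool's output format):

```python
# Sketch: treat %wio as the idle time it really is. The field names
# are hypothetical stand-ins for columns from tools such as sar(1).
def actual_idle(usr, sys, wio, idle):
    by_addition = wio + idle          # (%wio + %idle)
    by_subtraction = 100 - usr - sys  # (100 - %usr - %sys)
    assert by_addition == by_subtraction  # both forms must agree
    return by_addition

print(actual_idle(usr=20, sys=10, wio=40, idle=30))  # 70, not 30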
Another not-so-useful statistic is the load average reported by the w(1) and uptime(1) commands. This metric was conceived long ago as a simple "vital sign" to indicate whether available CPU capacity was meeting demand. It is calculated as a moving average of the run queue depth (the number of threads waiting for a CPU) plus the number of currently running threads. Apart from the previously mentioned problems with averages in general (compounded here by the average being a moving one), this statistic has the disturbing characteristic that different favorable system tuning measures can drive it either up or down. For example:
- Improving disk I/O should cause more threads to be compute-ready more often, thus increasing this metric while likely advancing business throughput.
- Increasing contention for some resource can cause this metric to rise, but result in a net reduction in business throughput.
This is not to say that wait I/O and load average are not useful indicators of changes in system load, but they are certainly not metrics that should be specifically targeted as tuning goals!
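To see how tuning can push the load average in either direction, consider this rough sketch of a load-average-style metric (the decay constant is arbitrary, not the actual Solaris algorithm):

```python
# Illustrative load-average-style metric: an exponentially decaying
# moving average of (running + runnable) thread counts per sample.
# The decay constant is arbitrary here, not the one the OS uses.
def load_average(ready_counts, decay=0.9):
    load = 0.0
    for ready in ready_counts:      # run-queue depth + running threads
        load = decay * load + (1.0 - decay) * ready
    return load

# Faster disk I/O can make more threads compute-ready per sample,
# pushing this "vital sign" up even as business throughput improves.
print(round(load_average([2, 2, 2, 2]), 2))  # before an I/O fix
print(round(load_average([2, 4, 6, 6]), 2))  # after: higher, not worse
```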
Public Health
Public health is not so much a matter of medicine as it is of statistics. Doctors tend to know "what's going around," and they combine this knowledge with clinical signs to make accurate diagnoses. A common error of otherwise accomplished analysts lies in attempting to diagnose issues from low-level system statistics before thoroughly reviewing configuration parameters and researching "what's going around." Often, problems can be easily spotted as deviations from best practices or absence of a program patch. Identification of public health issues requires a view beyond what a typical end-user or consultant can directly attain.
Logic and Causality
It is easy to suspect that a bad outcome might be linked to the events immediately preceding it, but jumping to that conclusion commits the common logical fallacy post hoc, ergo propter hoc, Latin for "after that, therefore because of that." This error is perhaps the most frequently committed of all. Scientists often express the caution as "correlation does not imply causation."
Complaints such as "We upgraded to the Solaris 8 OS, and now we have a problem" almost invariably have nothing to do with the Solaris 8 OS itself. System upgrades often involve upgrades to other components, such as third-party storage hardware and software, and sometimes application software upgrades as well. In addition, upgrades occasionally involve application-migration process errors.
Accurate and timely diagnoses sometimes require full disclosure of historical factors (the patient file) and are usually accelerated by keeping focused on hypotheses that exhibit plausible chains of causality.
Experiment Design
A common troubleshooting technique is to vary one parameter with all other things held equal, then to reverse changes that produce no gain. Imagine applying this strategy to maximizing flow through a hose with three kinks in it!
Experiment design is a major topic among scientists and statisticians. Good experiments test pertinent hypotheses that are formed from a reasonable understanding of how the system works and an awareness of factors that might not be controllable. This can be as much art as science. Done badly, experiments can waste huge amounts of time solving the wrong problem, and can even exacerbate the real problem.
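The kinked hose makes the point concrete. In this toy model (hypothetical Python), varying one factor at a time and reverting on "no gain" never improves flow, even though relaxing all three constraints together would:

```python
# Toy model of the kinked hose: flow is limited by the tightest kink.
def flow(kinks):
    return min(kinks)  # classic bottleneck behavior

kinks = [1, 1, 1]       # three equally tight kinks
baseline = flow(kinks)  # 1 unit of flow

# One-factor-at-a-time: open each kink alone, revert on "no gain".
for i in range(len(kinks)):
    trial = list(kinks)
    trial[i] = 10                 # fully open this one kink
    if flow(trial) <= baseline:   # still 1 -- another kink limits flow
        trial[i] = kinks[i]       # change reversed, nothing learned

print(flow([10, 10, 10]))  # opening all three together: flow jumps to 10
```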