- Introduction
- Performance and Disease
- Business Requirements
- Medical Analogues
- Lab Tests and Record Keeping
- Traps and Pitfalls
- Where Does the Time Go?
- Diagnostic Strategies
- Selected Tools and Techniques
- References
- Third-Party URLs
- Acknowledgments
- About the Author
- Ordering Sun Documents
- Accessing Sun Documentation Online
Diagnostic Strategies
Rapid and accurate diagnoses can be critical to controlling costs. Given the high stakes common in commercial computing, the importance of diagnostic strategy deserves some special attention. Rapid problem resolution can prevent lost revenues arising from inadequate system performance. Accurate analyses can prevent the waste of time and money that can result from undisciplined approaches. For example, system upgrades undertaken without proper analysis, often called "throwing iron at the problem," can be very disappointing when the extra iron fails to solve the actual performance bottleneck.
As in medicine, the shortest path to an accurate diagnosis lies in the timely consideration and exclusion of both the grave and the mundane, and in timely referrals to appropriate specialists. Following a systematic high-level diagnostic strategy can help speed this process. The following questions represent one possible process-oriented high-level strategy for performance diagnosis:
- Is the system correctly configured?
- Have appropriate best practices been applied?
- Has appropriate research into candidate bugs been performed, and have the appropriate patches and updates been applied?
- Is the performance complaint based on reasonable expectations and valid experiment design, backed by good science?
- If a resource is saturated, can more resources be added, or are there options to decrease demand?
- What resources are under contention?
- Where is the time going?
- Why is the time going wherever it is going?
Each of these questions can be augmented by asking if the services of a specialist are needed. A detailed diagnostic outline could fill volumes, but the best diagnosticians do not have, want, or need a highly detailed or rigid process. They merely combine their knowledge, experience, and skills to find problems and fix them. If they determine that the problem is outside their expertise, they make an appropriate referral.
Any strategy that succeeds will be celebrated at the time of victory, whether or not the strategy was philosophically defensible. There are innumerable cases in which trial-and-error strategies will work out well, especially when executed by highly experienced and knowledgeable practitioners. Well-informed hunches and lucky guesses that pan out are undeniably strategically efficient, but as strategies, they are not characteristically predictable.
Much of what is written on system tuning is arranged by subsystem (for example, CPU, disk I/O, and messaging), as in the eminently practical and proven approach described in Chapter 21 of Configuring and Tuning Databases on the Solaris Platform by Allan Packer. Packer's strategy is exemplary of a classical functionally-oriented approach.
At a functional level, tuning and troubleshooting techniques share a great deal in terms of tools and knowledge base. While system tuning is characteristically a trial-and-error process, troubleshooting techniques should ideally converge on root causes with maximum determinism and minimal experimentation. The success of system tuning is measured in terms of empirical results, whereas drill-down troubleshooting might lead to the discovery of a previously undiagnosed issue for which no empirical payoff exists until the issue is repaired.
A simple three-step functionally-oriented strategy that is equally applicable to tuning or troubleshooting is to ask the following questions (a short sketch after the list illustrates the first of them):
- Is the system working or waiting for something?
- If it is working, is it working intelligently and efficiently?
- If it is waiting, can the thing it is waiting for be made faster, or can you decrease dependence on that thing?
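On the Solaris OS, that first question can often be answered directly from per-process microstate accounting. The following minimal sketch, offered purely as an illustration, reads /proc/<pid>/usage and reports how much of a process's accumulated time has been spent working on a CPU versus sleeping or waiting; the field names come from the prusage_t structure in <procfs.h>.

    /*
     * working_or_waiting.c -- is the process working or waiting?
     * Reads Solaris microstate accounting from /proc/<pid>/usage.
     * Compile: cc -o working_or_waiting working_or_waiting.c
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <procfs.h>

    static double
    secs(timestruc_t t)
    {
        return (t.tv_sec + t.tv_nsec / 1e9);
    }

    int
    main(int argc, char **argv)
    {
        char path[64];
        prusage_t pu;
        int fd;

        if (argc != 2) {
            (void) fprintf(stderr, "usage: %s pid\n", argv[0]);
            return (1);
        }
        (void) snprintf(path, sizeof (path), "/proc/%s/usage", argv[1]);
        if ((fd = open(path, O_RDONLY)) == -1 ||
            read(fd, &pu, sizeof (pu)) != sizeof (pu)) {
            perror(path);
            return (1);
        }
        (void) close(fd);

        /* "Working": time spent on CPU in user and system modes. */
        (void) printf("working: usr %.2fs sys %.2fs\n",
            secs(pu.pr_utime), secs(pu.pr_stime));

        /* "Waiting": lock waits, page-fault sleeps, other sleeps, run-queue waits. */
        (void) printf("waiting: lck %.2fs flt %.2fs slp %.2fs cpu-q %.2fs\n",
            secs(pu.pr_ltime),
            secs(pu.pr_tftime) + secs(pu.pr_dftime) + secs(pu.pr_kftime),
            secs(pu.pr_slptime), secs(pu.pr_wtime));
        return (0);
    }

Large lock-wait, sleep, or run-queue figures point toward the "waiting" branch of the strategy; large user or system times point toward the question of whether the work being done is intelligent and efficient.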
High-level diagnostic strategies are useful as road maps for diagnostic efforts and lay solid foundations for rational problem-solving.
Regardless of the strategy used, there will commonly be challenges to successful execution. For example, a challenge faced by doctors and performance analysts alike is patients who are convinced they have provided all the data needed to make their diagnosis. Diplomacy may be required to extract from the patient or customer whatever data is actually needed.
Root Causes
The root causes of bad performance are just as diverse as the spectrum of illnesses ranging from the common cold to incurable fatal diseases. As in medicine, knowledge of the relative likelihood of various diagnoses has some bearing on the process used for diagnosis.
The process of troubleshooting most frequently reveals root causes that are more closely analogous to the common cold than the rare fatal disease. The common cold is usually diagnosed with no lab tests whatsoever, but rather by the presence or absence of clinical signs, and consideration of the season and the locale. Just as doctors do not invest in culturing the virus that causes the common cold, much of the effort that gets invested in performance analysis could be avoided by beginning with basic symptom-matching diagnostic techniques.
The following categories of root causes should not require intense analysis to diagnose, and should therefore be considered at the earliest phases of diagnostic efforts. There is no simple policy for ensuring that these categories are considered before delving into low-level root cause analysis techniques, but much of the time and effort consumed in performance troubleshooting could be saved by doing so.
- Best practice deviations
- Known bugs
- Errors in experiment design and data interpretation
- Resource saturation
Just as 80 percent of disease might be avoided by good habits of diet and exercise, a similar proportion of performance complaints can be traced back to deviations from known best practices. Uninformed decisions regarding system feature selections or failure to make appropriate space-versus-speed tradeoffs can often be quickly diagnosed by inspection. It is vastly preferable to catch these by examining configuration parameters rather than to discover them through low-level technical analysis or by sifting through reams of data.
By some estimates, 80 percent of newly reported bugs turn out to have been previously reported. Often, bug reports already on file indicate workarounds or availability of product patches. For known issues with no workaround or patch, the best path to getting them repaired involves building a business case by raising escalations against the appropriate bugs or RFEs.
Claims of performance problems sometimes arise from psychological factors doctors routinely encounter, such as hypochondria or anxiety. Ostensibly alarming data sometimes turns out to be benign, and some symptoms prove to be of no consequence. Much effort can be saved by conducting a "sanity check" before investing too heavily in additional data collection and analysis. This involves making a preliminary assessment that the problem is real, and establishing realistic expectations regarding performance goals.
Resource saturation is a major category of initial diagnosis, but more often it is merely an indicator of a need for tuning rather than a root cause in itself. A 100-percent-busy resource is often the result of avoidable inefficiencies, so one should not rush to judgment about the inadequacy of the resource itself.
Evaluating the preceding categories is somewhat a mixture of science and art, but it should be done before investing in detailed low-level root cause analyses.
Whether or not the root causes of a complaint have been previously identified and characterized, they will fall into a finite set of categories. Knowledge of the number and nature of the categories can help guide diagnostic reasoning processes. Medical texts often arrange diseases into familiar categories, such as viral, bacterial, genetic, and psychological. Here is an attempt to enumerate the general categories of root causes for computer performance issues.
- Bad algorithms
- Resource contention
- Serialization
- Latency effects
- Hardware failures
- Common inefficiencies
- Esoteric inefficiencies
Intuition leads many people to suspect bad or inappropriate algorithms at the heart of performance issues more often than they actually are. Still, this is a major category of diagnosis. Some algorithms have trouble scaling with data volumes, while others have trouble scaling through parallelism to meet ever-increasing demands. Diagnosis of the former might require detailed analysis of execution profiles using programming tools, while the latter might be diagnosed by looking for symptoms of contention. Of course, to say an algorithm is "bad" is rather broad-ranging, and the subsequent categories listed here enumerate some particular types of badness.
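As a simple, hypothetical illustration of the data-volume case, consider two ways of checking an array for duplicate values. Neither routine comes from any particular product; the point is only that an algorithm that looks fine in testing can collapse as data grows.

    #include <stdlib.h>

    /* O(n*n): fine at n = 1,000, painful at n = 1,000,000. */
    int
    has_dup_scan(const int *a, size_t n)
    {
        size_t i, j;

        for (i = 0; i < n; i++)
            for (j = i + 1; j < n; j++)
                if (a[i] == a[j])
                    return (1);
        return (0);
    }

    static int
    cmp_int(const void *p, const void *q)
    {
        int x = *(const int *)p, y = *(const int *)q;

        return (x < y ? -1 : x > y);
    }

    /* O(n log n): sort a copy, then compare neighbors; returns -1 if out of memory. */
    int
    has_dup_sorted(const int *a, size_t n)
    {
        int *c = malloc(n * sizeof (int));
        size_t i;
        int dup = 0;

        if (c == NULL)
            return (-1);
        for (i = 0; i < n; i++)
            c[i] = a[i];
        qsort(c, n, sizeof (int), cmp_int);
        for (i = 1; i < n && !dup; i++)
            dup = (c[i] == c[i - 1]);
        free(c);
        return (dup);
    }

A profiler points at the inner loop of the first version; recognizing that the loop is quadratic, rather than merely hot, is the diagnostic step that matters.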
Given that resource-sharing issues can occur in various kernel subsystems, as well as in application software and layered third-party software, you might need to look in many places to locate resource contention. At high levels of resource utilization, the relative priority of resource consumers becomes an interesting topic. Failure to prioritize and failure to tune are common diagnostic findings. In this context, a resource is not simply a CPU, memory, or an I/O device. It also extends to mechanisms such as locks or latches used to coordinate access to shared resources. Queuing and polling for scarce resources can account for considerable clock time, and they can also account for a considerable percentage of overall CPU utilization.
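The following pthreads sketch, a contrived example rather than a recipe, shows lock contention in miniature: every thread queues on one global mutex to update a shared counter, while the per-thread counters beside it need no coordination at all.

    /* Compile with the threads library (for example, cc -mt or gcc -pthread). */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS    8
    #define NITERS      1000000

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long shared_count;               /* contended: one lock for all  */
    static long per_thread[NTHREADS];       /* uncontended: one slot apiece */

    static void *
    worker(void *arg)
    {
        long id = (long)arg;
        int i;

        for (i = 0; i < NITERS; i++) {
            /* Contended path: all threads serialize on the same mutex. */
            (void) pthread_mutex_lock(&lock);
            shared_count++;
            (void) pthread_mutex_unlock(&lock);

            /* Uncontended alternative: private state, no lock needed. */
            per_thread[id]++;
        }
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t tid[NTHREADS];
        long i, total = 0;

        for (i = 0; i < NTHREADS; i++)
            (void) pthread_create(&tid[i], NULL, worker, (void *)i);
        for (i = 0; i < NTHREADS; i++)
            (void) pthread_join(tid[i], NULL);
        for (i = 0; i < NTHREADS; i++)
            total += per_thread[i];
        (void) printf("shared %ld, striped %ld\n", shared_count, total);
        return (0);
    }

On recent Solaris OS releases, prstat -m makes this kind of user-level lock waiting visible (as time in the LCK microstate) without modifying the program.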
When it is possible for pieces of a computing workload to be processed concurrently, but concurrency does not occur, the work is said to proceed serially. Missed opportunities for concurrent processing might arise from laziness or ignorance on the part of a programmer, constraints on program development schedules, or non-optimal configuration and tuning choices. In some cases, unintended serialization will be diagnosed as a bug.
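The following sketch, which assumes a hypothetical process_chunk() routine that touches no shared state, contrasts the two cases: the serial loop takes the sum of all chunk times no matter how many CPUs sit idle, while the threaded version approaches the time of the slowest single chunk.

    #include <pthread.h>

    #define NCHUNKS 4

    /* A stand-in for independent per-chunk work (purely hypothetical). */
    static volatile double sink;

    static void
    process_chunk(int chunk)
    {
        double x = chunk;
        int i;

        for (i = 0; i < 10000000; i++)
            x = x * 1.0000001 + 1.0;
        sink = x;
    }

    /* Serial: elapsed time is the sum of all chunk times. */
    void
    run_serial(void)
    {
        int i;

        for (i = 0; i < NCHUNKS; i++)
            process_chunk(i);
    }

    static void *
    chunk_thread(void *arg)
    {
        process_chunk((int)(long)arg);
        return (NULL);
    }

    /* Concurrent: elapsed time approaches that of the slowest single chunk. */
    void
    run_concurrent(void)
    {
        pthread_t tid[NCHUNKS];
        long i;

        for (i = 0; i < NCHUNKS; i++)
            (void) pthread_create(&tid[i], NULL, chunk_thread, (void *)i);
        for (i = 0; i < NCHUNKS; i++)
            (void) pthread_join(tid[i], NULL);
    }

    int
    main(void)
    {
        run_serial();
        run_concurrent();
        return (0);
    }

Timing each routine on a multiprocessor (for example, by wrapping the program with ptime(1) and commenting out one phase at a time) shows the concurrent version gaining roughly a factor of NCHUNKS, provided the chunks really are independent; in real code, proving that independence is usually the hard part.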
When an operation is highly iterated and not decomposed into parallel operations, the time per iteration is of particular interest. While this notion is obvious in cases of computational loops, latency effects in the areas of memory references, disk I/O, and network I/O are often found at the root of many performance complaints.
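The following sketch makes the arithmetic concrete with file reads, using the Solaris gethrtime() high-resolution timer: a one-byte read(2) pays one system-call latency per byte, while an 8 KB buffer pays that latency once per 8192 bytes. The file name is whatever the user supplies; a modestly sized file is enough to show the effect.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/time.h>       /* gethrtime() on the Solaris OS */

    /* Read the whole file with the given buffer size; return elapsed nanoseconds. */
    static long long
    read_time(const char *path, size_t bufsize)
    {
        char buf[8192];
        hrtime_t start, end;
        int fd;

        if (bufsize > sizeof (buf) || (fd = open(path, O_RDONLY)) == -1)
            return (-1);
        start = gethrtime();
        while (read(fd, buf, bufsize) > 0)
            continue;
        end = gethrtime();
        (void) close(fd);
        return ((long long)(end - start));
    }

    int
    main(int argc, char **argv)
    {
        if (argc != 2) {
            (void) fprintf(stderr, "usage: %s file\n", argv[0]);
            return (1);
        }
        (void) printf("1-byte reads: %lld ns\n", read_time(argv[1], 1));
        (void) printf("8 KB reads:   %lld ns\n", read_time(argv[1], 8192));
        return (0);
    }

The same multiplication governs memory and network latencies: shaving even a microsecond from an operation performed a billion times recovers more than 16 minutes.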
Failed or failing system components can lead to poor overall performance. Certain modes of failure might not be immediately apparent, though most failures will result in an error message being logged somewhere.
Any book on programming optimization techniques will likely feature a long list of common programming inefficiencies. Among these are major categories like memory leaks, along with common inefficiencies such as repeatedly evaluating the length of a string, poorly chosen buffer sizes, or too much time spent managing memory. Doing unnecessary work counts as a major category, especially when it involves inefficiencies in the underlying system. For example, some versions of the Solaris OS have slow getcwd(3C) and fsync(3C) performance (see BugIDs 4707713 and 4841161 for getcwd and BugID 4336082 for fsync).
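The string-length case is worth showing concretely. In the sketch below, which is illustrative only, the first loop re-evaluates strlen() on every pass and therefore does quadratic work; the second computes the length once.

    #include <string.h>
    #include <ctype.h>

    /* Quadratic: strlen() rescans the string on every iteration. */
    void
    upcase_slow(char *s)
    {
        size_t i;

        for (i = 0; i < strlen(s); i++)
            s[i] = toupper((unsigned char)s[i]);
    }

    /* Linear: compute the length once (or simply test for the terminator). */
    void
    upcase_fast(char *s)
    {
        size_t len = strlen(s);
        size_t i;

        for (i = 0; i < len; i++)
            s[i] = toupper((unsigned char)s[i]);
    }

On short strings the difference is invisible; on a megabyte of text, the slow version performs on the order of a trillion character comparisons.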
Sometimes, the root cause of performance issues lies at very low levels in the system, involving factors of which most analysts have no concrete grasp. For instance, you might observe a low-level instrumentation issue in the system architecture or the CPU itself. As systems evolve toward chip multiprocessing, you can expect the subject of low-level efficiency to become an increasingly hot topic.
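One concrete, if hypothetical, example from this category is false sharing: two CPUs repeatedly invalidate each other's copy of a cache line that holds logically unrelated counters. Nothing in the source looks wrong, and the cost typically shows up only through hardware counters or a cache-aware profiler. The sketch below assumes a 64-byte cache line and uses the Solaris gethrtime() timer.

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/time.h>       /* gethrtime() */

    #define ITERS 100000000L

    struct unpadded {
        volatile long a;                    /* updated by thread 1             */
        volatile long b;                    /* updated by thread 2: same line  */
    };

    struct padded {
        volatile long a;
        char pad[64 - sizeof (long)];       /* push b onto its own cache line  */
        volatile long b;
    };

    static struct unpadded u;
    static struct padded p;

    static void *inc_ua(void *x) { long i; for (i = 0; i < ITERS; i++) u.a++; return (x); }
    static void *inc_ub(void *x) { long i; for (i = 0; i < ITERS; i++) u.b++; return (x); }
    static void *inc_pa(void *x) { long i; for (i = 0; i < ITERS; i++) p.a++; return (x); }
    static void *inc_pb(void *x) { long i; for (i = 0; i < ITERS; i++) p.b++; return (x); }

    /* Run two update threads to completion and return elapsed milliseconds. */
    static long long
    run_pair(void *(*f1)(void *), void *(*f2)(void *))
    {
        pthread_t t1, t2;
        hrtime_t start = gethrtime();

        (void) pthread_create(&t1, NULL, f1, NULL);
        (void) pthread_create(&t2, NULL, f2, NULL);
        (void) pthread_join(t1, NULL);
        (void) pthread_join(t2, NULL);
        return ((long long)(gethrtime() - start) / 1000000);
    }

    int
    main(void)
    {
        /* Same work in both cases; only the memory layout differs. */
        (void) printf("shared cache line: %lld ms\n", run_pair(inc_ua, inc_ub));
        (void) printf("padded layout:     %lld ms\n", run_pair(inc_pa, inc_pb));
        return (0);
    }

The cure, once diagnosed, is usually trivial (padding or per-CPU data structures); finding it without the right tools and experience is what makes the category esoteric.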
What really defines a performance topic as esoteric is the talent, experience, and tools required to troubleshoot it. Many simple problems will appear difficult to the practitioner who lacks the right experience. Some difficult problems will require the services of authentic gurus to diagnose. The steady advance of performance analysis tools will inevitably reduce the mystique of these factors.