Finding and Removing Bottlenecks
- Kernel Parameters Run Amok
- The Quick Fix
- Tools
- syslog
- Removing Bottlenecks
- Summary
So far, this book has focused on up-front design and configuration of an email server. The goal of this chapter is to provide some methodology and explain the use of tools that will assist the email administrator in determining the cause of poor email server performance and rectifying the situation.
Over the years, operating systems have grown increasingly sophisticated. Today, several levels of data caches are typically internal to the operating system kernel. Most internal data structures are dynamically sized and hashed, so that they no longer have fixed extents and table lookups remain rapid as the amount of data they contain grows. This increased sophistication generally has been for the best, as this self-tuning means that operating systems require less manual intervention to work well in a high-performance capacity. Nevertheless, a system administrator needs to be aware of two facts. First, these benefits can have side effects. Second, this extra sophistication often makes troubleshooting more difficult.
7.1 Kernel Parameters Run Amok
Let's consider a real-life example. The Solaris operating system from Sun Microsystems contains a kernel table called the Directory Name Lookup Cache (DNLC). The DNLC is a kernel cache that matches the name of a recently accessed file with its vnode (a virtual inode, an extra level of abstraction that makes writing interfaces to filesystems easier and more portable) if the file name isn't too long. Keeping this table in memory means that if a file is opened once by a process, and then opened again within a short period of time, the second open() won't require a directory lookup to retrieve the file's inode. If many of the open()s performed by the system operate on the same files over and over, this strategy could yield a significant performance win.
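On Solaris, the effectiveness of this cache can be checked without any special tools: the vmstat -s summary includes a "total name lookups" line that reports the DNLC hit rate. Something like the following (output abridged, figures purely illustrative) gives a quick read on whether the cache is doing its job:

    $ vmstat -s | grep 'name lookups'
    123456789 total name lookups (cache hits 97%)

A hit rate that drops sharply under load is an early hint that the DNLC is too small for the working set, or that something is churning through it.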
The DNLC table has a fixed size to make sure that it consumes a reasonable amount of memory. If the table is full and a new file is opened, this file's information is added to the DNLC and an older, less recently used entry in the table is removed to make space for the new data. The size of this table can be set manually using the ncsize variable in the /etc/system file; otherwise, it's derived from MAXUSERS, a general sizing parameter used for most tables on the system, and a variable called max_nprocs, which governs the total number of processes that can run simultaneously on the system. In Solaris version 2.5.1, the equation used to determine ncsize was
ncsize = (max_nprocs + 16 + MAXUSERS) + 64
In Solaris 2.6, this calculation changed to
ncsize = 4 * (max_nprocs + MAXUSERS) + 320
In Solaris 2.5.1, unless manually set, max_nprocs = 10 + 16 * MAXUSERS. I do not know if this calculation changed in Solaris 2.6.
If MAXUSERS is set to 2048, which is typical for large servers running very large numbers of processes, the DNLC on Solaris 2.5.1 would have 34,906 entries. On Solaris 2.6, using the same kernel tuning parameters, the DNLC could contain 139,624 entries. In Solaris 8, the calculation of this parameter had been changed to be more similar to the Solaris 2.5.1 method.
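For readers who want to check the arithmetic, the two formulas can be reproduced with a short Bourne shell snippet. This is purely illustrative; the variable names below are local to the script, not actual kernel symbols, and max_nprocs is assumed to take its default of 10 + 16 * MAXUSERS.

    #!/bin/sh
    # Recompute the default DNLC sizes quoted above for MAXUSERS=2048.
    MAXUSERS=2048
    MAX_NPROCS=`expr 10 + 16 \* $MAXUSERS`                # 32,778 processes
    NCSIZE_251=`expr $MAX_NPROCS + 16 + $MAXUSERS + 64`   # Solaris 2.5.1: 34,906
    SUM=`expr $MAX_NPROCS + $MAXUSERS`
    NCSIZE_26=`expr 4 \* $SUM + 320`                      # Solaris 2.6: 139,624
    echo "Solaris 2.5.1 ncsize: $NCSIZE_251"
    echo "Solaris 2.6   ncsize: $NCSIZE_26"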
Performance on the new Solaris 2.6 system was horrible. File deletions over the Network File System (NFS) took a very long time to complete, and it required a great deal of time to diagnose the problem. As it turns out, for some reason that I still don't fully understand, if one attempts to delete a file over NFS, and the DNLC is completely full, the operating system makes a linear traversal of the table to find the appropriate entry. The more entries the table holds, the longer this traversal takes. If it has nearly 140,000 entries, this operation can take considerable time. With the same /etc/system parameters on similar hardware running Solaris 2.5.1, these lookups did not cause a noticeable problem.
In my case, a colleague who had encountered this problem before suggested setting ncsize explicitly to a more moderate value (we chose 8192) in the /etc/system file. We then rebooted the system, and performance improved dramatically.
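The change itself is a single line. A minimal /etc/system excerpt, using the 8192 value we settled on, would look like the following; the setting takes effect only after a reboot.

    * Cap the Directory Name Lookup Cache at a moderate size.
    set ncsize=8192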
This is a pretty exotic example, but it indicates the following points:
- Today's operating systems are complex. It's always possible that a server slowdown might occur under certain circumstances because of the misbehavior of some obscure part of the operating system.
- It's always possible that very slight changes in an operating system, even changing minor version numbers or adding a single kernel patch, can have far-reaching consequences.
- Larger cache sizes are not universally a good thing, especially if the system designers do not fully explore the consequences of having large caches.
- There's no such thing as having too much knowledge about the hardware and software system that a site uses to operate a high-performance server.
This also leads to some obvious conclusions:
- On a high-performance or mission-critical server, make even small changes with extreme caution.
- One can never have too much expertise. Fancy tools rarely solve the problem, but a tool that can deliver one good insight during a crisis can prove exceedingly valuable.
- Trust the data. When something is going wrong, the reasons may not be evident, but they will be consistent within their own logic. Although it may not seem obvious at times, once all the factors are understood, it will be apparent that computers are deterministic objects.
- Test, test, and test again before deploying a production system. Make the test as close to real life as possible. There's never enough time to be as thorough as one would like, but if there isn't enough time to test at least some extreme cases, there isn't enough time to do it right. In the preceding situation, we did test, but with data set sizes of less than 140,000 files (the size of the DNLC). Because we never filled the DNLC, we never tripped over the bug. Obviously, our testing wasn't close enough to real life. Chapter 8 will explore testing issues in more detail.
This example was not intended to criticize Solaris. Probably every operating system vendor makes comparable changes from release to release, and many similar stories could have been told that focus on other vendors.
There isn't enough space in this book to cover general troubleshooting methodology, but one aspect should be mentioned because it causes many people difficulty. In trying to focus on a problem, the troubleshooter often assembles a great deal of data. Some of it is relevant to the problem at hand, and some of it is tangential. Determining which data are relevant and which aren't is often the most difficult aspect of solving a problem. There's no magic to categorizing data in this way; rather, experience and instinct take over. However, when faced with a problem that isn't easily solved, it's often helpful to ask, "What would I think of this problem if any one of the facts involved was removed from the equation?" If one arrives at a conclusion that can be tested, it is often worthwhile to do so, or at least to reexamine the datum in question to make sure it is valid. This sort of analysis is difficult to do well, but in very difficult situations, it can prove a fruitful line of attack.
Another troubleshooting technique is preventive in nature: baselining the system. To understand what's going wrong when the system behaves poorly, it is crucial to understand how the server behaves when it performs correctly. One cannot overemphasize this point. On a performance-critical server, an administrator should record data using each diagnostic tool that might be employed during a crisis while the server is in each of the following states (a sketch of such a data-collection script appears after this list):
- Idle, with services running but unused
- Moderately loaded
- Heavily loaded, but providing an adequate response
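A baseline capture does not need to be elaborate. The following Bourne shell sketch assumes the stock utilities uptime, vmstat, iostat, and netstat are available and that an archive directory such as /var/baseline (an assumed path) exists; it simply snapshots their output into a timestamped file so later runs can be compared against it:

    #!/bin/sh
    # Capture a labeled snapshot of basic system statistics for baselining.
    DIR=/var/baseline                    # assumed archive directory
    STAMP=`date +%Y%m%d.%H%M`
    OUT=$DIR/baseline.$STAMP
    {
        echo "=== uptime ==="
        uptime
        echo "=== vmstat ==="
        vmstat 5 5                       # memory, CPU, and paging activity
        echo "=== iostat ==="
        iostat -x 5 5                    # per-disk activity
        echo "=== netstat ==="
        netstat -i                       # interface packet and error counts
    } > $OUT

Run by hand while the server is idle, moderately loaded, and heavily loaded but healthy, these snapshots provide the points of comparison described above.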
Then, when the server begins to perform badly, one can determine what has changed on the system. What is different about the overloaded system compared to the state in which it is heavily loaded but providing quality service? This is a much easier question to answer than the more abstract, "Why is this server performing poorly?"
The complexity of today's operating systems exacerbates this need. On most contemporary operating systems, it's much more difficult to tell the difference, for example, between normal memory paging activity and desperation swapping. It's difficult to know objectively what a reasonable percentage of output packet errors on a network interface would be. It's difficult to tell objectively how many mail.local processes should be sleeping, waiting for such esoterica as an nc_rele_lock event to wake them up. As with people, on computer systems many forms of unusual behavior can be measured only in relative terms. Without a baseline, this identification can't happen.
Previously, I mentioned how important it is to distinguish information related to a present problem from incidental information. Without a baseline, it can be difficult, if not impossible, to tell whether a given piece of information is even out of the ordinary. When something goes wrong, while looking for the source of the problem we've all encountered something unexpected and asked ourselves, "Was this always like that?" Baselining reduces the number of times this uncertainty will arise in a crisis, which should lead to faster problem resolution.
Run baseline tests periodically and compare their results against previous test runs. Going the extra mile and performing a more formal trend analysis can prove very valuable, too. It offers two benefits. First, it enables one to spot situations that are slowly evolving into problems before they become noticeable. Of course, not all changes represent problems waiting to happen, but trend analysis can also spot secular changes in the way a server operates, which may indicate new patterns in user behavior or changes in Internet operation.
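Automating the collection step makes this painless. Assuming the snapshot script sketched earlier is installed as /usr/local/sbin/baseline (a hypothetical path), a crontab entry such as the following would record a sample every six hours, leaving a history that can be graphed or simply eyeballed for trends:

    # Collect a baseline snapshot at 00:00, 06:00, 12:00, and 18:00 each day.
    0 0,6,12,18 * * * /usr/local/sbin/baseline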
Second, formal trend analysis allows administrators to become more familiar with the servers they are charged with maintaining, which is unequivocally a good thing. More familiarity means problems are spotted sooner and resolved more quickly. System administrators responsible for maintaining high-performance, critical servers who do not have time to perform these tasks are overburdened. In that case, when something fails, not only will they be unprepared to deal with the crisis, but other important tasks elsewhere will go undone as a consequence.
In the old days, many guru-level system administrators could tell how, or even what, the systems in their charge were running by looking at the lights blink or listening to the disks spin or heads move. They could feel what was happening in the box. Today's trend toward less obtrusive and quieter hardware has been part and parcel of the considerable improvements made in hardware reliability. This is a good thing. However, through these hardware changes, as well as the aforementioned increasing operating system complexity and the much larger number of machines for which a system administrator is responsible, we've largely lost this valuable feel for the systems we maintain. Now the data on the system state are likely the only window we have into the operational characteristics of these servers. It should be considered an investment to periodically get acquainted with the machines we maintain, both to increase the chance of finding problems before they become readily apparent and to give us the insight necessary to reduce the time to repair catastrophic problems when they do occur.