- Introduction
- Caches
- Virtual Memory Issues
- Memory
- Input/Output Devices
- I/O Performance Tips for Application Writers
- Summary
- References
3.5 Input/Output Devices
3.5.1 Magnetic Storage Device Issues
Today’s magnetic disk technology presents the consumer with a dilemma that is unlikely to resolve itself in the near future. On the one hand, disk space is cheap, with single disk drives providing several GB of capacity at a fraction of a penny per MB. Performance is another story. These same disk drives can sustain only several MB/sec of I/O bandwidth. A single disk drive may have a capacity of 70 GB yet sustain only 8 MB/sec of I/O bandwidth.
One way to improve bandwidth is to define a logical device that consists of multiple disks. With this sort of approach, a single I/O transaction can simultaneously move blocks of data to multiple disks. For example, if a logical device is created from eight disks, each of which is capable of sustaining 10 MB/sec, then this logical device is capable of delivering up to 80 MB/sec of I/O bandwidth! Such logical devices are commonplace and are critical to the delivery of high bandwidth to and from files stored on magnetic disk.
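The key to such a logical device is the block mapping. A minimal sketch of the usual round-robin ("striped") scheme follows; the function name and struct are hypothetical, not from any particular volume manager:

```c
#include <assert.h>

/* Hypothetical round-robin ("striped") mapping for a logical device
 * built from ndisks member disks: logical block b lands on disk
 * b % ndisks at physical block b / ndisks.  Consecutive logical
 * blocks therefore land on different spindles, so a large transfer
 * can keep all member disks busy at once. */
typedef struct {
    int  disk;   /* which member disk holds the block */
    long block;  /* physical block number on that disk */
} stripe_loc;

stripe_loc stripe_map(long logical_block, int ndisks)
{
    stripe_loc loc;
    loc.disk  = (int)(logical_block % ndisks);
    loc.block = logical_block / ndisks;
    return loc;
}
```

With eight member disks, logical blocks 0 through 7 fall on disks 0 through 7, so an eight-block read can proceed on all eight spindles in parallel.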
Unfortunately, constructing such logical devices so that they deliver good I/O bandwidth can be tricky. One common mistake is to assume that multiple disk drives can be chained from a single I/O slot (that is, a single card/controller). I/O cards frequently have a peak bandwidth inherent in their design which ultimately limits realizable I/O performance. To illustrate the problem, consider an I/O card that is capable of only 40 MB/sec. Building a logical device using a single such card with the eight disks mentioned above limits performance to only half of the disks’ aggregate capability. However, if two cards are used with the eight disks (four disks on each card), then the logical device is capable of up to 80 MB/sec of I/O bandwidth.
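The bottleneck arithmetic above can be captured in a few lines: deliverable bandwidth is capped both by the disks in aggregate and by the cards in aggregate, whichever is smaller. This is a sketch with an assumed function name, not a real sizing tool:

```c
#include <assert.h>

/* Deliverable bandwidth of a striped logical device (all figures in
 * MB/sec): capped by the disks' aggregate rate and by the aggregate
 * rate of the I/O cards they are chained from, whichever is lower. */
double device_bandwidth(int ndisks, double disk_bw,
                        int ncards, double card_bw)
{
    double disk_limit = ndisks * disk_bw;
    double card_limit = ncards * card_bw;
    return disk_limit < card_limit ? disk_limit : card_limit;
}
```

For the example in the text: eight 10 MB/sec disks behind one 40 MB/sec card yield 40 MB/sec; splitting them across two cards yields the full 80 MB/sec.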
Most memory systems of server or mainframe class computers today are capable of delivering over 400 MB/sec of memory bandwidth per processor. To construct a logical device with magnetic disks capable of providing data at half this rate, one would need 20 of the disks discussed above. Using 10 GB disks, the resulting capacity is a whopping 200 GB for just one processor! Such is the dilemma of system configuration with regard to magnetic disk storage: high-performance magnetic disk goes hand-in-hand with a tremendous amount of storage, perhaps far more than is necessary.
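The sizing exercise above generalizes easily. The following sketch (function name assumed for illustration) computes how many disks a target bandwidth requires; the capacity that comes along with them is then just that count times the per-disk capacity:

```c
#include <assert.h>

/* Number of disks, each sustaining disk_bw MB/sec, needed to reach a
 * target bandwidth of target_bw MB/sec (rounded up to whole disks). */
int disks_needed(double target_bw, double disk_bw)
{
    int n = (int)(target_bw / disk_bw);
    if (n * disk_bw < target_bw)
        n++;   /* partial disk is not an option; round up */
    return n;
}
```

For the text's example, a 200 MB/sec target with 10 MB/sec disks requires 20 disks, and at 10 GB apiece that is 200 GB of capacity whether it is wanted or not.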
3.5.2 Buffer Cache
Most operating systems today maintain a block of memory to hold files, or at least pieces of files, that processes are reading or writing. This block of memory is usually referred to as the file system buffer, or simply the buffer cache. The buffer cache plays a role very much like that of the processor cache. Whenever a process accesses a location in a file, the operating system moves a block, usually a file system block, into the buffer cache. Subsequent accesses to locations in that block are made using the memory system rather than having to access magnetic disk (which is orders of magnitude slower).
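The hit-versus-miss behavior is the essence of the mechanism. The toy below illustrates it with a tiny fixed table and round-robin replacement; a real buffer cache uses hashed lookup and LRU-style replacement, and all names here are invented for the sketch:

```c
#include <assert.h>

/* Toy buffer cache: a small table of cached file-system block
 * numbers.  bcache_access() returns 1 on a hit (the block would be
 * served from memory) and 0 on a miss (the block would be read from
 * disk and then cached).  Replacement is simple round-robin. */
#define NBUF 4

static long cached[NBUF];
static int  valid[NBUF];
static int  next_victim = 0;

int bcache_access(long block)
{
    for (int i = 0; i < NBUF; i++)
        if (valid[i] && cached[i] == block)
            return 1;                       /* hit: memory speed   */
    cached[next_victim] = block;            /* miss: fill a buffer */
    valid[next_victim]  = 1;
    next_victim = (next_victim + 1) % NBUF;
    return 0;
}
```

The first access to a block misses and goes to disk; every access after that, until the block is evicted, is satisfied at memory speed.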
Not only can a buffer cache improve I/O to a block (of a file) that has already been accessed, but it also allows the operating system to predict user accesses. Probably the most common example of this is read ahead. Many applications will access a file sequentially, from beginning to end. Sophisticated operating systems will monitor an application’s accesses to a file and, once they detect that the file is being read sequentially, will begin reading blocks into the buffer cache asynchronously to the user process. This is yet another form of prefetching.
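A read-ahead policy of this kind reduces to a small piece of bookkeeping per open file. The sketch below (names and threshold are assumptions, not any particular kernel's policy) flags the pattern once a few consecutive blocks have been seen:

```c
#include <assert.h>

/* Toy sequential-access detector of the sort an OS read-ahead policy
 * might keep per open file: after SEQ_THRESHOLD consecutive block
 * accesses, start prefetching the blocks that follow. */
#define SEQ_THRESHOLD 2

typedef struct {
    long last_block;  /* most recently accessed block */
    int  run;         /* length of current sequential run */
} ra_state;

/* Returns 1 when the pattern looks sequential enough that the next
 * block(s) should be read into the buffer cache asynchronously. */
int should_read_ahead(ra_state *s, long block)
{
    if (block == s->last_block + 1)
        s->run++;
    else
        s->run = 0;          /* pattern broken; start over */
    s->last_block = block;
    return s->run >= SEQ_THRESHOLD;
}
```

Once triggered, the kernel can overlap the disk reads with the application's computation, so the process finds its next blocks already resident.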
As it turns out, many applications also read files sequentially from the end to the beginning. Yet others stride through files, skipping over a constant number of blocks between those they access. HP’s SPP-UX was one of the very few operating systems sophisticated enough to perform read ahead with such complicated patterns.
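Backward and strided reads can be caught by generalizing the detector to track the difference between successive block numbers rather than assuming a stride of +1. This is a sketch of the idea, not SPP-UX's actual algorithm:

```c
#include <assert.h>

/* Toy constant-stride detector: recognizes any repeated stride,
 * including -1 (backward sequential) and larger skips.  Reports the
 * stride once it has repeated, so read-ahead can prefetch along it. */
typedef struct {
    long last;    /* most recently accessed block  */
    long stride;  /* candidate stride              */
    int  run;     /* times the stride has repeated */
} stride_state;

int detect_stride(stride_state *s, long block, long *stride_out)
{
    long d = block - s->last;
    if (d == s->stride) {
        s->run++;
    } else {
        s->stride = d;       /* new candidate stride */
        s->run = 1;
    }
    s->last = block;
    if (s->run >= 2) {       /* stride confirmed by repetition */
        *stride_out = s->stride;
        return 1;
    }
    return 0;
}
```

Fed blocks 10, 7, 4, the detector confirms a stride of -3 on the third access, at which point blocks 1, -2, ... (while valid) are candidates for prefetch.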