Data Caches
Data caching is a special form of data duplication. Caches make copies of data to improve performance. This is in contrast to mirroring disks, in which the copies of data are made primarily to improve availability. Although the two have different goals, the architecture of the implementations is surprisingly similar.
Caches are a popular method of improving performance in computer systems. The Solaris OE essentially uses all available main memory (RAM) as a cache for the file system. Relational database management systems (RDBMS) manage their own caches of data. Modern microprocessors have many caches, in addition to the typical one to three levels of data cache. Hardware RAID arrays have caches that increase the performance of complex redundant array of independent disks (RAID) algorithms. Hard drives have caches to store the data read from the medium. The use of these forms of caching can be explained by examining the cost and latency of the underlying technologies.
Cost and Latency Trade-Off
All memories are based on just a few physical phenomena and organizational principles [Hayes98]. The features that differentiate memory technologies are cost and latency. Unfortunately, low cost and low latency are conflicting goals. For example, a dynamic RAM (DRAM) storage cell has a single transistor and a capacitor. A static RAM (SRAM) storage cell has four to six transistors (a speed/power trade-off is possible here). The physical size of the design has a first-order impact on the cost of building integrated circuits. Therefore, the cost of building N bits of DRAM memory is less than that of SRAM memory, given the same manufacturing technology. However, DRAM latency is on the order of 50 ns versus 5 ns for SRAM.
The cost/latency trade-off can be described graphically. FIGURE 2 shows latency versus cost for a wide variety of technologies used in computer systems design. From this analysis, it is clear that the use of memory technologies in computer system design is a cost versus latency trade-off. Any technology above the shaded area is unlikely to survive because of its cost. Any technology below the shaded area is likely to spawn radical, positive changes in computer system design.
FIGURE 2 Cache Latency Versus Cost
The preceding graph also shows when caches can be beneficial. The memory technologies to the left, above the dashed line, tend to be persistent, while the memory technologies to the right, below the dashed line, tend to be volatile. Important data that needs to survive an outage must be placed in persistent memory. Caches are used to keep copies of the persistent data in low-latency memory, thus offering fast access to data that is reused. Fortunately, many applications tend to reuse data, especially if the "data" are really "instructions." A given technology can be effectively cached by another technology with lower latency, such as those to the right.
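The benefit of reuse can be quantified with a simple weighted average: the effective access time is the hit rate times the cache latency plus the miss rate times the backing-memory latency. The following sketch assumes the 5 ns SRAM and 50 ns DRAM figures cited earlier and purely hypothetical hit rates, and illustrates how strongly the hit rate drives the effective latency.

/*
 * Illustration only: effective access time for a two-level memory,
 * using the example latencies cited above (5 ns SRAM cache in front
 * of 50 ns DRAM).  The hit rates are hypothetical.
 */
#include <stdio.h>

int main(void)
{
    const double t_cache   = 5.0;    /* ns, low-latency memory     */
    const double t_backing = 50.0;   /* ns, higher-latency memory  */
    const double hit_rates[] = { 0.50, 0.90, 0.99 };
    int i;

    for (i = 0; i < 3; i++) {
        double h = hit_rates[i];
        /* effective latency = h * t_cache + (1 - h) * t_backing */
        double t_eff = h * t_cache + (1.0 - h) * t_backing;
        printf("hit rate %.2f -> effective latency %4.1f ns\n", h, t_eff);
    }
    return 0;
}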
Cache Types
CPU caches are what most computer architects think about when they use the term cache. While CPU caches are an active area of development in computer architecture, they are only one of the many caches in a modern SMP system. CPU caches are well understood and exhibit many of the advances in computer architecture.
Metadata caches store information about information. The metadata is duplicated in multiple locations across the disk. In the UNIX file system (UFS), metadata is cached in main memory, and the kernel periodically flushes the UFS metadata to disk. Because this metadata cache is stored in volatile main memory, the metadata on disk may not be current if a crash occurs. This situation results in a file system check, fsck, which reconciles the metadata for the file system. The logging option introduced with UFS in the Solaris 7 OE stores changes to the metadata (the log of metadata changes) on the disk. This dramatically speeds up the file system check because the metadata can be regenerated completely from the log.
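The following sketch illustrates the write-ahead idea behind metadata logging; it is a conceptual illustration only, with hypothetical structures and functions, and does not reflect the actual UFS implementation. A metadata change is appended to an on-disk log before it is applied in place, so recovery can replay the log instead of scanning the entire file system.

/*
 * Conceptual sketch of write-ahead metadata logging (not the actual
 * UFS implementation).  A metadata change is appended to an on-disk
 * log and only then applied in place; after a crash, replaying the
 * log brings the metadata to a consistent state without a full fsck.
 */
#include <stdio.h>

struct meta_change {
    long block;     /* metadata block being modified */
    long old_val;
    long new_val;
};

/* Hypothetical stand-ins for the real disk I/O paths. */
static void log_append(const struct meta_change *c)
{
    printf("log: block %ld %ld -> %ld\n", c->block, c->old_val, c->new_val);
    /* a real implementation would force this record to stable storage */
}

static void apply_in_place(const struct meta_change *c)
{
    printf("apply: block %ld now %ld\n", c->block, c->new_val);
}

static void update_metadata(struct meta_change c)
{
    log_append(&c);       /* 1. record the intent in the log    */
    apply_in_place(&c);   /* 2. update the cached/on-disk copy  */
}

int main(void)
{
    struct meta_change c = { 1234, 0, 42 };
    update_metadata(c);
    return 0;
}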
File system read caches are used in the Solaris OE to store parts (pages) of files that have been read. The default behavior of the Solaris 8 OE is to use available RAM as a file system cache. When the system runs low on available memory, it discards file system cache pages in preference to pages of executable files (text pages). Similar behavior can be activated in the Solaris 2.6 or Solaris 7 OE by enabling priority_paging in the /etc/system file.
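The reclaim preference described above can be sketched as follows; the structures and the reclaim function are hypothetical and do not correspond to the Solaris page scanner. When memory runs low, file system cache pages are freed before executable (text) pages are touched.

/*
 * Minimal sketch of the reclaim preference described above; the
 * structures are hypothetical and do not reflect the Solaris page
 * scanner.  When memory is low, file-cache pages are freed before
 * executable (text) pages.
 */
#include <stdio.h>

enum page_type { FILE_DATA, EXEC_TEXT };

struct page {
    enum page_type type;
    int            in_use;
};

static int reclaim(struct page *pages, int npages, int want)
{
    int freed = 0, i;

    /* First pass: discard file system cache pages. */
    for (i = 0; i < npages && freed < want; i++) {
        if (pages[i].in_use && pages[i].type == FILE_DATA) {
            pages[i].in_use = 0;
            freed++;
        }
    }
    /* Only if still short of memory, take executable pages. */
    for (i = 0; i < npages && freed < want; i++) {
        if (pages[i].in_use && pages[i].type == EXEC_TEXT) {
            pages[i].in_use = 0;
            freed++;
        }
    }
    return freed;
}

int main(void)
{
    struct page pages[] = {
        { EXEC_TEXT, 1 }, { FILE_DATA, 1 }, { FILE_DATA, 1 }, { EXEC_TEXT, 1 }
    };
    int freed = reclaim(pages, 4, 2);
    printf("freed %d pages (file data first)\n", freed);
    return 0;
}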
Buffer caches provide an interface between a fast data source and a slow data sink, and this type of cache is quite common in I/O subsystem design. For instance, modern SCSI or FC-AL disk drives have 8 to 16 megabytes of track cache. The track cache is really a read buffer cache designed to contain the data of at least one track of the disk media. This design allows significant improvements in the performance of read operations that occur in the same track, because the media requires only one physical read. The latency for physically reading data from disk media depends on the rotational speed of the media and is often in the 5 to 10 millisecond range. Once the whole track is in the buffer, the data can be read at the latency of a DRAM access, which is on the order of 200 nanoseconds.
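A simplified sketch of a track cache follows; the sector and track sizes, as well as the function names, are hypothetical rather than an actual drive firmware interface. A read that misses the buffer pulls in the whole track at media latency, while later reads that fall within the same track are satisfied from the buffer at memory latency.

/*
 * Simplified sketch of a track (read buffer) cache; sizes and
 * functions are hypothetical, not a real drive firmware interface.
 * A miss reads the entire track from the medium; later reads that
 * fall in the same track are served from the buffer.
 */
#include <stdio.h>
#include <string.h>

#define SECTOR_SIZE        512
#define SECTORS_PER_TRACK  64

static char track_buf[SECTORS_PER_TRACK * SECTOR_SIZE];
static long cached_track = -1;     /* no track cached yet */

/* Stand-in for the slow physical read (milliseconds). */
static void read_track_from_media(long track)
{
    memset(track_buf, 0, sizeof(track_buf));      /* pretend media data */
    cached_track = track;
    printf("physical read of track %ld\n", track);
}

static void read_sector(long sector, char *out)
{
    long track  = sector / SECTORS_PER_TRACK;
    long offset = (sector % SECTORS_PER_TRACK) * SECTOR_SIZE;

    if (track != cached_track)
        read_track_from_media(track);             /* miss: ~5-10 ms     */
    memcpy(out, track_buf + offset, SECTOR_SIZE); /* hit: DRAM speed    */
}

int main(void)
{
    char buf[SECTOR_SIZE];

    read_sector(10, buf);   /* miss: whole track 0 is read   */
    read_sector(11, buf);   /* hit: served from the buffer   */
    read_sector(70, buf);   /* miss: track 1 is read         */
    return 0;
}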
File systems often have a buffer cache in main memory for writing the data portion of files. These write buffers are distinguished from the file system read caches in that write cache pages cannot be discarded when available memory is low. UFS implements a high-water mark for the size of its write buffer cache on a per-file basis, which can be tuned with the ufs:ufs_HW variable in the /etc/system file. The memory used for write buffer caches can cause contention when available RAM is low. In low-memory situations, the use of the write buffer cache should be examined closely.
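A conceptual sketch of a write buffer with a high-water mark is shown below; the names and sizes are hypothetical and do not correspond to the UFS code. Once the amount of buffered dirty data reaches the mark, the buffer is flushed before more writes are accepted.

/*
 * Conceptual sketch of a per-file write buffer with a high-water
 * mark; names and sizes are hypothetical and do not correspond to
 * the UFS implementation.  Once the amount of buffered dirty data
 * would exceed the mark, the buffer is flushed before more is
 * accepted.
 */
#include <stdio.h>

#define HIGH_WATER  (8 * 1024)     /* bytes of dirty data allowed */

static long dirty_bytes = 0;

static void flush_to_disk(void)
{
    printf("flushing %ld dirty bytes to disk\n", dirty_bytes);
    dirty_bytes = 0;
}

static void buffered_write(long nbytes)
{
    if (dirty_bytes + nbytes > HIGH_WATER)
        flush_to_disk();            /* throttle: write out first */
    dirty_bytes += nbytes;          /* data now held in RAM only */
}

int main(void)
{
    int i;

    for (i = 0; i < 10; i++)
        buffered_write(2048);       /* flushes once the mark is hit */
    flush_to_disk();                /* final flush at close time    */
    return 0;
}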
Cache Synchronization
What all caches have in common is that the data must be synchronized between the copy in low-latency memory and the copy in higher-latency memory. Sequential access caches, such as the buffer cache, have relatively simple synchronization: whatever comes in goes out in the same order. Random access caches, such as CPU, metadata, and file system read caches, can have very complex synchronization mechanisms.
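The simple ordering of a sequential access cache can be illustrated with a ring buffer; this is a minimal sketch rather than any particular buffer cache implementation. Data drains to the slow sink in exactly the order it arrived from the fast source, so no additional bookkeeping is required.

/*
 * Minimal ring buffer sketch illustrating the simple synchronization
 * of a sequential (buffer) cache: data drains to the slow sink in
 * exactly the order it arrived from the fast source.
 */
#include <stdio.h>

#define SLOTS 4

static int ring[SLOTS];
static int head = 0, tail = 0, count = 0;

static int put(int v)                  /* fast producer fills buffer   */
{
    if (count == SLOTS)
        return -1;                     /* buffer full                  */
    ring[tail] = v;
    tail = (tail + 1) % SLOTS;
    count++;
    return 0;
}

static int get(int *v)                 /* slow consumer drains in order */
{
    if (count == 0)
        return -1;                     /* buffer empty                 */
    *v = ring[head];
    head = (head + 1) % SLOTS;
    count--;
    return 0;
}

int main(void)
{
    int v, i;

    for (i = 1; i <= 3; i++)
        put(i);
    while (get(&v) == 0)
        printf("drained %d\n", v);     /* prints 1, 2, 3 in order      */
    return 0;
}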
Random access caches typically have different policies for reads and writes. Since the cost per bit of a cache is higher than that of the higher-latency memory, caches tend to be smaller; not all of the data stored in the higher-latency memory can fit in the cache. The cache policy for read data is simply to discard the data when space is needed, because it can be read again later. The cache policy for written data cannot simply discard it. Write policies tend to fall into two categories: write-through and write-behind. A write-through policy writes the data to the higher-latency memory immediately, ensuring that the cached copy can be safely discarded if necessary. A write-behind policy does not immediately write the data to the higher-latency memory, thus improving performance if the data is to be written again soon. Obviously, the write-behind policy is considerably more complicated. The hardware or software must maintain information about the state of the data, that is, whether or not the data has been stored in the higher-latency memory. If it has not, the data must be written before it can be discarded to make room for other data.
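The following sketch contrasts the two write policies; the structures are hypothetical and greatly simplified. A write-through update reaches the higher-latency memory immediately, so the cached copy is always clean and can be discarded freely. A write-behind update only marks the cache entry dirty, so eviction must first write the data back.

/*
 * Minimal sketch contrasting write-through and write-behind; the
 * structures are hypothetical.  Write-through updates the higher-
 * latency memory at once, so the cached copy is always clean and
 * can be discarded freely.  Write-behind only marks the entry
 * dirty, so eviction must write the data back first.
 */
#include <stdio.h>

struct cache_line {
    long value;
    int  valid;
    int  dirty;     /* only meaningful for write-behind */
};

static void write_to_backing(long value)
{
    printf("backing store <- %ld\n", value);
}

static void write_through(struct cache_line *l, long value)
{
    l->value = value;
    l->valid = 1;
    write_to_backing(value);          /* safe to discard at any time */
}

static void write_behind(struct cache_line *l, long value)
{
    l->value = value;
    l->valid = 1;
    l->dirty = 1;                     /* backing store is now stale  */
}

static void evict(struct cache_line *l)
{
    if (l->valid && l->dirty)
        write_to_backing(l->value);   /* must flush before reuse     */
    l->valid = 0;
    l->dirty = 0;
}

int main(void)
{
    struct cache_line a = { 0, 0, 0 }, b = { 0, 0, 0 };

    write_through(&a, 1);   /* backing store updated immediately */
    write_behind(&b, 2);    /* held in cache only                */
    evict(&a);              /* nothing to write back             */
    evict(&b);              /* dirty data written back now       */
    return 0;
}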
If the cache and its higher-latency memory have the same persistence, that is, both are persistent or both are volatile, write-behind cache policies make good sense. If the cache is volatile and the higher-latency memory is persistent, write-through policies provide better safety.
As can be expected, the increasing complexity of cache designs increases the number of failure modes that can affect a system. At a minimum, caches add hardware to the system, thereby reducing the overall system reliability. Alternatively, caches use existing storage resources that are shared with other tasks, potentially introducing unwanted failure modes. For every synchronization mechanism added, an arbitration mechanism must also be added to handle conflicts and failures. A good rule of thumb is to limit the dependence on caches when possible. This trade-off decision must often be made while taking into account the benefits of caching, so you should have a firm understanding of the various caches used in a system. To help with the decision-making process, you can use the example in FIGURE 2, which shows the caching hierarchy of the system.