Tuning Databases on Solaris: Role of the Buffer Cache
- Overview of the Buffer Cache
- Monitoring the Buffer Cache
- Sizing the Buffer Cache
Although disk I/O is central to the operation of relational database systems, databases go to considerable lengths to avoid it. The reason has to do with disk access times. Access to CPU cache is significantly faster than access to main memory, which in turn is significantly faster than access to disk. So databases work more effectively with data that is a memory access away from the CPU, rather than a disk access away.
When a database operation needs a database block that is located on disk, the operation typically stalls while the block is read into memory. When data is written to the log file, the logger process stalls until the write completes. The only database disk activity that does not routinely result in stalls is writing to database tables, since multiple pages are written asynchronously.
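The distinction between stalling reads and log writes on the one hand, and asynchronous table writes on the other, can be sketched as follows. This is a minimal illustration, not how any particular database is implemented; the "disk" dictionary and the function names are invented for the example.

```python
import queue
import threading

# Hypothetical in-memory "disk" standing in for database files.
disk = {blk: f"data-{blk}" for blk in range(8)}
written = []                        # blocks flushed by the writer

write_queue = queue.Queue()

def background_writer():
    # Dirty table pages are flushed asynchronously, so transactions
    # do not stall waiting for these writes to complete.
    while True:
        blk, data = write_queue.get()
        if blk is None:             # sentinel: shut down
            break
        written.append(blk)
        disk[blk] = data

writer = threading.Thread(target=background_writer)
writer.start()

def read_block(blk):
    # A read of an uncached block stalls the caller until the block
    # arrives from disk (simulated here by a dictionary lookup).
    return disk[blk]

def write_table_page(blk, data):
    # Table writes are merely queued; the caller continues at once.
    write_queue.put((blk, data))

def write_log_record(data):
    # The log write is synchronous: the logger stalls until the
    # record is durably on disk (simulated by a direct store).
    disk["log"] = data
    return True

# One transaction's I/O pattern:
row = read_block(3)                           # stalls until the read completes
write_table_page(3, row + "-upd")             # returns immediately
committed = write_log_record("commit txn 1")  # stalls until durable

write_queue.put((None, None))                 # stop the background writer
writer.join()
```

The sentinel-and-join at the end is only there to make the sketch terminate cleanly; a real database writer runs for the life of the instance.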
Overview of the Buffer Cache
One of the main tools used by databases to reduce disk I/O is the database buffer cache. The database acquires a segment of shared memory and typically sets aside the largest proportion of it to hold database blocks (also referred to as database pages). When a transaction requires a block, it reads the block from disk and stores it in the buffer cache; subsequent transactions requesting the same block can then retrieve it from memory rather than from disk.
In practice, there is rarely enough memory in the buffer cache to accommodate every database block required by transactions, so blocks cannot be held in memory indefinitely. Databases use a least recently used (LRU) algorithm to keep recently referenced blocks in memory and replace those that have gone longest without access.
Some databases also provide alternative strategies that favor, or discriminate against, blocks from selected tables. For example, there is little point in caching blocks from tables that are rarely reused; conversely, some small tables are accessed so heavily that they benefit from being cached permanently.
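The two preceding paragraphs can be captured in a small sketch: an LRU cache with per-table hints, where "pinned" tables stay resident and "bypassed" tables are never cached. The class, the hint names, and the table names are all invented for illustration; real databases implement far more elaborate schemes.

```python
from collections import OrderedDict

class BufferCache:
    """Sketch of an LRU buffer cache with per-table hints: blocks of
    pinned tables stay resident; bypassed tables are never cached."""

    def __init__(self, capacity, pinned=(), bypassed=()):
        self.capacity = capacity
        self.pinned = set(pinned)
        self.bypassed = set(bypassed)
        self.cache = OrderedDict()      # (table, block) -> data, LRU order
        self.hits = self.misses = 0

    def _read_from_disk(self, table, block):
        return f"{table}:{block}"       # stand-in for a physical read

    def get(self, table, block):
        key = (table, block)
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)     # mark as most recently used
            return self.cache[key]
        self.misses += 1
        data = self._read_from_disk(table, block)
        if table in self.bypassed:
            return data                     # rarely reused: never cached
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            # Evict the least recently used block of an unpinned table.
            victim = next(k for k in self.cache if k[0] not in self.pinned)
            del self.cache[victim]
        return data

# Pin the small, heavily accessed "codes" table; bypass the "audit" table.
cache = BufferCache(capacity=2, pinned={"codes"}, bypassed={"audit"})
cache.get("codes", 1)       # miss: read from disk and cache (pinned)
cache.get("orders", 1)      # miss: read and cache
cache.get("orders", 2)      # miss: evicts orders:1 rather than pinned codes:1
```

Note that when a block must be evicted, the scan skips pinned tables, so the hot lookup table survives even though it is the least recently used entry.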
Although effective database buffer caches are crucial for efficient functioning of OLTP applications, not all database operations use the buffer cache. In DSS environments, for example, Oracle and Informix XPS bypass the cache for parallel queries on large tables. The reasoning is that table sizes are often so large compared to the cache size that a table scan would overwhelm the cache without any benefit. DB2 for Solaris does use the buffer cache for DSS table scans, although the cache does not need to be large since blocks are discarded quickly.
The buffer cache is most effective when data access is skewed. Access is skewed if some data is accessed more frequently than other data. Let me illustrate skewed access with a practical example. On a normal Saturday, my wife and I might complete a number of financial transactions. Between us we purchase food, gas, and other items, transfer money between accounts, and withdraw cash from an ATM. Since we are such busy shoppers, our account details will appear several times in the buffer cache of the accounts database at our credit union. Our credit card account will be accessed many times, our checking account might be accessed a couple of times, and our savings account probably won't be accessed at all. The account most likely to remain in the database buffer cache is the credit card account, since it is the most heavily accessed. In practice, of course, the volume of financial transactions on a Saturday is such that no individual's account is likely to be cached for long, but the example does illustrate skewed or nonuniform access to our three accounts.
In the same way, businesses often find that 80% of their business comes from 20% of their customers, with the result that some database rows are accessed much more frequently than others. The same principle applies in many database environments.
When access to data is uniform (that is, all rows are equally likely to be referred to), caching offers fewer benefits. When access to data is skewed, caching can greatly reduce the proportion of reads that require physical disk I/Os. Data skew is typical for OLTP applications and less typical for DSS applications.
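A small simulation makes the effect of skew concrete. The parameters are invented for illustration (1,000 blocks, an LRU cache holding 100 of them, and an 80/20 skew pattern); the point is only the relative difference between the two hit rates, not the absolute numbers.

```python
import random

def hit_rate(accesses, cache_size):
    """Fraction of accesses served from an LRU cache of the given size."""
    cache, hits = [], 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.remove(block)       # will be re-appended as most recent
        elif len(cache) >= cache_size:
            cache.pop(0)              # evict the least recently used block
        cache.append(block)           # most recently used at the tail
    return hits / len(accesses)

rng = random.Random(42)
blocks = 1000
cache_size = 100                      # cache holds 10% of the blocks

# Uniform access: every block is equally likely to be referenced.
uniform = [rng.randrange(blocks) for _ in range(20000)]

# Skewed access: 80% of references go to a "hot" 20% of the blocks.
hot = blocks // 5
skewed = [rng.randrange(hot) if rng.random() < 0.8
          else rng.randrange(hot, blocks) for _ in range(20000)]

u = hit_rate(uniform, cache_size)
s = hit_rate(skewed, cache_size)
print(f"uniform hit rate: {u:.2f}, skewed hit rate: {s:.2f}")
```

With uniform access the hit rate stays close to the fraction of data that fits in the cache, while the skewed workload hits far more often, because the hot blocks keep renewing their place in the LRU list.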