Oracle Exadata Smart Flash Cache in Depth
- Solid-State Disk Technology
- Exadata Flash Hardware
- Exadata Smart Flash Cache
- Exadata Smart Flash Logging
- Smart Flash Cache WriteBack
- Summary
The Exadata Database Machine combines Flash solid-state disks with more traditional magnetic disks in order to achieve economical storage of large datasets together with low latency and high throughput. In this chapter we review the architecture of Exadata Flash solid-state disk and examine in detail how Exadata Smart Flash Cache and Exadata Smart Flash Logging allow you to transparently leverage this Flash storage.
Solid-State Disk Technology
To understand how Flash technology contributes to Exadata system performance and how best to exploit that technology, let’s start by comparing the performance and economics of Flash and spinning-disk technology.
Limitations of Disk Technology
Magnetic disks have been a continuous component of mainstream computer equipment for generations of IT professionals. First introduced in the 1950s, the fundamental technology has remained remarkably constant: one or more platters contain magnetic charges that represent bits of information. These magnetic charges are read and written by a head mounted on an actuator arm, which moves across the disk to a specific position on the radius of the platter and then waits for the platter to rotate the desired location into place beneath it (see Figure 15.1). The time taken to read an item of information is the sum of the time taken to move the head into position (seek time), the time taken to rotate the item into place (rotational latency), and the time taken to transmit the item through the disk controller (transfer time).
Figure 15.1 Magnetic disk architecture
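The contribution of each component is easy to approximate with simple arithmetic. The short Python sketch below computes the service time of a single random read using illustrative figures for a 15,000 RPM drive; the seek time and transfer rate are assumptions chosen for the example, not the specifications of any particular drive.

```python
# Rough service time for a single random 8K read from a 15,000 RPM disk.
# The seek time and transfer rate are illustrative assumptions.

rpm = 15_000
avg_seek_ms = 3.5                 # assumed average seek time
transfer_mb_per_s = 150           # assumed sustained transfer rate
io_size_kb = 8

rotational_latency_ms = (60_000 / rpm) / 2          # half a revolution on average
transfer_ms = io_size_kb / 1024 / transfer_mb_per_s * 1000

total_ms = avg_seek_ms + rotational_latency_ms + transfer_ms
print(f"seek={avg_seek_ms:.2f} ms  rotation={rotational_latency_ms:.2f} ms  "
      f"transfer={transfer_ms:.3f} ms  total={total_ms:.2f} ms")
```

For a random single-block read, the mechanical components (seek and rotation) dominate; the transfer time itself is almost negligible.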
Moore’s Law—first articulated by Intel cofounder Gordon Moore—observes that transistor density doubles every 18 to 24 months. In its broadest interpretation, Moore’s Law reflects the exponential growth that is commonly observed in almost all electronic components, influencing CPU speed, RAM, and disk storage capacity.
While this exponential growth is observed in almost all electronic aspects of computing—including hard disk densities—it does not apply to mechanical technologies such as those underlying magnetic disk I/O. For instance, had Moore’s Law been in effect for the rotational speed of disk devices, disks today would be rotating about 100 million times faster than in the early 1960s; in fact they are rotating only eight times faster.
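The scale of that gap is easy to verify. The sketch below compares hypothetical Moore's Law growth in rotational speed with the actual improvement; the time span, doubling period, and RPM figures are assumptions chosen only to illustrate the order of magnitude.

```python
# Compare Moore's Law-style growth with the actual growth in rotational speed.
# The time span, doubling period, and RPM figures are illustrative assumptions.

years = 50                        # roughly five decades since the early 1960s
doubling_period_years = 1.875     # between the 18- and 24-month figures

moores_law_factor = 2 ** (years / doubling_period_years)
actual_factor = 15_000 / 1_800    # ~1,800 RPM then versus 15,000 RPM now (assumed)

print(f"Moore's Law would predict ~{moores_law_factor:,.0f}x faster rotation")
print(f"The actual improvement is only ~{actual_factor:.0f}x")
```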
So while the other key components of computer performance have been advancing exponentially, magnetic disk drives have been improving only incrementally. There have been some significant technical innovations—Perpendicular Magnetic Recording, for instance—but these in general have led to improved capacity and reliability rather than speed. Consequently, disk drives are slower today, relative both to other components and to their own storage capacity, than they were in the past. Disk I/O has therefore increasingly limited the performance of database systems, and the practice of database performance tuning has largely become a process of avoiding disk I/O whenever possible.
The Rise of Solid-State Flash Disks
Heroic efforts have been made over the years to avoid the increasingly onerous bottleneck of the magnetic disk drive. The most prevalent, and until recently most practical, solution has been to “short stroke” and “stripe” magnetic disks: essentially installing more disks than are necessary for data storage in order to increase total I/O bandwidth. This increases the overall I/O capacity of the disk subsystem but has a limited effect on latency for individual I/O operations.
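A back-of-envelope calculation shows why striping raises aggregate throughput without improving the latency of any single I/O; the per-disk figures below are assumptions used purely for illustration.

```python
# Striping across more spindles multiplies aggregate IOPS, but each individual
# I/O still pays the full seek + rotation cost of one disk.
# Per-disk figures are illustrative assumptions.

iops_per_disk = 180        # assumed random-read IOPS for a single 15K RPM disk
latency_per_io_ms = 5.5    # assumed average service time for one random read

for disks in (1, 8, 64):
    print(f"{disks:3d} disks: ~{disks * iops_per_disk:6,d} IOPS aggregate, "
          f"but each read still takes ~{latency_per_io_ms} ms")
```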
In contrast to a magnetic disk, a solid-state disk (SSD) contains no moving parts and provides dramatically lower I/O latencies. Commercial SSDs are currently implemented using either DDR RAM—effectively a battery-backed RAM device—or NAND Flash. NAND Flash is an inherently nonvolatile storage medium that almost completely dominates today’s SSD market, and it is the technology used for Exadata SSD.
Flash SSD Latency
Performance of Flash SSD is orders of magnitude superior to that of magnetic disk devices, especially for read operations. Figure 15.2 compares the read latency of various types of SSDs and traditional magnetic disks (note that these are approximate and vary markedly depending on the drive make and configuration).
Figure 15.2 Seek times for various drive technologies
Economics of Solid-State Disks
The promise of solid-state disks has led some to anticipate a day when all magnetic disks are replaced by solid-state disks. While this might someday come to pass, in the short term the economics of storage and the economics of I/O are at odds—magnetic disk provides a more economical medium per unit of storage, while Flash provides a more economical medium for delivering high I/O rates and low latencies.
Figure 15.3 illustrates the two competing trends: while the cost of I/O is reduced with solid-state technology, the cost per terabyte increases. Various flavors of SSD (PCIe or SATA interfaces, MLC or SLC cells) offer different price and performance characteristics compared to magnetic disks (15K versus 7K RPM, for instance). The SSD devices that offer good economics of I/O offer poorer economics for mass storage. Of course the cost per gigabyte for SSD is dropping rapidly, but no faster than the falling cost of magnetic disks or the growth in database storage demand—especially in the era of Big Data.
Figure 15.3 Economics of storage for solid-state and magnetic disks
Since most databases include both hot and cold data—small amounts of frequently accessed data as well as large amounts of idle data—most databases will experience the best economic benefit by combining both solid-state and traditional magnetic disk technologies. This is why Exadata combines both magnetic disks and Flash disks to provide the optimal balance between storage economics and performance. If Exadata contained only magnetic disks, it could not provide superior OLTP performance; if it contained only SSDs, it could not offer compelling economical storage for large databases.
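To see why the combination wins economically, consider a simple blended-cost calculation. Every price and the hot-data fraction below are invented purely for illustration; the point is the shape of the result, not the specific dollar figures.

```python
# Blended cost of keeping a small "hot" fraction of a database on Flash and
# the rest on magnetic disk. All prices and ratios are invented assumptions.

db_size_tb = 100
hot_fraction = 0.05                 # assume 5% of the data is frequently accessed

flash_cost_per_tb = 4_000           # assumed $/TB for Flash
disk_cost_per_tb = 300              # assumed $/TB for magnetic disk

all_disk = db_size_tb * disk_cost_per_tb
all_flash = db_size_tb * flash_cost_per_tb
tiered = (db_size_tb * hot_fraction * flash_cost_per_tb +
          db_size_tb * (1 - hot_fraction) * disk_cost_per_tb)

print(f"all disk:  ${all_disk:,}   (cheap per TB, but poor random I/O)")
print(f"all flash: ${all_flash:,}  (fast, but costly at scale)")
print(f"tiered:    ${tiered:,.0f}   (hot data served from Flash)")
```

The tiered configuration costs only a fraction of the all-Flash configuration yet serves the frequently accessed data at Flash latencies.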
Flash SSD Architecture and Performance
The performance differences between solid-state drives and magnetic disks involve more than simply a reduction in read latency. Just as the fundamental architecture of magnetic disks favors certain I/O operations, the architecture of solid-state drives favors specific and different types of I/O. Understanding how an SSD handles the different types of operations helps us make the best decisions when choosing configuration options.
SLC, MLC, and TLC Disks
Flash-based solid-state disks have a three-level hierarchy of storage. Individual bits of information are stored in cells. In a single-level cell (SLC) SSD, each cell stores only a single bit. In a multilevel cell (MLC), each cell may store 2 or more bits of information. MLC SSD devices consequently have greater storage densities, but lower performance and reliability. However, because of the economic advantages of MLC, Flash storage vendors have been working tirelessly to improve the performance and reliability of MLC Flash, and it is now generally possible to get excellent performance from an MLC-based device.
Until recently, MLC SSDs contained only 2 bits of information per cell. However, triple-level cell (TLC) SSDs are now becoming available: these are MLC devices that can store 3 bits of information. In theory, higher-density MLCs may appear in the future. However, increasing the number of bits in each cell reduces the longevity and performance of the cell. So far, Exadata systems have used only SLC or two-level MLC devices.
Cells are arranged in pages—typically 4K or 8K in size—and pages into blocks of between 128K and 1M, as shown in Figure 15.4.
Figure 15.4 SSD storage hierarchy (logarithmically scaled)
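A little arithmetic ties the hierarchy together. The page and block sizes used below follow the typical figures quoted above, and the bits-per-cell values simply restate the SLC/MLC/TLC distinction.

```python
# Cells -> pages -> blocks, using typical sizes quoted in the text.

bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3}
page_size_kb = 8            # pages are typically 4K-8K
pages_per_block = 64        # 64 x 8K = 512K, within the 128K-1M range

block_size_kb = page_size_kb * pages_per_block

for kind, bits in bits_per_cell.items():
    cells_per_page = page_size_kb * 1024 * 8 // bits
    print(f"{kind}: {bits} bit(s) per cell -> {cells_per_page:,} cells per "
          f"{page_size_kb}K page, {pages_per_block} pages per {block_size_kb}K block")
```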
Write Performance and Endurance
The page and block structure is particularly significant for Flash SSD performance because of the special characteristics of write I/O in Flash technology. Read operations, and initial write operations, require only a single-page I/O. However, changing the contents of a page requires an erase and overwrite of a complete block. Even the initial write can be significantly slower than a read, but the block erase operation is particularly slow—around 2 milliseconds.
Figure 15.5 shows the approximate times for a page read, page write, and block erase.
Figure 15.5 Flash SSD performance characteristics
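The asymmetry is easiest to appreciate with rough numbers. The latencies below are broad approximations in the spirit of Figure 15.5, not measurements of any specific device, and the "redirected write" case anticipates the free-list mechanism described in the next section.

```python
# Approximate cost of updating one page if the whole block must be erased and
# rewritten, versus redirecting the write to an already-empty page.
# Latencies are broad approximations, not measurements of a specific device.

page_read_us = 25
page_write_us = 250
block_erase_us = 2_000
pages_per_block = 64

# Naive in-place update: read the surviving pages, erase the block, rewrite it all.
erase_path_us = (pages_per_block * page_read_us +
                 block_erase_us +
                 pages_per_block * page_write_us)

# Redirected write to an already-erased page: a single page write.
redirect_path_us = page_write_us

print(f"erase-and-rewrite: ~{erase_path_us / 1000:.1f} ms")
print(f"redirected write:  ~{redirect_path_us / 1000:.2f} ms")
```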
Write I/O has another consequence in Flash solid-state drives: after a certain number of writes, a cell may become unusable. This write endurance limit differs among drives but typically ranges from about 10,000 cycles for a low-end MLC device to as many as 1,000,000 cycles for a high-end SLC device. SSDs generally "fail safe" when a cell becomes unwritable, marking the page as bad and moving the data to a new page.
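A rough lifetime estimate shows why endurance matters mainly under sustained, concentrated writes. Every figure below (capacity, endurance, write amplification, and write rate) is an assumption made up for the sake of the arithmetic.

```python
# Rough drive-lifetime estimate from write endurance. All figures are
# illustrative assumptions, not the specifications of any particular drive.

capacity_gb = 400
endurance_cycles = 10_000           # low-end MLC figure from the text
write_amplification = 2.0           # assumed extra writes caused by relocation/GC
sustained_write_mb_s = 100          # assumed average write rate

total_writable_gb = capacity_gb * endurance_cycles / write_amplification
seconds = total_writable_gb * 1024 / sustained_write_mb_s
years = seconds / (3600 * 24 * 365)

print(f"~{total_writable_gb:,.0f} GB can be written before wear-out")
print(f"at {sustained_write_mb_s} MB/s sustained, that is roughly {years:.1f} years")
```

At lighter or better-distributed write rates the same drive lasts many years, which is one reason the wear leveling discussed in the next section is so important.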
Garbage Collection and Wear Leveling
Enterprise SSD manufacturers make great efforts to avoid the performance penalty of the erase operation and the reliability concerns raised by write endurance. Sophisticated algorithms are used to ensure that erase operations are minimized and that writes are evenly distributed across the device.
Erase operations are avoided through the use of free lists and garbage collection. During an update, the SSD marks the block to be modified as invalid and copies the updated contents to an empty block, retrieved from a “free list.” Later, garbage collection routines recover the invalid block, placing it on a free list for subsequent operations. Some SSDs maintain storage above the advertised capacity of the drive to ensure that the free list does not run out of empty blocks for this purpose. This is known as overprovisioning.
Figure 15.6 illustrates a simplified SSD update algorithm. In order to avoid a time-consuming ERASE operation, the SSD controller marks the block to be updated as invalid (1), then takes an empty block from the free list (2) and writes the new data to that block (3). Later on, when the disk is idle, the invalidated blocks are garbage collected: each is erased and returned to the free list.
Figure 15.6 Flash SSD garbage collection
Wear leveling is the algorithm that ensures that no particular block is subjected to a disproportionate number of writes. It may involve moving the contents of hot blocks to blocks from the free list and eventually marking overused blocks as unusable.
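The interaction of the free list, garbage collection, and wear leveling can be sketched in a few lines of code. The toy model below is a deliberate simplification of the ideas described above, not the algorithm used by any real controller; the step numbers in the comments correspond to Figure 15.6.

```python
# A toy model of SSD update handling: updates are redirected to blocks taken
# from a free list, superseded blocks are garbage-collected later, and the
# free list is consumed least-worn-first as a crude form of wear leveling.
# A simplified illustration only, not a real controller algorithm.

class ToySSD:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks        # wear accumulated per block
        self.free_list = list(range(num_blocks))    # empty, already-erased blocks
        self.live = {}                              # logical id -> (block, data)
        self.invalid = []                           # blocks awaiting erase

    def write(self, logical_id, data):
        if logical_id in self.live:
            self.invalid.append(self.live[logical_id][0])         # (1) mark old block invalid
        self.free_list.sort(key=lambda b: self.erase_counts[b])   # wear leveling
        block = self.free_list.pop(0)                              # (2) take a free block
        self.live[logical_id] = (block, data)                      # (3) write to the new block

    def garbage_collect(self):
        # Performed when the drive is idle: erase invalidated blocks and
        # return them to the free list for reuse.
        while self.invalid:
            block = self.invalid.pop()
            self.erase_counts[block] += 1           # erase is the slow, wearing step
            self.free_list.append(block)


ssd = ToySSD(num_blocks=8)
for version in range(3):
    ssd.write(logical_id=42, data=f"row version {version}")
ssd.garbage_collect()
print("erase counts per block:", ssd.erase_counts)
```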
Wear leveling and garbage collection algorithms in the disk controller are what separate the men from the boys in SSDs. Without effective wear leveling and garbage collection, we would expect SSD drives to exhibit performance degradation and a reduced effective life. With the sophisticated algorithms employed by Oracle (Sun) and other SSD vendors, these issues are rarely significant.
However, the implications of garbage collection and wear leveling do influence what we expect from a database system that uses Flash SSDs. Sustained heavy sequential write I/O, or write operations that concentrate on a small number of hot pages, may not allow the garbage collection and wear leveling algorithms time to clean up invalid pages or distribute hot pages between operations. As a result, SSDs subject to these sorts of workloads may exhibit performance or storage capacity degradation. This may influence decisions around the use of Flash SSD for sequential write-intensive workloads such as those involved in redo log operations.
SATA versus PCIe SSD
Flash SSD drives come in two fundamental types: SATA and PCIe. A SATA SSD connects to the computer using the SATA interface employed by most magnetic disks. A PCIe drive connects directly to the PCI bus, familiar to most of us as the slot in our home computers to which graphics cards are attached.
SATA SSDs are convenient, because they can be attached wherever a traditional magnetic SATA disk is found. Unfortunately, the SATA interface was designed for magnetic disks, which have latencies on the order of milliseconds. When a Flash SSD—which has latencies on the order of microseconds—uses a SATA interface, the overhead of SATA becomes significant and may account for as much as two-thirds of the total read latency.
The PCIe interface was designed for extremely low-latency devices such as graphics adapters and allows these devices to interact directly with the computer’s processor bus. Consequently, PCIe SSD devices have much lower latencies than SATA SSDs—read latencies on the order of 25 microseconds versus perhaps 75 microseconds for a typical SATA SSD (see Figure 15.2).
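Using the approximate figures above, a quick calculation shows how much of a SATA SSD's read latency can be attributed to the interface rather than to the Flash itself.

```python
# How much of a SATA SSD read is interface overhead? The latencies are the
# approximate figures quoted in the text.

pcie_read_us = 25      # PCIe SSD: close to the raw Flash access time
sata_read_us = 75      # typical SATA SSD read latency

interface_overhead_us = sata_read_us - pcie_read_us
print(f"interface overhead: ~{interface_overhead_us} microseconds "
      f"({interface_overhead_us / sata_read_us:.0%} of the SATA read)")
```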
The Oracle Database Flash Cache
Although Exadata systems do not use the Oracle Database Flash Cache (DBFC), any discussion of the Exadata Smart Flash Cache (ESFC) inevitably invites comparisons with the DBFC. So before we dig deeply into Exadata Flash, let’s quickly review how the Oracle Database Flash Cache works. The DBFC is available from Oracle RDBMS 11g Release 2 on Oracle operating systems (Solaris and Oracle Enterprise Linux).
The Database Flash Cache serves as a secondary cache to the Oracle buffer cache. Oracle manages data blocks in the buffer cache using a modified least recently used (LRU) algorithm. Simplistically speaking, blocks age out of the buffer cache if they have not been accessed recently. When the DBFC is present, blocks that age out of the buffer cache are not discarded but are instead written (by the Database Writer, DBWR) to the Flash device. Should the blocks be required in the future, they can be read from the Flash Cache instead of from slower magnetic-disk-based database files.
Figure 15.7 shows the Oracle Database Flash Cache architecture. An Oracle server process reads blocks from the database files and stores them in the buffer cache (1). Subsequent reads can obtain the data blocks from the buffer cache without having to access the database file (2). When the block ages out of the buffer cache, the database writer loads it into the Flash Cache (3), but only if doing so does not interfere with writing modified blocks to disk (5). Subsequent reads may now find a block in either the Flash Cache (4) or the buffer cache (2).
Figure 15.7 Oracle Database Flash Cache architecture
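The flow in Figure 15.7 amounts to a two-level cache: a small in-memory LRU cache backed by a larger Flash tier that receives aged-out blocks. The sketch below models that idea only; it is not how the Oracle buffer cache or DBFC code is actually implemented, and the class and function names are invented for the example.

```python
# A toy two-level cache: blocks aged out of a small in-memory LRU "buffer
# cache" are demoted to a larger "flash cache" rather than being discarded.
# Purely illustrative; not Oracle's implementation.

from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, buffer_blocks, flash_blocks):
        self.buffer = OrderedDict()        # block id -> data, in LRU order
        self.flash = OrderedDict()
        self.buffer_blocks = buffer_blocks
        self.flash_blocks = flash_blocks

    def read(self, block_id, read_from_disk):
        if block_id in self.buffer:                    # (2) buffer cache hit
            self.buffer.move_to_end(block_id)
            return self.buffer[block_id]
        if block_id in self.flash:                     # (4) flash cache hit
            data = self.flash.pop(block_id)
        else:                                          # (1) read from the database file
            data = read_from_disk(block_id)
        self._insert_into_buffer(block_id, data)
        return data

    def _insert_into_buffer(self, block_id, data):
        self.buffer[block_id] = data
        if len(self.buffer) > self.buffer_blocks:
            old_id, old_data = self.buffer.popitem(last=False)   # age out the LRU block
            self.flash[old_id] = old_data                        # (3) demote to flash
            if len(self.flash) > self.flash_blocks:
                self.flash.popitem(last=False)                   # flash tier ages out too


cache = TwoLevelCache(buffer_blocks=2, flash_blocks=4)
reads_from_disk = []

def fetch(block_id):
    reads_from_disk.append(block_id)       # simulate a physical read
    return f"block {block_id}"

for block in [1, 2, 3, 1, 2, 3]:
    cache.read(block, fetch)
print("physical reads:", reads_from_disk)  # rereads of 1, 2, and 3 hit the flash tier
```

In the toy model, only the first three reads reach the disk; once blocks have aged out of the small buffer tier, subsequent reads are satisfied from the flash tier instead.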