Solid State Disks: Now and In the Future
In 1994, I bought my first solid state disk (SSD). It cost £30 and was a single 256KB flash cell. It fit into one of the two SSD slots on my Psion Series 3 palmtop, more than doubling its available storage. The form factor was slightly larger than a Compact Flash card.
Because it was a single cell, I could modify files on the disk, but doing so wouldn't reclaim free space; I had to completely erase the disk for that. Repeatedly modifying the same file could quickly fill it up, so I got into the habit of copying files to the internal RAM drive, saving there periodically, and then copying them back when the RAM drive became full.
Now, 17 years later, for the same price (ignoring inflation), I can buy a 32GB flash drive that's about the same size as my thumbnail. The amount of flash that I can buy for the same price has doubled roughly every year in the intervening period.
EEPROM
Some of the older computer books that I have tell me that memory comes in two flavors: RAM and ROM. RAM is volatile: It can be written to, but the stored data is lost as soon as the power goes out. ROM is read-only memory, which can be used for persistent storage but can't be modified after it's written. EPROMs are like ROM, but they're erasable and programmable: just shine ultraviolet light through the window at the top and you can reprogram them.
These books also speak of a new development that's still quite expensive: electrically erasable, programmable read-only memory. EEPROMs can be erased and rewritten entirely by sending electrical signals to them; no ultraviolet lamps required.
With the exception of some battery-backed RAM devices, all solid state disks employ some form of EEPROM. The most common form, flash, is a linear descendant of the original EEPROM memories, using the same floating-gate transistor as its core component.
Flash comes in two flavors, NOR and NAND, named for the logic gates that the transistor layouts resemble. NAND flash, the most common form, has characteristics similar to a hard disk: It's addressable in blocks, not in individual words. This is fine for a drop-in replacement for hard disks, which have the same limitation, but it makes it impossible to do some of the more interesting things that should be possible with nonvolatile solid state memory.
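The program/erase asymmetry described earlier (modifying files without reclaiming space) follows from how flash cells work: programming can only clear bits from 1 to 0, and setting bits back to 1 requires erasing an entire block. A toy model makes this concrete; the `FlashBlock` class and its behavior here are an illustrative sketch, not any real controller's interface:

```python
class FlashBlock:
    """Toy model of flash program/erase semantics: a program
    operation can only pull bits low (1 -> 0); restoring bits
    to 1 requires erasing the whole block at once."""

    def __init__(self, size=16):
        self.cells = [0xFF] * size   # the erased state is all ones

    def program(self, offset, byte):
        # Programming ANDs the new value in: bits can only be cleared.
        self.cells[offset] &= byte

    def erase(self):
        # The only way to get 0 bits back to 1: wipe the entire block.
        self.cells = [0xFF] * len(self.cells)

b = FlashBlock()
b.program(0, 0xF0)   # fine: clears the low four bits
b.program(0, 0x0F)   # result is 0x00, not 0x0F; the high bits can't come back
```

This is why "modifying" a file in place on raw flash leaves stale data behind: the old copy can only be reclaimed by erasing the whole block containing it.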
In a modern operating system, userspace code almost never interacts with the disk directly. All disk accesses go via the disk cache, a region of memory in the operating system kernel that stores a copy of the data from the disk. The operating system loads data in chunks (typically 4KB), and these in-memory copies are then accessed and modified by userspace programs. Periodically, the system flushes the changes back to the disk.
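You can see this caching layer from userspace: an ordinary write lands in the kernel's page cache and reaches the physical disk only when the kernel flushes it, which is why `fsync` exists. A minimal sketch (the `durable_write` name is invented for this example):

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and ask the kernel to push it through the disk cache."""
    with open(path, "wb") as f:
        f.write(data)         # copies into the kernel's in-memory disk cache
        f.flush()             # drain Python's own userspace buffer first
        os.fsync(f.fileno())  # ask the kernel to flush the cached pages to disk

path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
durable_write(path, b"periodically flushed")
```

Without the `fsync`, the data may sit in the disk cache for some time before the periodic flush writes it out, which is exactly the window in which a power failure loses it.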
This approach is mirrored as you get closer to the CPU. The CPU itself does something similar, typically with two or three layers, when accessing RAM. Main memory is a lot faster than the disk, but an access can still take well over a hundred cycles. If your processor had to go to main memory for every load or store instruction, it would run at something like 1 percent of its maximum speed. Instead, it accesses a copy of a small amount of main memory in its cache.
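Why caching works so well comes down to locality: most accesses hit data that was recently fetched. A toy direct-mapped cache model (all names and parameters here are invented for illustration) shows how access pattern alone changes how often you pay the trip to main memory:

```python
def cache_misses(addresses, num_lines=64, line_size=64):
    """Toy direct-mapped cache model: count how many accesses
    must go out to (slow) main memory to fetch a line."""
    lines = [None] * num_lines
    misses = 0
    for addr in addresses:
        tag = addr // line_size     # which 64-byte line this byte lives in
        slot = tag % num_lines      # direct-mapped: one fixed slot per line
        if lines[slot] != tag:
            lines[slot] = tag       # miss: fetch the line from memory
            misses += 1
    return misses

sequential = cache_misses(range(4096))            # walk 4KB byte by byte
strided = cache_misses(range(0, 4096 * 64, 64))   # jump a full line every access
```

The sequential walk touches 4096 bytes but misses only 64 times (once per line); the strided walk misses on every single access, so every one of its loads would pay the hundred-plus-cycle penalty.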
This memory hierarchy design appears because some memory is fast and expensive while other memory is slow and cheap, so you have progressively larger amounts of progressively slower memory. Below main memory, there's a sharp discontinuity in the hierarchy: Hard disks are a lot slower than RAM. A fast hard disk can transfer 100MB/s with a seek time of a few milliseconds; DDR3 can transfer gigabytes of data per second with an access time of a few nanoseconds.
This sharp discontinuity is why swapping cripples performance. When you start getting a lot of CPU cache churn, performance degrades a little. When you start swapping to disk, performance dies horribly. Operations that should take milliseconds start taking seconds, and the user notices.
In an ideal world, you'd have a few terabytes of battery-backed static RAM. In an almost ideal, but actually feasible, world, you'd have progressively larger amounts of slower memory. If you had 8GB of RAM, you might have 80GB of solid state storage with an access time of 10-100ns, then more with an access time of 100ns-1ms, and so on. Most data would be in fast RAM, some in slow RAM, and the rest in very slow RAM.
You can also do some interesting tricks, such as execute in place, with byte- or word-addressable persistent storage. This is one of the main uses for NOR flash in embedded systems. The NOR flash appears in the address space just like normal memory, so you can run programs simply by jumping to their entry address in the flash; there's no need to copy them into RAM first. This lets the OS be a lot more conservative about caching. Rather than copying something that may be used into memory, potentially evicting something else, it can just map things from the lower tiers into the address space and replace them with a faster cached version if they are accessed a lot.
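On a hosted OS, `mmap` gives a rough analogue of this map-rather-than-copy approach: the file's pages appear in the address space and are faulted in only when touched, instead of being read into a private buffer up front. A sketch (the file name and contents are invented for the demo):

```python
import mmap
import os
import tempfile

# Create a stand-in for a program image stored on a lower tier.
path = os.path.join(tempfile.gettempdir(), "xip_demo.bin")
with open(path, "wb") as f:
    f.write(b"\x90" * 4096)

# Map it into the address space read-only and access it in place.
with open(path, "rb") as f:
    image = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

first_byte = image[0]   # the page is faulted in on first touch, not copied eagerly
```

With true execute-in-place on NOR flash, even the page fault disappears: the flash is the backing memory, so there is nothing to fault in.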
It also makes resuming from a suspend state much faster. Before sleeping, you set all of the page table entries to point to the (slower) non-volatile memory. When you resume, you can immediately start working again, without waiting for anything to be swapped in. This helps with overall power usage, too. If you can resume from the equivalent of a suspend-to-disk state almost instantly, then there's no reason to keep the RAM powered when the machine is asleep. A laptop can go to a zero-power sleep state as soon as you close the lid, yet still resume instantly, albeit with a slight performance degradation until the RAM caches have been populated again.
In spite of its limitations, flash has one significant advantage: It exists now. This makes it the baseline against which other SSD technologies are compared. Flash can achieve transfer rates of a few hundred megabytes per second, with a write time of around 0.1ms and seek times somewhat better than that.