Internal Fragmentation
I said that fragmentation isn't a problem for flash, but that's not quite true. Fragmentation at the cell level is not a problem, but fragmentation at the sector level, which leaves a file occupying lots of partial cells, is. A flash drive will typically appear to the operating system as a set of 512-byte or 4KB blocks, but to the controller as a set of 128KB (or larger) cells.
This is one of the main reasons why flash drives slow down with use. It doesn't matter if a file is scattered across a hundred cells, but it does matter if it occupies a lot of partial cells. For example, imagine that you have a 256KB file. From the operating system's perspective, it occupies 64 4KB blocks on disk. If the file is fragmented, it may span as many as 64 flash cells; if it is properly aligned on cell boundaries, it will occupy only two. If you are just reading the file, neither case is slower, but if you overwrite it, the controller must erase 64 cells in one case and only two in the other. The time to erase and rewrite a cell is roughly constant, so the worst case for this file is 1/32 of the maximum throughput of the device, which is a slowdown that even an unobservant user is likely to notice.
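To make the arithmetic concrete, here is a minimal C sketch of the calculation above. The 4KB block size, 128KB cell size, and 256KB file size are just the illustrative figures from this example; real drives vary and generally don't expose their cell size at all.

```c
#include <stdio.h>

/* Illustrative geometry only: real drives vary and hide these numbers. */
#define BLOCK_SIZE  (4   * 1024)   /* what the operating system sees      */
#define CELL_SIZE   (128 * 1024)   /* what the controller must erase      */
#define FILE_SIZE   (256 * 1024)   /* the 256KB file from the example     */

int main(void)
{
    int blocks        = FILE_SIZE / BLOCK_SIZE;  /* 64 logical blocks      */
    int cells_aligned = FILE_SIZE / CELL_SIZE;   /* 2 cells, best case     */
    int cells_worst   = blocks;                  /* 64 cells, worst case:
                                                    one partial cell per block */

    printf("blocks seen by the OS:       %d\n", blocks);
    printf("cells erased (aligned):      %d\n", cells_aligned);
    printf("cells erased (worst case):   %d\n", cells_worst);
    printf("worst-case throughput:       1/%d of maximum\n",
           cells_worst / cells_aligned);
    return 0;
}
```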
Unfortunately, this is quite hard for an operating system to work around, because it often doesn't know how the controller maps blocks to cells. Embedded systems often use flash without an intelligent controller, so they see explicit cells and must do their own wear levelling, but more general-purpose systems have the flash hidden behind an interface that mimics a hard drive. These systems can make some guesses, such as favoring moderately large, power-of-two-sized, aligned allocations. This doesn't always help, because the mapping from blocks to cells is fairly dynamic in modern flash drives.
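As a rough illustration of the kind of guess an operating system can make, the sketch below rounds an allocation out to a guessed cell boundary. The 128KB guess and the helper names are my own assumptions for the example, and, as noted above, a dynamic block-to-cell mapping means this only improves the odds of alignment rather than guaranteeing it.

```c
#include <stdint.h>
#include <stdio.h>

/* Guessed erase-cell size; the controller does not tell us the real value. */
#define GUESSED_CELL_SIZE ((uint64_t)128 * 1024)

/* Round x up to the next multiple of a power-of-two boundary. */
static uint64_t align_up(uint64_t x, uint64_t boundary)
{
    return (x + boundary - 1) & ~(boundary - 1);
}

/* Hypothetical helper: given a requested extent, return the cell-aligned
 * extent a cell-aware allocator might reserve instead. */
static void cell_aligned_extent(uint64_t offset, uint64_t length,
                                uint64_t *aligned_offset,
                                uint64_t *aligned_length)
{
    uint64_t start = offset & ~(GUESSED_CELL_SIZE - 1);            /* round down */
    uint64_t end   = align_up(offset + length, GUESSED_CELL_SIZE); /* round up   */

    *aligned_offset = start;
    *aligned_length = end - start;
}

int main(void)
{
    uint64_t off, len;

    /* A 256KB write starting 300KB into a file. */
    cell_aligned_extent(300 * 1024, 256 * 1024, &off, &len);
    printf("reserve %llu bytes starting at offset %llu\n",
           (unsigned long long)len, (unsigned long long)off);
    return 0;
}
```

Rounding both ends out wastes some space, which is presumably why only moderately large allocations are worth treating this way.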