Introduction to Journaling Filesystems
Some of the more inspired among us may have kept (or still keep) a journal to record the changes in our lives. Journals and diaries help us keep track of exactly what's happening to us, and also often come in handy when we need to look back and see what was happening at a specific point in time. Though not nearly so melodramatic as personal journals, this is almost exactly the same model used by journaling filesystems, which keep a record of the changes made to the filesystem in a special part of the disk called a journal or log. My analogy between filesystems and life breaks down at this point, because journaling filesystems record prospective changes to that filesystem in the log before they actually perform those operations on the filesystem. If the computer system crashes while actually making those changes to the filesystem, the operating system can use the information in the log to bring the filesystem up-to-date by replaying the log, which is usually done when it remounts the journaling filesystem or verifies its consistency.
After a computer crashes, as discussed earlier in this chapter, the integrity of standard local filesystems must be verified by performing an exhaustive examination of all the data and data structures that they contain. Changes to the filesystem may only have been partially written to disk, and the operating system has no way to determine whether all writes to a filesystem completed successfully other than by completely checking the integrity of the filesystem. This sort of check is usually unnecessary after an orderly shutdown or restart of a computer system because shutdown and restart procedures always flush all pending writes to disk and mark a filesystem as clean before unmounting it. Filesystems marked as clean do not have to be checked when a computer restarts.
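For example, on an ext2 filesystem you can see whether the filesystem was marked clean at unmount time by examining its superblock with tune2fs, and e2fsck itself simply reports summary statistics and exits when the filesystem is clean (the device name and numbers shown here are only illustrative):

    # tune2fs -l /dev/hda1 | grep -i state
    Filesystem state:         clean
    # e2fsck /dev/hda1
    e2fsck 1.27 (8-Mar-2002)
    /dev/hda1: clean, 11439/26104 files, 51270/104388 blocks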
Unlike standard filesystems, journaling filesystems can be made consistent by replaying any actions in the log that are not marked as having been written to disk. As discussed later, these actions range from a record of any changes to file and directory metadata (for example, files and directories that were created, deleted, moved, or whose size changed) to a complete record of the changes to the data in any file. It is not always possible to restore every change made to a journaling filesystem, because information about some of them may be incompletely written to the log. However, you can almost always safely assume that the journaling filesystem is consistent and can be brought up to date by re-executing the pending changes that were recorded in a few log records, rather than having to examine the whole filesystem. As filesystems and disks grow larger and larger, the time it would take to verify the integrity of a filesystem grows, as do the potential time savings gained by using a journaling filesystem.
Note
The terms "logging" and "journaling" are usually used interchangeably when referring to filesystems that record changes to filesystem structures and data to minimize restart time and maximize consistency. I tend to use the term "journaling," because this makes it hard to confuse journaling filesystems with log-structured filesystems, which are a completely different animal. Log-structured filesystems use a log-oriented representation for the filesystem itself and also usually require a garbage collection process to reclaim space internally. Journaling filesystems use a log that is simply a distinct portion of the filesystem or disk, and can even be a file within the filesystem. Where and how logs are stored and used are explained later in this section. Journaling filesystems themselves usually follow the more classic filesystem organization explained in Chapter 1, though they often use faster algorithms and heuristics for sorting and locating files, directories, and data.
Logs are the key feature of journaling filesystems. As mentioned earlier, information about filesystem changes is written to the log (logged) before those changes actually are made to a filesystem. Traditional filesystems with fixed disk structures but no journaling capability, such as ext2, have to do synchronous writes to the filesystem to guarantee the integrity of the changes that they are making. The ext2 filesystem, in particular, uses some clever mechanisms for bunching related writes together to minimize head movement on the disk and also to minimize the amount of time that access to the filesystem is literally paused while those writes are taking place.
Because the filesystem must always be consistent when it is being used, there are almost always some number of pending writes held in buffers by the operating system. Though modern Linux and Unix systems automatically flush all pending writes to all filesystems when the system is being shut down or rebooted, older Unix systems (and early versions of Linux with more primitive types of filesystems) didn't always do such a good job. This is why you'll often see older Unix sysadmins religiously type the sync command (which flushes all pending writes to disk) a few times before shutting down or rebooting a system; it's essentially the Unix version of saying a few "Hail Marys."
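On such a system, the ritual looked something like the following (on any modern Linux system, the shutdown procedure flushes pending writes for you):

    # sync; sync; sync
    # shutdown -h now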
The ext2 filesystem is a remarkably high-performance local filesystem. One indication of this is the fact that the filesystem itself has been ported to various microkernel environments such as GNU Hurd and Mach. Although easy portability to new environments is part of the promise of Open Source software, people don't bother porting code unless it is well written, powerful, and useful. When writing files, the ext2 filesystem preallocates a few extra contiguous blocks whenever possible to minimize fragmentation as files that are being edited continue to grow.
Whenever possible, the ext2 filesystem also minimizes the distance between the inode for a file and the data blocks that contain the file data to minimize head movement as much as possible when accessing files. The ext2 filesystem also uses the idea of block groups, logical subsets of the data storage available on a disk, to provide performance optimizations. Block groups are conceptually related to the cylinder groups used in earlier high-performance filesystems such as the Berkeley Fast File System (FFS). Block groups can be viewed logically as filesystems within a filesystem because they contain their own superblocks to help localize information about free and used blocks into smaller units, reduce the size of the bitmaps that reflect free and used blocks, and simplify allocating data blocks as close as possible to the inode identifying the file associated with them.
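You can examine the block groups of an existing ext2 filesystem with the dumpe2fs utility from the e2fsprogs package (output abridged; the device name and the numbers shown are only illustrative):

    # dumpe2fs /dev/hda2 | less
    ...
    Group 0: (Blocks 1-8192)
      Primary superblock at 1, Group descriptors at 2-2
      Block bitmap at 3 (+2), Inode bitmap at 4 (+3)
      Inode table at 5-214 (+4)
      7941 free blocks, 2029 free inodes, 2 directories
    ...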
Regardless of the optimizations implemented in the ext2 filesystem, writes to the filesystem still have to be synchronous with other access to the filesystem at some point. Multiple users accessing multiple files created at widely different points in time will still have to write data that is probably located all over the disk, pausing access to the filesystem while widely separated head movement takes place. Although writing pending filesystem changes to a log and then later writing those same changes to the filesystem essentially causes two units of information to be written for each single write destined for the filesystem, only the writes to the log have to be synchronous with access to the filesystem.
Migrating logged changes to the actual filesystem can largely be done asynchronously, except when another process requests access to a file or directory for which changes are already pending in the log. In this case, the pending changes in the log must be written to disk before the new process can be granted access to that file or directory. Journaling filesystems also can offset much of the additional time required by "double writes" by clever organization and use of the log itself, as explained later in this chapter.
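For example, the ext3 filesystem discussed later in this book exposes this checkpointing behavior through its commit mount option, which sets how often, in seconds, logged changes are flushed to their final locations on disk (the device and mount point here are placeholders):

    # mount -t ext3 -o commit=15 /dev/hda2 /home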
The next few sections discuss the type of information stored in a log, where logs themselves are stored and how they are written to, and how journaling filesystems use log information during normal operation and when a system is restarted.
Contents of a Journaling Filesystem Log
Two different approaches to journaling are used by different filesystems, each with its own advantages and disadvantages:
- The log contains only a record of the changes made to filesystem metadata associated with each write operation.
- The log contains a record of the changes to both file data and filesystem metadata associated with each write operation.
The common denominator, logging changes to filesystem metadata, is what guarantees the integrity of a journaling filesystem. Even after a system crash, the structure of files, directories, and the filesystem can be made consistent by re-executing any pending changes that are completely described in the log. Entries in the log usually are transactional, meaning that the beginning and end of each single change is recorded, because related sets of changes must either be completely performed or must not have been performed at all. For example, assume that I save a new version of a file that I am editing. This causes the following things to happen in the filesystem (though not necessarily in this order):
- New blocks are allocated to hold the new data (this always happens first).
- The newly allocated blocks are marked as being used in the filesystem.
- The new file data is written to the newly allocated blocks on disk.
- The inode or indirect block identifying the chain of data blocks associated with the file is updated to include the new blocks.
- The time stamps for when the file was last accessed and written are updated.
Aside from logging these events, the log would contain information indicating that all these events were associated with each other. If your computer crashed at this exact moment, you would want either all these things to happen or none of them to occur. Marking blocks as used that were not actually written to disk would waste space and also just be wrong. The actions associated with each filesystem change are therefore referred to as being atomic: all of them must occur or none of them can.
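You can see most of the metadata involved in such a transaction with the stat command (the exact output format varies with the version of stat; the file name and values shown here are only illustrative):

    $ echo "one more line" >> notes.txt
    $ stat notes.txt
      File: "notes.txt"
      Size: 1043    Blocks: 8    IO Block: 4096   Regular File
    Device: 302h/770d    Inode: 114727    Links: 1
    Access: Sat Jun  1 10:15:04 2002
    Modify: Sat Jun  1 10:15:04 2002
    Change: Sat Jun  1 10:15:04 2002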
Continuing with this example, a journaling filesystem that only logged changes to filesystem metadata would have a record of all these changes except for the actual contents of the new blocks. Replaying these metadata changes would guarantee the consistency of the filesystem, but the modified file might contain garbage at the end because the new information written to it was not logged. To be completely safe, logs that keep a record of changes to file data must also contain a record of the information present before changes are made to the file. This provides an "undo" record that enables a journaling filesystem to erase changes that it made on behalf of transactions that did not complete.
The more information you write to the log, the more time required to perform those writes, especially because writes to the log must be done synchronously to guarantee their integrity. The flip side of this coin is that storing both file and directory data and metadata changes increases the extent to which replaying the log gives you an exact picture of all the changes to the filesystem made up to the point at which the system crashed. From a user's point of view, this reduces the chance that changes that the user made to her data will not be visible when the system comes back up and her directory is available again.
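For example, the ext3 filesystem lets you choose between these approaches on a per-mount basis through its data mount option (the device and mount point here are placeholders):

    # mount -t ext3 -o data=journal /dev/hda2 /home     # log both file data and metadata
    # mount -t ext3 -o data=ordered /dev/hda2 /home     # log metadata only; write data blocks first
    # mount -t ext3 -o data=writeback /dev/hda2 /home   # log metadata only, with no data ordering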
The journaling filesystems described in the rest of this book take different approaches to the question of whether to log modified file data, directory data, and metadata, or simply to log metadata changes. Each chapter identifies the approach taken by that specific filesystem.
Where Logs Are Stored
Just as different journaling filesystems store different types of information in their logs, they also store those logs in different places. This section provides an overview of various log storage locations and the advantages and disadvantages of each.
The simplest place to store a log is as an actual file within the filesystem to which you are logging changes. This is the approach taken by the ext3 journaling filesystem, largely because a primary goal of the ext3 filesystem is to add journaling capabilities while otherwise maintaining compatibility with an existing type of filesystem, in this case ext2. Storing the journal as a file within a filesystem has two obvious problems. First, at some level, writing to the log has to be done through standard filesystem calls; second, if the filesystem that contains the log is damaged somehow, you may lose the log. The latter is especially true for ext3 filesystems because they essentially can be summed up by the following equation:

    ext3 = ext2 + journal
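Because ext3 simply adds a journal file to an otherwise standard ext2 filesystem, you can convert an existing ext2 filesystem in place or create a new ext3 filesystem directly (the device names here are examples):

    # tune2fs -j /dev/hda3     # add a journal to an existing ext2 filesystem
    # mke2fs -j /dev/hda4      # create a new ext3 filesystem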
A general problem with storing the log within the filesystem that it is keeping track of is that this can be slow, because writes to the log may compete for disk head movement with writes to the rest of the filesystem.
A second place to store the log is in a special portion of the filesystem that is not accessible to user programs. This enables the filesystem to use custom calls to write to the log in an optimized fashion, speeding up performance. This also substantially reduces the chance that the log will be lost if the filesystem is damaged, because the log is associated with the filesystem but is stored as a specially formatted section rather than as a file. However, this approach still has the problem that writes to the log may compete for disk head movement with writes to the filesystem. In this case, however, storing the journaling filesystem within a logical volume can eliminate this competition if the log and data portions of the filesystem end up being stored on different physical devices. At the moment, this is almost impossible to guarantee, but it's something to consider for the future.
The final location for storing the log is outside the filesystem in a dedicated portion of some disk drive. This removes both the problems of corruption to the filesystem extending to the log and of log writes competing with filesystem writes for disk head movement (as long as the journaling filesystem and the log are stored on different physical devices).
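XFS, for example, lets you put the log on a dedicated device both when creating the filesystem and when mounting it (the device names here are examples):

    # mkfs.xfs -l logdev=/dev/sdb1,size=10000b /dev/sda1
    # mount -t xfs -o logdev=/dev/sdb1 /dev/sda1 /mnt/xfs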
The journaling filesystems described in the rest of this book store their logs in different locations. Each chapter identifies the approach taken by that specific journaling filesystem and discusses its advantages and disadvantages.
Verifying the Consistency of a Journaling Filesystem
Long-time Unix fans are used to thinking of the filesystem consistency checker, fsck, as the application that guarantees the integrity of a filesystem during the boot process. This isn't true for journaling filesystems, which need only scan the log and re-execute any transactions that are completely present in the log but not marked as completed. Journaling filesystems are automatically brought up-to-date when they are mounted: the kernel code for each journaling filesystem replays any necessary portions of the log before attaching that filesystem to the rest of the Linux filesystem.
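For example, if an ext3 filesystem was not cleanly unmounted, you can see this replay take place in the kernel log when the filesystem is next mounted (the exact message text varies with the filesystem and kernel version; the device and mount point are placeholders):

    # mount -t ext3 /dev/hda2 /home
    # dmesg | tail -2
    EXT3-fs: recovery complete.
    EXT3-fs: mounted filesystem with ordered data mode.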
The Linux boot process has certain expectations about how filesystems are verified to be consistent. Filesystems are generally marked as dirty when they are mounted, with the expectation that they will be marked as clean (not dirty) when they are unmounted. At boot time, the fsck wrapper executes the appropriate version of fsck for each filesystem that is to be checked. If the filesystem is not dirty, versions of fsck, such as fsck.ext2 (which is a hard link to e2fsck), report general statistics about that filesystem and then exit.
Because checking the consistency of a journaling filesystem is usually unnecessary, the best way of avoiding an unnecessary fsck is to mark journaling filesystems as not needing to have their consistency verified in the /etc/fstab file (by setting the sixth field to 0). (Some journaling filesystems, such as JFS, replay the log when their fsck program is run rather than when they are mounted.) This is often overlooked, so most journaling filesystems include an fsck utility with a name of the form fsck.filesystem-type so that it can be correctly called by the standard /sbin/fsck wrapper. The journaling filesystems discussed in this book take different approaches to what these "journaling fscks" do if executed because a journaling filesystem is still marked as dirty.
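For example, an /etc/fstab entry for a journaling filesystem with boot-time consistency checking disabled might look like the following (the device, mount point, and filesystem type are placeholders):

    # device     mount point   type   options    dump  fsck order
    /dev/hda2    /home         ext3   defaults   1     0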
The following list summarizes the behavior of the versions of fsck that accompany the journaling filesystems discussed in this book:
- /sbin/fsck.ext3 is a symbolic link to /sbin/e2fsck, which is the fsck utility for both ext2 and ext3 filesystems (because they're essentially the same thing). If e2fsck is executed as fsck.ext3, it tries to mount and unmount the filesystem to cause the kernel code to replay the journal and fix anything that needs to be fixed when attaching the filesystem.
- fsck.jfs is a real program that replays the transaction log and continues to check the filesystem if it is still marked as dirty.
- fsck.reiserfs may not exist depending on the Linux distribution that you are using, but I always create it as a symbolic link to /sbin/reiserfsck just for good form. reiserfsck requires that you type "Yes" to run it, which always fails during the automatic boot process, so it is never really executed.
- fsck.xfs is my absolute favorite. Whoever came up with this approach not only has a lot of faith in his kernel code but also has a great sense of humor. Here's the online manual page for fsck.xfs:
    fsck.xfs(8)                                                  fsck.xfs(8)

    NAME
           fsck.xfs - do nothing, successfully

    SYNOPSIS
           fsck.xfs [ ... ]

    DESCRIPTION
           fsck.xfs is called by the generic Linux fsck(8) program at
           startup to check and repair an XFS filesystem. XFS is a
           journaling filesystem and performs recovery at mount(8) time
           if necessary, so fsck.xfs simply exits with a zero exit status.

    FILES
           /etc/fstab.

    SEE ALSO
           fsck(8), fstab(5), xfs(5).
That just about says it all.
Note
You will only have the fsck.xfs man page after installing the XFS utilities, as explained in Chapter 7, "SGI's XFS Journaling Filesystem."