14.4 Commercially Available Cluster File Systems
In addition to the open-source cluster file systems that are readily available for free, there are a number of commercial parallel file system implementations on the market. The primary advantage of the commercial packages is the available support, which can include bug fixes and updates to functionality. Although I do not have time or space to delve deeply into all these products, three will serve as a representative sample of what is available.
There is a set of common requirements that you will find in all parallel file systems used in Linux clusters. The first requirement is POSIX semantics for the file system, which allows UNIX and Linux applications to use the file system without code modification. The set of operations, the behavior, and the data interfaces are specified in the POSIX standard. For database use, you will find that the file system needs to provide some form of direct-access mode, in which the database software can read and write data blocks without going through the Linux system's buffer or page cache. The database software performs this caching itself, so bypassing the page cache removes redundant overhead.
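To make the direct-access mode concrete, the following sketch opens a file with the Linux O_DIRECT flag and reads one aligned block, bypassing the page cache. The file path and the 4-KB block size are assumptions chosen for illustration; database engines such as Oracle perform this kind of I/O internally through their own storage layers.

```c
/* Minimal sketch of "direct-access mode": open a file with O_DIRECT so the
 * read bypasses the Linux page cache.  The path and block size below are
 * illustrative only; O_DIRECT requires the buffer, offset, and transfer
 * size to be suitably aligned (typically to the device sector size). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t blksize = 4096;   /* assumed alignment and transfer size */
    void *buf = NULL;

    /* O_DIRECT transfers must use an aligned buffer. */
    if (posix_memalign(&buf, blksize, blksize) != 0) {
        perror("posix_memalign");
        return 1;
    }

    /* Hypothetical data file living on the cluster file system. */
    int fd = open("/gfs/oradata/datafile.dbf", O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        free(buf);
        return 1;
    }

    /* Read one aligned block without populating the page cache. */
    ssize_t n = read(fd, buf, blksize);
    if (n < 0)
        perror("read");
    else
        printf("read %zd bytes directly from storage\n", n);

    close(fd);
    free(buf);
    return 0;
}
```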
For use with Oracle 9i RAC installations, a parallel file system allows sharing of the Oracle home directory with the application binaries and associated log and configuration files. This means a single software installation (instance) instead of one install per node participating in the cluster. This is one of the general benefits of using a file system of this type for clusters: the shared data and shared client participation in the file system can mean less system administration overhead.
14.4.1 Red Hat Global File System (GFS)
Red Hat has recently completed the purchase of Sistina Software Inc. and its GFS technology (information at http://www.sistina.com). On a historical note, the OpenGFS software described earlier is built on the last open-source version of GFS released before the developers founded Sistina Inc. Red Hat offers a number of cluster "applications" for its Linux offering, including high availability and IP load balancing, and GFS adds a parallel cluster file system to that capability. The Sistina Web site mentions that the GFS software will be open source soon.
The GFS product provides direct I/O capabilities for database applications as well as POSIX-compliant behavior for use with existing UNIX and Linux applications. Both Red Hat and SUSE Linux are supported. The Red Hat Linux versions supported as of April 2004 are RHEL 3.0, RHAS 2.1, and Red Hat version 9.0. The SUSE SLES 8 release is also supported. GFS supports up to 256 nodes in the file system cluster, all with equal (sometimes referred to as symmetric) access to the shared storage.
As with other parallel cluster file systems, GFS assumes the ability to share storage devices between the members of the cluster. This implies, but does not require, a SAN "behind" the cluster members to allow shared access to the storage. For example, it is quite possible to use point-to-point Fibre-Channel connections in conjunction with dual-controller RAID arrays and Linux multipath I/O to share storage devices without implementing a switched SAN fabric.
Another way to extend the "reach" of expensive SAN storage is to use a SAN switch with iSCSI or other network protocol ports. The Fibre-Channel switch translates IP-encapsulated SCSI commands, delivered via the iSCSI protocol, into native Fibre-Channel frames. High-performance systems may access the SAN directly via the switch's Fibre-Channel fabric, while other, potentially more numerous, systems access the storage via GbE.
In addition to shared access to storage in an attached SAN fabric, Red Hat gives GFS server systems the ability to access a storage device through a local GFS network block device (GNBD), which accesses a device exported by a GNBD server. This type of interface on Linux (there are several, including the network block device [nbd] interface) typically provides a kernel module that translates device accesses on the local system into the proper network-encapsulated protocol. The GNBD interface allows storage device access to be extended to the server systems over a low-cost gigabit (or other) Ethernet transport. This arrangement is shown in Figure 14-7.
Figure 14-7 GNBD and iSCSI device access to Red Hat GFS
This approach provides more fan-out at a lower cost than a switched SAN, but at potentially lower performance. A key cost saving is the ability to eliminate expensive back-end SAN switches by using direct-attach Fibre-Channel storage and either GNBD or iSCSI over lower cost GbE. An entire book could be written on the design issues and implementation approaches for this type of solution; I am hoping that somebody will write it soon. We will stop here before we get any deeper.
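As a brief aside on how a GNBD import looks from the consuming node, the sketch below simply opens a block device node and reads its first sector. The device path /dev/gnbd/shared0 is an assumption for illustration; the point is that the kernel module turns ordinary device reads into network-encapsulated block requests to the GNBD server, transparently to the application.

```c
/* Illustrative sketch only: once a GNBD (or nbd) device has been imported,
 * it appears as an ordinary local block device, so normal open()/read()
 * calls work against it.  The device path is an assumption; the actual
 * node name depends on how the device was exported and imported. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char sector[512];

    /* Hypothetical imported network block device node. */
    int fd = open("/dev/gnbd/shared0", O_RDONLY);
    if (fd < 0) {
        perror("open imported block device");
        return 1;
    }

    /* The kernel module translates this read into network-encapsulated
     * block requests to the GNBD server; the application never knows. */
    if (read(fd, sector, sizeof(sector)) < 0) {
        perror("read");
        close(fd);
        return 1;
    }

    printf("read first sector of the imported device\n");
    close(fd);
    return 0;
}
```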
14.4.2 The PolyServe Matrix File System
The Matrix Server product is a commercially available cluster file system from PolyServe Inc. The product runs on either Microsoft Windows or Linux systems and provides a high-availability, no-SPOF file-serving capability for a number of applications, including Oracle databases. Red Hat and SUSE Linux distributions are supported by the software. See http://www.polyserve.com for specific information.
Access to the file system is fully symmetric across all cluster members, and is managed by a distributed lock manager. Meta-data access is also distributed across the cluster members. The Matrix Server product assumes shared storage in a SAN environment. The high-level hardware architecture is shown in Figure 14-8.
Figure 14-8 PolyServe Matrix architecture
The file system is fully journaled and provides POSIX semantics to Linux applications, so they do not need to be specially adapted to cluster environments. The high-availability package provides a number of features, including multiple fail-over modes, standby network interfaces, support for multipath I/O, and on-line addition or removal of cluster members.
An interesting management feature provided by the PolyServe file system is the concept of "context-dependent symbolic links," or CDSLs. A CDSL allows node-specific access to a common file name (for example, log files or configuration files like sqlnet.ora) in the file system. CDSLs provide control over which files have a single instance shared by all members of the cluster and which files exist on a node-specific basis. The PolyServe file system allows creating CDSLs based on the node's host name.
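To illustrate the CDSL idea (and only the idea; PolyServe performs this resolution inside the file system itself), the sketch below builds a node-specific path for a common file name by substituting the local host name. The directory layout and file names are invented for the example.

```c
/* Sketch of the CDSL concept: resolve a common file name to a node-specific
 * path by substituting the local host name.  This only illustrates the
 * concept; it is not PolyServe's actual mechanism, and the directory
 * layout below is invented for the example. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char host[256];
    char nodepath[512];

    if (gethostname(host, sizeof(host) - 1) != 0) {
        perror("gethostname");
        return 1;
    }
    host[sizeof(host) - 1] = '\0';

    /* Common name:       /matrixfs/oracle/admin/sqlnet.ora
     * Node-specific copy: /matrixfs/oracle/admin/.nodes/<hostname>/sqlnet.ora */
    snprintf(nodepath, sizeof(nodepath),
             "/matrixfs/oracle/admin/.nodes/%s/sqlnet.ora", host);

    printf("node %s would open: %s\n", host, nodepath);
    return 0;
}
```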
The PolyServe file system may be exported via NFS or made available over the network to database clients through Oracle 9i RAC. In this situation, the PolyServe nodes become NFS or Oracle 9i RAC servers to their clients, in addition to participating in the cluster file system. Their common, consistent view of the file system data is available in parallel to NFS or Oracle clients. Other applications, like Web serving, Java application serving, and so forth, also benefit from exporting a single, consistent instance of the data in the cluster file system.
14.4.3 Oracle Cluster File System (OCFS)
The Oracle cluster file system (OCFS) is designed to eliminate the need for raw device access in Linux implementations of Oracle 9i RAC. The OCFS software is available for free download, and it is open source. It is intended only to support the requirements of the Oracle RAC software and is not a general-purpose cluster file system.
Because the Oracle database software manages the required locking and concurrency issues associated with data consistency, this functionality is not provided by the OCFS implementation. Other restrictions exist with regard to performing I/O to the OCFS, so its use should be limited to Oracle 9i RAC implementations only. It can help reduce the cost of implementing an Oracle 9i RAC cluster by eliminating the need to purchase extra cluster file system software.
Although OCFS removes the requirement to use raw partitions to contain the database, there are other side effects of which you should be aware. For this reason, you should investigate the system administration consequences, such as no shared Oracle home directory (implying multiple instances of log files and other information), before committing to OCFS as the file system for your database cluster.