- Ubiquitous Computing
- Web Services
- The Semantic Web
- Spaces Computing
- Peer-to-Peer Computing
- Collaborative Computing
- Dependable Systems
- Security
- Languages
- Pervasive Computing
- Cluster Concepts
- Distributed Agents
- Distributed Algorithms
- Distributed Databases
- Distributed Filesystems
- Distributed Media
- Distributed Storage
- Grid Computing
- Massively Parallel Systems
- Middleware
- Mobile and Wireless Computing
- Network Protocols
- Operating Systems
- Real-Time and Embedded Systems
- Commentary
- Endnotes
Distributed Filesystems
Balancing guarantees of data availability against security and efficient resource utilization is the central concern of the distributed filesystems fitscape of NDC. The first viable distributed filesystem was the Network File System (NFS) from Sun Microsystems. NFS enabled distributed storage, distributed media, cluster concepts, collaborative computing, and more. At first (circa 1983) it was used exclusively on Sun's UNIX-based workstations, but it became a standard file-sharing mechanism for many operating systems with the public release of NFS 2.0 in 1985. NFS version 3 came along around 1994; major revisions are currently being implemented to allow better performance across the Internet, ultimately turning NFS into a true WAN filesystem.
NFS is not so much a filesystem as a collection of protocols that together give rise to a client-perspective distributed filesystem. Key to NFS is the concept of a remote file service managed by a remote server. Clients need not be aware of a file's location; instead, they are given an interface similar to that of a local filesystem, offering various file operations that the server is responsible for implementing. This approach can be viewed as a remote access model, in contrast with an upload/download model, as shown in Figure 3.7.[24]
Figure 3.7. General distributed filesystem models
A rudimentary example of an upload/download model is an Internet FTP service when used by a client to obtain, modify, and store data on a remote server.
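The difference between the two models can be sketched in a few lines of code. This is a minimal illustration only; the class and method names are hypothetical and correspond to no actual NFS or FTP API, and a toy in-memory server stands in for a real file server.

```python
class InMemoryServer:
    """Toy stand-in for a file server, so the sketch is runnable."""
    def __init__(self):
        self.files = {"/doc.txt": b"hello world"}

    def read(self, path, offset, length):
        return self.files[path][offset:offset + length]

    def write(self, path, offset, data):
        buf = bytearray(self.files[path])
        buf[offset:offset + len(data)] = data
        self.files[path] = bytes(buf)

    def fetch(self, path):          # used by the upload/download model
        return self.files[path]

    def store(self, path, data):    # used by the upload/download model
        self.files[path] = data


class RemoteAccessFile:
    """Remote access model (NFS-style): every operation is forwarded to
    the server. The file stays remote; the client sees only an interface."""
    def __init__(self, server, path):
        self.server, self.path = server, path

    def read(self, offset, length):
        return self.server.read(self.path, offset, length)

    def write(self, offset, data):
        self.server.write(self.path, offset, data)


class UploadDownloadFile:
    """Upload/download model (FTP-style): the whole file is fetched,
    modified locally, and stored back only when the client is done."""
    def __init__(self, server, path):
        self.server, self.path = server, path
        self.local_copy = bytearray(server.fetch(path))   # download

    def write(self, offset, data):
        self.local_copy[offset:offset + len(data)] = data  # local only

    def close(self):
        self.server.store(self.path, bytes(self.local_copy))  # upload
```

The design consequence is visible in where a write becomes durable: under the remote access model the server sees each operation immediately, whereas under the upload/download model changes are invisible to the server (and to other clients) until the modified file is stored back.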
NFS is implemented on top of RPC. For the most part, filesystem interfaces have been abstracted from the operating system's interfaces to traditional local filesystems, so that distributed filesystem capabilities can be offered transparently. A virtual filesystem (VFS) interface has been a standard feature of UNIX and its derivatives since the mid-1980s.
Essentially, all requests for file access, whether local or remote, go through the operating system's VFS interface. This allows applications to treat all files uniformly, which is of great benefit to NDC application developers. Nevertheless, it is important to remember the inherent local/remote differences; the ambiguities and uncertainties of remote computing must be addressed at the interface level if uniform filesystem access is to be provided.
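The VFS idea can be sketched as a dispatch layer: applications issue one uniform call, and a mount table routes it to whichever filesystem driver, local or remote, is mounted at the matching prefix. The sketch below is a hypothetical simplification; real VFS layers are kernel data structures, and the names here are illustrative only.

```python
class LocalFS:
    """Stand-in for a local filesystem driver."""
    def __init__(self, files):
        self.files = files

    def read(self, path):
        return self.files[path]


class NFSClient:
    """Stand-in for an NFS client. A real one would issue RPCs to a
    remote server, and callers could not assume local reliability."""
    def __init__(self, files):
        self.files = files

    def read(self, path):
        return self.files[path]


class VFS:
    """Routes each path to the filesystem mounted at its longest
    matching prefix, so applications see one uniform interface."""
    def __init__(self):
        self.mounts = {}

    def mount(self, prefix, fs):
        self.mounts[prefix] = fs

    def read(self, path):
        prefix = max((p for p in self.mounts if path.startswith(p)),
                     key=len)
        return self.mounts[prefix].read(path[len(prefix):])
```

The application calls `vfs.read()` identically for both cases; only the mount table knows which reads cross the network, which is exactly the transparency (and the hidden local/remote difference) discussed above.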
When it comes to distributed filesystems, two general approaches should be considered. Either multiple nodes access a filesystem managed by a single node, or the data in a filesystem is distributed across several nodes.
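The second approach can be illustrated with a short sketch in which file data is partitioned across several nodes, with placement chosen by hashing the file name. This is a hypothetical toy, not any particular filesystem's placement scheme.

```python
import hashlib

def node_for(path, nodes):
    """Deterministically map a path to one of the storage nodes."""
    digest = hashlib.sha256(path.encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

class PartitionedFS:
    """Toy filesystem whose data is spread across several nodes;
    each file lives on exactly one node, chosen by hashing."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.stores = {n: {} for n in nodes}   # per-node storage

    def write(self, path, data):
        self.stores[node_for(path, self.nodes)][path] = data

    def read(self, path):
        return self.stores[node_for(path, self.nodes)][path]
```

Under the first approach, by contrast, the same mapping would send every path to the single managing node; the interesting engineering questions (load distribution, what happens when a node fails) arise precisely because the hash spreads data across machines.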
Other notable research in NDC distributed filesystems includes Microsoft's Farsite work (discussed in the operating systems category below), which provides for replication of files to increase data availability across arbitrary networks, and the Extensible File System, from early-1990s research at SunLabs, which explored filesystem stacking, wherein one filesystem can be stacked on top of an existing one, allowing the same underlying data to be shared in a coherent manner.[25]