SANs Fundamentals
Introduction
Network and server downtime is costing companies hundreds of millions of dollars in business and productivity losses. At the same time, the amount of information to be managed and stored is increasing dramatically every year.
A new concept called the Storage Area Network (SAN) could offer an answer to the increasing amount of data that needs to be stored in an enterprise network environment. By implementing a SAN, users can offload storage traffic from daily network operations while establishing a direct connection between storage elements and servers.
Basically, a SAN is a specialized network that enables fast, reliable access among servers and external or independent storage resources. In a SAN, a storage device is not the exclusive property of any one server. Rather, storage devices are shared among all networked servers as peer resources. Just as a Local Area Network (LAN) can be used to connect clients to servers, a SAN can be used to connect servers to storage, servers to each other, and storage to storage.
A SAN does not need to be a physically separate network, either. It can be a dedicated subnetwork, carrying only the business-critical I/O traffic between servers and storage devices. A SAN, for example, would not carry general-purpose traffic such as email or other end-user applications. This type of network avoids the unacceptable trade-offs inherent in a single network for all applications, such as dedicating storage devices to each server and burdening a LAN with storage and archival activity.
Furthermore, as distributed networks are re-engineered to achieve continuous operations and to host mission-critical applications, a common data-center technology is being applied to them. Data centers use a network storage interface called Enterprise System Connection (ESCON) to connect mainframes to multiple storage systems and distributed networks. This type of network is also called a SAN. In other words, SANs are already employed by mainframe data centers and account for approximately 58% of all network traffic. What is new is that SAN architectures are now being adopted in distributed networks, built from low-cost SAN technologies such as Small Computer System Interface (SCSI), Serial Storage Architecture (SSA), and Fibre Channel.
But What Is a SAN, Really?
As previously mentioned, a SAN is a high-speed network, similar to a LAN, that establishes a direct connection between storage elements and servers or clients. The SAN is an extended storage bus that can be interconnected using the same interconnect technologies as LANs or Wide Area Networks (WANs): routers, hubs, switches, and gateways. A SAN can be local or remote, shared or dedicated, and includes unique externalized and central storage and SAN interconnect components. SAN interfaces are generally ESCON, SCSI, SSA, High-Performance Parallel Interface (HIPPI), or Fibre Channel, rather than Ethernet. Whether a SAN is called a Storage Area Network or a System Area Network, the architecture is the same.
SANs create a method of attaching storage that is revolutionizing the network because of the improvements in availability and performance. SANs are currently used to connect shared storage arrays, to cluster servers for failover, to interconnect mainframe disk or tape resources with distributed network servers and clients, and to create parallel or alternate data paths for high-performance computing environments. In essence, a SAN is nothing more than another network, like a subnet, but constructed from storage interfaces.
SANs enable storage to be externalized from the server and, in doing so, allow storage to be shared among multiple host servers without impacting system performance or the primary network. The benefits are well proven; the architecture emerged from mainframe Direct Access Storage Device (DASD) environments and is nothing new. In fact, the DEC VMS network environment is based on SAN architectures and clustered servers. EMC, for example, already has a large installed base of SAN-attached disk arrays (such as its Symmetrix disk array product) and has achieved such a high level of customer confidence that these arrays are the standard of comparison [1]. So, what's new? This important technology is moving into the mainstream of distributed networking and is becoming the normal, adopted way of attaching and sharing storage.
Often referred to as the network behind the server, SANs represent a new model that has evolved with the advent of shared, multi-host connected enterprise storage. A SAN bypasses traditional network bottlenecks and supports direct, high-speed data transfer in three different ways:
- Server-to-storage
- Server-to-server
- Storage-to-storage [2]
SAN architecture and terminology are becoming confused as each product camp praises the merits of its own solutions. The following discussion provides a simple set of definitions and terms that should be adopted by the industry. To begin, storage can be attached to the network in one of three ways. According to Strategic Research Corporation [2], ninety-nine percent of today's server storage connections are bus-attached via a form of SCSI or Integrated Drive Electronics (IDE), as shown in Figure 1.1 [2]. Bus-attached storage operates through the server; availability and performance are limited to the server's capabilities and loading. Storage is externalized from the server via Network Attached Storage (NAS) or SAN Attached Storage (SAS). NAS and SAS are very similar from an engineering standpoint, but it is essential to differentiate them to help the customer understand the differences in implementations.
Figure 1.1 Storage attachments.
Network Attached Storage (NAS)
NAS is a disk array that connects directly to the messaging network via a LAN interface such as Ethernet using common communications protocols (see sidebar, "Storage Sorting"). It functions as a server in a client/server relationship, has a processor, an OS or microkernel, and processes file I/O protocols such as Server Message Block (SMB) and Network File System (NFS).
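From the client's point of view, NAS access is ordinary file I/O: the client opens a path on a network share, and the NAS device's own processor and OS handle the NFS or SMB protocol work behind the scenes. The following minimal Python sketch illustrates the idea; the mount point and file path are hypothetical examples, not taken from the text.

```python
import os

# Hypothetical mount point: an NFS export from a NAS device, mounted by the
# client OS beforehand (e.g., mount -t nfs nas01:/export /mnt/nas).
NAS_PATH = "/mnt/nas/reports/q3.txt"

# File-level I/O: the client issues ordinary open/read calls; the NAS device
# translates them into NFS (or SMB) requests and manages its own file system.
with open(NAS_PATH, "rb") as f:
    data = f.read()

print(f"Read {len(data)} bytes via file-level (NAS) access")
```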
Storage Sorting
Storage area networks are fast becoming part of the IT lexicon, but the abundance of other storage management acronyms is making things a bit confusing. As previously explained, a SAN is a collection of networked storage devices that can automatically communicate with each other.
NOTE
A SAN doesn't have to use Fibre Channel as its underpinnings. For example, the mainframe environment's Enterprise Systems Connection channels could form the SAN interface.
The key to understanding what makes a SAN is that its goal is to divorce all users and network administrators from storage management. Storage, retrieval, and file transfers are automatically managed in a true SAN.
OK, so what is network attached storage? SANs may include NAS-enabled devices, but they aren't the same thing.
A NAS system is connected to application servers via the network. Unlike a SAN, however, NAS lets users access stored data directly, without server intervention. A SAN automates the management of storage systems; NAS devices don't have this capability.
And when it comes to automated management in a SAN, there's yet another definition floating around: hierarchical storage management (HSM). HSM is simply managing data movement from online to offline storage, such as to tape devices.
SAN Attached Storage (SAS)
SAS is a shared storage repository attached to multiple host servers via a storage interface such as SCSI, Fibre Channel-Arbitrated Loop (FC-AL), or ESCON. The SAN is an extended, shared storage bus that can be interconnected using the same interconnect technologies as LANs or WANs: routers, switches, and gateways.
NOTE
FC-AL is not the only Fibre Channel interconnect used in SANs. More and more Fibre Channel users deploy switched Fibre Channel as an interconnect; in that case the term Fibre Channel can stand on its own, with Arbitrated Loop dropped.
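In contrast to NAS, SAN-attached storage is presented to the host as a raw block device, with no file-serving protocol in the path; the host runs its own file system or database on top. The sketch below is a hypothetical illustration of block-level access from Python, not anything from the text: the device path and block size are assumptions, and reading raw devices normally requires root privileges.

```python
import os

# Hypothetical device node: a SAN-attached (e.g., FC-AL or SCSI) disk that the
# host sees as a local block device.
DEVICE = "/dev/sdb"
BLOCK_SIZE = 512

# Block-level I/O: the host reads raw blocks directly; there is no SMB or NFS
# file server mediating the request, unlike the NAS example earlier.
fd = os.open(DEVICE, os.O_RDONLY)
try:
    first_block = os.pread(fd, BLOCK_SIZE, 0)  # read block 0
finally:
    os.close(fd)

print(f"Read {len(first_block)} bytes via block-level (SAS) access")
```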
The next key terminology point deals directly with SAN architectures. The three SAN components are the SAN interfaces, the SAN interconnects, and the SAN fabric, as shown in Figure 1.2 [2]. These are often lumped together, but they are really distinct elements of the SAN. Think of the three as fitting together in a chained sequence: server-to-interface-to-interconnect-to-fabric-to-interconnect-to-interface-to-storage array (a simple model of this chain follows Figure 1.2).
Figure 1.2 SAN network terminology.
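To make the chained sequence concrete, the following Python sketch models a path from server to storage array as a list of components. All component names here are illustrative assumptions, not products or terms from the text.

```python
from dataclasses import dataclass

@dataclass
class SanComponent:
    name: str
    role: str  # "interface", "interconnect", or "fabric"

# Server-to-interface-to-interconnect-to-fabric-to-interconnect-to-interface-
# to-storage array, as described above. Names are hypothetical examples.
path = [
    SanComponent("FC-AL host bus adapter", "interface"),
    SanComponent("Fibre Channel hub", "interconnect"),
    SanComponent("switched Fibre Channel mesh", "fabric"),
    SanComponent("Fibre Channel hub", "interconnect"),
    SanComponent("array controller port", "interface"),
]

print("server -> " + " -> ".join(c.name for c in path) + " -> storage array")
```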
SAN Interfaces
SCSI, FC-AL, SSA, ESCON, bus-and-tag, and HIPPI are common SAN interfaces. All allow storage to be externalized from the server, and all can host shared storage configurations for clustering. Multiple channels can be installed, or loops built, to provide increased performance and redundancy. Contrary to a common belief, SCSI can be extended, multiplexed, switched, and connected via gateways to WANs, just like the serial interfaces.
SAN Interconnects
Extenders, multiplexors (mux), hubs, routers, gateways, switches, and directors are the SAN interconnects. This sounds just like a LAN or WAN, and it is. SAN interconnects tie storage interfaces together into many network configurations and across large distances. Interconnects also link SAN interfaces to SAN fabrics. One common misconception is that FC-AL, a SAN interface, is a SAN fabric such as Fibre Channel Switched (FCS). It is not (see sidebar, "SAN Myths").
SAN Myths
Anyone considering a storage area network quickly encounters a number of myths. Like most technology myths, SAN myths contain a grain of truth, but the reality is often quite different. The following are the top five common SAN myths:
The Fibre Channel Myth
When first conceived, SAN technology specified Fibre Channel as the preferred communications link because Fibre Channel could provide the speed and distance SANs required. Today, the majority of SANs are implemented with Fibre Channel, in either arbitrated loop or switched topologies. However, the SAN is not locked into Fibre Channel. Rapid developments are occurring with SCSI over Internet Protocol (IP) for use in SANs. The Fibre Channel SAN will continue to have a place in the enterprise data center, but protocols carrying SCSI commands are emerging, and vendors will introduce SAN products using SCSI-based protocols over the next few quarters.
The Interoperability Myth
Early SANs indeed suffered from a lack of interoperability among components from different vendors. However, interoperability is improving, especially in the switch, hub, and host bus adapter market. Through a number of interoperability events, dubbed Plug Fests, competing vendors have come together to iron out many of the interoperability issues. Observers expect any remaining issues to fade away within 12 months.
The Skills Barrier Myth
Certainly SANs introduce new technologies into the enterprise storage world, particularly fibre and networking, which require new skills. Information technology (IT) storage experts groomed on directly attached SCSI storage must now learn new protocols and new configurations. Switched fibre SANs, in particular, require advanced networking skills. There is no getting around it; SANs require an understanding of networking. However, new tools, new products, and new service offerings from storage vendors are emerging to ease the transition.
The Management Myth
Early SAN adopters complained that SANs were hard to manage and administer. And they were, due mainly to a lack of tools. Today, however, SAN administrators are finding a growing selection of tools to manage the SAN, perform backup, create virtual storage pools, monitor resources, manage the topology, and more. Storage vendors are responding with tools to manage the various SAN components, and more and better tools are in the pipeline.
The Cost Myth
SANs entail a large initial capital outlay, but the long-term benefits are significant. While it is cheaper initially to attach low-cost disk storage to a server, the cost of administering storage attached to multiple servers, and the inefficiency that results from underutilized pools of storage, shift the overall total-cost-of-ownership advantage clearly to the SAN. Recent studies suggest that half of all server-attached storage goes unused because it can't be shared. With a SAN, storage utilization increases to 70% and, ultimately, can hit 90%. And with a SAN, each administrator can manage far more storage. (A rough calculation of what these utilization figures imply appears below.)
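As a back-of-the-envelope check on those utilization figures, the short Python calculation below estimates how much raw disk must be purchased to hold a fixed amount of data at each utilization rate. The 10 TB data size is a hypothetical input; the 50%, 70%, and 90% rates come from the text above.

```python
# raw capacity needed = data to store / utilization rate
usable_data_tb = 10  # hypothetical amount of data to store, in TB

for label, utilization in [("server-attached", 0.50),
                           ("SAN (typical)", 0.70),
                           ("SAN (best case)", 0.90)]:
    raw_needed = usable_data_tb / utilization
    print(f"{label:16s}: {raw_needed:5.1f} TB raw for {usable_data_tb} TB of data")

# server-attached :  20.0 TB raw for 10 TB of data
# SAN (typical)   :  14.3 TB raw for 10 TB of data
# SAN (best case) :  11.1 TB raw for 10 TB of data
```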
The trouble with technology myths is that technology keeps changing. Even if a SAN myth was true once, it probably isn't today.
SAN Fabrics
Switched SCSI, FCS, and switched SSA form the most common SAN fabrics. With gateways, SANs can be extended across WANs as well. Switches offer many advantages in building centralized, centrally managed, consolidated storage repositories shared across a number of applications.
Building A SAN
Building a SAN requires network technologies with high scalability, performance, and reliability in order to marry the robustness and speed of a traditional storage environment with the connectivity of a network. As the SAN concept has developed, it has grown beyond identification with any one technology. In fact, just as LANs use a diverse mix of technologies, so can SANs. This mix can include Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM), and IBM's Serial Storage Architecture, as well as Fibre Channel. SAN architectures also allow for the use of a number of underlying protocols, including Transmission Control Protocol/Internet Protocol (TCP/IP) and variants of SCSI.
A SAN allows different kinds of storage (mainframe disk, tape, and Redundant Array of Inexpensive Disks [RAID]) to be shared by different kinds of servers, such as Windows NT, UNIX, and OS/390. With this shared capacity, organizations can acquire, deploy, and use storage devices more cost-effectively. SANs let users with heterogeneous storage platforms utilize all of their storage resources. This means that within a SAN, users can back up or archive data from different servers to the same storage system, allow stored information to be accessed by all servers, create and store a mirror image of data as it is created, and share data between different environments.
By externalizing storage and taking storage traffic off the operations network, companies gain a high-performance storage network, shared yet dedicated networks for the SAN and LAN, and improved network management. These features reduce network downtime and productivity losses while extending current storage resources.
In effect, the SAN does in a network environment what traditionally has been done in a back-end I/O environment between a server and its own private storage subsystem. The result is high speed, high fault tolerance, and high reliability.
With a SAN, there is no need for a physically separate network because the SAN can function as a virtual subnet operating on a shared network infrastructure, provided that different priorities or classes of service are established. Fibre Channel and ATM allow for these different classes of service. Early implementations of SANs have been local or campus-based.
But as new WAN technologies such as ATM mature, and especially as class-of-service capabilities improve, the SAN can be extended over a much wider area. Despite the hype about the coming of unlimited bandwidth, WAN services remain costly today. However, as WAN technologies improve their quality of service, they will provide (even over public WANs) the robustness needed for each application, including networked I/O.
SAN Tools
In addition to reliability and performance, SANs promise easier and less costly network administration. Today, administrative functions are labor-intensive and IT organizations typically have to replicate management tools across multiple server environments. With a SAN, there is just one set of tools, and replication costs can be avoided. The traditional software functions of security management, access control, data management, and storage management will be mapped into the SAN architecture and performed differently than they have been in the past. For example, different security strategies have to be pursued when storage devices are more widely available. Specialized I/O protocols such as Network Data Management Protocol (NDMP) are emerging, and the software functions will evolve much as LAN functionality has progressed in recent years.
Why Are SANs Important?
SANs will enable almost any application that moves data around the network to perform better. Just like conventional subnets, SANs add bandwidth for specific functions without placing a load on the primary network. In this fashion, SANs complement LANs and WANs. SANs also enable higher-performance solutions such as data warehousing. In fact, as Figure 1.2 shows, SANs are really pervasive and applicable to many networking environments [2].
SAN technology enables the network architecture of shared multihost storage, connecting all storage devices as well as interconnecting remote sites. This will soon be the standard configuration for centralized networks running mission-critical applications. Both disk and tape operations are centralized and attached via the SAN, making them more resilient as well as faster. As the IT community has learned in the database market, the key to application performance is usually the I/O network, not the disk drives themselves. SAN architecture holds the keys to the future.
The benefits of a SAN architecture are substantial and will lead many sites to adopt this method of attaching storage and transferring data. The following list is indicative of the types of benefits seen at sites operating with SANs (see sidebar, "SAN Benefits").
SAN Benefits
Higher Application Availability
Storage is externalized, independent of the application, and accessible through alternate data paths such as found in clustered systems.
Higher Application Performance
Server and bus overhead degrade performance. Independent SAS arrays will outperform bus-attached arrays and are also compatible with performance clusters.
Easier Centralized Management
SAS configurations encourage centralization and the ensuing large management benefits.
Centralized and Consolidated Storage
Storage centralization and consolidation result in higher performance, lower cost of management, and greater scalability, flexibility, reliability, availability, and serviceability.
Practical Data Transfer, Vaulting, and Exchange with Remote Sites
Cost-effective implementations provide high-availability disaster protection (remote clusters and remote mirrored arrays) [2].
SAN Applications
Now look at SANs from an application viewpoint. At a high level, Strategic Research Corporation has identified six application areas currently utilizing SAN architectures for data transfer, as shown in Figure 1.3 [2]. This is not to say there won't be more in the future. The purpose of Figure 1.3 is to show how pervasive the technology already is [2].
As previously discussed, in the changing network architecture, externalized storage is a generic application, fitting a myriad of network-hosted applications with many benefits.

Next is clustering. Clustering is usually thought of as a server process providing failover to a redundant server, or scalable processing using multiple servers in parallel. In a cluster, the SAN provides the data pipe, allowing storage to be shared. For example, Microsoft Cluster Server, an availability cluster, shares a single array between two servers attached via a SCSI SAN.

Next, data protection architectures operate by creating storage redundancy on a dynamic basis. SANs provide the best interconnects for storage mirroring, remote clustered storage, and other high-availability (HA) data protection solutions because of their performance and their independence as a secondary data path: they do not impact the primary network or the servers, and they provide redundancy. (A simplified sketch of SAN-based mirroring follows Figure 1.3.)

Data vaulting is the process of transferring data, usually for archive or logging purposes, to a remote site; SANs make a very efficient transmission medium. Interchange and disaster recovery operations are very similar and use SANs the same way, whether local or remote, just for different purposes. SANs provide a very efficient pipe for moving data offsite or between sites. Disaster protection systems can be built on remote vaulting (backup) processes or with high-availability remote array mirroring or clustering.
Figure 1.3 Applications utilizing SANs.
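To illustrate the mirroring idea mentioned above, the following simplified Python sketch writes each block to both a primary device and a mirror reachable over a secondary SAN path. The device paths are hypothetical, and in practice a volume manager or the array firmware performs this work, not application code.

```python
import os

# Hypothetical device nodes: a local primary and a mirror on a remote array
# reached via an independent SAN data path.
PRIMARY = "/dev/sdb"
MIRROR = "/dev/sdc"

def mirrored_write(data: bytes, offset: int) -> None:
    """Write the same block to both sides of the mirror."""
    for device in (PRIMARY, MIRROR):
        fd = os.open(device, os.O_WRONLY)
        try:
            os.pwrite(fd, data, offset)
        finally:
            os.close(fd)

# Example call (requires privileges and real devices):
# mirrored_write(b"\x00" * 512, 0)
```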