Available Storage Architectures
VMware supports several storage protocols, and while this flexibility is welcome, having so many options can make it difficult for companies to decide which one best suits their needs. A few years ago, the only viable option for production environments was a Fibre Channel (FC) storage-area network (SAN); today the differences between protocols matter far less, and several criteria must be weighed instead. Figure 3.2 shows the supported protocols.
Figure 3.2. Local versus centralized storage architectures.
The following storage options (chosen when creating the datastore in Figure 3.3) are available in virtual environments.
- Local storage: Hard drives installed inside the server itself, or direct-attached storage (DAS), a disk array attached directly to the server.
- Centralized storage: Storage external to the server. The following protocols are supported by ESXi:
- Fibre Channel (FC)
- Internet Small Computer System Interface (iSCSI) software or hardware initiator
- Network File System (NFS) used by network-attached storage (NAS)
- Fibre Channel over Ethernet (FCoE) software or hardware initiator
Figure 3.3. The type of storage must be chosen when creating the datastore.
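For readers who script against vSphere, the datastore type can also be read programmatically. The following minimal sketch uses pyVmomi (the Python SDK for the vSphere API) to list every datastore and its backing type (VMFS or NFS); the host name and credentials are placeholders, and certificate verification is disabled for brevity, which is acceptable only in a lab.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own vCenter or ESXi host.
ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory and print each datastore with its backing type.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    print("%s: type=%s, capacity=%d GiB" % (s.name, s.type, s.capacity // 2**30))
view.DestroyView()
Disconnect(si)
```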
Local Storage
Local storage is commonly used when installing the ESXi hypervisor. When an ESXi server is isolated and not part of a cluster, this storage space can be used for operating system image files (provided as ISO files) or for noncritical test and development VMs. Because local storage is by definition not shared, critical production VMs should not be placed on it; the achievable service levels are too low. Features such as vMotion, Distributed Resource Scheduler (DRS), High Availability (HA), and Fault Tolerance (FT) are not available unless the vSphere Storage Appliance (VSA) is used.
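To keep critical VMs off unshared storage, a script can check a datastore's multipleHostAccess flag, which vCenter sets to true when more than one host mounts the datastore. A minimal sketch, reusing the same placeholder connection details as above:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    # multipleHostAccess is True only when several hosts mount the datastore;
    # it can be unset (None) when vCenter has not yet determined it.
    if not s.multipleHostAccess:
        print("%s is not shared: avoid critical production VMs here" % s.name)
view.DestroyView()
Disconnect(si)
```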
Centralized Storage
In a centralized architecture, host servers can be grouped into clusters, increasing service levels through advanced features such as vMotion, DRS, HA, FT, and Site Recovery Manager (SRM). Moreover, these architectures provide excellent performance, and the vStorage APIs for Array Integration (VAAI) relieve the host server of some storage-related tasks by offloading them to the storage array.
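Whether a given device actually benefits from VAAI can be checked per LUN: each SCSI disk exposes a vStorageSupport field reporting its hardware-acceleration status. A minimal sketch, again with placeholder credentials:

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for lun in host.configManager.storageSystem.storageDeviceInfo.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk):
            # vStorageSupport: vStorageSupported / vStorageUnsupported / vStorageUnknown
            print("%s %s: %s" % (host.name, lun.canonicalName, lun.vStorageSupport))
view.DestroyView()
Disconnect(si)
```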
NAS storage servers are based on a client/server architecture in which data is accessed at the file level through NFS. This access method, called file mode, uses the company's standard Ethernet network; network cards are available in 1 GbE (1 Gbps) or 10 GbE (10 Gbps) versions.
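Mounting an NFS export as a datastore is a single call against the host's datastore system. The sketch below assumes a hypothetical NAS export nas.example.com:/export/vmstore; every name is a placeholder.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first host in the inventory, for brevity.
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

# Describe the NFS export to mount (all values are placeholders).
spec = vim.host.NasVolume.Specification(
    remoteHost="nas.example.com",   # NAS server
    remotePath="/export/vmstore",   # exported path
    localPath="nfs_datastore1",     # datastore name as seen by ESXi
    accessMode="readWrite")
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print("Mounted %s (type %s)" % (ds.summary.name, ds.summary.type))
Disconnect(si)
```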
Other protocols provide direct I/O access (also called block mode) between host servers and storage, using SCSI commands over a dedicated network called a storage-area network (SAN). With VMware, the advantage of block mode over file mode is that Raw Device Mapping (RDM) volumes can be presented to VMs. VMware uses the Virtual Machine File System (VMFS) in this architecture.
There are several types of SANs, based on IP, on FC, or on a combination of the two:
- IP SAN (iSCSI): Encapsulates SCSI commands in TCP/IP packets (SCSI over IP). You can access the iSCSI network by using either a software initiator coupled with a standard network card or a dedicated hardware host bus adapter (HBA); see the sketch after this list.
- FC SAN: A dedicated, high-performance Fibre Channel storage network for applications that require high I/O rates and direct, sequential access to data. The FC protocol encapsulates SCSI frames natively, with very little overhead. The server uses a Fibre Channel HBA to access the SAN.
- FCoE SAN (still rarely used in 2012): The convergence of the IP and FC worlds. FCoE carries Fibre Channel traffic over a converged 10 GbE Ethernet network; the server accesses it through a converged network adapter (CNA), with a software or hardware initiator.
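As an illustration of the software-initiator path mentioned in the iSCSI item above, the following sketch enables the software iSCSI adapter on a host and points it at a hypothetical target at 192.168.1.50; all addresses and credentials are placeholders.

```python
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first host in the inventory, for brevity.
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ss = host.configManager.storageSystem

# Enable the software iSCSI initiator, then add a send target to the
# software-based HBA (typically named vmhba3x) and rescan it.
ss.UpdateSoftwareInternetScsiEnabled(True)
for hba in ss.storageDeviceInfo.hostBusAdapter:
    if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
        target = vim.host.InternetScsiHba.SendTarget(address="192.168.1.50", port=3260)
        ss.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])
        ss.RescanHba(hba.device)
Disconnect(si)
```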
As shown in Figure 3.4, SCSI commands are encapsulated in different layers depending on the protocol used. The more layers used, the more overhead at the host level.
Figure 3.4. Layers of SCSI commands in the different protocols.