- Storage Requirements
- I/O Size Requirements
- I/O Assessment and Analysis Tools
- Analyzing Key Application I/O Characteristics
- Simplified SAN Application I/O Models for Verification
- Final Project Definition
- Summary
3.4 Analyzing Key Application I/O Characteristics
With some data in hand, it is time to look at several examples of application complexes in order to determine the characteristics of the host systems and applications. A comparison of each application complex with the expected SAN type shows the configurations that work best and the settings that need to be applied.
When looking at the output of the I/O assessment tools used to gather data, apply local rules of thumb to the analysis. If the data seems to indicate an oddity, then the local behaviors of the users or supporting systems also need to be evaluated. One such oddity is a peak usage period that moves around on a system that runs the same workload every day. Additional analysis can explain the unexpected behavior and lead to a more accurate sizing of the design; a data warehouse batch job that starts at a different time each day because the size of its input data set varies is one situation in which a moving peak usage period may be observed.
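As an illustration of how such an oddity might be detected automatically, the following sketch scans per-hour bandwidth samples and reports the peak hour for each day; a peak hour that wanders from day to day on a supposedly fixed workload is worth investigating. The sample format and values are assumptions for illustration, not output from the assessment scripts.

```python
# Sketch: flag a moving peak-usage period on a system that should run the
# same workload every day. The (day, hour, MBps) samples are illustrative.
from collections import defaultdict

samples = [
    ("Mon", 2, 55.0), ("Mon", 14, 12.0),
    ("Tue", 3, 57.0), ("Tue", 14, 11.0),
    ("Wed", 6, 54.0), ("Wed", 14, 13.0),   # the peak has drifted later
]

by_day = defaultdict(list)
for day, hour, mbps in samples:
    by_day[day].append((mbps, hour))

# Peak hour for each day is the hour with the highest bandwidth sample.
peak_hour_by_day = {day: max(readings)[1] for day, readings in by_day.items()}
drift = max(peak_hour_by_day.values()) - min(peak_hour_by_day.values())

print("Peak hour by day:", peak_hour_by_day)
if drift > 1:
    print(f"Peak usage period moves by {drift} hours; check the workload schedule.")
```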
NAS Replacement SAN for an NFS Server
In the first system for examination, a SAN replaces a NAS server running NFS, as shown in Figure 3.1. The NAS server provides archived business intelligence in order to avoid retrieval of tape backups for recently processed data sets. Retrieval of data sets occurs in the case of processing errors, processing failures, or additional processing needs. The server holds several weeks of data, and the data set sizes are gradually growing. Specifically, the NAS server has been growing at a rate of approximately 100 percent every twelve months. To find the growth rate, determine how much storage has been added over the past twelve months and make a few quick inquiries about expected uses over the next twelve months. Now we understand the storage requirements for the SAN system.
FIGURE 3.1 NAS replacement SAN for file sharing
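As a worked illustration of the growth-rate arithmetic, the following sketch projects next year's requirement from the storage added over the past twelve months; the capacity figures are illustrative rather than measurements from this NAS server.

```python
# Sketch: growth-rate arithmetic for the NAS server. Capacity figures are
# illustrative assumptions, consistent with roughly 100 percent annual growth.
capacity_year_ago_tb = 2.0   # storage in use twelve months ago
capacity_now_tb = 4.0        # storage in use today

added_tb = capacity_now_tb - capacity_year_ago_tb
growth_rate = added_tb / capacity_year_ago_tb          # 1.0 means 100% per year
projected_next_year_tb = capacity_now_tb * (1 + growth_rate)

print(f"Growth over the past year: {added_tb:.1f}TB ({growth_rate:.0%})")
print(f"Projected requirement in twelve months: {projected_next_year_tb:.1f}TB")
```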
Using the output of the scripts in Examples 3.1 and 3.2, it is possible to create several graphs that show a few interesting characteristics of the NFS server. Figure 3.2 shows the bandwidth usage for the entire system over the period of a week. This aggregate view of bandwidth shows that the application does not consume much bandwidth; only Fast SCSI or a slower device interconnect would have trouble with the system's peak bandwidth. This fact gives a great deal of flexibility when choosing the SAN infrastructure and topology, because Fibre Channel or any other interconnect can easily handle this bandwidth.
Figure 3.3 shows the performance of the NAS server. The first graph in Figure 3.3 shows that the system will have an IOPS load close to, but not exceeding, the lower region of the IOPS performance scale for a single HBA, which Table 2.2 shows to be 500 IOPS (see page 36). This load allows for flexibility in the SAN configuration because the configuration requires only one HBA to service the IOPS and bandwidth load. Obviously other factors such as multipath I/O will affect the final number of HBAs used, but performance is not an issue based on the likely choices of hardware and the application requirements.
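A quick back-of-the-envelope check of these observations might look like the following sketch. The peak values are placeholders rather than readings from Figures 3.2 and 3.3, the 500 IOPS per-HBA figure is the one cited from Table 2.2, and the interconnect ratings are nominal values.

```python
import math

# Sketch: check the NFS server's peak load against one HBA and against
# candidate interconnects. Peak values are placeholders; interconnect
# ratings are nominal MBps figures.
peak_mbps = 18.0          # peak systemwide bandwidth
peak_iops = 450           # peak systemwide IOPS

HBA_IOPS_RATING = 500     # lower region of the per-HBA IOPS scale (Table 2.2)
interconnects_mbps = {"Fast SCSI": 10, "Ultra SCSI": 20, "1Gb Fibre Channel": 100}

hbas_for_iops = max(1, math.ceil(peak_iops / HBA_IOPS_RATING))
print(f"HBAs needed for the IOPS load alone: {hbas_for_iops}")

for name, rating in interconnects_mbps.items():
    verdict = "handles the peak" if rating >= peak_mbps else "too slow"
    print(f"{name:>18} ({rating} MBps): {verdict}")
```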
FIGURE 3.2 NFS server bandwidth versus time
FIGURE 3.3 Top: NFS server IOPS versus time. Bottom: NFS server I/O size versus time
The second graph in Figure 3.3 shows I/O size with respect to time for the period of a week. The I/O size graph shows that the system performs I/O in the 12KB to 16KB size characteristic of NFSv2. Peak I/O sizes can be larger than the NFS transfer size, because this is a systemwide analysis; the larger I/O sizes are approximate multiples of the typical NFS transfer size. Based on knowledge of the application, it can be assumed that during these times, multiple data transfers cause the aggregate I/O size to appear larger than expected. A quick inspection of the system processes during one of these periods shows that the assumption of multiple data transfers is correct.
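Because the systemwide I/O size is simply bytes transferred divided by the number of I/Os, the inspection for larger-than-expected intervals can be sketched as follows; the transfer-size threshold and sample values are assumptions for illustration.

```python
# Sketch: derive the average I/O size per interval and flag intervals where it
# looks like a multiple of the typical NFS transfer size (assumed 16KB here).
NFS_XFER_KB = 16
samples = [   # (interval, KB transferred, I/O count) -- illustrative values
    ("09:00", 180_000, 12_000),
    ("10:00", 560_000, 14_000),   # larger average I/O size
    ("11:00", 200_000, 13_500),
]

for label, kb, ios in samples:
    avg_kb = kb / ios
    if avg_kb > NFS_XFER_KB:
        multiple = avg_kb / NFS_XFER_KB
        print(f"{label}: avg {avg_kb:.1f}KB (~{multiple:.1f}x the transfer size) "
              "- probably overlapping transfers")
    else:
        print(f"{label}: avg {avg_kb:.1f}KB")
```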
No real oddities have been found from the analysis of the NAS server, and the parameters for the design have been obtained. Before defining the I/O model created to test the SAN design, a few more system types should be examined.
Storage Consolidation of a Data Warehouse (ETL) System
Data warehouse (ETL) staging systems make good examples of systems that are appropriate for storage consolidation. Figure 3.4 shows the systems.
FIGURE 3.4 Storage consolidation SAN for data warehouse
The host systems perform daily ETL tasks for a data warehouse system in a large customer service organization. The data provides general information about groups of customers in order to help provide more focused services to individuals in those groups. The storage devices are initially empty and then filled as projects arise. ETL systems perform mostly memory-intensive data transformation tasks. The I/O load on these systems consists mostly of file writes of the transformed data and data transfers to and from the host system.
STORAGE SPACE REQUIREMENTS
The amount of storage required for these systems is the sum of the following factors:
- The space to receive the raw business intelligence files
- The scratch work space for file transformation
- The output area for the processed files
- The archive area (if any)
To gather this information, look at the existing host systems.
The storage growth of the consolidated host systems is the sum of these two requirements:
- The amount of storage needed to contain data sets as they grow
- The amount of storage needed to accommodate additional data transformation output by any new processes
There is also a potential reduction in excess storage, because unused storage can be redeployed through the shared pool in the SAN.
An examination of the three data warehouse staging hosts shows that storage use grows about 1TB every six months. Each of the three systems has 1TB of storage (a total of 3TB), and each system will need an additional 2TB of storage in the next twelve months. Therefore, the storage consolidation SAN requires 3TB of storage now plus 1.5TB for the first six months of growth. This configuration still allows the hosts to grow exactly as if they had local storage, but storage can now be allocated to each host as needed, accommodating uneven growth patterns.
Over time this configuration requires the same amount of storage; only the timing of the deployment differs. The free pool of storage in the SAN can be 1.5 times the size of a single host system's storage instead of 3 times the storage that a single host needs for growth. As a result, the storage consolidation SAN requires more frequent storage acquisitions to achieve the same growth rate, but the acquisitions are smaller and the idle storage across the systems as a group is smaller, because deployment is easier and more flexible.
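The sizing arithmetic from this example can be sketched as follows. The host count, per-host capacity, and growth figures come from the discussion above, while the comparison with per-host provisioning reflects an assumption about how a directly attached alternative would carry its growth headroom.

```python
# Sketch: initial sizing for the ETL storage consolidation SAN, using the
# figures from the text.
hosts = 3
current_tb_per_host = 1.0            # each host has 1TB today
growth_tb_per_host_per_year = 2.0    # each host needs 2TB more in twelve months

current_total_tb = hosts * current_tb_per_host        # 3TB in use now
shared_free_pool_tb = 1.5 * current_tb_per_host       # 1.5TB shared free pool

# Assumed directly attached alternative: each host carries its own headroom
# for six months of growth.
per_host_headroom_tb = hosts * (growth_tb_per_host_per_year / 2)

print(f"Initial SAN purchase: {current_total_tb + shared_free_pool_tb:.1f}TB "
      f"({current_total_tb:.1f}TB in use + {shared_free_pool_tb:.1f}TB free pool)")
print(f"Per-host headroom for the same six months: {per_host_headroom_tb:.1f}TB")
print("The smaller shared pool means smaller, more frequent acquisitions.")
```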
PERFORMANCE REQUIREMENTS
An examination of the three ETL systems using the get_io.sh and get_iosize.pl scripts (Examples 3.1 and 3.2) sets the performance requirements for the ETL storage consolidation SAN. The bandwidth graphs in Figure 3.5 show widely varying usage from host system to host system.
FIGURE 3.5 Three consolidation candidate host systems, bandwidth versus time
Host 1 has the highest aggregate bandwidth and the least consistent timing of its peaks, even though the level of bandwidth utilization during those peaks is fairly consistent. Hosts 2 and 3 have more consistent usage patterns and do not have extremely high bandwidth requirements. If a compromise on absolute bandwidth is acceptable, or if the peak workload on Host 1 can be relocated to one of the other hosts during a less busy time, then the bandwidth requirement for this SAN can be set at 100MBps.
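To see why relocating Host 1's peak matters, consider a toy schedule for the three hosts; the values are made up rather than read from Figure 3.5, but they show how the combined peak, not the sum of the individual peaks, sets the SAN bandwidth requirement.

```python
# Sketch: combined peak bandwidth before and after moving Host 1's heavy slot.
# Hourly MBps values are illustrative, not taken from Figure 3.5.
host1 = [10, 10, 60, 10, 10, 10]   # Host 1: highest peak, movable batch work
host2 = [20, 20, 30, 50, 20, 20]
host3 = [15, 15, 25, 15, 45, 15]

def combined_peak(*series):
    """Peak of the summed load across all hosts, slot by slot."""
    return max(sum(slot) for slot in zip(*series))

print("Combined peak as measured:", combined_peak(host1, host2, host3), "MBps")   # 115

# Move Host 1's heavy slot into a quieter period (rotate its schedule).
host1_shifted = host1[2:] + host1[:2]
print("Combined peak after rescheduling:",
      combined_peak(host1_shifted, host2, host3), "MBps")                         # 95
```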
The IOPS data in Figure 3.6 shows an average-to-high per-channel demand and an average overall demand for the combined load of these host systems.
FIGURE 3.6 Three consolidation candidate host systems, IOPS versus time
This information, along with the preceding bandwidth information, enables us to select the following:
- The number of channels required per host system
- The number of channels per storage device
- The number of paths through the fabric per host system
The aggregate I/O size for these host systems is not nearly as useful in this case because of the large number of overlapping jobs running on each host. A full assessment of the characteristic I/O size would require a detailed application analysis of each job. That assessment is not necessary at this time because the other characteristics are much clearer and provide enough information about the I/O behavior.
Analyzing I/O in Other SAN Types
Examining the I/O behavior of a system for capacity planning or a new project is difficult because the system does not exist before the deployment of the SAN. These SAN types do have some similarities to a storage consolidation SAN and can be assessed in the same way. The results of the assessment carry less certainty but still allow SAN parameters to be set that should produce a good design.
If a company deploys a new data warehouse application every three to six months, with the same amount of storage and layout, then it is useful to deploy a capacity-planning SAN using several of that application's host system types. Examine one of the prior data warehouse host systems and then use it as a template for the host systems in the capacity-planning SAN. The advantage of this method, as opposed to just deploying some number of application host systems with directly attached storage, is that the capacity-planning SAN can accommodate changing requirements without any new physical work on the host systems or their storage.
If a new data warehouse application is expected to require twice the typical amount of storage that the template application host system has, the storage can easily be accommodated in the capacity-planning SAN by making changes to the SAN configuration that logically reassigns storage. If deploying a group of host systems with directly attached storage where one host system needs an increase in storage size, the host must either have storage physically reconnected from some other host system or benefit from a new storage acquisition. This reconnection potentially leaves one host system short of disk space, takes longer than a configuration change, or requires an additional storage purchase, leaving other storage underutilized. The savings in labor alone will make this a worthwhile use of a SAN.
Use the same tools for examination of the template host system, but accept more variability in the design. The bandwidth assessment of a typical midsize data warehouse system in Figure 3.7 shows peak host bandwidth in the average range.
Take the per-channel I/O bandwidth into consideration when deciding the type of I/O channel to use and the number required per system. With four or fewer I/O channels per host system, the per-channel I/O bandwidth moves from the average range into the high range. Fewer than four low-bandwidth I/O channels can constrain peak bandwidth, so this design choice needs some justification because of the potential for reduced performance.
The second graph in Figure 3.7 shows the IOPS behavior of the data warehouse template system. The system has a peak IOPS performance characteristic that is in the average region for a host, but the per-channel IOPS performance moves into the high performance region with fewer than six I/O channels available.
FIGURE 3.7 Data warehouse SAN candidate, host system I/O analysis
The system bandwidth and IOPS analysis shows that the peak bandwidth occurs at a different time than peak IOPS. A quick look at the I/O size during these times can rule out obvious errors in the IOPS or bandwidth assessments. In Figure 3.7, the I/O size during the peak bandwidth period is indeed larger than during the peak IOPS period. Another interesting characteristic to note is that the peak I/O size occurs during a low IOPS time but still requires a significant amount of bandwidth.
It is now possible to determine the number of I/O channels and the expected performance of the host systems on this capacity-planning SAN, based on the IOPS and bandwidth assessment. For example, one I/O channel should be allocated per host system for every 50MBps of bandwidth or every 1000 IOPS. Two I/O channels should be added for every 50MBps of bandwidth or 1000 IOPS if multipath I/O is required. The I/O size information helps validate the assessment and gives some useful information for creating an I/O model for design verification testing.
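That rule of thumb translates directly into a small sizing calculation, sketched below with placeholder peak values for a template host; the 50MBps and 1000 IOPS divisors are the ones stated above.

```python
import math

# Sketch: allocate I/O channels per host from the stated rule of thumb:
# one channel per 50MBps or per 1000 IOPS (whichever demands more),
# doubled when multipath I/O is required.
MBPS_PER_CHANNEL = 50
IOPS_PER_CHANNEL = 1000

def channels_needed(peak_mbps, peak_iops, multipath=False):
    by_bandwidth = math.ceil(peak_mbps / MBPS_PER_CHANNEL)
    by_iops = math.ceil(peak_iops / IOPS_PER_CHANNEL)
    channels = max(1, by_bandwidth, by_iops)
    return channels * 2 if multipath else channels

# Placeholder peak values for a capacity-planning template host.
print(channels_needed(peak_mbps=120, peak_iops=1800))                  # -> 3
print(channels_needed(peak_mbps=120, peak_iops=1800, multipath=True))  # -> 6
```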