Best Practices
The differential analysis in the preceding section leads to the following general rules for maintaining a highly available Sun Cluster environment. These rules have implications both for designers of high-availability software and for those who maintain the high-availability environment of a deployed system. They relate to the top four parameters identified in that differential analysis.
Rule 1: If there is a node failure, minimize the probability of unsuccessful reconfiguration, thus improving p.
Rule 2: Minimize single node failures, thus increasing MTBF.
Rule 3: If there is an unsuccessful reconfiguration (leading to the entire cluster going down), minimize the time taken to bring the cluster back up, thus decreasing MTTR_2.
Rule 4: In case of successful reconfiguration, minimize the outage duration, thus decreasing Recovery_Time.
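As a quick illustration of how much leverage each rule has, the following sketch computes a first-order estimate of expected annual service outage from the four parameters. It is not the RAScad Markov model of the preceding section, only a simplified renewal-style approximation, and all input values are hypothetical; it is included solely to show how p, MTBF, MTTR_2, and Recovery_Time each move the result.

```python
# Illustrative first-order estimate of expected annual service outage for a
# two-node cluster. This is NOT the RAScad Markov model from the preceding
# section; it is a simplified renewal approximation intended only to show how
# the four parameters targeted by Rules 1-4 drive availability. All numbers
# below are hypothetical.

HOURS_PER_YEAR = 24 * 365

def expected_outage_minutes(mtbf_hours, p_success, recovery_time_min,
                            mttr2_min, nodes=2):
    """Approximate expected service outage (minutes per year).

    mtbf_hours        -- mean time between failures of a single node (Rule 2)
    p_success         -- probability a reconfiguration succeeds (Rule 1)
    recovery_time_min -- outage during a successful reconfiguration (Rule 4)
    mttr2_min         -- time to restore a fully-down cluster (Rule 3)
    """
    node_failures_per_year = nodes * HOURS_PER_YEAR / mtbf_hours
    outage_per_failure = (p_success * recovery_time_min
                          + (1 - p_success) * mttr2_min)
    return node_failures_per_year * outage_per_failure

if __name__ == "__main__":
    base = expected_outage_minutes(mtbf_hours=5000, p_success=0.99,
                                   recovery_time_min=2, mttr2_min=240)
    better_p = expected_outage_minutes(mtbf_hours=5000, p_success=0.999,
                                       recovery_time_min=2, mttr2_min=240)
    print(f"baseline:       {base:.1f} min/year")
    print(f"with p = 0.999: {better_p:.1f} min/year")
```

Under these assumed numbers, improving the probability of successful reconfiguration from 0.99 to 0.999 removes most of the outage contributed by whole-cluster failures, which is exactly the leverage Rule 1 targets.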
These rules are used in designing high-availability software such as the Sun Cluster software product, which aims to minimize service outage. However, it is well known that failures in complex systems are due not only to hardware and software problems, but also to people and process issues8. A recent Boeing study9 reported that, of the fatal commercial aircraft incidents between 1959 and 2001, people and process caused 72 percent, with environment (weather) and equipment accounting for the remainder.
A set of best practices (BPs) can be derived from the rules stated previously. These BPs can be used to minimize the impact of people and process on the availability offered by the Sun Cluster software product. Several projects are underway at Sun to collect data on, among other things, the root causes of specific cluster failures and on configuration-related failures. Analyses of this data, together with the experience of several field engineers working closely with the Sun Cluster software product, have yielded a wealth of information about the root causes of a multitude of cluster failure scenarios. This information, combined with the rules developed previously, leads to the set of BPs listed below. The set is by no means complete and must be augmented on a continuing basis. Nor does it necessarily represent the top ten BPs; it is instead based on the frequency of failures seen in the data collection efforts mentioned previously.
Most of the following BPs are generic, applying to any Sun Cluster stack; the last two are specific to the two-node RAC stack discussed in "The Stack." Additionally, some of these BPs apply to the installation phase, others to the operational phase, and the rest to both phases:
BP1: Carefully plan the installation. Planning an installation carefully includes (but is not limited to) the following steps:
Document the cluster configuration.
Ensure consistent configurations across nodes and services.
Clearly identify and label all devices and cables.
Minimize single points of failure with straightforward designs, distributing cabling and connections across multiple system boards and I/O cards.
Configure fewer active components, in a simple and consistent manner.
Ensure cluster design conforms to vendor support guidelines.
Sun offers several customized programs, at varying cost and scaled to different customer installation needs, that facilitate a smooth installation. Getting the installation right is critical to preventing future downtime due to a vulnerable configuration, and it enables quick and consistent recovery operations. BP1 relates to rules 1 and 3.
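One installation-time task that lends itself to simple tooling is the "consistent configurations across nodes and services" item above. The sketch below compares per-node configuration snapshots and reports drift; it assumes each node's configuration has already been captured as a plain key=value text file (patch levels, driver versions, and so on). The file names and format are hypothetical and are not part of the Sun Cluster product.

```python
# A minimal sketch of the "consistent configurations across nodes" step of BP1.
# It assumes each node's configuration has already been captured as a plain
# key=value text file; the file names and format here are hypothetical.

def load_config(path):
    """Parse a key=value snapshot file into a dict."""
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                config[key.strip()] = value.strip()
    return config

def report_drift(snapshots):
    """Print any key whose value differs between nodes."""
    all_keys = set()
    for cfg in snapshots.values():
        all_keys.update(cfg)
    for key in sorted(all_keys):
        values = {node: cfg.get(key, "<missing>")
                  for node, cfg in snapshots.items()}
        if len(set(values.values())) > 1:
            print(f"DRIFT {key}: {values}")

if __name__ == "__main__":
    nodes = {"node1": "node1.conf", "node2": "node2.conf"}  # hypothetical paths
    report_drift({name: load_config(path) for name, path in nodes.items()})
```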
BP2: Install the latest required Solaris OE patches, rebooting the system after the patches have been installed.
Ensure that all systems and subsystems are running the most recent required firmware versions.
The latest firmware and software versions and patches contain fixes for known problems and bugs. Installing them helps to improve the MTBF of each cluster node. It also decreases the chance of an unsuccessful reconfiguration caused by known, already-fixed bugs in the software or firmware, and it can decrease recovery time, depending on the fixes in the latest revision. After nodes are rebooted, verify that the cluster membership contains the expected members. BP2 relates to rules 1, 2 and 4.
Since this process requires a node outage, care should be taken to ensure rapid recovery to the Both_Up state. Policies regarding patch application vary widely between sites. This recommendation is not intended to change accepted policies nor is it intended to bypass the due diligence for patch analysis.
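The membership verification called for above can be scripted. The following sketch compares the expected member set against the currently reported one; how the current membership is obtained (for example, by parsing the output of a cluster status command such as scstat) is site- and release-specific, so here it is simply read from a text file with one node name per line, a hypothetical format.

```python
# A minimal sketch of the post-reboot membership check in BP2: verify that the
# set of cluster members matches what is expected. The node names and the
# members.txt input format are hypothetical.

EXPECTED_MEMBERS = {"node1", "node2"}  # hypothetical node names

def read_members(path):
    """Read current members, one node name per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def check_membership(current):
    missing = EXPECTED_MEMBERS - current
    unexpected = current - EXPECTED_MEMBERS
    if not missing and not unexpected:
        print("cluster membership OK:", sorted(current))
        return True
    if missing:
        print("MISSING members:", sorted(missing))
    if unexpected:
        print("UNEXPECTED members:", sorted(unexpected))
    return False

if __name__ == "__main__":
    check_membership(read_members("members.txt"))
```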
BP3: Perform a complete acceptance test before placing a cluster into its production mode.
A complete acceptance test includes burn-in of nodes with SunVTS and STORtools; development of thorough installation verification procedures; detailed tests with expected outcomes and times for common outages such as failover; and running a real client during testing to verify that service migration (via failover or switchover) works as expected. This is discussed in more detail in "Accelerate RAS into your Data Center,"10 in the context of a specific clustering environment deploying 50 HA-NFS clusters at Sun. BP3 relates to rules 1, 2, 3 and 4.
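One measurement from such an acceptance test, observing a failover from a real client's point of view, might be scripted as follows. The hostname, port, and acceptance threshold are hypothetical, and the failover itself is assumed to be triggered manually by the tester; a fuller test plan would script that step as well and compare the measured time against the documented expected outcome.

```python
# A minimal sketch of one BP3 acceptance-test measurement: how long is the
# service unreachable, from a client's point of view, during a failover that
# the tester triggers manually? Host, port, and threshold are hypothetical.

import socket
import time

SERVICE = ("service-logical-host", 1521)  # hypothetical logical host and port
TIMEOUT_S = 2
EXPECTED_MAX_OUTAGE_S = 120               # hypothetical acceptance threshold

def reachable(addr):
    try:
        with socket.create_connection(addr, timeout=TIMEOUT_S):
            return True
    except OSError:
        return False

def measure_outage():
    # Wait for the outage to begin (tester triggers the failover now).
    while reachable(SERVICE):
        time.sleep(1)
    start = time.time()
    # Wait for the service to come back on the surviving node.
    while not reachable(SERVICE):
        time.sleep(1)
    return time.time() - start

if __name__ == "__main__":
    outage = measure_outage()
    verdict = "PASS" if outage <= EXPECTED_MAX_OUTAGE_S else "FAIL"
    print(f"service outage: {outage:.0f}s ({verdict})")
```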
BP4: Perform periodic auditing of the cluster hardware and software components.
We strongly recommend that all components of a cluster be audited periodically, including hardware and software components from Sun as well as any from third-party vendors that form the stack in question. Configuration errors on cluster nodes can lead to a node failure, to an entire-cluster failure, or to the service not being started on a given node after a reconfiguration. Several tools are available to facilitate regular monitoring of cluster components. The sccheck command, part of the base Sun Cluster software, currently provides a set of useful checks and is being augmented with several more. Sun is also expanding its product and service offerings to include additional proactive and preemptive support for such auditing. Certain contract levels provide regular access to reports that may help identify potential vulnerabilities in the system. Auditing is also discussed in some detail in "Accelerate RAS into your Data Center."10 BP4 relates to rules 1, 2, and 3.
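A periodic audit can be reduced to a small wrapper that runs the relevant check commands on a schedule and flags failures. The command list below is hypothetical (a site would typically include sccheck plus any storage or third-party checks), and driving the wrapper from cron rather than running it interactively would be equally reasonable.

```python
# A minimal sketch of a BP4 audit runner: execute a set of audit commands,
# record their exit status, and flag failures. The command list is
# hypothetical and should be tailored to the stack being audited.

import datetime
import subprocess

AUDIT_COMMANDS = [
    ["sccheck"],              # base Sun Cluster configuration checks
    ["df", "-k", "/global"],  # hypothetical example of a local check
]

def run_audit():
    results = []
    for cmd in AUDIT_COMMANDS:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results.append((" ".join(cmd), proc.returncode))
    return results

if __name__ == "__main__":
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    for name, rc in run_audit():
        status = "OK" if rc == 0 else f"FAIL (exit {rc})"
        print(f"{stamp} {name}: {status}")
```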
BP5: Repair failed components promptly.
Although a component failure can be transparent (causing no outage) or recoverable (causing a failover), the remaining redundant components that have taken over the service are operating in a degraded mode, and the cluster may not survive the next failure. It is therefore critical to repair failed components promptly, returning the cluster to its fully redundant state. BP5 relates to rule 3.
BP6: Applications running on the cluster that consist of real-time (RT) processes or threads should be well-behaved.
The reconfiguration sequence in the Sun Cluster framework relies on the timely processing of the reconfiguration activities. This implies that any application RT threads running on the cluster nodes must be well-behaved, that is, those threads should not monopolize system resources. Otherwise, the reconfiguration threads will not run in a timely fashion. This can lead to timeouts resulting in unsuccessful reconfigurations, and nodes (or even the entire cluster) going down. Several instances of misbehaving application RT processes have been seen in the field. In these cases, the RT processes have monopolized system resources, causing timeouts and leading to cluster deadlocks or node failures.
Note that well-behaved RT processes/threads are a basic Solaris OE requirement; however, non-conformance disrupts the cluster reconfiguration steps and decreases availability in a clustering environment. BP6 relates to rules 1, 2 and 4.
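The principle can be illustrated in any language: a well-behaved periodic worker does a bounded amount of work each period and then yields the CPU, rather than spinning indefinitely. The sketch below is only an analogy for the Solaris RT scheduling-class requirement, and the period and work budget are hypothetical.

```python
# A minimal, language-agnostic illustration of the "well-behaved" property BP6
# asks of real-time work: do a bounded amount of work per period, then yield.
# This is an analogy only; the period and budget values are hypothetical.

import time

PERIOD_S = 0.1        # how often the worker wakes up
WORK_BUDGET_S = 0.02  # at most 20% of each period spent computing

def do_some_work(deadline):
    # Placeholder for the application's real work; stops at the deadline.
    while time.monotonic() < deadline:
        pass

def well_behaved_worker(iterations=50):
    for _ in range(iterations):
        start = time.monotonic()
        do_some_work(start + WORK_BUDGET_S)  # bounded work...
        # ...then sleep for the rest of the period so other threads can run.
        time.sleep(max(0.0, start + PERIOD_S - time.monotonic()))

if __name__ == "__main__":
    well_behaved_worker()
    print("worker completed without monopolizing the CPU")
```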
BP7: Periodically check any quorum devices configured in the cluster.
Although this check is part of BP4, it is called out separately because its consequences are critical to successful reconfigurations. It is particularly relevant in two-node clusters, which are required to have a quorum device configured. In such clusters, if a node goes down, the remaining node cannot continue if it is unable to access the quorum device for any reason. The issue also applies to larger clusters with quorum devices, although the final impact of a failed quorum device depends on the number of nodes remaining up and on other quorum devices in the cluster. BP7 relates to rule 1.
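A periodic check of the quorum device can be as simple as verifying that its raw device path is still present and readable from each node, as in the sketch below. The DID device path is hypothetical; a fuller check would also confirm quorum votes with the cluster's own status command (for example, scstat -q), whose output format is release-specific and therefore not parsed here.

```python
# A minimal sketch of a BP7 quorum-device check: confirm the quorum device's
# raw path is still present and readable from this node. The device path is
# hypothetical and must be adjusted per cluster.

import os
import sys

QUORUM_DEVICE = "/dev/did/rdsk/d4s2"  # hypothetical DID device path

def quorum_device_readable(path):
    if not os.path.exists(path):
        return False
    try:
        fd = os.open(path, os.O_RDONLY)
    except OSError:
        return False
    os.close(fd)
    return True

if __name__ == "__main__":
    if quorum_device_readable(QUORUM_DEVICE):
        print(f"quorum device {QUORUM_DEVICE}: accessible")
        sys.exit(0)
    print(f"quorum device {QUORUM_DEVICE}: NOT accessible")
    sys.exit(1)
```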
BP8: Ensure (by checking periodically) that all third-party licenses needed by the service in question remain valid.
This BP helps prevent outages caused by missing or expired licenses on the nodes when a reconfiguration occurs in the cluster. Taking the example of the stack presented in "The Stack," Veritas VxVM requires its own license; such licenses may have an expiration date or may be tied to the hostid of a node. If, at the time of a reconfiguration, the license has expired or the hostid has changed, the remaining cluster nodes will be forced to go down due to an unsuccessful reconfiguration. BP8 relates to rule 1.
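A periodic license check is straightforward to automate. The sketch below assumes the site keeps a small inventory file listing each third-party license and its expiry date in ISO format, one product,expiry pair per line; that format is hypothetical, and the natural data source for VxVM would be the vendor's own license reporting tool (such as vxlicrep).

```python
# A minimal sketch of the BP8 license check. The licenses.csv inventory file
# and its "product,expiry" format are hypothetical; the data would normally be
# derived from the vendor's license reporting tool.

import datetime

WARN_DAYS = 30  # hypothetical warning threshold

def check_licenses(path):
    today = datetime.date.today()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            product, expiry_str = (field.strip() for field in line.split(","))
            expiry = datetime.date.fromisoformat(expiry_str)
            days_left = (expiry - today).days
            if days_left < 0:
                print(f"EXPIRED: {product} (on {expiry})")
            elif days_left <= WARN_DAYS:
                print(f"WARNING: {product} expires in {days_left} days")
            else:
                print(f"OK: {product} valid until {expiry}")

if __name__ == "__main__":
    check_licenses("licenses.csv")  # hypothetical inventory file
```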
BP9: For the Sun Cluster-RAC on VxVM stack considered in this paper, configure the RAC storage layout as follows: use HW RAID-5 for RAC data, and striped RAID-0 with hardware or software mirroring for the RAC logs.
RAID-5 can tolerate a single spindle failure, and it performs well for random writes; however, it does not perform as well for sequential writes. Striping with mirroring, on the other hand, performs well for sequential writes, but recovery time can be quite large in the event of a spindle failure. The typical size of the RAC data is orders of magnitude larger than that of the RAC logs. This, coupled with the fact that logs are characterized by sequential writes, leads to the recommendation of using mirrored, striped RAID-0 for the RAC logs and RAID-5 for the RAC data. BP9 relates to rule 3.
In general, a good match between the storage configuration and data access patterns, while maintaining high storage reliability, is recommended.
BP10: For the Sun Cluster and Oracle9i RAC on VxVM stack, use the Veritas SmartSync feature for the RAC data, and use dirty region logging (DRL) for the logs.
The SmartSync feature of VxVM offers increased availability of Oracle volumes by eliminating the need for the volume manager to perform resynchronization of the data volumes. Instead, VxVM relies on the Oracle recovery process to rewrite any potentially out-of-sync data blocks, thereby significantly reducing the recovery times. This feature is available for use only for Oracle9i RAC data. For the RAC logs, the DRL option of VxVM should be used to improve log related recovery performance. BP10 relates to rules 3 and 4.