Sun Cluster 3.0 Implementation Guide: Hardware Setup
Module 1: Hardware Setup
Purpose
The following equipment setup and cabling procedures detail how to configure the Sun Cluster 3.0 hardware so that no single point of failure (SPOF) exists. A properly configured hardware platform provides the highest level of availability that the configuration can reasonably deliver. Important Sun Cluster 3.0 hardware setup concepts are also reviewed during the hands-on lab modules and exercises.
Objectives
This module describes procedures to correctly assemble the cluster hardware, including:
Identifying each Sun Cluster 3.0 hardware component and all equipment required for the SunPlex™ platform.
Identifying all cabling and interconnects.
Introduction
When using this guide, participants perform the configuration steps and procedures required to configure each Sun Cluster 3.0 hardware component and construct a two-node Sun Cluster 3.0 hardware cluster.
The procedures include the steps necessary to configure the cluster hardware (that is, each SunPlex component), along with examples of key practices: configurations and methods for optimizing availability, performance, or both.
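One such key practice is to place the two private interconnects on physically separate network adapters, so that a single failed adapter is not a single point of failure. The following is a minimal sketch of how this can be checked from the Solaris command line on each node; the adapter names hme and qfe are assumptions based on common Sun hardware, so substitute the adapter types actually installed in your lab nodes.

    #!/bin/sh
    # Run on each node before cabling the private interconnects.

    # Interfaces that are already plumbed (public network and any test links)
    ifconfig -a

    # All network devices known to the kernel, whether plumbed or not
    prtconf | grep -i network

    # Device-to-instance mapping, useful for matching adapters to card slots
    # (hme and qfe are assumed adapter types)
    grep hme /etc/path_to_inst
    grep qfe /etc/path_to_inst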
A complete enterprise solution or architecture is a complex entity. Customers looking to purchase Sun Cluster hardware are often looking for more than just a pair of nodes running a set of applications in a high availability (HA) environment. Each implementation must satisfy specific, identified needs and meet its goals for serviceability, manageability, scalability, and availability.
The objective is to configure a Sun Cluster 3.0 hardware setup, including the software, to enhance data and application availability for production (that is, mission-critical) environments. The basic Sun Cluster 3.0 hardware platform being configured connects two or more servers by means of a private network. Each server can access the same application data through multi-ported (shared) disk storage and shares the network resources, so either cluster node can inherit an application when its primary server becomes unable to provide services.
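As a minimal sketch of how this topology can be checked from the command line, assuming an address has been configured on each private interconnect adapter (either temporarily for testing or later by the cluster software): the interconnect address 172.16.0.2 and the node names node1 and node2 below are hypothetical, so use the addresses and names defined for your lab hardware in Tables 1-1 through 1-5.

    #!/bin/sh
    # Run from node1; repeat the same checks from node2.

    # Confirm the peer node answers over a private interconnect
    # (5-second timeout); run once for each interconnect address.
    ping 172.16.0.2 5

    # List the disks visible to this node without making any changes;
    # the multi-ported (shared) storage should appear on both nodes,
    # while each local boot disk appears on only one node.
    format </dev/null

If the shared enclosures do not appear in the disk list on both nodes, recheck the storage cabling before continuing.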
All configuration exercises contained within this Sun Cluster 3.0 hardware lab guide assume that you are implementing a standard cluster hardware configuration. Many steps in this lab guide refer to manual or local procedures that can be performed only if you have local, physical access to the lab hardware and can carry out visual inspection and verification. (These local, physical verification procedures are not covered in this lab guide.) Final verification of the cluster hardware setup (for example, failover) can be confirmed only after the required software has been installed and configured. Formal acceptance test procedures include full verification of each system, subsystem, and component. See "References" on page 19 and Modules 2 through 6 of this lab guide for additional information.
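Once that software is in place (see Modules 2 through 6), cluster-level status commands provide a first pass at this verification. The following is a minimal sketch, assuming the standard Sun Cluster 3.0 command set installed under /usr/cluster/bin; refer to the Sun Cluster 3.0 system administration documentation for the full option set.

    #!/bin/sh
    # Post-installation status checks; these report state and change nothing.
    PATH=$PATH:/usr/cluster/bin; export PATH

    # Overall cluster status: node membership, interconnect paths, quorum
    scstat

    # Print the stored cluster configuration so it can be compared against
    # the cabling defined in Tables 1-1 through 1-5
    scconf -p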
Figure 1-1 illustrates the Sun Cluster 3.0 lab hardware implementation, and Tables 1-1 through 1-5 define each connection.
Prerequisites
This lab manual assumes that the user (participant) is a qualified Solaris Network Administrator. For questions about installation, shell usage, patches, and packages, refer to the Sun Educational Services manuals for the Solaris System Administration 1 and Solaris System Administration 2 courses.