- Sun Cluster 3.0 Series: Guide to Installation—Part 2
- Objectives
- Prerequisites
- Major Resources
- Introduction
- Sun Cluster 3.0 Software
- Install SC 3.0 On First Cluster Node—Without Reboot
- Identify the Device ID (DID) on the First Cluster Node
- Verify DID is Available on Each Additional Cluster Node
- Install SC 3.0 Patches on the First Cluster Node
- Verify the Install Mode is Enabled
- Install SC 3.0 on Additional Cluster Nodes—Without Reboot
- Install SC 3.0 Patches on Additional Cluster Nodes
- Establish the SC 3.0 Quorum Device - First Cluster Node Only
- Configure Additional Public Network Adapters - NAFO
- Configure ntp.conf on Each Cluster Node
- Verify /etc/nsswitch Entries
- Update Private Interconnect Addresses on All Cluster Nodes
- Add Diagnostic Toolkit
Section 2.8: Establish the SC 3.0 Quorum Device - First Cluster Node Only
Step 2.8.1
On each cluster node, use the scdidadm command to verify that the correct subsystem configuration has been established. Identify the DID number of the first shared disk, which will be assigned as the quorum disk. The quorum device will be physically located within shared storage. In our example, this should be d4, which should correspond to /dev/rdsk/c1t0d0 on each cluster node. To verify this, as root, enter:
# scdidadm -L
NOTE
Starting at line 4 of the scdidadm -L output, notice that the shared DIDs are displayed twice, once for each cluster node connection. Both nodes must share a connection to the quorum device; this shared connection is required for a two-node cluster. Verify that both nodes connect to the same physical device (for example, /dev/rdsk/c1t0d0) and have the same DID assignment (for example, /dev/did/rdsk/d4). The DID assignments for all global devices should follow this example and be identical on both cluster nodes.
Verify the following output, noting that lines 4 and 5 refer to the same physical disk or spindle:
1    clustnode1:/dev/rdsk/c0t0d0    /dev/did/rdsk/d1
2    clustnode1:/dev/rdsk/c0t1d0    /dev/did/rdsk/d2
3    clustnode1:/dev/rdsk/c0t6d0    /dev/did/rdsk/d3
4    clustnode1:/dev/rdsk/c1t0d0    /dev/did/rdsk/d4
4    clustnode2:/dev/rdsk/c1t0d0    /dev/did/rdsk/d4
5    clustnode1:/dev/rdsk/c1t1d0    /dev/did/rdsk/d5
5    clustnode2:/dev/rdsk/c1t1d0    /dev/did/rdsk/d5
6    clustnode1:/dev/rdsk/c1t2d0    /dev/did/rdsk/d6
6    clustnode2:/dev/rdsk/c1t2d0    /dev/did/rdsk/d6
7    clustnode1:/dev/rdsk/c1t8d0    /dev/did/rdsk/d7
7    clustnode2:/dev/rdsk/c1t8d0    /dev/did/rdsk/d7
8    clustnode1:/dev/rdsk/c1t9d0    /dev/did/rdsk/d8
8    clustnode2:/dev/rdsk/c1t9d0    /dev/did/rdsk/d8
9    clustnode1:/dev/rdsk/c1t10d0   /dev/did/rdsk/d9
9    clustnode2:/dev/rdsk/c1t10d0   /dev/did/rdsk/d9
10   clustnode1:/dev/rdsk/c2t0d0    /dev/did/rdsk/d10
10   clustnode2:/dev/rdsk/c2t0d0    /dev/did/rdsk/d10
11   clustnode1:/dev/rdsk/c2t1d0    /dev/did/rdsk/d11
11   clustnode2:/dev/rdsk/c2t1d0    /dev/did/rdsk/d11
12   clustnode1:/dev/rdsk/c2t2d0    /dev/did/rdsk/d12
12   clustnode2:/dev/rdsk/c2t2d0    /dev/did/rdsk/d12
13   clustnode1:/dev/rdsk/c2t8d0    /dev/did/rdsk/d13
13   clustnode2:/dev/rdsk/c2t8d0    /dev/did/rdsk/d13
14   clustnode1:/dev/rdsk/c2t9d0    /dev/did/rdsk/d14
14   clustnode2:/dev/rdsk/c2t9d0    /dev/did/rdsk/d14
15   clustnode1:/dev/rdsk/c2t10d0   /dev/did/rdsk/d15
15   clustnode2:/dev/rdsk/c2t10d0   /dev/did/rdsk/d15
16   clustnode2:/dev/rdsk/c0t0d0    /dev/did/rdsk/d16
17   clustnode2:/dev/rdsk/c0t1d0    /dev/did/rdsk/d17
18   clustnode2:/dev/rdsk/c0t6d0    /dev/did/rdsk/d18
NOTE
Verify local disk and CD-ROM devices are reported correctly in the output (for example, d1, d2 and d3 represent internal clustnode1 devices, while d16, d17, and d18 represent internal clustnode2 devices).
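Because every shared device should appear exactly twice in the scdidadm -L listing (once per node), the duplicated DID instance numbers can be extracted as a quick consistency check. The following one-liner is an optional convenience, not part of the formal procedure; it assumes the standard output format shown above:

# Print only the DID instance numbers reported by both nodes (shared devices).
# Local devices such as d1-d3 and d16-d18 appear once and are filtered out.
# scdidadm -L | awk '{ print $1 }' | sort -n | uniq -d

For this configuration, the command should print the instance numbers 4 through 15, matching the shared storage devices d4 through d15.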
Key Practice: Label devices and cables. Devices and cables that are easily identified make troubleshooting and maintenance easier. This is helpful even for systems that do not require high availability; where high availability is a requirement, device and cable identification and labelling should be a high priority. Verify that the labels for disks (boot devices, mirrors, quorum disks, volume manager configuration databases, clones, hot spares, and so on), CD-ROM drives, tape drives, and cables are correct. Accurate labels enable service operations to easily correlate error messages, which may specify global DID numbers, metadevice names, or controller numbers and sd/ssd instances. Proper labelling helps when interpreting errors and when determining which specific devices, such as individual disk spindles and related components, have failed. Tape drives should be labelled with their rmt instance numbers.
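When writing the labels, it can help to record the vendor, product, and serial number that the system reports for each disk. One way to gather this information on Solaris (a convenience suggestion, not a required step in this procedure) is:

# Report per-device error counts plus the Vendor, Product, and Serial No
# fields, which can be copied onto the physical device labels.
# iostat -En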
Step 2.8.2
Prepare to establish a quorum device using d4 as the quorum disk. Enter the following command and answer each prompt as indicated, on the first cluster node only:
root@clustnode1# scsetup

 >>>> Initial Cluster Setup <<<<

{ . . . output omitted . . . }

Is it okay to continue? yes

Do you want to add any quorum disks? yes

{ . . . output omitted . . . }

Which global device do you want to use (d<N>)? d4

Is it okay to proceed with the update? yes
Observe the cluster node console messages indicating that cluster reconfiguration (reconfiguration #4) has completed.
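scsetup is a menu-driven front end to the scconf command. For reference, the quorum disk addition performed above corresponds to the following direct scconf invocation (shown as a non-interactive alternative; confirm the syntax against the scconf(1M) man page for your release):

# Add DID device d4 as a quorum device, equivalent to the scsetup dialog above.
root@clustnode1# scconf -a -q globaldev=d4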
Step 2.8.3
On the first cluster node, press Enter when prompted. scsetup then prompts you to add additional quorum disks. Enter no, as indicated:
{ . . . output omitted . . . }

Do you want to add another quorum disk (yes/no)? no
Step 2.8.4
Finally, reset the cluster "install mode," as indicated on the first cluster node:
{ . . . output omitted . . . }

Is it okay to reset install mode? yes
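As with the quorum disk addition, the install mode reset performed by scsetup corresponds to a direct scconf invocation, which can be useful when scripting. This is shown for reference only; confirm it against the scconf(1M) man page for your release:

# Reset the quorum configuration and clear cluster install mode.
root@clustnode1# scconf -c -q reset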
Step 2.8.5
Verify that cluster initialization completes and that each cluster node reports console messages and cluster events as they occur.
On the first cluster node, press Enter when prompted to proceed to the main menu. Then enter q to quit scsetup.
NOTE
Verify that the cluster advances from the "install mode" state to operational status. Observe the scconf -p output and verify that install mode is disabled.
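For example, the install mode state can be checked with a simple filter on the scconf -p output (the exact wording of the line may vary slightly by release):

# Confirm the cluster is no longer in install mode.
# scconf -p | grep -i "install mode"
Cluster install mode: disabled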
Step 2.8.6
Verify that the quorum device has been configured correctly by using the scstat command:
# scstat

-- Cluster Nodes --

                      Node name      Status
                      ---------      ------
  Cluster node:       clustnode1     Online
  Cluster node:       clustnode2     Online

-- Cluster Transport Paths --

                      Endpoint            Endpoint            Status
                      --------            --------            ------
  Transport path:     clustnode1:qfe4     clustnode2:qfe4     Path online
  Transport path:     clustnode1:qfe0     clustnode2:qfe0     Path online

-- Quorum Summary --

  Quorum votes possible:   3
  Quorum votes needed:     2
  Quorum votes present:    3

-- Quorum Votes by Node --

                      Node Name      Present   Possible   Status
                      ---------      -------   --------   ------
  Node votes:         clustnode1     1         1          Online
  Node votes:         clustnode2     1         1          Online

-- Quorum Votes by Device --

                      Device Name            Present   Possible   Status
                      -----------            -------   --------   ------
  Device votes:       /dev/did/rdsk/d4s2     1         1          Online

{ . . . output omitted . . . }
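When rechecking the quorum configuration later, the full status report is not required; scstat can limit its output to the quorum sections:

# Display only the quorum summary and vote counts.
# scstat -q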