- Sun Cluster 3.0 Series: Guide to Installation—Part 2
- Objectives
- Prerequisites
- Major Resources
- Introduction
- Sun Cluster 3.0 Software
- Install SC 3.0 On First Cluster Node—Without Reboot
- Identify the Device ID (DID) on the First Cluster Node
- Verify DID is Available on Each Additional Cluster Node
- Install SC 3.0 Patches on the First Cluster Node
- Verify the Install Mode is Enabled
- Install SC 3.0 on Additional Cluster Nodes—Without Reboot
- Install SC 3.0 Patches on Additional Cluster Nodes
- Establish the SC 3.0 Quorum Device - First Cluster Node Only
- Configure Additional Public Network Adapters - NAFO
- Configure ntp.conf on Each Cluster Node
- Verify /etc/nsswitch Entries
- Update Private Interconnect Addresses on All Cluster Nodes
- Add Diagnostic Toolkit
Section 2.3: Verify DID is Available on Each Additional Cluster Node
On each additional cluster node, examine the /etc/name_to_major file and verify that the major device number assigned to the did driver on the first cluster node (did 300) is not already in use by another driver. For example, on clustnode2 verify that major number 300 is not already in use.
NOTE
If these procedures have been followed, no conflict should occur. If the did major number is already in use on an additional cluster node, do not attempt to reconfigure the first cluster node at this time. Instead, reconfigure the conflicting driver on the additional cluster node to use a different major device number.
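For illustration only, a minimal sketch of how such a conflict might be resolved, assuming a hypothetical driver named xyzdrv holds major number 300 on clustnode2 (the driver name, the replacement number 310, and the grep output are assumptions, not values from this lab environment):
root@clustnode2# grep 300 /etc/name_to_major
xyzdrv 300
root@clustnode2# vi /etc/name_to_major
(change the xyzdrv entry to an unused major number, for example: xyzdrv 310)
root@clustnode2# reboot -- -r
The reconfiguration boot (-r) rebuilds the device tree so that device nodes for the renumbered driver are recreated.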
Step 2.3.1
On each additional cluster node, enter the following commands to examine the /etc/name_to_major file. As indicated in the codebox below, verify there is no existing entry for did 300:
root@clustnode2# grep 300 /etc/name_to_major
root@clustnode2#
root@clustnode2# grep did /etc/name_to_major
root@clustnode2#
NOTE
This file will be modified later during the VxVM installation.
Step 2.3.2
In preparation for upcoming patch installations, ensure the Sun Cluster 3.0 patches are accessible in single-user mode on the first cluster node. For local (manual) installations, patches can be made available by using the EIS CD-ROM.
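Note that the volume management daemon (vold) is not running in single-user mode, so the EIS CD-ROM must be mounted manually. A minimal sketch, assuming the CD-ROM drive is at the common Sun target c0t6d0 (adjust the device path to match your hardware):
root@clustnode1# mount -F hsfs -o ro /dev/dsk/c0t6d0s0 /cdrom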
For remote learning environments, make the SC 3.0 U3 patches available in single-user mode by copying all required patches to the local disk (c0t0), as in the example below. The local copies will be removed later, after the patches have been applied.
On the first cluster node only, verify all required patches are successfully copied to the local disk. For example:
root@clustnode1# cd /cdrom/PATCHES/CLUSTER3.0U3
root@clustnode1# ls -lR
. . . {{output omitted}} . . .
root@clustnode1# mkdir -p /opt/PATCHES/CLUSTER3.0U3
root@clustnode1# cp -rp ./* /opt/PATCHES/CLUSTER3.0U3
root@clustnode1# ls -lR /opt/PATCHES
. . . {{output omitted}} . . .
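As an optional sanity check (assuming /opt/PATCHES/CLUSTER3.0U3 was empty before the copy), the entry counts of the source and the local copy can be compared; the two numbers should match:
root@clustnode1# ls /cdrom/PATCHES/CLUSTER3.0U3 | wc -l
root@clustnode1# ls /opt/PATCHES/CLUSTER3.0U3 | wc -l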
Step 2.3.3
After verifying the patches have been copied to the local disk, reboot clustnode1 outside of the cluster and directly into single-user mode. To do this, enter the following commands on the first cluster node only:
root@clustnode1# init 0
{{.... output omitted.....}}
ok boot -xs
{{.... output omitted.....}}
When prompted during the reboot, enter the root password to access the shell and perform system maintenance (single-user mode).
The root password is: abc
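Before proceeding, the run level can be confirmed from the maintenance shell. A minimal sketch (the output shown is abbreviated):
root@clustnode1# who -r
. run-level S {{output omitted}}
Run level S confirms the node is in single-user mode; because it was booted with the -x option, the Sun Cluster framework is not running.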