- Sun Cluster 3.0 Series: Guide to Installation—Part 2
- Objectives
- Prerequisites
- Major Resources
- Introduction
- Sun Cluster 3.0 Software
- Install SC 3.0 On First Cluster Node—Without Reboot
- Identify the Device ID (DID) on the First Cluster Node
- Verify DID is Available on Each Additional Cluster Node
- Install SC 3.0 Patches on the First Cluster Node
- Verify the Install Mode is Enabled
- Install SC 3.0 on Additional Cluster Nodes—Without Reboot
- Install SC 3.0 Patches on Additional Cluster Nodes
- Establish the SC 3.0 Quorum Device - First Cluster Node Only
- Configure Additional Public Network Adapters - NAFO
- Configure ntp.conf on Each Cluster Node
- Verify /etc/nsswitch Entries
- Update Private Interconnect Addresses on All Cluster Nodes
- Add Diagnostic Toolkit
Section 2.7: Install SC 3.0 Patches on Additional Cluster Nodes
In this section, you will install all the required Sun Cluster 3.0 U3 patches on each additional cluster node. After all patches have been installed and verified, you will reboot each node.
Step 2.7.1
For local (manual) installations, obtain all required SC 3.0 U3 patches (Solaris 8). We recommend accessing SunSolve Online to identify and obtain all required Sun Cluster patches.
NOTE
SunSolve is a contract service from Sun Enterprise Services. It is a good idea to subscribe to this service, especially if you are running a production server.
Key Practice: Create a /PATCHES directory on a dedicated Management Server to store all patches. This enables centralized patch management. For example, the Sun BluePrints™ BPLAB hardware has been configured with a 'master' JumpStart server, which will serve all software binaries and patches, and act as the repository.
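For example, the central /PATCHES repository can be shared over NFS from the management server and mounted on each cluster node. The following is a minimal sketch only; the server name (mgmtserver) and mount point (/mnt/PATCHES) are hypothetical and should be adjusted for your site:
On the management server:
root@mgmtserver# share -F nfs -o ro /PATCHES
On each cluster node:
root@clustnode2# mkdir -p /mnt/PATCHES
root@clustnode2# mount -F nfs -o ro mgmtserver:/PATCHES /mnt/PATCHES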
Key Practice: Refer to the individual patch README files to review any installation prerequisites before installing patches.
Step 2.7.2
At this time, begin the patch installation by entering the following commands on clustnode2:
root@clustnode2# cd /cdrom/PATCHES/CLUSTER3.0U3
root@clustnode2# patchadd 110648-22
Checking installed patches......
Step 2.7.3
Verify the first patch installs successfully. Then add the next patch, on each additional cluster node:
root@clustnode2# patchadd 111554-09
Checking installed patches......
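If you prefer to install several patches in a single pass, patchadd also accepts a list of patch IDs with the -M option. The following is a sketch only; the patch directory and IDs shown are those used in this example, so verify them against your complete patch list:
root@clustnode2# patchadd -M /cdrom/PATCHES/CLUSTER3.0U3 110648-22 111554-09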
Step 2.7.4
Verify that all required patches are installed successfully. Several patches might already be installed, or might fail to install at this time.
Verify that any errors reported did not result in corrupt or missing packages required by SC 3.0 software. The following patches might not install: 111488-xx, 111555-xx, 112108-xx, and 112866-xx.
Review the list of installed patches by entering the following command on each additional cluster node:
root@clustnode2# /usr/sbin/patchadd -p
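To confirm that a specific patch is present, filter the output for its patch ID; showrev -p reports the same information. For example, using the patches installed above:
root@clustnode2# /usr/sbin/patchadd -p | grep 110648
root@clustnode2# showrev -p | grep 111554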
Key Practice: Verify that all patches have been installed correctly by reviewing the patch installation logs, and resolve any installation errors or failures.
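On Solaris 8, the installation log for each patch is kept under /var/sadm/patch/<patch-id>/log. A minimal sketch of reviewing the log for one of the patches installed above:
root@clustnode2# more /var/sadm/patch/110648-22/log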
Step 2.7.5
After all required patches have been installed successfully, reboot each additional cluster node:
root@clustnode2# init 6
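If you prefer a graceful reboot that warns any logged-in users, the standard Solaris shutdown command achieves the same result; init 6, as shown above, is equally valid:
root@clustnode2# shutdown -y -g0 -i6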
Step 2.7.6
Wait for the reboot to finish and for the cluster to complete auto-formation. During the reboot process, observe the cluster console windows on each cluster node. Verify that cluster events are occurring on each cluster node and that messages indicating automatic cluster formation appear.
For example, verify that clustnode1 console "NOTICE:" messages appear, indicating state changes during cluster reconfiguration (for example, as each cluster interconnect path is verified and brought online).
Similarly, clustnode2 console messages appear, indicating it is "Booting as part of a cluster." Additionally, note kernel and cluster initialization messages indicating "Configuring DID Devices" as all DID instances are created and the node obtains access to all attached disks.
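If a console window was missed during the reboot, the same kernel "NOTICE:" messages are normally recorded by syslog in /var/adm/messages (assuming the default Solaris syslog configuration). A minimal sketch of reviewing them after the node is back up:
root@clustnode2# grep "Booting as part of a cluster" /var/adm/messages
root@clustnode2# grep "NOTICE:" /var/adm/messages | tail -20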
Step 2.7.7
Wait for the reboot to finish. Ensure that the cluster is stable and that all cluster interconnects and interfaces are "online" (as the cluster console "NOTICE:" messages should indicate).
As root, execute the /usr/cluster/bin/scstat command and verify that each cluster node and each interconnect path is Online.
# scstat
No errors, warnings, or degraded states should be reported.
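If you want to narrow the output, scstat also accepts component-specific options; a minimal sketch follows, in which -n reports node status, -W reports transport path status, and -q reports quorum status (see the scstat(1M) man page for the full list):
root@clustnode2# /usr/cluster/bin/scstat -n
root@clustnode2# /usr/cluster/bin/scstat -W
root@clustnode2# /usr/cluster/bin/scstat -q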
Prior to configuring the cluster quorum device, note the following example:
-- Cluster Nodes --
                    Node name           Status
                    ---------           ------
  Cluster node:     clustnode1          Online
  Cluster node:     clustnode2          Online

------------------------------------------------------------------

-- Cluster Transport Paths --
                    Endpoint            Endpoint            Status
                    --------            --------            ------
  Transport path:   clustnode1:qfe4     clustnode2:qfe4     Path online
  Transport path:   clustnode1:qfe0     clustnode2:qfe0     Path online

------------------------------------------------------------------

-- Quorum Summary --
  Quorum votes possible:   1
  Quorum votes needed:     1
  Quorum votes present:    1

-- Quorum Votes by Node --
                    Node Name           Present Possible Status
                    ---------           ------- -------- ------
  Node votes:       clustnode1          1       1        Online
  Node votes:       clustnode2          0       0        Online

-- Quorum Votes by Device --
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------