- Sun Cluster 3.0 Series: Guide to Installation—Part 2
- Objectives
- Prerequisites
- Major Resources
- Introduction
- Sun Cluster 3.0 Software
- Install SC 3.0 on First Cluster Node Without Reboot
- Identify the Device ID (DID) on the First Cluster Node
- Verify DID is Available on Each Additional Cluster Node
- Install SC 3.0 Patches on the First Cluster Node
- Verify the Install Mode is Enabled
- Install SC 3.0 on Additional Cluster Nodes Without Reboot
- Install SC 3.0 Patches on Additional Cluster Nodes
- Establish the SC 3.0 Quorum Device - First Cluster Node Only
- Configure Additional Public Network Adapters - NAFO
- Configure ntp.conf on Each Cluster Node
- Verify /etc/nsswitch.conf Entries
- Update Private Interconnect Addresses on All Cluster Nodes
- Add Diagnostic Toolkit
Section 2.6: Install SC 3.0 on Additional Cluster Nodes Without Reboot
After the first cluster node has been installed successfully (and SC 3.0 patches have been applied), you may proceed to install the remaining cluster nodes.
Upon completion of this section, the additional cluster node(s) will be installed but will NOT be rebooted. Choose no when prompted to automatically reboot. For each additional cluster node, you will verify that SC 3.0 software has been installed successfully, before installing patches. Once this has been completed, each additional cluster node will be rebooted and will attempt to join the cluster.
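Before proceeding, you can optionally confirm from clustnode1 that the first node is an active cluster member, using the scstat(1M) command (the node name shown assumes the configuration used throughout this guide):
root@clustnode1# scstat -n
The output should report clustnode1 with a status of Online.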
Step 2.6.1
For local (manual) installations, insert the Sun Cluster 3.0 U3 CD into the CD-ROM drive of the clustadm workstation. Note that the vold(1M) daemon will automatically mount it under the /cdrom directory.
To make the contents of the CD-ROM available to the cluster nodes across the network, enter the following command on the clustadm workstation:
root@clustadm# share -F nfs -o ro,anon=0 /cdrom/cdrom0
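Note that if no file system has been shared previously, the NFS server daemons (mountd and nfsd) may not be running on clustadm, and the cluster nodes will be unable to mount the share. If necessary, start them manually (the path shown assumes Solaris 8):
root@clustadm# /etc/init.d/nfs.server start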
Step 2.6.2
For local (manual) installations only, verify that /cdrom has been shared correctly by entering the following command on clustadm:
root@clustadm# share
-               /cdrom/suncluster_3_0   ro,anon=0   ""
root@clustadm#
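Optionally, you can also confirm that the share is visible from each cluster node by using the dfshares(1M) command; for example:
# dfshares clustadm
The output should list the shared /cdrom resource available from clustadm.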
Step 2.6.3
For local (manual) installations only, enter the following command on each cluster node:
# mount -F nfs -o ro clustadm:/cdrom/suncluster_3_0 /cdrom
This example assumes that the SC 3.0 administrative workstation is clustadm, which can successfully share the contents of the CD-ROM drive, and is accessible from each cluster node.
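As a quick sanity check, verify that the CD-ROM contents are visible on each cluster node before starting the installation; the SunCluster_3.0 directory should appear in the listing:
# ls /cdrom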
Step 2.6.4
At this time, continue installing the Sun Cluster software on each additional cluster node. Enter the following on clustnode2:
root@clustnode2# cd /cdrom/SunCluster_3.0/Tools
root@clustnode2# ./scinstall
Step 2.6.5
Proceed with the SC 3.0 software installation on each additional cluster node.
CAUTION
On the last screen prompt, enter no to the Automatic Re-Boot option. The SC 3.0 patches must be installed before rebooting clustnode2.
On each additional cluster node, begin the installation by choosing option 2 from the Main Menu to add this machine to an established cluster. When prompted, continue the installation. Verify that the following software packages are installed: SUNWscr, SUNWscu, SUNWscdev, SUNWscgds, SUNWscman, SUNWscsal, SUNWscsam, SUNWscvm, SUNWscdm, SUNWscva, SUNWscvr, and SUNWscvw. (These packages can also be confirmed from the shell after the installation completes, as shown at the end of this section.)
Continue the installation, and enter the name of the sponsoring node as clustnode1.
Join the established cluster, named nhl.
Continue the installation. When prompted, run sccheck and verify that it completes successfully.
Enter yes to use autodiscovery, which will identify the cluster transport. Enter yes as each cluster interconnect adapter, qfe0 and qfe4, is discovered, adding these connections to the configuration.
When prompted for the name of the global devices file system, enter yes to use the default, /globaldevices.
Enter no to the Automatic Re-Boot option.
When prompted, press Enter to continue.
Step 2.6.6
Confirm that the correct cluster information and installation packages are configured, before answering yes to each prompt, as indicated below:
>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -ik \
     -C nhl \
     -N clustnode1 \
     -A trtype=dlpi,name=qfe0 \
     -A trtype=dlpi,name=qfe4 \
     -B type=direct \
     -m endpoint=:qfe0,endpoint=clustnode1:qfe0 \
     -m endpoint=:qfe4,endpoint=clustnode1:qfe4

Are these the options you want to use [yes]? yes

Do you want to continue the install (yes/no) [yes]? yes
NOTE
After final confirmation of the installation parameters, scinstall proceeds to install the appropriate packages and prepare the node to become a member of the cluster. On the clustnode2 console, verify that the message "did driver major number set to xxx" appears, and verify that the major number matches the value recorded in previous steps (on clustnode1, the did major number was 300).
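The did major number can also be confirmed later from a shell session on clustnode2 by examining the /etc/name_to_major file; for example:
root@clustnode2# grep "^did " /etc/name_to_major
The number reported must match the value recorded for clustnode1.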
On the clustnode1 console, you will begin to observe numerous cluster messages, such as "NOTICE: CMM: ..." and "WARNING: ... Path error ...", indicating that clustnode2 is attempting to reconfigure itself to become a cluster member.
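At this point, you can also monitor cluster membership from clustnode1 with the scstat(1M) command; for example:
root@clustnode1# scstat -n
Note that clustnode2 should not be expected to report a status of Online until it has been rebooted, after the SC 3.0 patches are installed.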
When prompted, press Enter to continue, and return to the Main Menu.
Step 2.6.7
Upon completion of the SC 3.0 software installation, note the name of the installation log file, and identify any errors reported. For example: /var/cluster/logs/install/scinstall.log.xxx
Next, from the Main Menu, select q to quit scinstall.
This will return you to the root@clustnode2# shell prompt, where you will proceed to install the SC 3.0 patches.
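Before installing the patches, you can optionally confirm from the shell that the installation completed cleanly. For example, search the installation log for any reported errors, and verify that the SC 3.0 packages listed in Step 2.6.5 were installed, using the pkginfo(1M) command:
root@clustnode2# grep -i error /var/cluster/logs/install/scinstall.log.*
root@clustnode2# pkginfo | grep SUNWsc
Each of the packages listed in Step 2.6.5 should appear in the pkginfo output.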