- Sun Cluster 3.0 Series: Guide to Installation—Part 2
- Objectives
- Prerequisites
- Major Resources
- Introduction
- Sun Cluster 3.0 Software
- Install SC 3.0 On First Cluster Node—Without Reboot
- Identify the Device ID (DID) on the First Cluster Node
- Verify DID is Available on Each Additional Cluster Node
- Install SC 3.0 Patches on the First Cluster Node
- Verify the Install Mode is Enabled
- Install SC 3.0 on Additional Cluster Nodes—Without Reboot
- Install SC 3.0 Patches on Additional Cluster Nodes
- Establish the SC 3.0 Quorum Device - First Cluster Node Only
- Configure Additional Public Network Adapters - NAFO
- Configure ntp.conf on Each Cluster Node
- Verify /etc/nsswitch Entries
- Update Private Interconnect Addresses on All Cluster Nodes
- Add Diagnostic Toolkit
Section 2.1: Install SC 3.0 On First Cluster Node—Without Reboot
Upon completion of this section, the first cluster node will be installed, but will NOT be rebooted. Choose "no" when asked about the automatic reboot option. After the SC 3.0 software is installed, you should observe numerous Sun Cluster alerts and messages.
For example, the console window displays numerous "NOTICE: CMM ..." messages, describing specific cluster events as they occur. Similar messages indicate the status of the cluster as it configures, such as the cluster "reconfiguration" number.
Cluster console events and messages, such as an "ERROR" or "FAIL" message, often require administrator intervention to verify that the configuration is valid and operational.
Step 2.1.1
For local (manual) installations, insert the Sun Cluster 3.0 U3 CD into the clustadm workstation. Note that the vold(1M) daemon will automatically mount it under the /cdrom directory.
To make the contents of the CD-ROM available to the cluster nodes over the network, enter the following command on the clustadm workstation:
root@clustadm# share -F nfs -o ro,anon=0 /cdrom/cdrom0
Step 2.1.2
For local (manual) installations only, verify /cdrom has been shared correctly by entering the following command, on clustadm:
root@clustadm# share
-               /cdrom/cdrom0   ro,anon=0   ""
root@clustadm#
Step 2.1.3
For local (manual) installations only, enter the following command on each cluster node:
# mkdir /cdrom
# mount -F nfs -o ro clustadm:/cdrom/suncluster_3_0 /cdrom
This example assumes the SC 3.0 admin workstation is clustadm, which can successfully share the contents of the CD-ROM drive, and is accessible from each cluster node as indicated.
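The per-node commands in Step 2.1.3 can be sketched as a loop run from the admin workstation. This is a hedged sketch: the node names and the use of rsh are assumptions, and the commands are printed for review rather than executed.

```shell
# Sketch only: print the Step 2.1.3 commands for each cluster node.
# Node names (clustnode1, clustnode2) and rsh as the remote shell are
# assumptions; substitute ssh, or run the commands at each node console.
for node in clustnode1 clustnode2; do
  cmd="mkdir -p /cdrom && mount -F nfs -o ro clustadm:/cdrom/suncluster_3_0 /cdrom"
  echo "rsh $node \"$cmd\""
done
```

Remove the echo (and quoting around the command string) to execute the mounts directly once the node list is confirmed.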
Step 2.1.4
On the first cluster node only, execute scinstall, as indicated:
root@clustnode1# cd /cdrom/suncluster_3_0/SunCluster_3.0/Tools
root@clustnode1# ./scinstall
CAUTION
The next few steps install the first cluster node (clustnode1). When prompted at the final scinstall screen, enter no to Automatic Re-Boot. The SC 3.0 patches must be installed before rebooting clustnode1.
On the first cluster node only, begin the installation by choosing option 1 from the main menu to establish a new cluster.
On the first cluster node only, continue the installation. When prompted, install the software packages, including: SUNWscr, SUNWscu, SUNWscdev, SUNWscgds, SUNWscman, SUNWscsal, SUNWscsam, SUNWscvm, SUNWscdm, SUNWscva, SUNWscvr, and SUNWscvw.
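After the packages finish installing, their registration can be checked with pkginfo(1), which exits non-zero if any named package is missing. This is a minimal sketch; the command is printed so the package list can be reviewed before running it on the node.

```shell
# Sketch only: build a pkginfo(1) check for the SC 3.0 packages listed above.
# The command is printed here rather than executed; run it on the cluster node.
pkgs="SUNWscr SUNWscu SUNWscdev SUNWscgds SUNWscman SUNWscsal SUNWscsam SUNWscvm SUNWscdm SUNWscva SUNWscvr SUNWscvw"
echo "pkginfo $pkgs"
```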
When prompted, proceed to establish a new cluster named nhl, and continue the installation.
When prompted by Check, run sccheck and verify it completes successfully.
When prompted, enter clustnode2 as the name of the other node planned for this cluster. Enter [Ctrl-D] to end the list. Then proceed with the installation, as prompted.
Do not enable DES Authentication.
When prompted, enter y to accept the default network address, and enter y to accept the default netmask.
Carefully read the next few prompts, observing each message. Enter no when asked if you are using transport junctions, and continue. When prompted from the list of available adapters, select qfe0 as the first transport adapter. Next, select qfe4 as the second transport adapter.
When asked to create the directory for the (local) global devices file system, enter yes to use the default directory, /globaldevices.
For the Automatic Re-Boot screen, enter no when asked, "Do you want scinstall to reboot for you?"
When prompted, press Enter to display the confirmation screen.
Step 2.1.5
Confirm the correct cluster information and installation packages are configured, before answering yes to each prompt, as indicated below:
>>> Confirmation <<<

Your responses indicate the following options to scinstall:

scinstall -ik \
     -C nhl -F -T node=clustnode1,node=clustnode2,authtype=sys \
     -A trtype=dlpi,name=qfe0 -A trtype=dlpi,name=qfe4 \
     -B type=direct

Are these the options you want to use [yes]? yes

Do you want to continue the install (yes/no) [yes]? yes
Verify the information is correct. The next few steps complete this phase of the installation, after which you will return to the Main Menu.
Step 2.1.6
Upon completion of the SC 3.0 software installation, if errors are encountered, note the pathname of the installation log file and determine the cause of any failures. For example: /var/cluster/logs/install/scinstall.log.xxx
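A quick way to triage the log is to scan it for ERROR/FAIL lines. The sketch below uses an illustrative temporary file in place of the real log; point the log variable at the scinstall log file noted above.

```shell
# Sketch only: scan an install log for ERROR/FAIL lines. The heredoc content
# is placeholder text standing in for the real scinstall log file.
log=$(mktemp)
cat > "$log" <<'EOF'
...sample log text; substitute the real scinstall log contents...
installation step completed
EOF
if grep -iE 'error|fail' "$log" >/dev/null; then
  result="review the log before continuing"
else
  result="no errors found"
fi
echo "$result"
rm -f "$log"
```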
From the Main Menu, enter q to quit scinstall.
Return to the root@clustnode1# shell prompt, where you will continue installing this first cluster node only.