- Objectives
- Prerequisites
- Introduction
- Management Server Functions
- Section 2.7: Solaris OE Installation Preparation on Each Cluster Node
- Section 2.8: Install the Solaris 8 Operating Environment on Each Cluster Node
- Section 2.9: Install Recommended Solaris OE Patches on Each Cluster Node
- Section 2.10: The Solaris OE Post Installation and Configuration
- Section 2.11: Configure Additional Cluster Management Services
- Appendix A: System Configuration Files
Section 2.8: Install the Solaris 8 Operating Environment on Each Cluster Node
For local (manual) installations, configure the management server (administrative workstation) as a Solaris OE installation (JumpStart software) server by modifying the JumpStart software configuration to include a new class file. This class file configures a machine to operate as a node within a cluster, according to the following guidelines.
For these hands-on labs, a JumpStart software server has already been configured.
NOTE
The Solaris OE and patch installation is not performed during these hands-on labs. Instead, each cluster node has been previously installed with the Solaris OE and patches.
CAUTION
The same version of the Solaris OE must be installed on the Sun Cluster 3.0 software administrative workstation and each of the cluster nodes in the SunPlex platform. In this procedure, Solaris OE version 8 07/01 is used.
Key Practice: Use the Solaris JumpStart software to maintain consistency and to fully automate the installation of the Solaris OE and additional software packages. The JumpStart software can minimize operator errors that occur during a manual installation process.
Step 2.8.1: Creating a New Class File
For local (manual) installations only, on the management server (the JumpStart software install server), create a new class file (profile) in the /JumpStart directory called cluster_node.class and include the following lines.
install_type    initial_install
system_type     server
partitioning    explicit
cluster         SUNWCXall       {{we select Entire Distribution plus OEM}}
usedisk         c0t0d0
filesys         c0t0d0s0        32768   /
filesys         c0t0d0s1        1024    swap
NOTE
The preceding code is an example only: it configures 1 Gbyte of swap space and leaves all of the remaining space to root. For Sun Cluster 3.0 software, a minimum of 750 Mbytes of swap space is recommended for each cluster node. For site-specific (actual) implementations, configure the swap size based on the actual requirements of the application(s) to be hosted. For additional information, see the JumpStart software references in Appendix B.
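For site-specific swap sizing, the same profile can be adapted as in the following sketch (the 2-Gbyte swap value is an assumption for illustration only; the free keyword assigns all remaining disk space to root):

install_type    initial_install
system_type     server
partitioning    explicit
cluster         SUNWCXall
usedisk         c0t0d0
filesys         c0t0d0s1        2048    swap    {{assumed value; size swap per application requirements}}
filesys         c0t0d0s0        free    /       {{assigns the remaining disk space to root}}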
Step 2.8.2: Ensuring Proper Hostname Configuration
The following sample shows the /etc/inet/hosts file entries required for these exercises. At this time, ensure the correct (site-specific) hostnames are configured, as shown in the following example.
clustadm# more /etc/inet/hosts
. . . {{output omitted}}
xxx.xxx.xx.xxx   clustadm   loghost
xxx.xxx.xx.xxx   clustnode1
xxx.xxx.xx.xxx   clustnode2
xxx.xxx.xx.xxx   tc   nhl-tc
. . . {{output omitted}}
xxx.xxx.xx.xxx   lh-hanfs
xxx.xxx.xx.xxx   lh-apache
. . . {{output omitted}}
NOTE
The preceding hostname and IP address information is an example. Carefully note the actual values required at your site. We do not use a naming service or domain name. Instead, we rely on local (/etc) files. For additional configuration information regarding these exercises, see "Section 2.10: The Solaris OE Post Installation and Configuration."
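As a quick sanity check, the required entries can be confirmed with a shell loop such as the following sketch (the hostnames listed are the lab examples; substitute your site-specific names):

clustadm# for h in clustadm clustnode1 clustnode2 tc lh-hanfs lh-apache
> do
> grep "$h" /etc/inet/hosts > /dev/null || echo "missing entry: $h"
> done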
Step 2.8.3: Editing the /JumpStart/rules File
For local (manual) installations, on the JumpStart software install server (management server), edit the /JumpStart/rules file, and add the following lines.
hostname clustnode1 - cluster_node.class set_root_pw
hostname clustnode2 - cluster_node.class set_root_pw
NOTE
The /JumpStart/rules file should have an entry for each node in the cluster.
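For reference, each rules file entry has the form rule_keyword rule_value begin profile finish. In the entries above, hostname is the rule keyword, the node name is the rule value, the hyphen (-) indicates that no begin script is used, cluster_node.class is the profile, and set_root_pw is the finish script.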
Substep 1: Creating a Root Password for Each Cluster Node
For local (manual) installations, on the management server (administrative workstation), edit the /JumpStart/set_root_pw file to create a root password for each cluster node.
Examine the /etc/shadow file, and record the encrypted root password (the UNIX™ software encrypted value).
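For example, the encrypted value can be displayed directly, as in the following sketch (the output shown is the lab example value; yours will differ):

clustadm# awk -F: '$1 == "root" { print $2 }' /etc/shadow
tby83XuShUxKM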
NOTE
Previously, we set the password on the management server (administrative workstation) to abc. Because we want to use the same password across all systems (nodes) that are to be installed with the JumpStart software, we need to edit the set_root_pw script. The encrypted value of the password (for this lab exercise) is tby83XuShUxKM; however, the value will be different for your installation.
CAUTION
If the set_root_pw file is not edited to reflect the correct password, the installed machine will have an unknown root password. It is possible, though not recommended, to set the root password to null by deleting all of the characters that follow PASSWD= in the set_root_pw script.
Substep 2: Editing the /JumpStart/set_root_pw File
For local (manual) installations, edit the /JumpStart/set_root_pw file. Modify the PASSWD= line by entering the encrypted password value recorded in Substep 1 from the /etc/shadow file. The edited file should look as follows:
clustadm# more /JumpStart/set_root_pw
#!/bin/sh
#
# @(#)set_root_pw 1.6 97/02/26 SMI
#
# This is an example Bourne shell script to be run
# after installation. It sets the system's root
# password to the entry defined in PASSWD. The
# encrypted password is obtained from an existing root
# password entry in /etc/shadow from an installed
# machine.

echo "setting password for root"

# set the root password
PASSWD=tby83XuShUxKM   {{edit this line with the encrypted password value}}

# create a temporary input file
. . . {{the rest of the file has been omitted}} . . .
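Alternatively, the PASSWD= line can be updated non-interactively, as in the following sketch (the paths are those used in this lab; the | delimiter in sed avoids conflicts with any / characters in the encrypted string):

clustadm# ENC=`awk -F: '$1 == "root" { print $2 }' /etc/shadow`
clustadm# sed "s|^PASSWD=.*|PASSWD=${ENC}|" /JumpStart/set_root_pw > /tmp/set_root_pw.$$
clustadm# cp /tmp/set_root_pw.$$ /JumpStart/set_root_pw
clustadm# rm /tmp/set_root_pw.$$

Copying over the original file (rather than moving the temporary file) preserves the ownership and permissions of set_root_pw.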
Step 2.8.4: Verifying the rules and class Files
For local (manual) installations, this step verifies the rules and class files by running the check script (copied to this directory during a previous step). The check script generates a file named rules.ok, which is read during the JumpStart software installation process.
Execute the /JumpStart/check script and verify the output of the check script, as follows:
clustadm# cd /JumpStart
clustadm# ./check
validating rules...
validating profile cluster_node.class...
The custom JumpStart configuration is ok.
Step 2.8.5: Verifying the rules.ok File
For local (manual) installations, verify that the rules.ok file contains the correct data. Enter the following command, and confirm that the output looks as follows:
clustadm# more rules.ok
hostname clustnode1 - cluster_node.class set_root_pw
hostname clustnode2 - cluster_node.class set_root_pw
# version=2 checksum=5105
NOTE
The checksum and version values may be different for your installation.
Step 2.8.6: Performing the JumpStart Software Installation
For local (manual) installations, perform a JumpStart software installation for each cluster node (for example, clustnode1 and clustnode2). Enter the following command into the Cluster Console Window for each cluster node being installed:
ok boot net - install {{note the spaces before and after the hyphen}}
NOTE
The boot net - install command takes approximately one hour to complete. If ARP/RARP error messages appear during the boot over the network, the Ethernet address might be incorrect. Before proceeding to the next step, you must troubleshoot and resolve the problem.
You will see "Link Down - cable problems?" error messages for interfaces that are not connected.
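If RARP errors occur, one place to check (assuming the install server answers RARP requests from local files, as is typical for a JumpStart network boot) is the Ethernet address mapping on the management server:

clustadm# grep clustnode1 /etc/ethers
8:0:20:xx:xx:xx clustnode1   {{this entry must match the node's actual Ethernet address}}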
Step 2.8.7: Verifying the Installation
At this point in the installation, verify that each cluster node has been installed successfully. Each node should have been rebooted into multiuser mode, and a login prompt should be displayed.
Log in as superuser (root) from the cconsole host window of each cluster node.
clustnode1 console login: root Password: abc
clustnode2 console login: root Password: abc
NOTE
Prior to using a text editor (such as vi) to view or modify files, verify that the TERM environment variable is set to vt220 for proper video display.
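For example, in the Bourne shell:

# TERM=vt220; export TERM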
Key Practice: Review the /var/sadm/README file to determine the location of the most recent installation logs (for example, /var/sadm/system/logs). Examine the most recently updated log files for potential errors (that is, begin.log, sysidtool.log, or install_log). Confirm the cause of any patch installation error messages that may have occurred. Refer to the Sun Cluster 3.0 U1 Error Messages Manual for error message translations (see Appendix B for information on obtaining this manual).
Step 2.8.8: Verifying the Installation Logs
After the Solaris OE installation is complete, examine the current (that is, the most recent) installation log files on each cluster node, and verify that no installation errors went undetected or unresolved, as follows:
# cd /var/sadm/system/logs
# pwd
/var/sadm/system/logs
# ls -t
sysidtool.log begin.log install_log
. . . {{this output lists the most recently modified files first}}
# more install_log
. . . {{look for installation error messages in the logs, for example:
pkgadd: ERROR: postinstall script did not complete successfully}}
Note the /var/sadm/README file when navigating the directory structure, and verify that the correct log files (that is, the most recent installation logs) are referenced. For example, the begin.log and finish.log filenames should have the following form:
begin.log_YYYY_MM_DD and finish.log_YYYY_MM_DD (where YYYY_MM_DD is the installation date in year_month_day format)
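A quick scan for error messages across the current logs might look like the following sketch (the filenames are those shown above; any matches should be investigated before proceeding):

# cd /var/sadm/system/logs
# grep -i error install_log sysidtool.log begin.log*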
Step 2.8.9: Verifying the Partitioning
Verify that the primary boot disk was partitioned correctly.
NOTE
Use the format(1M) command to calculate the exact number of (even) cylinders to be configured, such as when determining the size of the root (/) filesystem. Under these guidelines, the size of the root filesystem depends on the actual size of the disk.
On each cluster node, verify that the primary boot disk (c0t0) is partitioned correctly.
The following example is for an 18-Gbyte disk:
Slice 0 = cylinders 510 - 7461    assigned to "/" (all unallocated space; approximately 10 Gbyte)
Slice 1 = cylinders 7 - 500       assigned to "swap" (750 Mbyte minimum)
Slice 2 = cylinders 0 - 7505      assigned as "backup" (full extent of the disk)
Slice 6 = cylinders 7462 - 7505   assigned to the "/globaldevices" filesystem (100 Mbyte)
Slice 7 = cylinders 1 - 6         assigned to "alternates" for SDS metadata* (reserve cylinders 1 - 6 for use by a volume manager)

*SDS requires slice 7 for storing metadata; VxVM requires slices 3 and 4.
CAUTION
The previous example assumes that the boot disk is an 18 Gbyte disk drive with 7506 cylinders. For each configuration, you must ensure that the slicing information matches the actual disk geometry.
The following example is for a 36-Gbyte disk:
Slice 0 = cylinders 510 - 7461    assigned to "/" (approximately 10 Gbyte)
Slice 1 = cylinders 7 - 500       assigned to "swap" (750 Mbyte minimum)
Slice 2 = cylinders 0 - 24619     assigned as "backup" (full extent of the disk)
Slice 6 = cylinders 7462 - 7505   assigned to the "/globaldevices" filesystem (100 Mbyte)
Slice 7 = cylinders 1 - 6         assigned to "alternates" for SDS metadata* (reserve cylinders 1 - 6 for use by a volume manager)

*SDS requires slice 7 for storing metadata; VxVM requires slices 3 and 4.
CAUTION
The previous example assumes that the boot disk is a 36 Gbyte disk drive with 24620 cylinders. For each configuration, you must ensure that the slicing information matches the actual disk geometry.
NOTE
The partition information shown is required to support the Sun Cluster 3.0 software, per the guidelines established in Section 2.1.
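The actual slice layout can be displayed on each node with the prtvtoc(1M) command, as in the following sketch (the device path assumes the primary boot disk is c0t0d0, as in this lab):

# prtvtoc /dev/rdsk/c0t0d0s2
{{compare the reported slice boundaries against the guidelines above}}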
Step 2.8.10: Verifying the Hostname Entries
At this time, on each cluster node, verify that the corresponding hostname entries have been created in the /etc/inet/hosts file, as required for the SunPlex platform. Also, verify that any additional or site-specific entries have been created, such as the two logical host entries (as required for our two data services).
# more /etc/inet/hosts
. . . {{output omitted}}
xxx.xxx.xx.xxx   clustadm   loghost
xxx.xxx.xx.xxx   clustnode1
xxx.xxx.xx.xxx   clustnode2
xxx.xxx.xx.xxx   tc   nhl-tc
. . . {{output omitted}}
xxx.xxx.xx.xxx   lh-hanfs
xxx.xxx.xx.xxx   lh-apache
. . . {{output omitted}}
NOTE
This example shows standard hostname entries for the cluster. You must verify that your configuration matches the actual (site-specific) installation requirements.
On each node in the SunPlex platform, examine the /etc/inet/hosts file, and confirm that the site-specific (actual) IP addresses and hostnames are included, as indicated in the codebox. Specifically, on each cluster node, note the entries for clustnode1, clustnode2, clustadm (the administrative workstation), tc (the terminal concentrator), lh-hanfs (the first logical host), and lh-apache (the second logical host).
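For example, a single entry can be confirmed to resolve from the local files, as in the following sketch (the output shown is illustrative):

# getent hosts lh-hanfs
xxx.xxx.xx.xxx  lh-hanfs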
Summary of Key Practices
- Use the Solaris JumpStart software to maintain consistency and to fully automate the installation of the Solaris OE and additional software packages.
- Verify that the Solaris OE installation was successful.
- Prior to installing the Sun Cluster software or additional software, ensure that each node has been installed correctly, per site-specific requirements.