- Objectives
- Prerequisites
- Introduction
- Management Server Functions
- Section 2.7: Solaris OE Installation Preparation on Each Cluster Node
- Section 2.8: Install the Solaris 8 Operating Environment on Each Cluster Node
- Section 2.9: Install Recommended Solaris OE Patches on Each Cluster Node
- Section 2.10: The Solaris OE Post Installation and Configuration
- Section 2.11: Configure Additional Cluster Management Services
- Appendix A: System Configuration Files
Section 2.10: The Solaris OE Post Installation and Configuration
After successfully installing the Solaris OE on each cluster node and before you build the Sun Cluster software environment, you must perform the post-installation and site-specific procedures in this section to verify that the operating environment was configured correctly.
Step 2.10.1 Verifying the Date and Time on the Nodes
Before you install additional software, ensure that the date and time are correct on each SunPlex platform node, using the management server or a designated time host.
NOTE
This is not intended to synchronize the date and time on a running, production cluster. For additional references, and when configuring NTP in a running cluster, see Appendix B, specifically "How to Update Network Time Protocol (NTP)" in the Sun Cluster 3.0 (U1) Installation Guide and "Cluster Time" in Sun Cluster 3.0 U1 Concepts.
Key Practice: In preparation for installing additional software, verify (set) the system date and time on each SunPlex platform node. This also helps ensure that relevant time-stamps (that is, log file entries, error messages, and cluster events) are synchronized, as when correlating a sequence of system events involving multiple nodes.
NOTE
It is often useful to track events closely and accurately over specific time periods. It is also important when auditing system statistics and logs (for example, for performance monitoring, capacity planning, configuration management, troubleshooting, and ensuring scheduled daily or sustaining operations are completed successfully).
For a running cluster, the date and time on all of the cluster nodes must be synchronized. For this, the SunPlex platform employs NTP to synchronize the clocks between nodes. In normal cluster operation, you should not need to reset the date and time unless they were set incorrectly.
At this time, ensure the system date and time is correct on each node. Synchronize the date and time to the management server (clustadm) on each cluster node:
# ping clustadm
clustadm is alive
# rdate clustadm
NOTE
Verify that the rdate(1M) command completes successfully and that the date and time are set correctly on each SunPlex platform node.
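As an optional spot check, you can compare the output of date(1) on each node from a single session. This is only a sketch: it assumes remote shell access is permitted between the hosts (the /.rhosts entries are verified later, in Step 2.10.5), and the hostnames shown are the lab examples.
# date {{local node}}
# rsh clustadm date {{management server}}
# rsh clustnode2 date {{another cluster node; adjust the hostname as needed}}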
Step 2.10.2 Setting the Environment Variables
Use a text editor to set the required environment variables listed in the Appendix A samples. Establish your selected shell environment by configuring each variable indicated.
NOTE
If not done previously, it is important to set your terminal environment variable correctly (for example, TERM=vt220) for proper video display before you use a text editor to view or modify files.
The syntax for the following command is shell specific (in this case, for the Korn shell).
# TERM=vt220
# export TERM
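If the root login uses the C shell instead of the Korn or Bourne shell, the equivalent setting would be:
# setenv TERM vt220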
At this time, ensure that the correct settings are configured on each cluster node. Specifically, note the following:
The root entry in the /etc/passwd file specifies the appropriate login shell.
All of the required variables listed in Appendix A have been configured and exported.
For example, use the /.profile or /.login file to make the settings listed in Appendix A (sample root user startup files) and to establish these settings for the root user, as appropriate for your shell environment.
Step 2.10.3 Verifying the Superuser's Shell Environment
On each cluster node, verify the superuser (root) shell environment startup files have been established with the following variables:
Table 2-1 Superuser Shell Environment
Variable | Setting
TERM | vt220
stty | istrip
Prompt | hostname# {{for example, clustnodeX#}}
Add the following to the PATH variable | PATH=/usr/bin:/usr/ucb:/etc:/sbin:/usr/sbin:/usr/cluster/bin:/opt/SUNWcluster/bin
Add the following to the MANPATH variable | MANPATH=/usr/dt/man:/usr/man:/usr/openwin/share/man:/usr/cluster/man
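As an illustration only (not the authoritative Appendix A content), a root /.profile for the Korn or Bourne shell that incorporates the settings in Table 2-1 might resemble the following sketch; adjust it to match the Appendix A samples for your environment.
TERM=vt220
PS1="`uname -n`# " {{prompt; for example, clustnode1#}}
PATH=/usr/bin:/usr/ucb:/etc:/sbin:/usr/sbin:/usr/cluster/bin:/opt/SUNWcluster/bin
MANPATH=/usr/dt/man:/usr/man:/usr/openwin/share/man:/usr/cluster/man
export TERM PS1 PATH MANPATH
stty istrip {{strip input characters to seven bits}}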
Step 2.10.4 Verifying the Superuser's Group Membership
On each cluster node, verify that the superuser (root) is a member of the sysadmin group (14) or that the /.rhosts file contains a hostname entry for each of the other nodes, as indicated in the next step (sample /.rhosts file).
NOTE
This is required in Module 4, where the Solstice DiskSuite software uses it to coordinate activities between nodes.
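One quick way to confirm the group membership on each node is to inspect the sysadmin entry in /etc/group, for example:
# groups root {{the output should include sysadmin}}
# grep '^sysadmin:' /etc/group {{group 14 should list root as a member}}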
Step 2.10.5 Verifying the /.rhosts File
On each cluster node, verify that the nine entries in the following code example have been added correctly to the /.rhosts file. Note that the first six entries reference the cluster interconnect and that the final three entries are for the Solstice DiskSuite software.
On the system administration workstation, verify that the final three entries are included in the /.rhosts file on clustadm:
# more /.rhosts
204.152.65.33
204.152.65.1
204.152.65.17
204.152.65.34
204.152.65.2
204.152.65.18
{{the entries below must be included on each node in the SunPlex platform}}
clustadm
clustnode1
clustnode2
Step 2.10.6 Excluding the Cluster Nodes as IP Routers
Ensure that each cluster node will not come up as an IP router; IP routing is not supported on Sun Cluster 3.0 software cluster nodes. If not already done, create the /etc/notrouter file by entering the following command on each cluster node.
# touch /etc/notrouter
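To confirm the change, verify that the file now exists; after the node is next rebooted with /etc/notrouter in place, IP forwarding should be disabled, which can be checked with ndd(1M).
# ls -l /etc/notrouter
# ndd /dev/ip ip_forwarding {{after a reboot, 0 indicates IP forwarding is disabled}}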
Key Practice: The cluster environment requires that the local /etc files supporting network services be searched ahead of any naming services. This increases availability by not relying on an outside agent.
NOTE
We are not using NIS in the hands-on lab environment. However, it is a good idea to verify the /etc/nsswitch.conf file. See Appendix A for an example nsswitch.conf file using NIS.
Step 2.10.7 Verifying the NIS Settings
For these hands-on labs, on each cluster node, view the /etc/nsswitch.conf file, and verify the following settings:
passwd:     files
group:      files
hosts:      files
rpc:        files
netmasks:   files
services:   files
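One way to review just these entries without opening an editor is:
# egrep '^(passwd|group|hosts|rpc|netmasks|services):' /etc/nsswitch.conf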
Step 2.10.8 Changing the CONSOLE Setting for the Superuser
When set, the CONSOLE setting requires the superuser (root) to log in only on the specified device; however, for these exercises, the SunPlex platform nodes must allow the superuser to log in from other devices.
NOTE
During Module 6, we reverse this procedure, conforming to best practices, to minimize potential security vulnerabilities.
For these hands-on labs, edit the /etc/default/login file, and comment out the following entry (that is, insert the comment character (#) in front of the CONSOLE=/dev/console entry).
#If console is set, root can only. . .
# Comment. . .
#
# CONSOLE=/dev/console {{comment out this line}}
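If you prefer to make the change non-interactively on each node, a sequence such as the following sketch can be used; it assumes a backup copy of the original file is desired.
# cp /etc/default/login /etc/default/login.orig
# sed 's/^CONSOLE=/# CONSOLE=/' /etc/default/login.orig > /etc/default/login
# grep CONSOLE /etc/default/login {{the CONSOLE entry should now be commented out}}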
Step 2.10.9 Configuring the Alternate Boot Disk
After ensuring the primary boot disk (c0t0) has been configured (as in Step 2.8.9), verify and/or configure the alternate boot disk partitions (c0t1) on each cluster node.
For the Sun Cluster 3.0 software, implement the following guidelines when partitioning the primary and alternate boot disks on each cluster node. These guidelines are intended to provide a flexible and consistent format of the system disk that easily enables the use of either the Solstice DiskSuite software or the VxVM software.
Always configure a mirrored boot environment for each cluster node.
Reserve cylinders 1 through 6 for use by a volume manager. This requires approximately 10 Mbytes of disk space.
Configure the swap space. This requires a minimum of 750 Mbytes of disk space.
Configure the /globaldevices partitions. This requires a minimum of 100 Mbytes of disk space.
Allocate all of the unused disk space to the root (/) directory, slice 0.
For in-depth information on building an HA-boot environment, see Appendix B.
CAUTION
Refer to Figure 1-1 and Tables 1-1 through 1-5, which describe the hardware configuration. Note that both disk0 and disk1 are connected to the embedded (c0) SCSI controller. This provides only a single datapath to the devices. A single boot path does not support highly available boot configurations.
These partitioning guidelines must be implemented on each cluster node. They must also take into consideration the special requirements for adding a volume manager and the additional space reserved for the Sun Cluster 3.0 software global devices filesystem (/globaldevices).
In preparation for mirroring, ensure that the local disks (that is, c0t0 and c0t1) are partitioned per these guidelines.
NOTE
Use the format(1M) command to calculate the exact number of (even) cylinders to be configured, as when determining the size of the root file system. Using these guidelines, the size of the root filesystem is dependent on the actual size of the disk.
Using format, verify that the primary boot disk is partitioned correctly on each cluster node.
The following is an example of an 18 Gbyte boot disk:
Slice 0 = cylinders 510 - 7461    assigned to "/" (all unallocated space; approximately 10GB)
Slice 1 = cylinders 7 - 500       assigned to "swap" (750MB min.)
Slice 2 = cylinders 0 - 7505      assigned as "backup" (full extent of the disk)
Slice 6 = cylinders 7462 - 7505   assigned to the "/globaldevices" filesystem (100MB)
Slice 7 = cylinders 1 - 6         assigned to "alternates" for SDS metadata* (reserve cylinders 1-6 for use by a volume manager)

*The Solstice DiskSuite software requires slice 7 for storing metadata; VxVM requires slices 3 and 4.
CAUTION
The above example shows an 18 Gbyte disk with 7506 cylinders. For each configuration, you must ensure the slicing information matches the actual disk geometry. After rewriting the new virtual table of contents (VTOC), verify that it was rewritten correctly before proceeding to configure the volume manager.
The following is an example of a 36 Gbyte boot disk:
Slice 0 = cylinders 510 - 7461    assigned to "/" (approximately 10GB)
Slice 1 = cylinders 7 - 500       assigned to "swap" (750MB min.)
Slice 2 = cylinders 0 - 24619     assigned as "backup" (full extent of the disk)
Slice 6 = cylinders 7462 - 7505   assigned to the "/globaldevices" filesystem (100MB)
Slice 7 = cylinders 1 - 6         assigned to "alternates" for SDS metadata* (reserve cylinders 1-6 for use by a volume manager)

*SDS requires slice 7 for storing metadata; VxVM requires slices 3 and 4.
CAUTION
The above example shows a 36 Gbyte disk with 24620 cylinders. For each configuration, you must ensure the slicing information matches the actual disk geometry. After rewriting the new virtual table of contents (VTOC), verify that it was rewritten correctly before proceeding to configure the volume manager.
NOTE
For the Sun Cluster 3.0 software and site-specific implementations, the configured swap space should be sized based on the actual requirements of the application(s) to be hosted.
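To review the swap space currently configured on a node, the swap(1M) command can be used as a quick informational check:
# swap -l {{lists the configured swap devices and their sizes in 512-byte blocks}}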
Ensure that the alternate boot disk (c0t1) is configured (and partitioned) correctly. Use the format(1M) command to examine the current settings for each slice.
On each cluster node, execute the format(1M) command, and verify that the primary boot disk (c0t0) is properly partitioned, as previously described in Section 2.8, Step 2.8.9.
Configure the alternate boot disk (c0t1) to match the primary boot disk (c0t0). Execute the format(1M) command, and select the alternate boot disk (c0t1), then create the partitions to match the primary boot disk.
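For reference, a typical interactive format(1M) sequence resembles the following sketch; the menus and disk selection numbers vary by configuration and are shown here only as an illustration.
# format
{{select the alternate boot disk, c0t1d0, from the AVAILABLE DISK SELECTIONS menu}}
format> partition
partition> print {{display the current partition table}}
partition> 0 {{modify each slice to match the primary boot disk}}
partition> label {{write the new VTOC to disk}}
partition> quit
format> quit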
Alternatively, the example that follows presents an efficient method for replicating valid VTOC data quickly and consistently across multiple disk spindles.
CAUTION
The disk0 VTOC (partitioning) should have been verified previously in Step 2.8.9 of ''Section 2.8: Install the Solaris 8 Operating Environment on Each Cluster Node'' (page Module 2-13).
NOTE
In this step, the partition information shown is for typical disk drives, where disk0 (c0t0) is the primary boot disk and disk1 (c0t1) is the alternate boot disk (mirror). Both physical disks are identical (that is, they are the same model and type) and are formatted identically.
Key Practice: For each cluster node, the Solstice DiskSuite software configuration will create three separate metastate database replicas on three separate disk spindles. This further maximizes availability by ensuring that a dual disk failure would have to occur before the Solstice DiskSuite software is unable to determine a valid state database (that is, for a valid Solstice DiskSuite software quorum, at least 51 percent of the replicas must be available).
Additionally, placing a metastate database replica on each of the disk arrays (that is, for a pair of Sun StorEdge D1000 arrays, placing replicas on different arrays and physically on opposite sides of each array) provides an additional measure of redundancy. The replicas can also provide a higher level of availability (for example, having separate data channels).
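Although the replicas themselves are not created until Module 4, a Solstice DiskSuite command of the following form could be used; the slices shown here are only assumptions based on the partitioning guidelines in this module.
# metadb -a -f c0t0d0s7 c0t1d0s7 c1t0d0s7 {{hypothetical slices; replica creation is performed in Module 4}}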
Example: Replicate VTOC Information
For easy and consistent replication of a standard disk drive partitioning (for example, when configuring multiple spindles), consider the following example, which uses the prtvtoc(1M) and fmthard(1M) commands to replicate a valid VTOC.
CAUTION
Always save a copy of the original VTOC before modifying or rewriting to disk. Take precautions when using these commands to modify or rewrite disk information. Also, in the following example, the original VTOC data is saved temporarily in the /tmp directory (note that the /tmp/vtoc.orig file would be erased during a system reboot).
This example assumes that both disk spindles are identical (that is, in type, model, and geometry) and that, after the existing VTOC is read from the primary boot disk (c0t0), a new VTOC is written to the alternate boot disk (c0t1).
The following is an example of how to use the prtvtoc(1M) and fmthard(1M) commands:
# prtvtoc /dev/rdsk/c0t1d0s2 > /tmp/vtoc.orig {{saves a copy of the original disk1 VTOC}}
# prtvtoc /dev/rdsk/c0t0d0s2 > /tmp/vtoc.new {{creates a copy of a "valid" VTOC; here, disk0 was previously verified and will be replicated}}
# fmthard -s /tmp/vtoc.new /dev/rdsk/c0t1d0s2 {{writes the correct disk0 VTOC to disk1}}
# prtvtoc /dev/rdsk/c0t1d0s2 {{verifies that the disk1 VTOC matches the disk0 VTOC}}
NOTE
You can use the OpenBoot PROM commands probe-scsi or probe-ide to determine disk information. Look at both disk drives to ensure they are the same size. If they are not, use the values for the smaller drive.
CAUTION
Prior to issuing either of these OpenBoot PROM commands, you should perform the reset-all command at the ok prompt to avoid hanging the system, which would require local, manual intervention.
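At the ok prompt, the sequence therefore resembles:
ok reset-all
ok probe-scsi {{or probe-ide, depending on the controller type}}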
Step 2.10.10 Configuring the Shared Storage
Configure disk partitions for all shared storage (disk spindles). Ensure that each shared disk is configured correctly, according to site-specific requirements.
Ensure each shared disk (spindle) is configured and verified before proceeding to the next step.
Examine the partition table for each shared disk (that is, each Sun StorEdge D1000 array spindle), including: c1t0, c1t1, c1t2, c1t8, c1t9, c1t10, c2t0, c2t1, c2t2, c2t8, c2t9, and c2t10.
Verify that the configuration follows these guidelines:
Slice 0 is approximately 2 Gbytes in size for the shared data.
Slice 2, the backup, is defined as the full extent of the disk.
Slice 7, the alternate, reserves cylinders 1 through 6 (the first six cylinders).
NOTE
For our hardware configuration, the twelve disks (total) are divided into disk groups for creating the mirrored metavolumes within shared storage. Shared volumes are required for HANFS and the Apache data services.
Key Practice: When partitioning many disks, save time by using a standard partitioning scheme. For example, this implementation configures six disk spindles in each array. Array 1 is connected to each cluster node by way of controller c1, and array 2 is connected to each cluster node by way of controller c2, for a total of 12 disks. Furthermore, implement a partitioning scheme that is flexible enough to allow the use of either volume manager. Partitioning each disk spindle identically can save time and provide additional configuration flexibility.
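As a sketch of the "simple script" approach mentioned in the Summary of Key Practices that follows, the Bourne shell script below replicates a verified template VTOC (here assumed to be taken from c1t0, already partitioned per the preceding guidelines) onto the remaining shared spindles; it assumes all twelve spindles are identical in type, model, and geometry.
#!/bin/sh
# Sketch: copy a verified template VTOC to the remaining shared disk spindles.
TEMPLATE=/var/tmp/vtoc.shared
prtvtoc /dev/rdsk/c1t0d0s2 > $TEMPLATE
for disk in c1t1 c1t2 c1t8 c1t9 c1t10 c2t0 c2t1 c2t2 c2t8 c2t9 c2t10
do
    prtvtoc /dev/rdsk/${disk}d0s2 > /var/tmp/vtoc.${disk}.orig    # save the original VTOC
    fmthard -s $TEMPLATE /dev/rdsk/${disk}d0s2                    # write the template VTOC
done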
Summary of Key Practices
Verify that the system date and time are set correctly on all production nodes.
Configure /etc/nsswitch.conf to search local (/etc) files ahead of any naming services. This increases availability by not having to rely on an outside agent.
Always mirror the primary boot disk. Preferably, configure an HA-boot environment (that is, multiple datapaths) on each cluster node.
For the Solstice DiskSuite software configuration, creating three separate metastate database replicas on three separate disk spindles can further maximize availability in the event of a single disk failure.
Replicate a standard VTOC easily and consistently when configuring multiple disk spindles.
Use a simple script to create the required partitions when configuring many shared disks.
When partitioning multiple (similar) disk spindles and planning the filesystem layout, save time and reduce the opportunity for errors by using a standard partitioning scheme.
The size of swap space should be based on the actual application requirements, although the Sun Cluster 3.0 software requires a minimum of 750 Mbytes on each cluster node.