- Objectives
- Prerequisites
- Introduction
- Enterprise Installation Services: Standard Installation Practices
- Hardware Configuration
- Solaris Configuration (clustadm)
- Install SUNWccon package on SC 3.0 Admin Workstation
- Patch Installation - Administration Workstation
- Configure Management Server for Administering Cluster Nodes
- Configure the Terminal Concentrator
- Configure Cluster Control Panel
- Configure Solaris OE (Each Cluster Node)
- Solaris OE Post Installation and Configuration
- References
Section 1.8: Solaris OE Post Installation and Configuration
Upon successful completion of the Solaris installation on each cluster node, perform post-installation (and site-specific) procedures to verify the operating environment is configured correctly before building the Sun Cluster.
For example, we start by ensuring the date and time are set correctly on all SunPlex nodes before installing additional software.
Step 1.8.1
In this step, before installing additional software, ensure the date and time are correct on each SunPlex node, using the Management Server (or a designated timehost).
NOTE
This is NOT intended to synchronize the date/time on a running production cluster. When configuring NTP in a running cluster, see the References section, specifically the SC3.0 Installation Guide, "How to Update Network Time Protocol (NTP)," and SC3.0 Concepts, "Cluster Time."
Key Practice: In preparation for installing additional software, set the system date and time on each SunPlex node. This also helps ensure that relevant timestamps (that is, log file entries, error messages, and cluster events) remain closely synchronized. On all nodes, verify the date and time are set correctly, as per site-specific requirements for date/time, timezone, and timehost.
NOTE
It is often useful to track events closely and accurately over specific time periods. This can be important when auditing system statistics and logs for performance monitoring, capacity planning, configuration management, and troubleshooting, as well as when ensuring that scheduled or sustaining operations complete successfully.
For a running cluster, the date and time must be synchronized between all cluster nodes. For this, the SunPlex employs the Network Time Protocol (NTP) to synchronize all nodes to designated timehosts. In normal cluster operation, you should not need to reset the date and time unless they were set incorrectly, perhaps during system installation.
At this time, ensure the system date and time are correct on each node. Synchronize the date and time to clustadm. On each cluster node, enter the following:
# ping clustadm
clustadm is alive
# rdate clustadm
NOTE
Verify that rdate completes successfully and the date and time are set correctly on each SunPlex node.
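For a quick check on each node, display the current date and the configured timezone. This is a minimal sketch only; on Solaris the TZ variable is normally set in /etc/default/init, and the expected values are site-specific.
# date
# grep '^TZ=' /etc/default/init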
Step 1.8.2
We recommend setting up the Solaris syslog facility by configuring the /etc/syslog.conf file on each cluster node. After configuring this facility, verify messages are logged properly.
Key Practice: On each cluster node, configure the Solaris syslog facility to forward errors and system messages to the clustadm workstation. Each logged message includes a message header and a message body. The message header consists of a facility indicator, a severity level indicator, a timestamp, a tag string, and optionally the process ID. See the syslogd(1M) and syslog(3C) man pages for additional information.
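For illustration only, a forwarding entry added to /etc/syslog.conf might resemble the following sketch; the selectors and severity levels shown are assumptions and must be adapted to site requirements, and the whitespace between the selector and the action field must be a tab character:
*.err;kern.notice;auth.notice	@clustadm
After editing the file, signal syslogd to reread its configuration, then generate a test message and confirm it is logged on clustadm (typically in /var/adm/messages):
# pkill -HUP syslogd
# logger -p user.err "syslog forwarding test"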
Step 1.8.3
At this time, ensure the alternate boot disk (c0t1) is partitioned correctly. Execute the format command and examine the current settings for each slice.
On each cluster node, execute the format command and verify that the primary boot disk (c0t0) is properly partitioned, as previously described in Step 1.7.15.
Next, ensure the alternate boot disk (c0t1) is configured to match the primary boot disk (c0t0). Execute the format command, select the alternate boot disk (c0t1), and create partitions to match those on the primary boot disk.
NOTE
In this step, the partition information shown is for 'typical' disk drives, where disk0 (c0t0) is the primary boot disk and disk1 (c0t1) will be configured as the alternate boot disk (boot mirror). Both physical disks are of identical model and type, and must be formatted identically.
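After the partitions on the alternate boot disk have been created, one simple way to confirm that both boot disks match is to compare their VTOCs, as sketched below. Note that prtvtoc output includes comment lines naming the device, so those header lines are expected to differ.
# prtvtoc /dev/rdsk/c0t0d0s2 > /tmp/D0.vtoc
# prtvtoc /dev/rdsk/c0t1d0s2 > /tmp/D1.vtoc
# diff /tmp/D0.vtoc /tmp/D1.vtoc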
Step 1.8.4
At this time, configure disk partitions for all shared storage (disk spindles). Ensure that each shared disk is configured correctly, as per site-specific requirements.
Ensure that each shared disk (spindle) is configured and verified before proceeding to the next step.
Examine the partition table for each shared disk (that is, each D1000 spindle), including: c1t0, c1t1, c1t2, c1t8, c1t9, c1t10, c2t0, c2t1, c2t2, c2t8, c2t9, and c2t10.
Verify the configuration follows these guidelines when configuring the partitions on each shared disk spindle:
- Slice 0: approximately 2 GB (for shared data)
- Slice 2 (backup): defined as the full extent of the disk
- Slice 7 (alternate): reserve cylinders 1-6 (the first six cylinders)
Key Practice: Follow EIS recommendations for disk data layout. When possible, configure a standard, flexible disk partitioning scheme. In this case, use a consistent partitioning scheme that is flexible and allows the use of either SVM or VxVM, and implement the standard partitioning for the boot disks as previously described. Partitioning each disk spindle identically can save time, provide flexibility, and maintain consistency across nodes.
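Once one spindle (for example, c1t0) has been partitioned and verified with format, the same layout can be replicated to the remaining spindles using the prtvtoc and fmthard method described in Step 1.8.5. The following is a minimal sketch; the spindle list matches the set shown above and should be adjusted to the actual configuration:
# prtvtoc /dev/rdsk/c1t0d0s2 > /tmp/shared.vtoc
# for d in c1t1 c1t2 c1t8 c1t9 c1t10 c2t0 c2t1 c2t2 c2t8 c2t9 c2t10
> do
>   fmthard -s /tmp/shared.vtoc /dev/rdsk/${d}d0s2
> done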
Step 1.8.5
The following is an example method for easily and consistently replicating a 'standard' disk drive partitioning (that is, when configuring multiple spindles). This example uses the prtvtoc and fmthard commands to replicate an existing 'valid' VTOC.
CAUTION
In the following sequence, always save a copy of the original VTOC before modifying and rewriting it to disk. Take extra care when using these commands to modify and rewrite disk label information. Also, our example temporarily saves the original VTOC data in the /tmp directory; note that /tmp/vtoc.orig would be erased during a system reboot.
This example assumes that both disk spindles are identical in type, model, and geometry. It reads the existing VTOC from the primary boot disk (c0t0) and writes a new VTOC to the alternate boot disk (c0t1).
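Consistent with the preceding Caution, first save a copy of the original VTOC on the target disk before it is overwritten; the file name /tmp/vtoc.orig matches the one referenced in the Caution:
# prtvtoc /dev/rdsk/c0t1d0s2 > /tmp/vtoc.orig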
# prtvtoc /dev/rdsk/c0t0d0s2 > /tmp/D0.vtoc     (create a copy of a "valid" VTOC; here, disk0 was previously verified and will be replicated)
# fmthard -s /tmp/D0.vtoc /dev/rdsk/c0t1d0s2    (write the correct disk0 VTOC to disk1)
# prtvtoc /dev/rdsk/c0t1d0s2                    (verify the disk1 VTOC matches the disk0 VTOC)
NOTE
The OpenBoot PROM (OBP) commands probe-scsi or probe-ide can be used to determine disk information. For the boot disk and alternate boot disk, ensure both drives are the same size; if they are not, use the values for the smaller drive.
CAUTION
Prior to issuing either of these OBP commands (for example, probe-scsi), temporarily set auto-boot? to false and perform a reset-all command to avoid hanging the system (which would require local, manual intervention to reset the hardware).
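A typical sequence at the OBP ok prompt might look like the following sketch; re-enable auto-boot? afterwards, as appropriate for the site:
ok setenv auto-boot? false
ok reset-all
ok probe-scsi
ok setenv auto-boot? true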
End of Module One
Module 1 is now complete. You have successfully performed the following procedures:
Verify Administrative Workstation Installation (OBP, Solaris OE, and patches).
Perform SC3.0 Admin Workstation (Management Server) setup.
Configure the Terminal Concentrator.
Install and configure Cluster Console utility.
Verify cluster node installation (OBP, Solaris OE, and patches).
Configure syslog facility.