- Objectives
- Prerequisites
- Introduction
- Management Server Functions
- Section 2.1: Installing and Configuring the Management Server
- Section 2.2: Configuring the Terminal Concentrator
- Section 2.3: Configuring the Solaris JumpStart Server
- Section 2.4: Installing SUNWccon Package on the Administrative Workstation
- Section 2.5: Configuring the Management Server to Administer Cluster Nodes
- Section 2.6: Configuring the Cluster Control Panel
- Appendix A: System Configuration Files
- Appendix B: References
Section 2.1: Installing and Configuring the Management Server
This section describes how to install the Solaris 8 Operating Environment (Solaris 8 OE), plus patches, on the management server (clustadm).
Key Practice: Ensure that the most recent firmware versions are installed on all systems and subsystems (for example, servers, disk arrays, and controllers). On each system (for example, on each SunPlex node), ensure the system EEPROM contains the most current OpenBoot™ PROM version; the examples presented in this guide use OpenBoot 3.15. Additionally, ensure that all subsystems and controllers are running the latest firmware revisions, as appropriate.
For each SunPlex node, you can find the latest version of the OpenBoot PROM at http://sunsolve.sun.com.
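As a quick check, the installed OpenBoot PROM version can be displayed from a running system, or from the OpenBoot ok prompt; the following is a minimal sketch (output varies by system):

clustadm# /usr/sbin/prtconf -V          {{displays the OpenBoot PROM version}}
ok .version                             {{alternatively, from the OpenBoot ok prompt}}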
Step 2.1.1: Installing the Solaris OE on the Administrative Workstation
For local (manual) installations, install the Solaris OE on the administrative workstation. In subsequent steps, this workstation will be configured to act as a management server (sometimes referred to as "clustadm").
NOTE
The same version of the Solaris OE runs on both the administrative workstation (clustadm) and on each cluster node on the SunPlex platform. When configuring the SunPlex platform, refer to the actual requirements to determine the appropriate version of Sun Cluster software and applications, and the appropriate Solaris OE version and distribution packages to install. Determine which, if any, real dependencies exist. For example, selecting "Entire Distribution Plus OEM" may be required if there are any actual dependencies upon third-party software.
Key Practice: Configure a flexible, consistent disk partitioning scheme. For example, configure a consistent partitioning scheme that is flexible enough to allow for the use of either Solstice DiskSuite software or VxVM software.
Implement the following standard partitioning recommendations for boot disks:
Partition each disk spindle identically to save time and provide additional configuration flexibility by ensuring consistency across nodes.
Reserve cylinders 0-5 (the first six cylinders) on each disk. Both volume managers require that these cylinders be left unused and available. For example, Solstice DiskSuite software assigns these cylinders to slice 7 on each disk (for replica metastate databases). If a volume manager is not configured, there should not be any significant penalties for leaving these cylinders unused.
The following partitioning guidelines can be implemented for the management server (clustadm) boot disk. They reserve approximately 10MB for a volume manager (slice 7), allocate approximately 1GB for swap space (slice 1), and assign all remaining space to "/" (root, on slice 0).
Configure boot disk slices using the following guidelines:

Slice 0 = cylinders 500 - 7461, assigned to "/" (all unallocated space; approximately 16GB)
Slice 1 = cylinders 7 - 499, assigned to "swap" (approximately 1GB)
Slice 2 = cylinders 0 - 7505, assigned as "backup" (full extent of the disk)
Slice 7 = cylinders 1 - 6, "unassigned" (reserve approximately 10MB for use by a volume manager; Solstice DiskSuite software would require slice 7, while VxVM requires slices 3 and 4)

CAUTION
Our example assumes an 18GB disk drive with 7506 cylinders. For each configuration, always ensure the slicing information matches the actual disk geometry.
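To confirm that the configured slices match these guidelines, the volume table of contents can be printed with prtvtoc; a minimal sketch, assuming the boot disk is c0t0d0:

clustadm# prtvtoc /dev/rdsk/c0t0d0s2    {{verify slice boundaries against the guidelines above}}

The same information is available interactively through the partition and print menus of the format(1M) utility.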
NOTE
The actual swap space should be sized based on the actual Solaris OE version and the requirements of the applications to be hosted. During the procedures presented in this book, we will not configure a volume manager on the administrative workstation (clustadm).
Step 2.1.2: Verifying the Solaris OE Configuration
To verify that your Solaris OE configuration is correct, log in as the root user on the management server (clustadm). Ensure that the Solaris OE has been installed as specified in this section, and confirm that the primary boot disk (c0t0) is configured per the guidelines in this section.
NOTE
For the examples presented in this guide, the root password is set to abc.
Table 2-1 Solaris OE Configuration

Hostname:          clustadm
IP address:        129.153.xx.xxx
Name service:      None (local files)
Set subnet:        Yes
Subnet mask:       255.255.255.0
Default gateway:   None
NOTE
The values quoted in the preceding table are sample values. In a live installation, substitute the appropriate site-specific values.
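One way to confirm these settings is to inspect the standard Solaris 8 OE configuration files and the interface state; a minimal sketch (the network interface name is site-specific):

clustadm# hostname
clustadm# cat /etc/inet/hosts           {{confirm the clustadm IP address entry}}
clustadm# cat /etc/netmasks             {{confirm the 255.255.255.0 subnet mask}}
clustadm# ifconfig -a                   {{confirm the interface address and netmask}}
clustadm# cat /etc/defaultrouter        {{this file should not exist when no default gateway is set}}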
Key Practice: To verify that the Solaris OE installation was successful and that no errors were reported:
Review the /var/sadm/README file to determine the location of the most recent installation logs (for example, to determine the location of /var/sadm/system/logs).
Examine the current Solaris OE installation log files for potential errors (for example, examine begin.log, sysidtool.log, or install.log).
Confirm the cause of any installation error messages that have occurred, resolving failures before proceeding further.
Refer to the SC3.0 U1 Error Messages Manual for error message translations. See Appendix B for information about obtaining this manual.
Step 2.1.3: Tracking Installation Errors
On the administrative workstation, examine the installation logs, ensuring that any Solaris OE installation errors do not go undetected or unresolved. To do this, enter the following commands:
clustadm# cd /var/sadm/system/logs
clustadm# pwd
/var/sadm/system/logs
clustadm# ls
begin.log               finish.log              install.log
begin.log_2000_04_13    finish.log_2000_04_13   sysidtool.log
{{sample dates only}}
clustadm# more *

It is important to look for installation error messages in the logs, for example:

pkgadd: ERROR: postinstall script did not complete successfully
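To scan all of the logs for problems in a single pass, a simple pattern search can be used; a minimal sketch:

clustadm# cd /var/sadm/system/logs
clustadm# grep -i error *               {{also consider searching for "warning" and "fail"}}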
Step 2.1.4: Obtaining the Latest Solaris OE Patches
For local (manual) installations only, obtain the latest required Solaris OE patches from either the SunSolve(SM) CD-ROM or from SunSolve Online at http://sunsolve.sun.com (click the Patches option in the left column).
NOTE
SunSolve is a contract service from Sun Enterprise Services. We recommend that you subscribe to this service, especially if you are running a production server. Outside North America, the method you use for obtaining the most recent patches available may deviate from the following procedure, which obtains patches through the http://sunsolve.sun.com web site. Ask your local Sun service provider for the best method for getting the required patch clusters for your current operating environment.
Key Practice: Follow these recommendations when obtaining and using the latest available required Solaris OE patches:
Before you install new patches, create a /PATCHES directory on a dedicated server to store the patches. This enables centralized patch management; a staging sketch appears after this list. For example, the Sun BluePrints hands-on lab hardware has been configured with a "master" JumpStart technology server that serves all software binaries and patches, acting as the repository.
Refer to the individual patch README files to review any installation prerequisites before installing patches. Using this practice could prevent conflicts with other patches, software, bootprom variables, or other unknowns.
Always install the latest Solaris OE recommended patches from SunSolve. Keeping the recommended patches current helps ensure the highest system reliability.
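A minimal sketch of staging the Solaris 8 Recommended patch cluster under /PATCHES (the archive name 8_Recommended.zip is typical of SunSolve downloads, but may differ):

clustadm# mkdir -p /PATCHES
clustadm# cd /PATCHES
clustadm# unzip 8_Recommended.zip       {{extracts into the 8_Recommended directory}}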
Step 2.1.5: Installing Patches on the Administrative Workstation
For local (manual) installations only, install the latest Solaris OE recommended patches on the administrative workstation. Ensure patches are successfully installed and applied.
Key Practice: Follow these recommendations when installing patches on the administrative workstation.
Review the /var/sadm/README file to identify important log files to be examined, including the /var/sadm/install_data/Solaris_2.8_Recommended_log file.
Confirm the cause of any patch installation error messages that may occur.
Refer to the SC3.0 U1 Error Messages Manual for error message translations. See Appendix B for information about obtaining this manual.
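Assuming the Recommended patch cluster was staged under /PATCHES as described in the previous step, installing it and reviewing the results might look like the following sketch (install_cluster is the script shipped in the patch cluster archive):

clustadm# cd /PATCHES/8_Recommended
clustadm# ./install_cluster
clustadm# more /var/sadm/install_data/Solaris_2.8_Recommended_log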
Step 2.1.6: Rebooting the Management Server
For local (manual) installations, reboot the management server after all patches have been successfully installed and applied.
NOTE
It is often a good idea to reboot the system after changes are made to system software and configuration. For example, reboot the management server now, after the patches have been installed, to ensure that all changes are applied and that a consistent state has been achieved.
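For example, the reboot can be performed with the shutdown command, which brings the system down gracefully and returns it to multiuser mode:

clustadm# shutdown -y -g0 -i6           {{init state 6 performs a reboot}}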
Step 2.1.7: Verifying Shell Environment Variables
On the clustadm workstation, log in as the root user. Verify the shell environment variables are established as listed in the following table. Make these settings permanent by editing either the /.profile or /.login (for C shell users) file. Generic examples of each of these files can be found in Appendix A.
Note the following:
Export all variables.
Use the /.profile or /.login file to make the root user settings.
Edit the /etc/passwd file to change the shell; the default shell is /sbin/sh.
NOTE
Prior to using a text editor (vi) to view and modify files, verify and set the TERM environment variable, as appropriate (for example, TERM=vt220), for proper video display.
Table 2-2 Example Environment Variables

Variable                          Value
TERM                              vt220
stty                              istrip
Prompt                            <hostname># {{for example, clustadm#}}
PATH (append the following)       PATH=$PATH:/usr/bin:/usr/ucb:/etc:/sbin:/usr/sbin:/opt/SUNWcluster/bin:
MANPATH (append the following)    MANPATH=$MANPATH:/usr/dt/man:/usr/man:/usr/openwin/share/man:/opt/SUNWcluster/man:
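A minimal /.profile sketch reflecting the settings in Table 2-2 (generic examples appear in Appendix A; adjust values as appropriate for your site):

TERM=vt220
PATH=$PATH:/usr/bin:/usr/ucb:/etc:/sbin:/usr/sbin:/opt/SUNWcluster/bin:
MANPATH=$MANPATH:/usr/dt/man:/usr/man:/usr/openwin/share/man:/opt/SUNWcluster/man:
PS1="`uname -n`# "
export TERM PATH MANPATH PS1
stty istrip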
Summary of Key Practices
Use the JumpStart software to maintain consistency and to fully automate the software installation and configuration of each cluster node.
Ensure all firmware is installed with the most recent required versions for all systems and subsystems (for example, servers, disk arrays, and controllers). For example, ensure the EEPROM contains the most current OpenBoot PROM version on each cluster node, and that disk subsystems are configured using the latest revisions, as appropriate.
Configure a flexible, consistent disk partitioning scheme.
As part of the initial OS installation preparation, and prior to configuring the system EEPROM, reset the EEPROM to the factory defaults.
Verify that the Solaris OE installation was successful and that all required packages are installed.
Create a /PATCHES directory on a dedicated server to store all patches, enabling centralized patch management.
Refer to the individual patch README files to review any installation prerequisites before installing patches, ensuring that no conflicts exist.
Install the latest required Solaris OE patches, validating the installation of all patches and reviewing the installation logs.
Reboot the system after all patches have been installed and applied.
End of Section 2.1
This completes this section. The Solaris OE has been manually installed and all patches have been applied and verified.