- Objectives
- Prerequisites
- Introduction
- Management Server Functions
- Section 2.7: Solaris OE Installation Preparation on Each Cluster Node
- Section 2.8: Install the Solaris 8 Operating Environment on Each Cluster Node
- Section 2.9: Install Recommended Solaris OE Patches on Each Cluster Node
- Section 2.10: The Solaris OE Post Installation and Configuration
- Section 2.11: Configure Additional Cluster Management Services
- Appendix A: System Configuration Files
Section 2.11: Configure Additional Cluster Management Services
In this section, you will configure additional management services that can help the cluster administrator by making cluster operations more efficient and less error prone.
NOTE
In future modules, you will be instructed further on how to set up additional Sun Cluster 3.0 software features, as required to complete the Sun Cluster 3.0 software configuration.
Step 2.11.1 Setting Up the syslog(1M) Feature
Per site-specific requirements, we recommend setting up the syslog(1M) facility by configuring the /etc/syslog.conf file on each cluster node and verifying that messages are logged properly.
Key Practice: On each cluster node, set up the syslog(1M) facility to forward errors and system messages to the management server (administrative workstation). The logged message includes a message header and a message body. The message header consists of a facility indicator, a severity level indicator, a timestamp, a tag string, and optionally the process ID. See the syslogd(1M) and syslog(3C) man pages for additional information.
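For example (a sketch only; adjust the selector list to site requirements), an /etc/syslog.conf entry similar to the following on each cluster node forwards error-level and selected notice-level messages to the management server, assuming the name clustadm resolves to the management server (separate the selector and action fields with tabs):
*.err;kern.notice;auth.notice	@clustadm
After editing the file, restart syslogd(1M) on the node so the change takes effect, then log a test message and confirm it appears in /var/adm/messages on the management server:
clustnode1# /etc/init.d/syslog stop
clustnode1# /etc/init.d/syslog start
clustnode1# logger -p user.err "syslog forwarding test from clustnode1"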
Step 2.11.2 Implementing a Repository
Implement a repository on the management server (clustadm) for saving a snapshot of site-specific and system configuration files. This includes all modified system files specific to each node. Additionally, for each cluster node, ensure that a valid backup (most recent snapshot) is maintained for each file and that the owner, group, and permissions are preserved correctly.
Key Practice: Simplify cluster administration and management operations by ensuring consistent pathnames within the managed cluster environment. For example, when performing the Sun Cluster 3.0 software installation procedures, you may be required to re-access specific software (applications) or be required to re-enter certain command sequences (steps) repeatedly. This may be necessary if a data entry error occurs or when a given series of setup instructions have failed and the subsequent verification instructions do not produce the intended results. To further simplify the numerous tasks required to successfully complete the setup of a cluster, implement time-saving practices.
Simplify the numerous management server-to-cluster node NFS directory mounts (and NFS-exported file systems) by creating a mount point (directory name) that can be shared from the management server and accessed remotely by each cluster node, using the following syntax:
/net/mgmt_server/directory
Where mgmt_server is clustadm and directory is the name of the cluster node (for example, clustnode1).
This ensures that the path naming to the referenced resources is consistent, regardless of where the administration functions are performed.
Execute the following commands on the management server:
clustadm# mkdir -p /saved_files/clustnode1
clustadm# mkdir -p /saved_files/clustnode2
clustadm# chmod -R 777 /saved_files
Key Practice: Decrease the administrative overhead of repetitive cluster operations and improve efficiency by implementing a centralized repository for site-specific configuration data files. Whenever any of these site-dependent files are modified, ensure a consistent, valid backup is created on the management server (the central repository).
When setting up and maintaining the SunPlex platform, certain procedures can become repetitive when configuring multiple cluster nodes, or you may be required to reconstruct the cluster configuration to resolve problems caused by mistakes and operator errors. In theory, almost any system file that has to be modified during the configuration process can be saved to the management server (and easily restored, as needed). For each file, ensure backups are valid (copied without errors) and have retained the proper owner, group, and permissions.
Create a repository for the following site-specific files:
- /.profile
- /.cshrc
- /.login
- /.rhosts
- /etc/inet/hosts
- /etc/ftpusers
- /etc/default/login
- /etc/group
- /etc/inet/ntp.conf
Additional examples of site-specific system files modified during these exercises include, but are not limited to, the following:
- /etc/ethers
- /etc/vfstab
- /etc/dfs/dfstab
- /etc/defaultrouter
- /etc/serialports
- /etc/notrouter
- /etc/syslog.conf
- /kernel/drv/md.conf
- /etc/lvm/md.tab
- VTOC information (for all disks and/or types, the output from prtvtoc(1M) is saved)
- Cluster post-installation information (for example, after the installation is successfully completed, the output from the scconf(1M), scstat(1M), and scrgadm(1M) utilities is saved to document the installation and configuration of the cluster); sample capture commands follow this list
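For example (a sketch only; the device name and output filenames are illustrative), the following commands capture this information into a node's repository directory. Note that the scconf(1M), scstat(1M), and scrgadm(1M) utilities are available only after the Sun Cluster 3.0 software has been installed and configured:
clustnode1# cd /net/clustadm/saved_files/clustnode1
clustnode1# prtvtoc /dev/rdsk/c0t0d0s2 > vtoc.c0t0d0
clustnode1# scconf -p > scconf-p.out
clustnode1# scstat > scstat.out
clustnode1# scrgadm -pv > scrgadm-pv.out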
Consider the following for each node being managed (an example of capturing this information follows the list):
- Cluster node host ID
- BIOS information
- OpenBoot PROM settings
- Logical host attributes
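For example, the host ID and OpenBoot PROM settings can be captured with standard Solaris OE commands (the output filenames shown are illustrative):
clustnode1# hostid > /net/clustadm/saved_files/clustnode1/hostid.out
clustnode1# eeprom > /net/clustadm/saved_files/clustnode1/eeprom.out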
Step 2.11.3 Setting Up the /saved_files Directory
On the management server (clustadm), create a new entry in the /etc/dfs/dfstab file so that the /saved_files directory is added and made exportable (that is, shared) with read and write (rw) permission for each cluster node. See Step 2.3.4 and Step 2.3.5 for an example of creating a dfstab entry and sharing the directory.
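For example (a sketch only; adjust the access list to the actual cluster node names), a dfstab entry similar to the following shares the directory, and the shareall(1M) command exports it without a reboot:
share -F nfs -o rw=clustnode1:clustnode2 -d "saved files repository" /saved_files
clustadm# shareall
clustadm# share
{{Verify that /saved_files is listed as shared with rw access for each cluster node.}}
clustnode1# ls /net/clustadm/saved_files
{{Verify the clustnode1 and clustnode2 directories are accessible from each cluster node.}}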
Verify that the entry created in the /etc/dfs/dfstab file is correct and that the directory is shared correctly and accessible on each cluster node, before proceeding to the next step.
Step 2.11.4 Copying Site-Specific Configuration Files From clustnode1
Copy all of the site-specific configuration files from clustnode1 to the /saved_files/clustnode1 directory on the management server. Begin by changing to the node's repository directory, as in the following example.
clustnode1# cd /net/clustadm/saved_files/clustnode1
clustnode1# pwd
/net/clustadm/saved_files/clustnode1
Step 2.11.5 Copying Site-Specific Configuration Files From clustnode2
Copy all of the site-specific configuration files from clustnode2 to the /saved_files/clustnode2 directory on the management server.
clustnode2# cd /net/clustadm/saved_files/clustnode2
clustnode2# pwd
/net/clustadm/saved_files/clustnode2
Step 2.11.6 Creating a Backup of Site-Specific System Configuration Files
Create a backup of all of the site-specific system configuration files. Execute the following commands on each cluster node:
clustnode1# cp /.profile .
clustnode1# cp /.cshrc .
clustnode1# cp /.login .
clustnode1# cp /.rhosts .
clustnode1# cp /etc/inet/hosts .
clustnode1# cp /etc/ftpusers .
clustnode1# cp /etc/default/login .
Key Practice: When making modifications to the hosts file, always reference the /etc/inet/hosts (source) file because the /etc/hosts file is a link to /etc/inet/hosts.
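For example, you can confirm the link on each cluster node (the link target shown is the Solaris OE default):
clustnode1# ls -l /etc/hosts
{{The output should show that /etc/hosts is a symbolic link to ./inet/hosts.}}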
Step 2.11.7 Verifying the Backup Files
Verify that the files were copied correctly (that is, that a valid backup exists), including the correct filenames, owner, group, and permissions. Any system file that is modified and backed up (and may need to be restored later) can be restored by simply reversing the copy process. Take care to note the correct owner, group, and permission settings for each backup created. Ensure a valid backup was created with the following command:
# ls -lisa
{{Verify the snapshot was successful and the files were copied correctly.}}
Key Practice: Create a consistent, automated method for ensuring that all of the site-dependent system files are backed up correctly.
A simple tar(1) command provides an efficient backup method for each cluster node, saving the site-specific files and retaining the correct owner, group, and permissions. Execute the following commands:
# cd /net/mgmt_server/saved_files/node-specific-directory
# tar cvf ./systemfiles.tar /.profile /.cshrc /.login /.rhosts /etc/inet/hosts /etc/ftpusers /etc/default/login
{{Verify files are copied to the correct destination directory, and a valid backup was created.}}
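One possible way to automate this backup (a sketch only, assuming the management server name clustadm, the directory layout used in this module, and that each node's hostname matches its repository directory name) is a small Bourne shell script such as the following, which creates the archive and then lists its contents as a quick verification:
#!/bin/sh
# Sketch only -- paths and the file list assume the examples in this module.
NODE=`uname -n`
DEST=/net/clustadm/saved_files/${NODE}
FILES="/.profile /.cshrc /.login /.rhosts /etc/inet/hosts /etc/ftpusers /etc/default/login"
cd ${DEST} || exit 1
# Create the archive; tar records the owner, group, and permissions of each file.
tar cvf ./systemfiles.tar ${FILES}
# List the archive contents to confirm a valid backup was created.
tar tvf ./systemfiles.tar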
Ensure that a valid restoration can be achieved. After verifying that the backup was created correctly, confirm that the system files can be restored from the most recent valid backup before continuing to build the cluster.
Verify that a complete restoration occurs for all site-specific and system files and that each file retains the correct owner, group, and permissions by executing the following command:
clustnode2# cd /net/mgmt_server/saved_files/clustnode2
clustnode2# tar xvfp ./systemfiles.tar /.profile /.cshrc /.login /.rhosts /etc/inet/hosts /etc/ftpusers /etc/default/login
{{Verify the files are restored to the correct destination, with the correct owner, group, and permissions, and that a valid restoration has occurred on clustnode2.}}
NOTE
It is a good idea to re-verify the root login as part of ensuring a valid restoration of all site-specific and system configuration files, because the startup scripts may have been altered. One method is simply to verify that you can remotely log in (rlogin) from clustnode1 to clustnode2 as the superuser (root); in this example, this can be verified after clustnode2 has been restored and before logging off the clustnode2 console. Verify the superuser login for each node that has been restored, as described.
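For example (assuming the restored /.rhosts entries permit remote superuser access):
clustnode1# rlogin clustnode2
{{Verify that the superuser login to clustnode2 succeeds before logging off the clustnode2 console.}}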
Summary of Key Practices
- On each cluster node, set up the syslog(1M) facility to forward errors and system messages to the management server (Sun Cluster 3.0 software administrative workstation).
- Simplify cluster administration and repetitive operations by creating a mount point that can be shared from the management server and accessed remotely by each cluster node.
- Decrease the administrative overhead of repetitive cluster operations and improve efficiency by implementing a centralized repository for site-specific configuration data (files).
- Always reference the /etc/inet/hosts file when making modifications to the hosts file.
- Implement a consistent, automated method for saving and restoring site-specific and other system files.
- Before logging off the console, verify that the superuser can log in successfully after the system files have been restored from the most recent backup.
End of Module Two
Module 2 is now complete. You have successfully performed the following procedures:
- Completed the administrative workstation (management server) setup.
- Reviewed the configuration of the JumpStart software services (for example, the Solaris OE and the Sun Cluster 3.0 software, plus all patches).
- Configured the terminal concentrator.
- Configured the Cluster Console (CCP) utility.
- Prepared each cluster node before the Solaris OE installation.
- Verified that the Solaris OE installation (plus patches) was successful and performed post installation and configuration procedures on each cluster node.
- Configured the root workspace on each cluster node.
- Backed up (saved) all of the site-specific files.