Securing the Sun Cluster 3.0 Software
Introduction
This article describes how to secure the Solaris Operating Environment (Solaris OE) and the Sun Cluster 3.0 software. To provide a robust environment where Sun Cluster 3.0 software can be deployed, very specific requirements are placed on the configuration of the Solaris OE. Before the release of Sun Cluster 3.0 (12/01) software, no secured configurations were supported.
By implementing the recommendations for the supported agents, you can increase the reliability, availability, and serviceability (RAS) of systems running the Sun Cluster 3.0 software. These objectives are accomplished by securing the servers so that they are not as susceptible to attacks.
This article contains the following topics:
- "Background Information"
- "Securing Sun Cluster 3.0 Nodes"
- "Verifying Node Hardening"
- "Sample Results"
- "Maintaining a Secure System"
- "About the Author"
- "Acknowledgements"
Background Information
This section contains the following topics:
- "Assumptions and Limitations"
- "Qualified System Configuration"
- "Support"
- "Using the Solaris Security Toolkit Software"
- "Solaris OE Defaults and Modifications"
- "Additional Daemons and Services"
- "Terminal Server Requirements"
- "Node Authentication Options"
Assumptions and Limitations
In this article, our recommendations are based on several assumptions and limitations as to what can be done to secure Sun Cluster 3.0 nodes using a Sun supported configuration. Our recommendations assume a platform based on the Solaris 8 OE (2/02) and the Sun Cluster 3.0 (12/01 or 5/02) software. We use the Sun Cluster 3.0 (5/02) software version in this article.
NOTE
Before the release of Sun Cluster 3.0 (12/01) software, no secured configurations were supported.
Solaris OE hardening can be interpreted in a variety of ways. For the purposes of developing a hardened server configuration, the recommendations in this article represent all of the possible Solaris OE hardening. That is, anything that can be hardened is hardened. Anything that is not hardened is left unmodified for the reasons described in this article.
Be aware that hardening Solaris OE configurations to the level described in this article might not be appropriate for your environment. For some environments, you may want to perform fewer hardening operations than recommended. The configuration remains supported in these cases; however, additional hardening beyond what is recommended in this article is not supported.
Minimizing the Solaris OE or removing Solaris OE packages to minimize security exposure is not a supported option on Sun Cluster 3.0 nodes at this time. Only the hardening tasks discussed in this article are supported for Solaris OE systems with Sun Cluster 3.0 software running supported agents.
NOTE
Standard security rules apply to hardening cluster nodes: That which is not specifically permitted is denied.
Qualified System Configuration
The configuration described in this article has the following characteristics:
- Solaris 8 OE (2/02) software
- Solaris OE packages and installation
- Sun Cluster 3.0 (5/02) software
- Supported agents
- ORACLE RAC limitations
- Cluster interconnect links
- Solstice DiskSuite software
The following subsections describe each of these characteristics.
Solaris 8 OE
This article is based on Solaris 8 OE (2/02). All of the hardening results presented in this article were produced on this version of the Solaris OE. Using versions other than Solaris 8 OE might produce results that are slightly different than those presented in this article.
Solaris OE Packages and Installation
Sun Cluster 3.0 software requires only the Solaris OE End User cluster. It is strongly recommended that this Solaris OE cluster be used instead of the Entire Distribution.
Minimizing the number of Solaris OE packages installed directly reduces the number of services to disable, the quantity of patches to install, and the number of potential vulnerabilities on the system.
NOTE
This article neither addresses how to install the Solaris OE and Sun Cluster 3.0 software, nor how to configure the cluster nodes.
Sun Cluster 3.0 software allows you to automate the installation of the cluster and Solaris OE software through JumpStart software. Correspondingly, you can include the hardening steps performed by the Solaris Security Toolkit software in the JumpStart installation process. This article does not discuss methods for integrating the hardening process documented here with JumpStart software. For information about this topic, refer to the Sun Cluster 3.0 and Solaris Security Toolkit documentation.
Sun Cluster 3.0 Software
Only the Sun Cluster 3.0 (5/02 and 12/01) software releases support the hardened configurations described in this article. Versions prior to 12/01 do not support these hardened configurations and should not be used to deploy them.
Sun Cluster 3.0 software provides mission-critical capabilities to an organization. While the Sun Cluster 3.0 software addresses issues such as fault tolerance, failover, and performance, it is very important that the systems running Sun Cluster 3.0 software are protected against malicious misuse and other attacks such as denial of service. The most effective mechanism for doing this is to configure the nodes in a cluster so that they protect themselves against attack.
Supported Agents
The security recommendations in this article are limited to the following Sun Cluster 3.0 agents, supported in secured environments:
- Sun ONE Web Server
- Apache Web Server
- Sun ONE Messaging Server
- Sun ONE Directory Server
- Domain Name System (DNS) server
- Network File System (NFS) Server
- VERITAS NetBackup
- HA ORACLE 8.1.7 and 9i (32- and 64-bit)
- HA Sybase ASE 12.0 (32-bit)
- ORACLE OPS/RAC 8.1.7 and 9i (32- and 64-bit)
- SAP 4.6D (32- and 64-bit)
ORACLE RAC Limitations
During ORACLE RAC installation, if the option to install RAC on all cluster nodes is chosen, the ORACLE Installer uses rsh and rcp to copy files to the other cluster nodes. Other ORACLE configuration tools (for example, netca) also use rsh to modify configuration files on other cluster nodes.
NOTE
When using the Solaris Security Toolkit Sun Cluster 3.0 driver, both rsh and rcp are disabled by default. These services are insecure and should not be left enabled on a secured cluster.
If an administrator does not want to change the security settings, it is possible to install RAC on each node individually and set up the configuration files manually on each node.
In sites where the availability of rsh and rcp is critical, the Secure Shell (SSH) commands ssh and scp, if configured properly, provide secure equivalents of rsh and rcp, respectively. These commands provide an encrypted and authenticated mechanism for ORACLE software to perform tasks on remote machines.
Configure SSH to permit remote login without passwords, then replace the system-provided rsh and rcp binaries with links to the SSH commands. In this way, you provide secure rsh and rcp functionality. This approach simplifies the installation and configuration of ORACLE RAC while maintaining a secure posture.
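The following is a minimal sketch of this approach; the OpenSSH installation path, key type, and key file names are assumptions, so adjust them for your site and test on a non-production node first.

# Back up the system-provided binaries, then link them to their SSH
# equivalents (path assumes an OpenSSH installation in /usr/local/bin):
mv /usr/bin/rsh /usr/bin/rsh.orig
mv /usr/bin/rcp /usr/bin/rcp.orig
ln -s /usr/local/bin/ssh /usr/bin/rsh
ln -s /usr/local/bin/scp /usr/bin/rcp

# As the installing user, generate a key pair with an empty passphrase,
# then authorize the public key on every cluster node so remote logins
# proceed without password prompts:
ssh-keygen -t rsa -N "" -f $HOME/.ssh/id_rsa
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

Because an empty passphrase weakens the protection of the private key, restrict the key files to the installing user and consider removing the passwordless keys after the RAC installation completes.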
Cluster Interconnect Links
It is critical to the overall security of the cluster that cluster interconnect links are kept private and are not exposed to a public network. Sensitive information about the health of the cluster and information about the file system is shared over this link.
We strongly recommend that these interconnects be implemented using separate and dedicated network equipment. From a security and availability perspective, we discourage the use of VLANs because they typically restrict packets based only on tags added by the switch. Minimal, if any, assurance is provided for validating these tags, and no additional protection against directed Address Resolution Protocol (ARP) attacks is gained.
Solstice DiskSuite Software
The configuration in this article assumes the use of Solstice DiskSuite software instead of VERITAS Volume Manager. If VERITAS Volume Manager is used, then the entries added by VERITAS to the /etc/inetd.conf file should be left enabled and the Solstice DiskSuite software entries disabled.
Support
The secured Sun Cluster 3.0 software configuration implemented by the Solaris Security Toolkit suncluster30u3-secure.driver is a Sun Microsystems-supported configuration for agents described in this document. Only Sun Cluster 3.0 (5/02 or 12/01) software implementations using the agents explicitly described in this article and referenced in the Sun Cluster 3.0 (5/02 or 12/01) release notes are supported in hardened configurations.
NOTE
Hardening Sun Cluster 2.x, 3.0, and 3.0 (7/01) software is not supported. Only agents described in this article and listed in either of the Sun Cluster 3.0 (5/02 or 12/01) release notes are supported in hardened configurations.
The Solaris Security Toolkit is not a supported Sun product; only the end-configuration created by the Solaris Security Toolkit is supported. Solaris Security Toolkit support is available through the Sun SupportForum discussion group at http://www.sun.com/security/jass
NOTE
Sun Microsystems supports a hardened Sun Cluster 3.0 (5/02 or 12/01) cluster, using the agents specified in this document, whether security modifications are performed manually or through the use of the Solaris Security Toolkit software.
Using the Solaris Security Toolkit Software
The drivers described in this article are included in version 0.3.6 of the Solaris Security Toolkit software. We use this software to implement the hardening. Use this version, or a newer version, of the software when implementing the recommendations of this article. The Solaris Security Toolkit provides an error-free, standardized mechanism for performing the hardening process. Additionally, because it allows you to undo changes after they are made, we highly recommend that you use this software to perform the hardening process.
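As a sketch, a standalone hardening run might look like the following; the installation directory is an assumption, so verify the path and options against the Solaris Security Toolkit documentation for your version.

# Run the Sun Cluster 3.0 hardening driver in standalone mode:
cd /opt/SUNWjass
./jass-execute -d suncluster30u3-secure.driver

# Back out the changes made by a previous run, if necessary:
./jass-execute -u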
Solaris OE Defaults and Modifications
The Solaris OE configuration of a cluster node has many of the same issues as other default Solaris OE configurations. For example, too many daemons are enabled by default, and some of them are insecure. These insecure daemons include in.telnetd, in.ftpd, in.fingerd, and sadmind. For a complete list of default Solaris OE daemons and the security issues associated with them, refer to the Solaris Operating Environment Security: Updated for Solaris 8 OE Sun BluePrints OnLine article.
This article recommends that all unused services be disabled. Based on the Solaris OE installation cluster (SUNWCall) typically used for a Sun Cluster 3.0 node, there are over 80 recommended Solaris OE configuration changes to improve the security configuration of the Solaris OE image running on each node. While the SUNWCall Solaris OE cluster is typically used for cluster installations, only the End User cluster (SUNWCuser) is required. It is strongly recommended that you limit the number of Solaris OE services and daemons installed by using the Solaris OE cluster that contains the fewest packages.
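To verify which Solaris OE installation cluster a node was built from, inspect the cluster table-of-contents file; the output shown here is illustrative.

# cat /var/sadm/system/admin/CLUSTER
CLUSTER=SUNWCuser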
The typical hardening of a Solaris OE system involves commenting out all of the services in the /etc/inetd.conf file and disabling unneeded system daemons. All of the interactive services normally started from inetd are then replaced by Secure Shell (SSH). This approach cannot be used with Sun Cluster 3.0 software.
The primary reason for this limitation is that the volume management software requires several remote procedure call (RPC) services to be available. In addition, the Sun Cluster 3.0 software installs additional RPC-based services, including rpc.pmfd and rpc.fed.
Implementing these modifications is automated when you use the driver script suncluster30u3-secure.driver available in version 0.3.6 of the Solaris Security Toolkit software.
Disabling Unused Services
The security recommendations in this article include all Solaris OE modifications that do not affect required Sun Cluster 3.0 node functionality. Be aware that these modifications may not be appropriate for every node. In fact, it is likely that some of the services disabled by the default suncluster30u3-secure.driver script will affect some applications. Because applications and their service requirements vary, it is unusual for one configuration to work for all applications.
NOTE
Consider the role of a secured configuration in the context of the applications and services that the Sun Cluster 3.0 software supports. The security configuration presented in this article is a high watermark for system security, because every service that is not required by the Sun Cluster 3.0 software is disabled. This information should provide you with a clear idea of which services can and cannot be disabled without affecting the behavior of the Sun Cluster 3.0 software.
Recommendations and Exceptions
Our recommendations for securing the server configuration consist of modifying recommendations made in the Solaris Operating Environment Security: Updated for Solaris 8 Operating Environment Sun BluePrints OnLine article. We customize the recommendations to provide a configuration specifically for the supported agents.
The recommendations in this article improve the overall security posture of Sun Cluster 3.0 nodes. This improvement is made by dramatically reducing access points to the Sun Cluster 3.0 nodes and by installing secure access mechanisms. To streamline the implementation of these recommendations, we provide the Solaris Security Toolkit software, which automates many of the changes.
We made the following exceptions to the recommendations provided in the previously mentioned article, due to functionality that is required by the Sun Cluster 3.0 software and support constraints:
- The RPC system startup script is not disabled, because RPC is used by volume management software.
- The Solaris Basic Security Module (BSM) is not enabled. The BSM subsystem is difficult to optimize for appropriate logging levels and produces log files that are difficult to interpret. This subsystem should only be enabled at sites where you have the expertise and resources to manage the generation and data reconciliation tasks required to use BSM effectively.
- Solaris OE minimization (removing unnecessary Solaris OE packages from the system) is not supported with Sun Cluster 3.0 software.
Mitigating Security Risks of Solaris OE Services
Detailed descriptions of Solaris OE services and recommendations on how to mitigate their security implications are available in the following Sun BluePrints OnLine articles:
- Solaris Operating Environment Security: Updated for the Solaris 8 Operating Environment
- Solaris Operating Environment Network Settings for Security: Updated for Solaris 8
The recommendations are implemented by the Solaris Security Toolkit in either its standalone or JumpStart modes.
Using Scripts to Perform Modifications
Each of the modifications performed by the Solaris Security Toolkit suncluster30u3-secure.driver is organized into one of the following categories:
- Disable
- Enable
- Install
- Remove
- Set
- Update
The following paragraphs briefly describe each of these categories and the modifications the scripts within the driver perform. For a complete listing of the scripts included in the suncluster30u3-secure.driver, refer to the Solaris Security Toolkit Drivers directory.
For detailed information about what each of the scripts does, refer to the Sun BluePrints OnLine article titled The Solaris Security Toolkit - Internals: Updated for version 0.3.
In addition, the Solaris Security Toolkit copies files from the distribution directory to increase the security of the system. These system configuration files change the default behavior of syslogd, system network parameters, and other system configurations.
Disable Scripts
These scripts disable services on the system. Disabled services include the NFS client and server, the automounter, the DHCP server, printing services, and the window manager. The goal of these scripts is to disable all of the services that are not required by the system.
A total of 30 disable scripts are included with the Sun Cluster 3.0 software-hardening driver. These scripts impose modifications to disable all, or part, of the affected services and their configuration files; refer to the suncluster30u3-secure.driver in the Solaris Security Toolkit Drivers directory for the complete list.
Enable Scripts
These scripts enable the security features that are disabled by default on Solaris OE. These modifications include:
- Enabling optional logging for syslogd and inetd
- Requiring NFS client requests to originate from privileged ports
- Enabling process accounting
- Enabling improved sequence number generation according to RFC 1948
- Enabling optional stack protection and logging to protect against most buffer overflow attacks
Even where a service itself is disabled, its optional security features are enabled so that the service runs securely if it is enabled in the future. Several of these settings are illustrated in the sketch that follows.
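As a sketch, some of these modifications correspond to the following kernel and network settings; the entries are illustrative, so verify them against your Solaris 8 OE documentation before applying them.

* In /etc/system (comment lines in this file begin with an asterisk):
set nfssrv:nfs_portmon = 1        * require privileged NFS client ports
set noexec_user_stack = 1         * enable stack protection
set noexec_user_stack_log = 1     * log attempted stack executions

# In /etc/default/inetinit:
TCP_STRONG_ISS=2                  # RFC 1948 sequence number generation

A reboot is required for /etc/system changes to take effect.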
Install Scripts
These scripts create new files to enhance system security. In the Sun Cluster 3.0 driver, the following Solaris OE files are created (a sketch of creating them manually appears after the list of installed software):
- An empty /etc/cron.d/at.allow file to restrict access to the at commands
- An updated /etc/ftpusers file listing all system accounts to restrict system FTP access
- An empty /var/adm/loginlog file to log unsuccessful login attempts
- An updated /etc/shells file to limit which shells system users can use
- An empty /var/adm/sulog file to log su attempts
In addition to creating the preceding files, some install scripts add software to the system. For the Sun Cluster 3.0 nodes, the following software is installed:
- Recommended and Security Patch Clusters
- MD5 software
- FixModes software
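As a sketch, the created files could also be produced manually as follows; the ownership and permission values are typical hardening choices, not necessarily the Toolkit's exact settings.

# Create the empty log and access-control files with restrictive
# permissions (values are illustrative):
touch /var/adm/loginlog /var/adm/sulog /etc/cron.d/at.allow
chmod 600 /var/adm/loginlog /var/adm/sulog /etc/cron.d/at.allow
chown root:sys /var/adm/loginlog /var/adm/sulog

With these files in place, the Solaris OE logs failed login attempts to /var/adm/loginlog and su attempts to /var/adm/sulog, and an empty at.allow file denies at access to all users.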
Remove Scripts
Only one remove script is distributed with the Sun Cluster 3.0 driver, and it is used to remove unused Solaris OE system accounts. The following accounts are no longer used by the Solaris OE and can safely be removed (a manual-removal sketch follows the list):
- smtp
- nuucp
- listen
- nobody4
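As a sketch, removing these accounts by hand might look like the following; the Toolkit script automates this step.

# Delete the unused system accounts from the passwd and shadow files:
for acct in smtp nuucp listen nobody4; do
    passmgmt -d $acct
done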
Set Scripts
These scripts define security settings of the Solaris OE that are not configured by default. A total of 14 scripts are distributed with the Sun Cluster 3.0 driver, and they configure the following Solaris OE security features (a sketch of representative settings follows the list):
- root password
- ftpd banner
- telnetd banner
- ftpd UMASK
- login RETRIES
- Power restrictions
- System suspend options
- TMPFS size
- User password requirements
- User UMASK
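As a sketch, several of these settings live in the /etc/default files; the values shown are illustrative hardening choices, not necessarily the Toolkit's exact settings.

# In /etc/default/login:
RETRIES=3            # failed attempts allowed before login exits
UMASK=022            # default file-creation mask for login sessions

# In /etc/default/passwd:
PASSLENGTH=8         # minimum password length
MAXWEEKS=8           # maximum password age
MINWEEKS=1           # minimum password age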
Update Scripts
These scripts update configuration files that are shipped with the Solaris OE and that do not have all of their security settings properly set. The following configuration files are modified (see the sketch after the list):
- at.deny
- cron.allow
- cron.deny
- logchecker
- inetd.conf
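As a sketch of the intent behind the cron-related updates, restricting crontab usage to the root account might look like the following; treat the exact contents as illustrative rather than the Toolkit's literal changes.

# Permit only root to use crontab; when cron.allow exists, cron.deny
# is not consulted:
echo root > /etc/cron.d/cron.allow
chmod 644 /etc/cron.d/cron.allow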
Additional Daemons and Services
The Sun Cluster 3.0 software adds several additional daemons to a system. These include daemons running on the system and additional RPC services. The following daemons run on a default Sun Cluster 3.0 software installation:
# ps -ef | grep cluster
    root     4     0  0   Oct 25 ?        0:03 cluster
    root   416     1  0   Oct 25 ?        0:00 /usr/cluster/lib/sc/rpc.pmfd
    root    82     1  0   Oct 25 ?        0:00 /usr/cluster/lib/sc/clexecd
    root    83    82  0   Oct 25 ?        0:00 /usr/cluster/lib/sc/clexecd
    root   453     1  0   Oct 25 ?        0:01 /usr/cluster/lib/sc/rgmd
    root   426     1  0   Oct 25 ?        0:00 /usr/cluster/lib/sc/rpc.fed
    root   439     1  0   Oct 25 ?        0:00 /usr/cluster/bin/pnmd
The Sun Cluster 3.0 software installation adds the following RPC services to the /etc/inetd.conf file:
# Start of lines added by SUNWscu
100145/1 tli rpc/circuit_v wait root /usr/cluster/lib/sc/rpc.scadmd rpc.scadmd
100151/1 tli rpc/circuit_v wait root /usr/cluster/lib/sc/rpc.sccheckd rpc.sccheckd -S
# End of lines added by SUNWscu
The following RPC services are required by the Sun Cluster 3.0 software and must be present in the /etc/inetd.conf file:
# rpc.metad
100229/1 tli rpc/tcp wait root /usr/sbin/rpc.metad rpc.metad
# rpc.metamhd
100230/1 tli rpc/tcp wait root /usr/sbin/rpc.metamhd rpc.metamhd
The qualified configuration uses Solstice DiskSuite software, which requires the following RPC services in the /etc/inetd.conf file:
# rpc.metamedd - DiskSuite mediator
100242/1 tli rpc/tcp wait root /usr/sbin/rpc.metamedd rpc.metamedd
# rpc.metacld - DiskSuite cluster control
100281/1 tli rpc/tcp wait root /usr/sbin/rpc.metacld rpc.metacld
If you use VERITAS Volume Manager software instead of Solstice DiskSuite software, leave the appropriate VERITAS RPC entries in the /etc/inetd.conf file enabled and disable the unneeded Solstice DiskSuite software entries.
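As a sketch, disabling the unneeded volume manager entries by hand amounts to commenting them out and signaling inetd; back up the file first.

# Back up /etc/inetd.conf, comment out the unneeded entries (for
# example, rpc.metad, rpc.metamhd, rpc.metamedd, and rpc.metacld)
# by prefixing each line with '#', then make inetd reread its
# configuration:
cp /etc/inetd.conf /etc/inetd.conf.orig
pkill -HUP inetd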
Terminal Server Requirements
Sun Cluster 3.0 software does not require a specific terminal server as Sun Cluster 2.x software did. This change is a significant improvement from a security perspective. Terminal server connections frequently do not use encryption. Lack of encryption allows a malicious individual to sniff the network and "read" the commands being issued to the client. Frequently, these commands include an administrator logging in as root and providing the root password.
We strongly recommend that you use a terminal server that supports encryption. Specifically, we recommend a terminal server that implements Secure Shell (SSH). Terminal servers that support SSH are currently available from both Cisco (http://www.cisco.com) and Perle (http://www.perle.com).
If you cannot use a terminal server that supports encryption, then only connect terminal servers to a private management network. Although this helps isolate network traffic to the terminal servers, it is not as secure as a terminal server supporting SSH.
Node Authentication Options
Node authentication is how potential nodes must identify themselves before being allowed to join a cluster. Sun Cluster 3.0 software provides several options for node authentication. Ensuring that all nodes are properly authenticated is a critical aspect of cluster security. This section describes the available options and provides recommendations on which level of node authentication to use.
The available node authentication options in Sun Cluster 3.0 software are as follows:
- None (that is, any system is permitted to join the cluster)
- IP address
- UNIX
- Diffie-Hellman using DES
In addition, the scsetup command provides the following choices under option 6, New nodes:
*** New Nodes Menu ***

Please select from one of the following options:

    1) Prevent any new machines from being added to the cluster
    2) Permit any machine to add itself to the cluster
    3) Specify the name of a machine which may add itself
    4) Use standard UNIX authentication
    5) Use Diffie-Hellman authentication

    ?) Help

    q) Return to the Main Menu
At a minimum, the node authentication should be set up to require that new cluster nodes be added manually rather than automatically. Select option 1 to restrict the ability of systems to add themselves, then use option 3 to specify the name of the new cluster node. Behind these two options, scsetup runs the following commands, which you can also run manually:
# scconf -a -T node=.
# scconf -a -T node=phys-sps-1
The next consideration is how to validate that a node is who it says it is. The two alternatives are standard UNIX or Diffie-Hellman authentication.
The default is to use UNIX authentication. If a private interconnect is used to connect the nodes and the scconf command was used to restrict new nodes from joining, this approach is probably adequate.
In environments where other systems may attempt to join the cluster, or if the data on the cluster is particularly sensitive, we recommend using the Diffie-Hellman authentication method.
Diffie-Hellman authentication uses Secure RPC to authenticate the nodes in the cluster. This authentication requires that the public and private keys be properly set up on each of the nodes. The most effective way to do this task is through NIS+, because it simplifies the management and maintenance of these key pairs. However, it is possible to use Secure RPC without NIS+.
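As a sketch of the non-NIS+ case, establishing Secure RPC credentials on a node involves creating a key pair for the host and decrypting the secret key for the keyserv daemon; the hostname shown is illustrative.

# Create a public/private key pair for the host (run as root):
newkey -h phys-sps-1

# Decrypt root's secret key and store it in /etc/.rootkey so the
# node can authenticate after reboots:
keylogin -r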
For additional information on Secure RPC and Diffie-Hellman authentication, refer to the keyserv(1M), publickey(4), and nis+(1) man pages.