Securing the Sun Cluster 3.x Software
This article describes how to secure the Solaris Operating Environment (Solaris OE) and the Sun Cluster 3.x software. To provide a robust environment where Sun Cluster 3.x software can be deployed, very specific requirements are placed on the configuration of the Solaris OE. Before the release of Sun Cluster 3.0 (12/01) software, no secured configurations were supported.
By implementing the recommendations for the supported agents, you can increase the reliability, availability, serviceability, and security (RASS) of systems running the Sun Cluster 3.x software. These objectives are accomplished by securing the servers so that they are less susceptible to attack.
This article contains the following topics:
- "Background Information" on page 2
- "Securing Sun Cluster 3.x Nodes" on page 12
- "Maintaining a Secure System" on page 25
- "About the Author" on page 26
- "Acknowledgements" on page 26
- "Related Resources" on page 27
Background Information
This section contains the following topics:
- "Assumptions and Limitations" on page 2
- "Qualified Software Components" on page 3
- "Support" on page 6
- "Using the Solaris Security Toolkit Software" on page 6
- "Solaris OE Defaults and Modifications" on page 6
- "Sun Cluster Software Daemons and Services" on page 9
- "Terminal Server Requirements" on page 10
- "Node Authentication Options" on page 10
Assumptions and Limitations
In this article, our recommendations are based on several assumptions and limitations about what can be done to secure Sun Cluster 3.x nodes in a Sun-supported configuration. Our recommendations assume a platform based on Solaris OE version 8 or 9 and the Sun Cluster 3.x software. We use the Sun Cluster 3.1 software version in this article.
NOTE
Before the release of Sun Cluster 3.0 (12/01) software, no secured configurations were supported.
Solaris OE hardening can be interpreted in a variety of ways. For the purposes of developing a hardened server configuration, the recommendations in this article represent the maximum possible Solaris OE hardening: anything that can be hardened is hardened. Anything that is not hardened is left unmodified for the reasons described in this article.
Be aware that hardening Solaris OE configurations to the level described in this article might not be appropriate for your environment. For some environments, you may want to perform fewer hardening operations than recommended. The configuration remains supported in these cases; however, additional hardening beyond what is recommended in this article is not supported.
Minimizing the Solaris OE or removing Solaris OE packages to minimize security exposure is not a supported option on Sun Cluster 3.x nodes at this time. Only the hardening tasks described in this article are supported for Solaris OE systems with Sun Cluster 3.x software running supported agents.
NOTE
Standard security rules apply to hardening cluster nodes: That which is not specifically permitted is denied.
Qualified Software Components
The configurations described in this article have the following minimum software requirements:
- Solaris 8 OE (2/02) software
- Solaris OE packages and installation
- Sun Cluster 3.1 software
- Supported agents
- ORACLE RAC and r* service limitations
- Cluster interconnect links
- Solstice DiskSuite software
The following subsections describe each of these components.
Solaris 8 OE
The use of Solaris 9 OE as the core OE for secured Sun Cluster 3.x nodes is supported.
However, this article is based on Solaris 8 OE (2/02). All of the hardening results presented in this article were produced on this version of the Solaris OE. Using versions other than Solaris 8 OE might produce results that are slightly different than those presented in this article.
Solaris OE Packages and Installation
Sun Cluster 3.x software requires only the Solaris OE End User cluster. It is strongly recommended that this Solaris OE cluster be used instead of the Entire Distribution.
Minimizing the number of Solaris OE packages installed directly reduces the number of services to disable, the quantity of patches to install, and the number of potential vulnerabilities on the system.
NOTE
This article neither addresses how to install the Solaris OE and Sun Cluster 3.x software, nor how to configure the cluster nodes.
Sun Cluster 3.x software allows you to automate the installation of the cluster and Solaris OE software through JumpStart software. Correspondingly, you can include the hardening steps performed by the Solaris Security Toolkit software in the JumpStart installation process. This article does not describe methods for integrating the hardening process documented in this article with JumpStart software. For information about this topic, refer to the Sun Cluster 3.x and Solaris Security Toolkit documentation.
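If JumpStart software is used, the End User software group recommended earlier can be selected in the JumpStart profile. The following is only an illustrative fragment; the remainder of the profile (disk layout, locales, and so on) is site specific.

install_type    initial_install
cluster         SUNWCuser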
Sun Cluster 3.x Software
All Sun Cluster 3.x versions support the hardened configurations described in this article.
Sun Cluster 3.x software provides mission-critical capabilities to an organization. While the Sun Cluster 3.x software addresses issues such as fault tolerance, failover, and performance, it is very important that the systems running Sun Cluster 3.x software are protected against malicious misuse and other attacks such as denial of service. The most effective mechanism for doing this is to configure the nodes in a cluster so that they protect themselves against attack.
Supported Agents
The most current listing of agents and their corresponding software product versions supported in a hardened configuration is available in the Release Notes distributed with the Sun Cluster 3.x software. When determining the versions of supported software, refer to the Release Notes for the version of Sun Cluster 3.x software being used.
ORACLE RAC and r* Service Limitations
During ORACLE RAC installation, if the option to install RAC on all cluster nodes is chosen, the ORACLE Installer uses rsh and rcp to copy files to the other cluster nodes. Also, other ORACLE configuration tools (for example, netca) use rsh to modify configuration files on other cluster nodes.
NOTE
When using the Solaris Security Toolkit Sun Cluster 3.x driver, both rsh and rcp are disabled by default. These services are insecure and should not be left enabled on a secured cluster.
If an administrator does not want to change the security settings, it is possible to install RAC individually on each node and set up the configuration files manually on each node.
In sites where the availability of rsh and rcp functionality is critical, the Secure Shell (SSH) provides a secure equivalent (ssh in place of rsh, and scp in place of rcp) when configured properly. These commands provide an encrypted and authenticated mechanism for the ORACLE software to perform tasks on remote machines.
Configure SSH to permit remote login without passwords, then replace the system-provided rsh and rcp binaries with links to the SSH commands, as sketched below. In this way, you can provide secure rsh and rcp functionality through links. This approach simplifies the installation and configuration of ORACLE RAC while still maintaining a secure posture.
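The following is a minimal sketch of this substitution. It assumes that OpenSSH is installed under /usr/local/bin and that passwordless (key-based or host-based) authentication has already been configured between the cluster nodes; the paths are assumptions and may differ on your systems.

# mv /usr/bin/rsh /usr/bin/rsh.orig
# mv /usr/bin/rcp /usr/bin/rcp.orig
# ln -s /usr/local/bin/ssh /usr/bin/rsh
# ln -s /usr/local/bin/scp /usr/bin/rcp

The original binaries are preserved under new names so that the change can be reversed if necessary.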
Cluster Interconnect Links
It is critical to the overall security of the cluster that cluster interconnect links are kept private and are not exposed to a public network. Sensitive information about the health of the cluster and information about the file system is shared over this link.
We strongly recommend that these interconnects be implemented using separate and dedicated network equipment. From a security and availability perspective, we discourage the use of VLANs because they typically restrict packets based only on tags added by the switch. Minimal, if any, assurance is provided for validating these tags, and no additional protection against directed Address Resolution Protocol (ARP) attacks is gained.
Solstice DiskSuite Software
The configuration in this article assumes the use of Solstice DiskSuite software instead of VERITAS Volume Manager. If VERITAS Volume Manager is used, then the entries added by VERITAS to the /etc/inetd.conf file should be left enabled and the Solstice DiskSuite software entries disabled.
Support
The secured Sun Cluster 3.x software configuration implemented by the Solaris Security Toolkit suncluster3x-secure.driver is a Sun Microsystems-supported configuration for agents described in this document. Only Sun Cluster 3.x software implementations using the agents explicitly described in this article and referenced in the Sun Cluster 3.x Release Notes are supported in hardened configurations.
NOTE
Hardening Sun Cluster 2.x is not supported. Only agents listed in the Sun Cluster 3.x Release Notes are supported in hardened configurations when the hardening is performed based on the recommendations contained in this article.
For information on the supportability of the Solaris Security Toolkit, refer to its documentation.
NOTE
Sun Microsystems supports hardening Sun Cluster 3.x clusters whether security modifications are performed manually or through the use of the Solaris Security Toolkit software.
Using the Solaris Security Toolkit Software
The drivers described in this article are included in version 0.3.10 of the Solaris Security Toolkit software. We use this software to implement the hardening. Use this version, or newer versions, of the software when implementing the recommendations of this article. The Solaris Security Toolkit provides an error-free, standardized mechanism for performing the hardening process. Additionally, because it allows you to undo changes after they are made, we highly recommend that you use this software to perform the hardening process.
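As an illustration, a standalone hardening run might look like the following. The installation path /opt/jass-0.3.10 is an assumption and depends on where the Toolkit software was extracted on your system.

# cd /opt/jass-0.3.10
# ./jass-execute -d suncluster3x-secure.driver

Refer to the Solaris Security Toolkit documentation for the corresponding undo procedure and for running the driver from a JumpStart finish script.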
Solaris OE Defaults and Modifications
The Solaris OE configuration of a cluster node has many of the same issues as other default Solaris OE configurations. For example, too many daemons are enabled by default, and some of them are insecure. Insecure daemons include in.telnetd, in.ftpd, in.fingerd, and sadmind. For a complete list of default Solaris OE daemons and the security issues associated with them, refer to the "Solaris Operating Environment Security: Updated for Solaris 9 OE" Sun BluePrints OnLine article.
This article recommends that all unused services be disabled. Based on the Solaris OE installation cluster (SUNWCall) typically used for a Sun Cluster 3.x node, there are over 100 recommended Solaris OE configuration changes to improve the security configuration of the Solaris OE image running on each node. While the SUNWCall Solaris OE cluster is typically used for cluster installations, only the SUNWCuser (End User) cluster is required. It is strongly recommended that you limit the number of Solaris OE services and daemons installed by using the Solaris OE cluster that contains the fewest packages.
The typical hardening of a Solaris OE system involves commenting out all of the services in the /etc/inetd.conf file and disabling unneeded system daemons. All of the interactive services normally started from inetd are then replaced by Secure Shell (SSH). This approach cannot be used with Sun Cluster 3.x software.
The primary reason for this limitation is that the volume management software requires several remote procedure call (RPC) services to be available. In addition, the Sun Cluster 3.x software installs additional RPC-based services.
Implementing these modifications is automated when you use the driver script suncluster3x-secure.driver available in version 0.3.10 of the Solaris Security Toolkit software.
Disabling Unused Services
The security recommendations in this article include all Solaris OE modifications that do not affect required Sun Cluster 3.x node functionality. Be aware that these modifications may not be appropriate for every Sun Cluster environment. In fact, it is possible that some of the services disabled by the default suncluster3x-secure.driver script will affect some applications. Because applications and their service requirements vary, it is unusual for one configuration to work for all applications. Therefore, the security modifications of each Sun Cluster environment may vary slightly based on the applications being run in that environment.
NOTE
Consider the role of a secured configuration in the context of the applications and services that the Sun Cluster 3.x software supports. The security configuration presented in this article is a high watermark for system security, because every service that is not required by the Sun Cluster 3.x software is disabled. This information should provide you with a clear idea of which services can and cannot be disabled without affecting the behavior of the Sun Cluster 3.x software.
Recommendations and Exceptions
Our recommendations for securing the server configuration consist of modifying recommendations made in the Sun BluePrints OnLine article "Solaris Operating Environment Security: Updated for Solaris 9 Operating Environment." We customize the recommendations to provide a configuration specifically for the supported agents.
The recommendations in this article improve the overall security posture of Sun Cluster 3.x nodes. This improvement is made by dramatically reducing access points to the Sun Cluster 3.x nodes and by installing secure access mechanisms. To streamline the implementation of these recommendations, we provide the Solaris Security Toolkit software, which automates many of the changes.
We made the following exceptions to the recommendations provided in the previously mentioned article, due to functionality that is required by the Sun Cluster 3.x software and support constraints:
- RPC system startup script is not disabled, because RPC is used by volume management software.
- Solaris basic security module (BSM) is not enabled. The BSM subsystem is difficult to optimize for appropriate logging levels and produces log files that are difficult to interpret. This subsystem should only be enabled at sites where you have the expertise and resources to manage the generation and data reconciliation tasks required to use BSM effectively.
- Solaris OE minimization (removing unnecessary Solaris OE packages from the system) is not supported with Sun Cluster 3.x software.
Mitigating Security Risks of Solaris OE Services
Detailed descriptions of Solaris OE services and recommendations on how to mitigate their security implications are available in the following Sun BluePrints OnLine articles:
- "Solaris Operating Environment Security: Updated for the Solaris 9 Operating Environment"
- "Solaris Operating Environment Network Settings for Security: Updated for Solaris 8 Operating Environment"
The recommendations are implemented by the Solaris Security Toolkit in either its standalone or JumpStart modes.
Sun Cluster Software Daemons and Services
The Sun Cluster 3.x software adds several daemons and services to a system. These include daemons running on the system and additional RPC services. The following daemons run on a default Sun Cluster 3.1 software installation:
# ps -ef | grep cluster
    root     4     0  0   Oct 25 ?        0:03 cluster
    root   416     1  0   Oct 25 ?        0:00 /usr/cluster/lib/sc/rpc.pmfd
    root    82     1  0   Oct 25 ?        0:00 /usr/cluster/lib/sc/clexecd
    root    83    82  0   Oct 25 ?        0:00 /usr/cluster/lib/sc/clexecd
    root   453     1  0   Oct 25 ?        0:01 /usr/cluster/lib/sc/rgmd
    root   426     1  0   Oct 25 ?        0:00 /usr/cluster/lib/sc/rpc.fed
    root   439     1  0   Oct 25 ?        0:00 /usr/cluster/bin/pnmd
    root  2260     1  0   Dec 12 ?        0:00 /usr/cluster/lib/sc/cl_eventd
    root  2356     1  0   Dec 12 ?        0:23 /var/cluster/spm/bin/scguieventd
The Sun Cluster 3.1 software installation installs the following additional RPC services in the /etc/inetd.conf file:
# Start of lines added by SUNWscu
100145/1 tli rpc/circuit_v wait root /usr/cluster/lib/sc/rpc.scadmd rpc.scadmd
100151/1 tli rpc/circuit_v wait root /usr/cluster/lib/sc/rpc.sccheckd rpc.sccheckd -S
# End of lines added by SUNWscu
The following RPC services are required by the Sun Cluster 3.x software and must be present in the /etc/inetd.conf file:
# rpc.metad
100229/1 tli rpc/tcp wait root /usr/sbin/rpc.metad rpc.metad
# rpc.metamhd
100230/1 tli rpc/tcp wait root /usr/sbin/rpc.metamhd rpc.metamhd
The qualified configuration uses Solstice DiskSuite software, which requires the following RPC services in the /etc/inetd.conf file:
# rpc.metamedd - DiskSuite mediator
100242/1 tli rpc/tcp wait root /usr/sbin/rpc.metamedd rpc.metamedd
# rpc.metacld - DiskSuite cluster control
100281/1 tli rpc/tcp wait root /usr/sbin/rpc.metacld rpc.metacld
If you use VERITAS Volume Manager software instead of Solstice DiskSuite software, leave the appropriate VERITAS RPC entries in the /etc/inetd.conf file enabled and disable the unneeded Solstice DiskSuite software entries.
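For example, on a node running VERITAS Volume Manager, the Solstice DiskSuite entries (rpc.metad, rpc.metamhd, rpc.metamedd, and rpc.metacld) might be commented out of /etc/inetd.conf and the change applied by restarting inetd. This is a sketch only; the exact entries present depend on the packages installed on the node.

# vi /etc/inetd.conf
# pkill -HUP inetd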
Terminal Server Requirements
Sun Cluster 3.x software does not require a specific terminal server as Sun Cluster 2.x software did. This change is a significant improvement from a security perspective. Terminal server connections frequently do not use encryption. Lack of encryption allows a malicious individual to sniff the network and "read" the commands being issued over the connection. Frequently, these commands include an administrator logging in as root and providing the root password.
We strongly recommend that you use a terminal server that supports encryption. Specifically, we recommend a terminal server that implements Secure Shell (SSH).
If you cannot use a terminal server that supports encryption, then only connect terminal servers to a private management network. Although this helps isolate network traffic to the terminal servers, it is not as secure as a terminal server supporting SSH.
Node Authentication Options
Node authentication determines how potential nodes must identify themselves before they are allowed to join a cluster. Sun Cluster 3.x software provides several options for node authentication. Ensuring that all nodes are properly authenticated is a critical aspect of cluster security. This section describes the available node authentication options and provides recommendations on which level of node authentication to use.
The available node authentication options in Sun Cluster 3.x software are as follows:
- None (that is, any system is permitted to join the cluster)
- IP address
- UNIX
- Diffie-Hellman using DES
In addition, the scsetup command provides the following submenu under the New Nodes menu option:
*** New Nodes Menu ***

Please select from one of the following options:

    1) Prevent any new machines from being added to the cluster
    2) Permit any machine to add itself to the cluster
    3) Specify the name of a machine which may add itself
    4) Use standard UNIX authentication
    5) Use Diffie-Hellman authentication
    6) New nodes

    ?) Help
    q) Return to the Main Menu
At a minimum, the node authentication should be set up to require that new cluster nodes be added manually rather than automatically. Select option 1 to restrict the ability of systems to add themselves, then use option 3 to specify the name of the new cluster node. These two options run scconf with the following commands, which you can run manually:
# scconf -a -T node=.
# scconf -a -T node=phys-sps-1
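After making these changes, you can check the resulting settings with the scconf reporting option. The grep pattern below is only illustrative, and the exact output format varies by Sun Cluster release.

# scconf -p | grep -i authentication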
The next consideration is how to validate that a node is who it says it is. The two alternatives are standard UNIX or Diffie-Hellman authentication.
The default is to use UNIX authentication. If a private interconnect connects the nodes and the scconf command is used to restrict new nodes from joining, this approach is probably adequate.
In environments where other systems may attempt to join the cluster, or if the data on the cluster is particularly sensitive, then we recommend using the Diffie-Hellman authentication method.
Diffie-Hellman authentication uses Secure RPC to authenticate the nodes in the cluster. This authentication requires that the public and private keys be properly set up on each of the nodes. The most effective way to do this task is through NIS+, because it simplifies the management and maintenance of these key pairs. However, it is possible to use Secure RPC without NIS+.
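As a rough sketch of what this setup involves without NIS+ (that is, with publickey: files configured in /etc/nsswitch.conf), a key pair for root can be created with newkey and cached with keylogin. The node name phys-sps-1 is reused from the earlier example, and the exact invocation depends on the name service in use; consult the referenced man pages for the authoritative procedure.

# newkey -h phys-sps-1
# keylogin -r

The keylogin -r command stores root's decrypted secret key in /etc/.rootkey so that the keyserv daemon can use it across reboots.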
For additional information on Secure RPC and Diffie-Hellman authentication, refer to the keyserv(1M), publickey(4), and nis+(1) man pages.