Installing a Cluster Grid
In this section, the installation of the cluster grid software stack is discussed, covering major issues, options and salient points particular to the cluster grid environment. For detailed step-by-step installation instructions for the individual software elements of the stack, refer to the appropriate documentation that is supplied with the product.
The previous section described how to evaluate which services to employ and which applications to support, and how those choices affect the tiers of the cluster grid. The cluster grid installation process involves making further decisions at a lower level, and might involve using some advanced installation options.
Solaris Installation Considerations
Solaris 8 operating environment installations should be performed according to the requirements of the software you plan to run on each node. If thin node compute hosts are used, the disks are often configured simply to supply scratch space, space for spool files, and space for core dump and crash files. Disk partitioning, in such cases, might simply include the / and /var file systems and swap.
Solaris JumpStart software can be used to great advantage in installing a large cluster grid. In addition to installing the basic operating environment on each node, it can also configure prerequisites for the various components, for example, the services port and admin user for Sun Grid Engine software. Some software components, such as the SunVTS™ diagnostic software, can also be installed directly from the JumpStart post-install script using the pkgadd command, while others require a custom post-install scripted procedure. The setup of the Solaris JumpStart environment is beyond the scope of this document, but this article provides guidelines for specific procedures in the sections that follow.
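For illustration, a minimal JumpStart profile implementing the thin-node partitioning described above might resemble the following sketch. The disk device, slice sizes, and software cluster are placeholders only and should be adjusted for the actual hardware and software requirements.

    install_type    initial_install
    system_type     standalone
    partitioning    explicit
    filesys         c0t0d0s0   4096   /
    filesys         c0t0d0s1   2048   swap
    filesys         c0t0d0s3   free   /var
    cluster         SUNWCreq

Similarly, a fragment of a finish script might prepare the Sun Grid Engine prerequisites and add the SunVTS package along the lines shown below. The service port number, user and group IDs, and package directory are assumptions to be replaced with the values registered for your site; in a finish script, the newly installed system is mounted under /a.

    #!/bin/sh
    # Illustrative JumpStart finish script fragment

    # Register the sge_commd service port (port number is a placeholder)
    echo "sge_commd       536/tcp" >> /a/etc/services

    # Create the Sun Grid Engine admin group and user (IDs are placeholders)
    echo "sgeadmin::300:" >> /a/etc/group
    echo "sgeadmin:x:300:300:SGE admin user:/home/sgeadmin:/bin/sh" >> /a/etc/passwd

    # Install SunVTS non-interactively from the JumpStart server (path is a placeholder)
    pkgadd -R /a -d /jumpstart/packages -n SUNWvts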
File Sharing
The topic of how to share files across the cluster grid is by nature complicated and dependent on the particular environment and on which considerations take priority. It boils down to a balance between performance and manageability. In this article, this issue is divided into two parts: sharing of binaries, and sharing of data.
Sharing of Binaries
Two elements of the cluster grid software stack offer a choice between a centralized and a distributed installation. For ease of management, a centralized install is preferable. In cases where minimizing network traffic is a priority, distributed installs should be considered.
Sun HPC ClusterTools 4.0 software provides the option at install time of performing a distributed install or installing all binaries on a single master host. The Sun Grid Engine install script, however, only supports installing the binaries on a file system that is shared across the SGE master host, compute tier hosts, and access nodes. In the next section, a customized installation is outlined that results in a bare-minimum sharing of grid engine files. If a centralized installation is chosen for both tools, they can be combined in the same file system, reducing the number of NFS shares.
In addition to the components of the stack, the other important binaries are those belonging to the applications that run on the compute hosts. If the applications will not change frequently, then installing them locally on each compute host is a possibility. However, this must be done with extreme care, since it can lead to management difficulties when it comes to patches, upgrades, and so on.
A Sun technology that can play an important role here is the Sun CacheFS™ software, which allows you to set up a local cache for an NFS-mounted file system that is automatically and transparently updated whenever the remote file system changes. It should be used only for remote file systems that change infrequently (read-mostly); otherwise, performance can degrade due to excessive network overhead. In this situation, there would be an application server on which all applications are placed. This directory would then be shared through NFS and locally mirrored with CacheFS software. In normal operation, the application binaries would be invoked from the cache, but any time the binaries are updated, the local cache on each compute host is refreshed automatically on the next invocation. Refer to the Solaris Advanced Administration Guide for details on CacheFS software.
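As a rough sketch, assuming the applications are exported by a host named appserver under /export/apps, the cache can be created on each compute host and the application directory mounted through it as follows:

    # Create the local cache directory (run once on each compute host)
    cfsadmin -c /var/cache/apps

    # Mount the NFS-exported application directory through the cache
    mount -F cachefs -o backfstype=nfs,cachedir=/var/cache/apps \
        appserver:/export/apps /apps

An equivalent entry can be placed in /etc/vfstab so that the cached mount is reestablished at boot time.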
Sharing of Data
The essence of computing is in the data files, both input and output. Although one could come up with methods for compute hosts to access data exclusively through stage-in and stage-out, some form of shared storage is usually inevitable. Apart from using a dedicated SAN (Storage Area Network), NFS is the most feasible way to share data files.
One way to maximize performance is to use a dedicated network for NFS. This could simply be regular Fast Ethernet, or it could be higher-performing Gigabit Ethernet. This network should be isolated from all other traffic, including and especially ordinary non-compute traffic (email, internet access, and so forth).
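One straightforward way to direct NFS traffic over such a network is to mount the data file system using a host name that resolves to the file server's interface on the dedicated network. A hypothetical /etc/vfstab entry, in which the server name datasrv-nfs and the paths are assumptions, might look like this:

    #device to mount          device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
    datasrv-nfs:/export/data  -               /data        nfs      -          yes            rw,hard,intr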
An important point to note is that Sun Grid Engine software by default expects home directories to be shared by compute hosts and submit hosts. If the same physical files are not accessible by both, care must be taken so that no submitted job makes any assumptions about files in the home directory, and that jobs refer only to files that are known to be accessible to the compute hosts.
In specific instances, CacheFS software can be used to speed up access to shared data directories. For example, in many biotechnology applications, pattern matching is done repeatedly against a common set of database files that are updated infrequently (once a week or less). In this case, putting these database files into a single shared file system and using CacheFS software on the compute hosts can cut down dramatically on execution time. Again, if updates to the database file system become too frequent, performance can be worse than without CacheFS software. Obviously, care must be taken so that output files are never created in the cached directory, but rather in another location.
Sun Grid Engine Software Installation Considerations
For a detailed examination of the Sun Grid Engine installation process, refer to the Sun Grid Engine documentation and man pages provided with the software. This article augments the installation documentation by discussing important options for SGE installation.
Installation of the Sun Grid Engine master host is interactive, requiring responses to the questions asked along the way. It is straightforward once the prerequisite conditions, such as the existence of the admin user and registration of the services port for the commd daemon, have been met.
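A minimal sketch of checking those prerequisites and starting the master installation follows. The admin user name, service name, and installation path are assumptions; substitute the values chosen for your site.

    # Verify that the admin user exists and the commd service port is registered
    id sgeadmin
    grep sge_commd /etc/services

    # Run the interactive master installation from the SGE distribution root
    cd /gridware/sge
    ./install_qmaster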
Installation of the compute hosts (exec hosts) is also done with a script, which is interactive by default. However, this can be time-consuming for large clusters, and there are several options to speed up the procedure:
The install script install_execd can be invoked with the -auto flag. This causes all the installation questions to be answered automatically with default values, which is usually acceptable. This can further be incorporated into scripted install procedures that do other things in addition to simply registering the host with the qmaster daemon (see the example following these options).
In the util directory of the SGE distribution, there is a script called install_cluster.sh that can be used to install SGE automatically on all hosts specified on the command line. This script requires remote root rsh or ssh access from the current host to all the candidate hosts. Consult the script for more details.
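As an illustration of the first approach, a simple wrapper run on each new compute host might look like the following. The paths are assumptions, and the host is assumed to have already been declared an administrative host on the qmaster.

    #!/bin/sh
    # Illustrative exec host installation wrapper

    SGE_ROOT=/gridware/sge
    export SGE_ROOT

    # Create a local spool directory owned by the admin user
    # (see the discussion of local spool directories below)
    mkdir -p /var/spool/sge
    chown sgeadmin /var/spool/sge

    # Register this host with the qmaster, answering all questions with defaults
    cd $SGE_ROOT
    ./install_execd -auto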
An option of Sun Grid Engine software that should be strongly considered is the use of local spool directories for compute hosts. By default, each compute host uses a subdirectory in the SGE distribution root to read and write information about jobs as they are running. This can result in a considerable volume of network traffic to the single shared directory. By configuring local spool directories, all of that traffic can be redirected to the local disk on each compute host, isolating it from the rest of the network and reducing I/O latency. The path to the spool directory is controlled by the execd_spool_dir variable; it should be set to a directory on the local compute host that is owned by the admin user, and that ideally can handle intensive reading and writing (for example, /var/spool/sge).
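The execd_spool_dir setting is part of the cluster configuration and is changed with the qconf command, which opens the configuration in an editor. A sketch follows; the host name node01 is an example only.

    # Modify the global cluster configuration and set, for example:
    #     execd_spool_dir   /var/spool/sge
    qconf -mconf global

    # Or add a host-specific configuration that overrides it for one exec host
    qconf -aconf node01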
By default, the Sun Grid Engine software distribution is installed on a file system that must be shared across the SGE master host, compute tier hosts, and access nodes. Usually, this does not cause a significant performance issue. However, in cases where very high simultaneous access to binaries occurs (such as when launching extremely large parallel jobs), or where NFS traffic needs to be kept to a minimum for some other reason, SGE can be installed locally on each compute host. In this case, rather than sharing the entire SGE distribution directory, it is sufficient to share only the $SGE_ROOT/$SGE_CELL/common directory, where $SGE_ROOT is the path to the SGE root and $SGE_CELL is the name of the SGE cell specified during the qmaster installation (typically, the cell name is default). The files that are then shared are mostly static configuration files, and no NFS traffic is incurred when binaries are invoked.
NOTE
This setup should always be used in conjunction with the SGE local spool directories as explained above. For submit and admin hosts, you can choose to install the binaries locally on all of them, or else have a central location from which those hosts can share the files. In all cases, it is recommended that the path name be the same for all hosts, so that $SGE_ROOT is the same, regardless of the actual location of the files.
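A rough sketch of this arrangement on a compute host follows. The path /opt/sge and the master host name sgemaster are assumptions, and the cell name is assumed to be default.

    # SGE binaries installed on the local disk of each compute host
    SGE_ROOT=/opt/sge
    export SGE_ROOT

    # Only the common directory is NFS-mounted from the master host,
    # at the same path on every host
    mount -F nfs sgemaster:/opt/sge/default/common /opt/sge/default/common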
Sun Management Center Installation Considerations
SunMC agents introduce a minimal ambient computational load on the host system. Therefore, if SunMC software is to be installed in the cluster grid, the benefits of comprehensive health monitoring of execution hosts must be balanced against the inevitable reduction in CPU and memory resources available for user applications.
If it is decided that the SunMC agents would put too much of a burden on the execution hosts, you can still use SunMC software to monitor those hosts without agents. In this case, the monitoring would be a simple SNMP ping, and the only thing that is tracked is whether or not the host is alive (accessible). For many environments, this can be sufficient; the administrator can choose to ignore other problems, and simply inspect a malfunctioning system manually or use the testing suite to perform periodic tests.
MPI Runtime Environments
A heterogeneous compute tier can provide a platform for OpenMP, MPI, threaded, and serial applications. Furthermore, it might be convenient to subdivide the resources available for MPI applications through the use of partitions. An MPI job submitted to the Sun HPC ClusterTools CRE is launched on a partition, a predefined logical set of nodes, that is currently enabled (that is, accepting jobs). A job runs on one or more nodes in that partition, but not on nodes in any other enabled partition.
Partitioning a cluster allows multiple jobs to execute concurrently, without the risk that jobs on different partitions will interfere with each other. This ability to isolate jobs can be beneficial in various ways. For example, if a cluster contains a mix of nodes whose characteristics differ, such as having different memory sizes, CPU counts, or levels of I/O support, the nodes can be grouped into partitions that have similar resources. Jobs that require particular resources then can be run on suitable partitions, while jobs that are less resource dependent can be relegated to less specialized partitions. The system administrator can selectively enable and disable partitions.
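As an illustration, once the administrator has defined and enabled a partition, a user can launch an MPI job on it with the CRE mprun command. The partition name and process count below are examples only.

    # Run a 16-process MPI job on the partition named "batch"
    mprun -p batch -np 16 ./my_mpi_app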