- Information Gathering
- Design Implementation Decisions
- Installing a Cluster Grid
- Managing a Cluster Grid
- Cluster Grid Example Implementations
- Related Resources
Design Implementation Decisions
In this section, the information gathered is translated into recommendations and requirements for the Sun Cluster Grid design. For each tier of the cluster grid, the impact of this information is assessed.
Access Tier
At a minimum, the access tier has to be sized to support logins, telnet sessions, and job submissions (running simple binaries). Non-interactive jobs are executed through the queueing system by submitting a simple batch shell script that acts as a wrapper around the executable; the script can pass information to the queueing system and perform simple setup or cleanup tasks.
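For illustration, a minimal wrapper script of this kind might look like the following sketch; the resource limits, file names, and scratch location are placeholders, and the embedded #$ lines are read by qsub as if they were command-line options.

    #!/bin/sh
    # Minimal Sun Grid Engine batch wrapper (illustrative values only).
    #$ -N example_job            # job name shown by qstat
    #$ -cwd                      # run the job from the submission directory
    #$ -o example_job.out        # standard output file
    #$ -e example_job.err        # standard error file
    #$ -l h_rt=01:00:00          # pass a hard run-time limit to the queueing system

    # Simple setup, execution, and cleanup around the real binary.
    SCRATCH=${TMPDIR:-/tmp}/example_job.$$
    mkdir -p $SCRATCH
    ./my_binary < input.dat > $SCRATCH/result.dat
    cp $SCRATCH/result.dat .
    rm -rf $SCRATCH

Such a script would be submitted with qsub example_job.sh.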
If users' own workstations are used to submit jobs, this represents a negligible load. However, if a single system is required to support hundreds of remote logins, it would be wise to ensure some dedicated resources.
Administrative access to the cluster grid is provided through the Sun Grid Engine and SunMC software. Both can be administered through a command-line or GUI interface. Sun Grid Engine binaries and the GUI are supported on both the Solaris™ and Linux operating environments. Binaries can also be obtained for other operating systems from the open source site (see "Related Resources" on page 27).
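For example, routine command-line administration might look like the following sketch; the output depends on the local configuration, and option sets vary slightly between Sun Grid Engine releases.

    # Illustrative Sun Grid Engine administration commands.
    qhost                # status and load of all execution hosts
    qstat -f             # full listing of queues and the jobs in them
    qconf -sql           # show the list of configured queues
    qmon &               # launch the administration GUI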
The SunMC console, based on Java™, is supported on both SPARC systems running Solaris Operating Environment software (versions 8, 7 and 2.6) and Intel-based systems running Microsoft Windows 2000, NT 4.0 (with Service Pack 3 or 4), 98 and 95. For SPARC systems, the minimum system requirements for running the SunMC console are: Ultra 1 (or equivalent), 256 Mbyte RAM, 130 Mbyte Swap. For MS Windows Intel-compatible systems, the minimum requirements for running the SunMC Console are: 300 MHz Pentium, 256 Mbyte RAM, 35 Mbyte free disk space.
If web-based access is to be implemented, for example, using the Sun ONE portal server, the server must be sized appropriately for the expected load, taking into account the session characteristics and headroom for future scalability.
Management Tier
The decisions to be made when designing the management tier depend on which services are to be provided, and the expectations for future scalability. For small cluster grids with minimal service provision (DRM only), a single-processor machine might suffice. As discussed in the previous article titled "Introduction to the Cluster Grid Part 1", the master host functionality for Sun Grid Engine software is provided primarily through two daemons. Moving beyond a dedicated dual-processor machine for the SGE master therefore results in limited performance enhancement. If a multiprocessor machine (more than two processors) is employed in the cluster grid as the SGE master, it would be appropriate for this machine to provide other services. For example, an eight-way server could act as the SGE master host, SunMC server, NFS server, and backup server, as well as supporting computational tasks.
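As a quick sanity check on the management tier, the two master-host daemons can be verified on the SGE master as sketched below; the daemon names are those used by Sun Grid Engine 5.x.

    # Confirm that the Sun Grid Engine master daemons are running.
    ps -ef | grep sge_qmaster | grep -v grep
    ps -ef | grep sge_schedd  | grep -v grep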
The load from the SunMC server is caused by normal management operations, including periodic data acquisition, alarm rule processing, alarm annunciation, alarm action execution, and processing of client requests. The generated load is proportional to the rate at which data is gathered, the amount of data gathered, the number of alarms detected, and the number of user requests. The percentage of CPU resources consumed depends on the number and type of modules loaded on the system, the configuration of these modules, and the computational capacity of the host system. In general, even on low-end machines with a comprehensive suite of modules loaded and high management activity, the agent should never consume more than a fraction of the CPU resources. As with CPU consumption, the memory consumed by an agent depends on multiple factors.
The primary considerations are the number of modules loaded and the amount of information being monitored by these modules. Loading many management modules on an agent inevitably increases its footprint requirement. Similarly, agents managing hosts with large disk arrays or other highly scalable assets probably require more virtual memory, as the sheer volume of management information passing through them increases. In general, a base agent with the default set of modules loaded will be under 10 Mbyte in size, and under typical operation will only require 50 to 60% of this to be resident in physical memory.
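The footprint of a particular agent can be checked with standard Solaris tools, as in the sketch below; the agent process name (esd) is an assumption and may differ between SunMC releases.

    # Inspect the resident memory of the SunMC agent (process name assumed).
    AGENT_PID=`pgrep esd`
    pmap -x $AGENT_PID | tail -1     # the summary line reports resident Kbytes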
NFS server sizing is a complex topic beyond the scope of this document. Obviously, access to data files is a primary consideration, and the design is dictated by the application and the type of work being done. The following list summarizes the other elements of a cluster grid that might require a shared file system.
- Sun Grid Engine software: By default, the SGE directory structure is shared across the cluster grid so that all execution, submit, and administration hosts access the same physical database (a minimal sharing sketch follows this list). Non-default arrangements are discussed in "Sun Grid Engine Software Installation Considerations" on page 13.
- User's home directories: As with nearly all DRMs, the input files and executables are arranged by the user, usually in their home directory or some working directory. When the job is submitted, by default the execution host must be able to access the files over a shared file system. Methods to minimize file-sharing network traffic are discussed in "File Sharing" on page 11.
- Sun HPC ClusterTools™ 4.0 binaries: Two installation methods are available, a distributed install or a centralized install.
- License key server and application servers: Accessing application binaries from a shared location is beneficial because only one version of these files needs to be maintained for upgrades, bug fixes, and so forth.
- Installation servers: These should have access to sufficient disk space to hold multiple Solaris images, software images, and flash archives, taking into account any RAID implementations.
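As a minimal sketch of how these shared areas might be exported from a Solaris NFS server and mounted on the other cluster grid hosts (the server name nfsserver and the paths /opt/sge and /export/home are assumptions):

    # On the NFS server: entries added to /etc/dfs/dfstab, then run shareall.
    share -F nfs -o rw /opt/sge
    share -F nfs -o rw /export/home

    # On each client: /etc/vfstab entries so the file systems mount at boot.
    # device to mount        fsck  mount point  type  pass  boot  options
    nfsserver:/opt/sge        -    /opt/sge     nfs   -     yes   rw,hard,intr
    nfsserver:/export/home    -    /home        nfs   -     yes   rw,hard,intr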
Compute Tier
The decision of which hardware to implement in the compute tier is based primarily on maximizing performance/price. The overall hardware profile should closely match the user application profile. The hardware profiles pertaining to the throughput and highly parallel environments are at opposite extremes.
- Throughput: Large numbers of thin nodes. Often the key requirement is to maximize the number of processors per unit volume rather than individual processor performance.
- Highly parallel: Depending on various attributes of the application, either a smaller number of large SMP nodes or a large number of thin nodes supported by a cluster runtime environment (CRE) will be appropriate.
A typical mixed load cluster grid for an academic site, for example, might consist of:
- A distributed-memory network of workstations (NOW) interconnected with a specialized high-bandwidth, low-latency interconnect and Ethernet.
- A number of low- to mid-range independent servers for general tasks, interactive use, large-memory serial jobs, and so on.
- A large SMP system to support large OpenMP applications or other message passing applications that benefit from ultra-low-latency communications.
- Workstations in student labs with standard Fast Ethernet connections, to be used at night, on weekends, and on holidays for smaller one-CPU jobs.
Direct attached disk space on compute nodes that are part of a NOW cluster is usually used purely as scratch space and caching.
In a throughput environment, the compute tier should provide the resource to meet demand on some timescale. In FIGURE 1, the compute tier is sized to complete the submitted jobs on a daily basis. While the rate of job submission peaks during working hours, the available compute power enables jobs to be completed by the start of the next working day.
FIGURE 1 Example of Daily Profile for Cluster Workload
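As a back-of-the-envelope illustration of this kind of sizing, the sketch below estimates the number of CPUs needed to drain a day's worth of jobs overnight; the job count, average run time, and headroom factor are invented for the example.

    #!/bin/sh
    # Rough compute tier sizing for a throughput workload (illustrative only).
    JOBS_PER_DAY=2000         # expected daily job submissions (assumed)
    CPU_HOURS_PER_JOB=0.5     # average CPU time per job (assumed)
    DRAIN_HOURS=24            # jobs must complete before the next working day
    HEADROOM=1.3              # allowance for peaks, reruns, and node failures
    awk -v j=$JOBS_PER_DAY -v c=$CPU_HOURS_PER_JOB -v d=$DRAIN_HOURS -v h=$HEADROOM \
        'BEGIN { printf "Approximate CPUs required: %d\n", (j * c / d) * h + 0.999 }'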
Memory requirements and cache: Some applications benefit a great deal from processors with large caches. In such cases, the aim is to maximize the proportion of the active data retained in cache, giving access times an order of magnitude lower than for memory-resident data.
Networking Hardware
Three interconnect types should be considered: serial, Ethernet and specialized low-latency interconnects.
Serial
A serial network allows the system administrator to gain console access to all the machines in a cluster. For large environments, this is a tremendous convenience, allowing almost complete control over all systems from a single remote location. The use of a terminal concentrator gives the administrator access to multiple console ports over the Ethernet network.
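In practice, reaching a node's console through a terminal concentrator typically looks like the sketch below; the concentrator host name and the mapping from serial ports to TCP ports are assumptions, since the numbering scheme depends on the concentrator model.

    # Reach the console of the node wired to serial port 2 of the
    # terminal concentrator (TCP port numbering is model-specific).
    telnet tc-rack1 5002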
Ethernet
The network load in a cluster grid will originate from a number of activities:
- MPI or PVM message-passing communications at runtime for parallel applications in a distributed NOW
- NFS traffic from various sources, such as the following:
  - Cluster grid services accessing binaries, spool files, and so on
  - User directories being accessed at runtime for executables, input files, and so on
- Sun Grid Engine communications
- Data transfer from backups and installs
The load generated by this traffic (especially if no MPI traffic is involved) is handled satisfactorily by standard Ethernet or gigabit Ethernet. Techniques for reducing network load by minimizing file sharing are covered in the section "File Sharing" on page 11. Ethernet capacity can be scaled through the use of multiple Ethernet cards. Shared file systems, for example, can be implemented through dedicated interfaces to separate NFS traffic from other network traffic.
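A minimal Solaris sketch of dedicating a second interface to NFS traffic is shown below; the interface name qfe1, the addresses, and the host name nfs-net-host are assumptions.

    # Bring up a second interface reserved for NFS traffic.
    ifconfig qfe1 plumb
    ifconfig qfe1 192.168.10.1 netmask 255.255.255.0 up

    # Make the configuration persistent across reboots.
    echo "nfs-net-host" > /etc/hostname.qfe1
    echo "192.168.10.1  nfs-net-host" >> /etc/hosts

    # Clients then mount shared file systems from nfs-net-host rather than
    # from the host name on the general-purpose network.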
Specialized Interconnects
Particular care should be taken to avoid mixing MPI traffic with other standard Ethernet traffic if the applications are communication intensive. One way to avoid this is to invest in a specialized low-latency interconnect for handling message-passing communications. Alternatively, MPI applications may be run within large SMP machines, where the interprocess communication takes place across the ultra-low-latency specialized backplane.
Typical latencies for MPI communications between nodes over standard or gigabit Ethernet are over 100 microseconds. The maximum achievable bandwidth over gigabit Ethernet is around 700 Mbits per second. For many parallel applications, these latencies and bandwidths introduce a severe bottleneck to the calculations, and such specialized interconnects alleviate the bottleneck.
A Myrinet network is composed of interfaces, links, and switches. For Sun machines, the PCI interface is used with drivers tuned for Sun architecture, available from Myricom. Myricom supports a loadable protocol module (PM) for the Sun HPC ClusterTools 4.0 software. The PMs are used by Sun HPC ClusterTools software to carry the traffic between processes to exploit the low latency and high data rates of Myrinet. The Myrinet interconnect can reduce latencies by an order of magnitude, and can give higher aggregate bandwidths for compute clusters. Use of these specialized interconnects results in an added advantage of reduced load on the host processor as the message routing is processed in hardware on the Myrinet interface card.
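For reference, an MPI application is typically compiled and launched under Sun HPC ClusterTools along the lines of the sketch below; the program name is a placeholder, and whether the traffic travels over Myrinet or Ethernet is determined by the cluster runtime environment configuration rather than by these commands.

    # Compile an MPI program with the ClusterTools compiler wrapper.
    mpcc -o mpi_app mpi_app.c -lmpi

    # Launch 16 processes through the cluster runtime environment (CRE).
    mprun -np 16 ./mpi_app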
FIGURE 2 Three Types of Cluster Grid Interconnects
Storage
A full treatment of storage options is beyond the scope of this document, so only the major options are summarized here.
If a large SMP server is part of the cluster grid compute tier, it might be appropriate for this server to provide the NFS service to the cluster grid. This ensures that the applications running on the large SMP have access to fast, direct storage, and provides some headroom for scaling the NFS service.
The choice of storage arrays depends very much on budget, availability, future scalability, and expected access patterns. Top-of-the-line disk arrays feature large caches and hardware RAID, and are compatible with SAN implementations. In some cases, the applications perform large volumes of random reads and writes and are better suited to a JBOD array (a non-RAID storage implementation) with the maximum spindle count.