Introduction to the Sun Cluster Grid, Part 1
- What is Grid Computing?
- Cluster Grids on Sun Hardware
- Related Resources
This article is an introduction to the Sun Cluster Grid. The first section of this article provides a generic technical description of basic cluster grid architecture. The second section describes the functionality and architecture of the software components of Sun's Cluster Grid stack.
This article is intended for IT professionals, system administrators, and anyone interested in understanding the concepts of a cluster grid.
This article serves as a primer for a subsequent Sun BluePrints™ OnLine article entitled "Introduction to the Cluster Grid Part 2", which discusses the process of designing and implementing a Sun Cluster Grid.
What is Grid Computing?
Traditionally, grid computing existed only in the realms of high-performance computing and technical compute farms. Increasing demand for compute power and data sharing, combined with technological advances, particularly increases in network bandwidth, has extended the scope of the grid beyond its traditional bounds.
Grid computing provides an environment in which network resources are virtualized to enable a utility model of computing. Grid computing provides highly available services with a high degree of transparency to users. These services can be delivered through Quality of Service guarantees, auto-negotiated contracts, metered access, and so forth.
Types of Grid Environments
Grid computing can be divided into the following three logical levels of deployment:
- Global grids
- Enterprise grids
- Cluster grids
Global grids are collections of enterprise and cluster grids as well as other geographically distributed resources, all of which have agreed upon global usage policies and protocols to enable resource sharing.
Enterprise grids enable multiple projects or departments to share resources within an enterprise or campus, and do not necessarily need to address the security and global policy management issues associated with global grids.
A cluster grid is the simplest form of a grid, providing a compute service at the group or department level. The class of software that enables this service might be called a distributed resource management (DRM) system, a job management system (JMS), or a job scheduling system. The different terms hint at the extent of the software's functionality. For example, a DRM system might offer features beyond those of a job scheduling system, such as support for heterogeneous, distributed architectures. This article describes the cluster grid in terms of a DRM system.
The key benefit of the cluster grid architecture is that it maximizes the use of compute resources and increases throughput for user jobs. To achieve this, the DRM software presents the pooled compute resources as a single virtual resource.
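From the user's point of view, this virtualization means submitting work without naming a target machine. As a minimal sketch, assuming a Sun Grid Engine installation with the qsub command on the submit host's PATH (the script and job names here are illustrative), the following Python fragment hands a script to the DRM and lets the scheduler decide where it runs:

    import subprocess

    def submit_job(script_path, job_name):
        """Submit a batch script to the DRM; the scheduler chooses the host."""
        # -N sets the job name; -cwd runs the job in the current directory.
        result = subprocess.run(
            ["qsub", "-N", job_name, "-cwd", script_path],
            capture_output=True, text=True, check=True,
        )
        # Sun Grid Engine acknowledges the submission with a line that
        # includes the new job ID.
        return result.stdout.strip()

    print(submit_job("render_frame.sh", "frame0042"))

The point is that no host name appears anywhere in the request; the DRM decides where the job runs based on current load and resource availability.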
In addition to the DRM software, cluster grids typically have system management software to collectively monitor and administer the infrastructure, and tools to facilitate application development for the grid.
The cluster grid is a superset of other technical compute resources such as Linux clusters, throughput clusters, midrange compute servers, and high-end shared-memory systems. As such, the cluster grid can operate within a heterogeneous environment with mixed server types, mixed operating environments, and mixed workloads.
A cluster grid can be implemented simply as a convenient provision of departmental compute resources, or it can be a precursor to an enterprise grid, where multi-departmental access is desired. In addition, a cluster grid can be integrated with middleware, such as that developed by the Avaki Corporation and the Globus Project, to provide a compute service to a higher-level grid.
Cluster Grid Architecture
The cluster grid architecture is divided into the following three nonhierarchical, logical tiers:
- Access
- Management
- Compute
Each tier, shown in FIGURE 1, is defined by the services it provides. The access tier provides the means to access the cluster grid for job submission, administration, and so on. The management tier provides the major cluster grid services, such as job management, health monitoring, and NFS. The compute tier provides the compute power for the cluster grid and supports the runtime environments for user applications.
FIGURE 1 Three Tiers of the Cluster Grid Architecture
Each tier can be considered independently to some extent. The three-tier definition enables the sizing, scalability, and availability of each tier to be considered separately.
Access Tier
The access tier provides access and authentication services to the cluster grid users. Conventional access commands, such as telnet, rlogin, ftp, and ssh, can be used to access the system. Web-based services can be provided to permit either easy or tightly controlled access to the facility. Beyond enabling users to configure, submit, and control compute jobs, web-based services can provide accounting information or administrative functions. Any access method should integrate with common authentication schemes such as NIS or LDAP.
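As a concrete illustration, the following sketch submits a job from a user workstation through an access-tier login node over ssh. The host name and script path are hypothetical, and the sketch assumes that ssh key authentication is already configured and that the qsub command is available on the access node:

    import subprocess

    # Hypothetical access-tier login node; a real site would substitute
    # its own host name.
    ACCESS_HOST = "login.cluster.example.com"

    def remote_submit(script_path):
        """Run qsub on the access node, which hands the job to the DRM."""
        result = subprocess.run(
            ["ssh", ACCESS_HOST, "qsub", script_path],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    print(remote_submit("/home/alice/jobs/simulate.sh"))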
The access tier can be enhanced for integration with a global grid. For example, the Globus Resource Allocation Manager (GRAM) software of the Globus 2.0 Toolkit implements a gatekeeper process that performs authentication for grid users, and allows job submission to the Sun™ Grid Engine software (described later) through a job manager process.
Management Tier
The middle tier is responsible for providing the major cluster grid services: DRM, hardware diagnostic software, system performance monitoring, and so on. While the DRM is a required feature of a cluster grid, this tier can also provide the following services:
- File service: Provides file sharing for user home directories, libraries, compilers, applications, and so on.
- License key service: Manages software license keys, such as compiler licenses, for the cluster grid.
- Backup management: Provides traditional or hierarchical storage management services.
- Install service: Manages operating system and application software versioning, and patch application, on other nodes in the cluster grid.
The size and number of servers in this tier vary depending on the type and level of services required. In a small implementation with limited functionality, a single node can host all management services for ease of administration. Alternatively, these services can be spread across multiple servers for greater scalability, flexibility, and availability.
Compute Tier
The compute tier supplies the compute power for the cluster grid. Jobs submitted through the upper tiers of the architecture are scheduled to run on one or more nodes in this tier. Nodes in the compute tier run the client-side or agent-side processes of the DRM software, the daemons associated with message-passing environments (for multiprocessing), and agents for system health monitoring. The compute tier communicates with the management tier, receiving jobs and reporting job completion status and accounting details.
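The following sketch conveys the flavor of such a monitoring agent. It is a toy illustration, not the actual DRM agent protocol: it samples the node's one-, five-, and fifteen-minute load averages and reports them, tagged with the host name, at a fixed (illustrative) interval:

    import os
    import socket
    import time

    REPORT_INTERVAL = 60  # seconds between samples; illustrative value

    def report_load():
        """Periodically report this node's load, as a health agent might."""
        hostname = socket.gethostname()
        while True:
            load1, load5, load15 = os.getloadavg()
            # A real agent would send this to the management tier;
            # this sketch simply prints it.
            print(f"{hostname} load: {load1:.2f} {load5:.2f} {load15:.2f}")
            time.sleep(REPORT_INTERVAL)

    if __name__ == "__main__":
        report_load()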
The compute tier can be heterogeneous with respect to several characteristics:
- Servers: The hardware characteristics can differ radically across a cluster grid. Machines can be symmetric multiprocessing (SMP) or uniprocessor, with differing physical memory sizes, CPU cache sizes, and so on.
- Platform: Hosts might run the Solaris™ operating environment, Linux, AIX, IRIX, or other operating systems on different processor architectures such as SPARC™, Intel, Alpha, and others.
- Function: Groups of nodes in the compute tier can perform different functions, support several functions at once, or change functions based on a calendar. Functions include interactive, batch, visualization, and parallel processing.
- Interconnect: Some compute hosts can be networked through a specialized low-latency interconnect, such as Myrinet or Sun Fire™ Link.
The DRM software matches job requests to appropriate compute hosts, taking these characteristics into account.
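This matching can be pictured as a filter over advertised host attributes. The sketch below is a toy illustration of the idea, not Sun Grid Engine's actual scheduling algorithm; the host data and attribute names are invented for the example:

    # Each compute host advertises its characteristics (invented data).
    hosts = [
        {"name": "node01", "arch": "sparc", "os": "solaris", "mem_gb": 8},
        {"name": "node02", "arch": "intel", "os": "linux", "mem_gb": 2},
        {"name": "node03", "arch": "sparc", "os": "solaris", "mem_gb": 32},
    ]

    def eligible_hosts(hosts, arch=None, min_mem_gb=0):
        """Return the hosts that satisfy a job's resource request."""
        return [
            h for h in hosts
            if (arch is None or h["arch"] == arch)
            and h["mem_gb"] >= min_mem_gb
        ]

    # A job that requires a SPARC host with at least 16 GB of memory.
    for host in eligible_hosts(hosts, arch="sparc", min_mem_gb=16):
        print(host["name"])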