Qualifying an Application
There are a number of characteristics used to determine whether an application can be made highly available using the Sun Cluster 3 framework. If the application does not possess these attributes, then making it highly available may require modification to the application itself rather than simply creating start and stop methods within the framework.
Here are the key points to consider when making an application highly available.
- Data service access
- Crash tolerance
- Bounded recovery time after crash
- File location independence
- Absence of a tie to the physical identity of the node
- Ability to work on multi-homed hosts
- Ability to work with logical interfaces
- Client recovery
Let's look at each of these in detail to see what's required.
Data Service Access
Generally speaking, the only applications that can be effectively made highly available under the Sun Cluster framework are client-server applications that receive discrete queries from an IP-network-based client. Examples of this type of service include databases and Web (HTTP) servers.
Applications requiring a persistent network connection are slightly less suited to the HA model, since continuing the service after a failover requires a reconnection, which usually means that any state built up before the point of failure is lost. Examples of this type of service include terminal connections, telnet, and FTP.
This does not mean that persistent-connection data services cannot get any advantage from being included in the HA framework. Because the HA failover emulates a very fast reboot of the server, the application becomes available again more quickly than it would on a stand-alone server. It does mean, however, that the client software or end user must be able to cope with this kind of break in the service. For example, an FTP client that automatically retries a download if the connection is broken and continues from where it left off would overcome this problem.
Some applications do not use any kind of network connection for client access, or may not have any kind of client access at all. In these cases, there may be some difficulty in making the application highly available using the Sun Cluster framework, particularly if some sort of fixed line (such as a serial terminal connection) is required. This is because the Sun Cluster framework has built-in mechanisms for migrating IP network addresses between nodes, but does not have a similar facility for other access methods. Fortunately, in modern computing environments, most applications subscribe to the IP client/server model or have limitations that can be worked around quite easily (see "Getting Around Requirements" later in this chapter).
Crash Tolerance
Since the Sun Cluster HA model essentially emulates the fast reboot of a failed system, the application must be able to put itself into a known state when it starts up. Generally, this means that the application state should be committed to persistent storage (on disk) rather than being held in RAM. Depending on the application, it may be necessary for some sort of automatic rollback recovery or consistency check to be performed each time the application starts, to ensure any partially completed transactions are properly taken care of. If an application uses techniques such as a two-phase commit to write data to disk, it can usually recover more easily after a system failure.
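As a purely illustrative sketch (the paths and data are invented, and this is generic code rather than anything cluster-specific), the following C fragment shows one common way of committing a piece of state so that a restart always finds either the old or the new version on disk, never a partial write: write to a temporary file, flush it, and atomically rename it over the live copy.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    /*
     * Commit "state" to "path" so that a crash at any point leaves either
     * the previous contents or the new contents on disk, never a fragment.
     * Paths and data here are purely illustrative.
     */
    static int commit_state(const char *path, const char *state)
    {
        char tmp[1024];
        ssize_t len = (ssize_t)strlen(state);
        int fd;

        snprintf(tmp, sizeof(tmp), "%s.tmp", path);

        fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;

        if (write(fd, state, len) != len || fsync(fd) != 0) {
            close(fd);
            unlink(tmp);
            return -1;
        }
        close(fd);

        /* rename() is atomic within a filesystem, so a restart sees
         * either the old file or the new one. */
        if (rename(tmp, path) != 0) {
            unlink(tmp);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        if (commit_state("/global/myapp/data/state", "last-txn=42\n") != 0) {
            perror("commit_state");
            return 1;
        }
        return 0;
    }

An application that persists its state this way can simply read the file back at startup and carry on from the last committed point.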
Regardless of how they write persistent data, some applications still require manual intervention to start (a classic example is an application that uses some form of security and requires a passphrase to be entered at startup). For these applications, it may be possible to achieve a work-around (such as piping responses in from a file). Care must be taken, however, over the security and procedural implications of this sort of work-around.
Typically, if an application is automatically started at boot time (with a script in /etc/rc3.d, for example) then it is usually safe to run as a cluster-controlled application without much (or any!) modification.
Bounded Recovery Time
When applications are restarted by the Sun Cluster framework, there needs to be some reasonable (and predictable) limit on how long it will take to recover. Obviously, the amount of time required will depend on what sort of application is involved; for example, a large OLTP database will almost certainly take longer to return to a consistent state than a small Web server due to the relative frequency of data changes.
Some limit is needed because the cluster framework needs to be able to determine when an application is not going to restart, which is particularly important if the cluster is attempting to restart the application on the same node. After the limit has expired, the cluster can take appropriate action (such as failing the application over to a different node).
File Location Independence
The location of files and data used by the data service is very important, since this information has to be shared by each node that will potentially run the application. For this reason, application and configuration data locations (or file paths) should not be hard-coded into the application itself.
Applications should store any changeable data, including configuration information, on the shared storage of the cluster—either on a globally mounted filesystem, a failover filesystem, or on globally accessible raw devices. This ensures that the data and behavior of the application are consistent across nodes of the cluster. The upshot is that there must be some way of telling the application where the data and configuration files are stored in the filesystem, such as a command-line argument to the application program binary.
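As a minimal illustration of this idea, the hypothetical C fragment below takes its data directory from a -d command-line option instead of a compiled-in path; the option name and paths are invented for the example.

    #include <stdio.h>
    #include <string.h>

    /*
     * Hypothetical example: accept the data directory as an argument
     * (e.g. -d /global/myapp/data) rather than compiling in a fixed path.
     */
    int main(int argc, char **argv)
    {
        const char *datadir = NULL;

        for (int i = 1; i < argc - 1; i++) {
            if (strcmp(argv[i], "-d") == 0)
                datadir = argv[i + 1];
        }
        if (datadir == NULL) {
            fprintf(stderr, "usage: myapp -d <data-directory>\n");
            return 1;
        }

        char config[1024];
        snprintf(config, sizeof(config), "%s/myapp.conf", datadir);
        printf("would read configuration from %s\n", config);
        return 0;
    }

The same binary can then be started on any node, with the option pointing at the globally accessible copy of the data and configuration files.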
If the paths to data or configuration files are hard-coded into a program, then you can sometimes use symbolic links to overcome the problem. However, be aware that if an application completely removes and recreates a given file that has been redirected using a symbolic link, the behavior may not be as expected. For example, if the directory /var/myapp/data is actually a symbolic link to a globally mounted directory /global/myapp/data, then an application accessing /var/myapp/data/foo will get the correct file (see Figure 4.1). However, if the application unlinks the directory /var/myapp/data and recreates it, the symbolic link may be destroyed. This means that new data will be created in a directory /var/myapp/data that is not accessible to any other nodes in the cluster (see Figure 4.2).
Figure 4.1 Using symbolic links to access global data
Figure 4.2 Symbolic links after directory recreation
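The hypothetical C fragment below reproduces the failure shown in Figure 4.2: unlink() removes the symbolic link itself rather than the directory it points to, so the mkdir() that follows creates a genuinely node-local directory, and anything written there is invisible to the other cluster nodes.

    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /*
     * Illustration of the symbolic-link pitfall described above.
     * Assumes /var/myapp/data is a symbolic link to /global/myapp/data.
     */
    int main(void)
    {
        /* Removes the symbolic link itself, not the global directory. */
        if (unlink("/var/myapp/data") != 0)
            perror("unlink");

        /* Creates a new, node-local directory in its place. */
        if (mkdir("/var/myapp/data", 0755) != 0)
            perror("mkdir");

        /* This file now lives only on the local node and is not visible
         * to the other cluster nodes, which still use /global/myapp/data. */
        int fd = open("/var/myapp/data/foo", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0)
            close(fd);
        return 0;
    }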
You also need to consider where to store your application binaries; there are good arguments on both sides of whether they should be installed on shared storage or individually on the local storage of each node. With binaries stored locally, it is possible to perform rolling upgrades of the application software by upgrading the standby node and then manually switching control to that node while the original node is upgraded. This maintains the availability of the data service to clients, but introduces possible management problems if a failure occurs partway through an upgrade or if the data format differs between application software releases. With binaries installed on shared storage, there is only one copy of the software to manage, and an upgrade is performed by scheduling downtime for that service while the installation occurs. This choice comes down to operational preference rather than any technical reasoning.
Absence of a Tie to Physical Identity of Node
Central to the understanding of Sun Cluster systems is the concept that a highly available IP address (otherwise known as a logical host) may operate on one of a number of physical computers at any time, and may in fact move to another computer under certain circumstances. For this reason it is important that applications be able to use the logical hostname rather than the physical name of the host on which they are running. In short, the question to ask yourself is: Can the application provide its service using a hostname that is not the physical hostname of the node?
If an application's configuration is dependent upon the physical hostname, then it almost certainly cannot be made highly available, since failing the application over to a node with a different physical hostname would render it inoperable.
If a program binds its network connection to the special address INADDR_ANY, a wildcard address that causes the application to bind to all available IP addresses on the node simultaneously (see in(3HEAD)), then this usually satisfies the requirement.
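For reference, such a bind looks roughly like the following C sketch; the port number is an arbitrary example and the fragment is generic socket code, not anything specific to Sun Cluster.

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);               /* arbitrary example port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);  /* all local addresses */

        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("bind");
            close(s);
            return 1;
        }
        listen(s, 5);
        /* ... accept loop would go here ... */
        close(s);
        return 0;
    }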
Ability to Work on Multi-Homed Hosts
A multi-homed host is a host that is connected to more than one public network. Each node may have multiple interfaces, allowing the cluster and its data services to appear on more than one network and also allowing for hardware redundancy. An application should not assume that it must bind itself to the first network interface it can find because this may not be the correct one. In short, the question to ask is: Can the application cope with a host that has more than one network interface?
A data service that binds to a host's IP address must be flexible enough to bind to any and all of the IP addresses specified by its related logical host resource. The simplest way to do this is to have the application bind to INADDR_ANY (most modern network applications do).
In some circumstances, however, this approach is not desirable. Consider the case where two data services provide some service on different IP addresses but on the same IP port. If at some point both services are mastered by the same physical node, having either service bind to all IP addresses would prevent the other from binding to its own address on that port, and that service would fail. In this case, or in any case where binding to INADDR_ANY is not possible, the application must have some configuration option to specify which IP address or port to bind to.
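Where binding to everything is not acceptable, the application can bind only to the address of its logical host, roughly as in the sketch below; the address shown is hypothetical and would normally come from a configuration file or command-line option.

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        const char *logical_ip = "192.168.100.10";  /* hypothetical logical host address */

        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);                /* same example port as above */

        /* Bind to one specific address so another service can use the
         * same port on a different logical host address. */
        if (inet_pton(AF_INET, logical_ip, &addr.sin_addr) != 1) {
            fprintf(stderr, "bad address: %s\n", logical_ip);
            close(s);
            return 1;
        }
        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("bind");
            close(s);
            return 1;
        }
        listen(s, 5);
        close(s);
        return 0;
    }

With each service bound only to its own logical host address, two services can share a port number on the same node without colliding.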
Ability to Work with Logical Interfaces
In some instances, a cluster node may control more logical hosts (and therefore IP addresses) than it has physical network interfaces. To cope with this, the network interfaces are assigned more than one IP address, a technique sometimes called IP aliasing. IP addresses are dynamically added to and removed from physical hosts as they master the logical hosts, by adding logical interfaces to the physical network interfaces already on the node. Logical network interfaces are labelled the same as physical interfaces, but with an additional number part, for example:
- hme1, qfe3 (physical interfaces)
- hme1:1, qfe3:2 (logical interfaces)
The data service should be able to deal with a given physical interface having more than one IP address. Again, INADDR_ANY usually makes this an easy task, but occasionally an application will try to manipulate network traffic in particular ways that make it unable to manage more than one IP address per interface. In particular, an application may not recognize the logical portion of an interface (for example, :1), and incorrectly perform operations on the physical interface instead (for example, hme1). In these cases, it may not be possible to make the application highly available. In short, the question to ask is: Can the application cope with more than one IP address on a single network interface?
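As a rough illustration, on systems that provide getifaddrs(3), each logical interface appears as a separate entry with its own address, so an application that walks the full list (rather than assuming one address per physical interface) copes naturally with IP aliasing. The fragment below is a generic sketch, not part of the Sun Cluster API.

    #include <stdio.h>
    #include <ifaddrs.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>

    int main(void)
    {
        struct ifaddrs *ifap, *ifa;

        if (getifaddrs(&ifap) != 0) {
            perror("getifaddrs");
            return 1;
        }

        /* Logical interfaces such as hme1:1 appear as separate entries,
         * each carrying its own IPv4 address. */
        for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
            if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_INET)
                continue;
            char buf[INET_ADDRSTRLEN];
            struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
            inet_ntop(AF_INET, &sin->sin_addr, buf, sizeof(buf));
            printf("%s\t%s\n", ifa->ifa_name, buf);
        }
        freeifaddrs(ifap);
        return 0;
    }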
Client Recovery
As indicated in the previous discussion of data service access, the most effective HA data services include some capacity in the client to automatically retry a query when the first attempt is cut off or times out. If an automated retry facility is not feasible, then the end user at least has to be comfortable with the concept of manually retrying a query, such as when an HTTP query from a WWW browser fails.
Requirements for Scalable Services
If you want to make your application into a scalable service (that is, a service that operates on multiple nodes at the same time), you will have to consider a number of additional requirements. We'll leave these until we investigate scalable services in detail in Chapter 11, "Writing Scalable Services."