- The Cookbook for Setting Up a Serviceguard Package-less Cluster
- The Basics of a Failure
- The Basics of a Cluster
- The "Split-Brain" Syndrome
- Hardware and Software Considerations for Setting Up a Cluster
- Testing Critical Hardware before Setting Up a Cluster
- Setting Up a Serviceguard Package-less Cluster
- Constant Monitoring
- Chapter Review
- Test Your Knowledge
- Answers to Test Your Knowledge
- Chapter Review Questions
- Answers to Chapter Review Questions
25.3 The Basics of a Cluster
A cluster is a collection of between 2 and 16 nodes. Supported cluster configurations include:
- Active/Active: This is where all nodes are running their own application package but can run additional application packages if necessary.
- Active/Standby: This is where a single node is not actively running any application packages but is waiting for a failure to occur on any of the other nodes in the cluster, whereby it will adopt responsibility for running that node's application package.
- Rolling Standby: This is similar to Active/Standby in that we have a node that is waiting for a failure to occur on any node in the cluster. The difference here is that when a failure occurs, the failed node becomes the standby node after the initial problem is resolved. Should a second failure occur, the second failed node becomes the standby. In a purely Active/Standby configuration, if a second failure occurred, the original standby node would be running two application packages.
Cluster monitoring is performed by a number of Serviceguard processes. Serviceguard has three main management functions:
- Cluster Manager: The management and coordination of cluster membership.
- Network Manager: Monitoring network connectivity and activating standby LAN cards when necessary.
- Package Manager: The management of starting, stopping, and relocating application packages within the cluster.
The main Cluster Management Daemon is a process called cmcld. This process runs on every node in the cluster, and one of its main duties is to send and receive heartbeat packets across all designated heartbeat networks. One node in the cluster will be elected the cluster coordinator, which is responsible for coordinating all inter-node communication. The cluster coordinator is elected at cluster startup time and during a cluster reformation; a quick way to check the daemon and the resulting cluster membership is sketched after the list below. A cluster will reform due to one of four events:
- A node leaves the cluster, either "gracefully" or because the node fails
- A node joins the cluster
- Automatic cluster startup
- Manual cluster startup
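When we come to build the cluster, these ideas are easy to verify from the command line. The following is a minimal sketch of the checks you can run on any node once the cluster is up; output is not shown here, and you should consult cmviewcl(1m) for the options supported by your Serviceguard release.

    # Confirm that the Cluster Management Daemon is running on this node
    ps -ef | grep cmcld

    # Display the status of the cluster and each node; a healthy cluster
    # and its member nodes should be reported as "up"
    cmviewcl -v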
When we set up our cluster, we will discuss this "election" in a bit more detail. A critical feature of the cluster coordinator is the detection of a node failure; by this, we mean either a total LAN communication failure or a total system failure. Every HEARTBEAT_INTERVAL, each node transmits a heartbeat packet on all prescribed heartbeat interfaces. If the heartbeat packet does not reach the cluster coordinator within the NODE_TIMEOUT interval, the node is deemed to have failed and a cluster reformation commences. It goes without saying that maintaining heartbeat communication is vitally important to all nodes in the cluster.
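To put some numbers against these parameters, here is a representative excerpt from a cluster ASCII configuration file of the kind generated by cmquerycl. The values shown are the traditional defaults, expressed in microseconds; treat them as illustrative and check the template produced by your own version of Serviceguard.

    # Cluster timing parameters (values are in microseconds)
    HEARTBEAT_INTERVAL          1000000     # send a heartbeat every 1 second
    NODE_TIMEOUT                2000000     # declare a node failed after 2 seconds
    NETWORK_POLLING_INTERVAL    2000000     # check active/standby LAN pairs every 2 seconds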
- Cluster HEARTBEAT and STANDBY LAN interfaces: Because the heartbeat is such a crucial part in determining the health of a cluster, the more LAN interfaces you prescribe as heartbeat interfaces, the better. The only time this is not the case is if you intend to use VERITAS Cluster Volume Manager (CVM). The design of CVM allows its daemon, vxclustd, to communicate over only a single IP subnet. You will realize this when you run cmcheckconf and cmapplyconf: if more than one heartbeat LAN is configured in a CVM configuration, both commands will fail. LAN interfaces can be designated either as a HEARTBEAT_IP (carries the cluster heartbeat) or as a STATIONARY_IP (does not carry the cluster heartbeat). Even if a LAN interface is configured as a HEARTBEAT_IP, it can carry normal application data as well. The designation STATIONARY_IP simply means that no heartbeat packets are transmitted over that interface; it does not mean the IP address cannot be moved to a redundant, standby LAN card. The use of a redundant standby LAN interface for all interfaces is highly recommended. If you are going to use only one standby LAN card for all LAN interfaces, it must be bridged to all the networks for which it is acting as a standby. Figure 25-1 shows a good setup where we have a standby LAN card for each active network; the configuration excerpt following the figure shows how these designations appear in the cluster configuration file.
Figure 25-1 The use of standby LAN cards.
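The designations in Figure 25-1 map onto the node definitions in the cluster ASCII file roughly as follows. This is only a sketch: the node name, LAN device names, and IP addresses are invented, and a standby card is simply listed with no IP address against it.

    NODE_NAME            node1
      NETWORK_INTERFACE  lan0
        HEARTBEAT_IP     192.168.1.1    # dedicated heartbeat subnet
      NETWORK_INTERFACE  lan1
        STATIONARY_IP    10.1.1.1       # corporate data LAN; no heartbeat packets
      NETWORK_INTERFACE  lan2           # standby for the data LAN; bridged, no IP address
      NETWORK_INTERFACE  lan3           # standby for the heartbeat LAN; bridged, no IP address

Once the file describes every node, cmcheckconf -C <file> verifies it and cmapplyconf -C <file> generates and distributes the binary cluster configuration to all nodes; this is also the point at which the single-heartbeat restriction for CVM mentioned above would be flagged.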
In Figure 25-1, you will also notice that the HEARTBEAT_IP is not being utilized by any clients for data traffic. This is an ideal scenario because heartbeat packets are not contending with data packets for access to the network. You can use a HEARTBEAT_IP for data traffic as well, although you should note that if data traffic becomes particularly heavy, the heartbeat packet may not reach the cluster coordinator, and this could cause a cluster reformation because some nodes "appear" to have "disappeared." You should also note that the standby LAN cards are bridged with the active LAN cards. This is absolutely crucial. Serviceguard will poll standby/active LAN cards every NETWORK_POLLING_INTERVAL to ensure that they can still communicate. The bridge/switch/hub that is used should support the 802.1D Spanning Tree Algorithm (most of them do).
The Quorum Server is currently attached to the main corporate data LAN. This is not a requirement; it simply shows that all nodes in the cluster must be able to communicate with it, and that it could be "any" machine in your organization running HP-UX. Many customers I know have the Quorum Server attached to the dedicated Heartbeat network. I think this is a good idea because all nodes in the cluster need access to the Heartbeat network, and when we need to communicate with the Quorum Server, we are not competing with other users for access to our corporate data LAN.
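If you do use a Quorum Server, it is declared in the cluster ASCII file alongside the parameters shown earlier. The fragment below is a sketch only: the hostname is invented, the polling interval shown is the traditional default (in microseconds), and the timeout extension is optional.

    # Quorum Server definition in the cluster ASCII file
    QS_HOST                  qs-server      # any HP-UX machine reachable by every cluster node
    QS_POLLING_INTERVAL      300000000      # how often to verify quorum server connectivity (microseconds)
    # QS_TIMEOUT_EXTENSION   2000000        # optional extra time allowed for quorum server responses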
When I first looked at Serviceguard, I wondered, "How many LAN cards do I need?" The simple answer is two. Serviceguard is designed to fit into the whole philosophy of high availability. If Serviceguard "allowed" you to run with just one LAN card, it would be an immediate SPOF. So you need two LAN cards, with one acting as a STANDBY. Well, if I am really honest, you can get away with one LAN card. In a simple two-node cluster similar to the one in Figure 25-1, you could use only one LAN card as long as you used an RS-232 null modem cable as a "serial heartbeat" between the two nodes. The single LAN card would need to be configured with a HEARTBEAT_IP, i.e., in this case it carries data plus heartbeat packets. The serial heartbeat is used as a last-ditch means for the nodes to communicate in the event of network saturation. In that instance, both nodes use the serial link to determine which of them is the "best candidate" to reform the cluster on its own. (Note: The use of serial heartbeats is viewed as "less than ideal" and may be phased out in the near future.) In essence, the serial heartbeat adds a little intelligence to the cluster reformation process, but only when we have a two-node cluster with a single LAN card in each node. This leads me to my next point about High Availability Clusters: the "split-brain" syndrome.
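Before we move on to split-brain, here is how the serial heartbeat just described would typically be declared. This is an illustrative fragment only: the parameter sits in each node's section of the cluster ASCII file on older 11.x releases, and the node name, IP address, and device file are invented and depend on which serial port you actually cable up.

    NODE_NAME            node1
      NETWORK_INTERFACE  lan0
        HEARTBEAT_IP     192.168.1.1    # the single LAN card: data plus heartbeat packets
      SERIAL_DEVICE_FILE /dev/tty0p0    # RS-232 null modem link to the other node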