CCDP Self-Study: Designing High-Availability Services
- High-Availability Features and Options
- Designing High-Availability Enterprise Networks
- Summary
- Reference
- Product Summary
- Standards and Specifications Summary
- Review Questions
- Case Study: OCSIC Bottling Company
After completing this chapter, you will be able to
List and discuss high-availability features and options
Design high-availability enterprise networks

Downtime usually translates into significant productivity and revenue losses for many enterprises. Maximizing network uptime requires operational best practices and redundant network designs in conjunction with high-availability technologies within network elements. Several high-availability technologies are embedded in Cisco IOS Software. Designers need to identify the necessary components of a high-availability solution and design high-availability solutions for the Enterprise Campus and Enterprise Edge functional areas based on specific enterprise availability requirements. This chapter briefly reviews high-availability services; it then presents best practices and guidelines for designing highly available Enterprise Campus and Enterprise Edge functional areas.
High-Availability Features and Options
Cisco IOS high-availability technologies provide network redundancy and fault tolerance. Reliable network devices, redundant hardware components with automatic failover, and protocols like Hot Standby Router Protocol (HSRP) are used to maximize network uptime. This section examines these topics.
Network Requirements for High Availability
An enterprise requires its network to be highly available to ensure that its mission-critical applications are available. Increased availability translates into higher productivity, and perhaps higher revenues and cost savings. Reliability implies that the system performs its specified task correctly; availability, on the other hand, means that the system is ready for immediate use. Today's networks need to be available 24 hours a day, 365 days a year, and to meet that objective, 99.999 or 99.9999 percent availability is expected. Table 5-1 shows what each availability rate translates to in terms of days, hours, and minutes; the bottom two rows (99.999 percent and 99.9999 percent availability) represent highly available networks.
Table 5-1 Network Availability Percentage versus Actual Network Downtime
Availability (%) | Defects per Million | Downtime per Year (24 * 365)
99.000 | 10,000 | 3 days, 15 hours, 36 minutes
99.500 | 5000 | 1 day, 19 hours, 48 minutes
99.900 | 1000 | 8 hours, 46 minutes
99.950 | 500 | 4 hours, 23 minutes
99.990 | 100 | 53 minutes
99.999 | 10 | 5 minutes
99.9999 | 1 | 30 seconds
NOTE
The number of defects per million is used to calculate availability. For example, 5000 defects per million yields 99.5 percent availability:
(1,000,000 - 5,000) / 1,000,000 = 0.995 = 99.5%
And downtime over 1 year would be:
5000 / 1,000,000 = 0.005 year = 0.005 * 365 * 24 * 60 minutes
= 2628 minutes
= 43 hours, 48 minutes
= 1 day, 19 hours, 48 minutes
Enterprises implement high availability to meet the following requirements:
Ensure that mission-critical applications are available: The purpose of an enterprise network is to facilitate the operation of network applications. When those applications are not available, the enterprise ceases to function properly. Making the network highly available helps ensure that the enterprise's mission-critical applications are functional and available.
Improve employee and customer satisfaction and loyalty: Network downtime can cause frustration among both employees and customers attempting to access applications. Ensuring a highly available network helps to improve and maintain satisfaction and loyalty.
Reduce reactive information technology (IT) support costs, resulting in increased IT productivity: Designing a network to incorporate high-availability technologies allows IT to minimize the time spent firefighting and makes time available for proactive services.
Reduce financial loss: An unavailable network, and therefore an unavailable application, can translate directly into lost revenue for an enterprise. Downtime can mean unbillable customer access time, lost sales, and contract penalties.
Minimize lost productivity: When the network is down, employees cannot perform their functions efficiently. Lost productivity means increased cost to the enterprise.
Availability is a measurable quantity. The factors affecting availability are mean time to repair (MTTR), which is the time it takes to recover from a failure, and mean time between failure (MTBF), which is the time that passes between network outages or device failures. Decreasing MTTR and increasing MTBF increase availability. Dividing MTBF by the sum of MTBF and MTTR results in a percentage indicating availability:
Availability = MTBF / (MTBF + MTTR)
A common goal for availability is to achieve 99.999 percent (called "five nines"). For example:
Power supply MTBF = 40,000 hours
Power supply MTTR = 8 hours
Availability = 40,000 / (40,000 + 8) = 0.99980, or 99.98 percent availability
As system complexity increases, availability decreases. When the failure of any one component causes the system as a whole to fail, the components are in series, and the result is called serial availability. To calculate the availability of such a system or device, multiply the availabilities of all its parts. For example:
Switch fabric availability = 0.99997
Route processor availability = 0.99996
System availability = 0.99997 * 0.99996 = 0.99993
Cisco IOS High-Availability Architecture
The following are the requirements for a Cisco high-availability solution:
Reliable, fault-tolerant network devices: Hardware and software reliability to automatically identify and overcome failures.
Device and link redundancy: Entire devices, modules within devices, and links can be redundant.
Load balancing: Allows a device to take advantage of multiple best paths to a given destination.
Resilient network technologies: Intelligence that ensures fast recovery around any device or link failure.
Network design: Well-defined network topologies and configurations designed to ensure there is no single point of failure.
Best practices: Documented procedures for deploying and maintaining a robust network infrastructure.
High availability implies that a device or network is ready for use as close to 100 percent of the time as possible. Fault tolerance indicates the ability of a device or network to recover from the failure of a component or device. Achieving high availability relies on eliminating any single point of failure and on distributing intelligence throughout the architecture. You can increase availability by adding redundant components, including redundant network devices and connections to redundant Internet services. With the proper design, no single point of failure will impact the availability of the overall system.
Fault Tolerance and Hardware Redundancy
One approach to building highly available networks is to use extremely fault-tolerant network devices throughout the network. Fault-tolerant network devices must have redundant key components, such as a supervisor engine, routing module, power supply, and fan. Redundancy in network topology and provisioning multiple devices and links is another approach to achieving high availability. Even though these approaches are different, they are not mutually exclusive. Each approach has its own benefits and drawbacks.
Using Fault-Tolerant Devices
Utilizing fault-tolerant devices minimizes the periods during which the system is unresponsive. Failed components can be detected and replaced while the system continues to operate. Disaster protection is improved when redundant components are not interdependent; for example, it is best if redundant power supplies are on different electrical circuits. Figure 5-1 depicts a part of a campus network that uses fault-tolerant devices but has a single forwarding path.
Figure 5-1 Campus Network Utilizing Fault-Tolerant Devices, but Lacking Topological Redundancy
Achieving high network availability solely through device-level fault tolerance has the following drawbacks:
Massive redundancy within each device adds significantly to its cost, while at the same time reducing physical capacity by consuming slots that could otherwise house network interfaces or provide useful network services.
Redundant subsystems within devices are often maintained in a hot standby mode, in which they cannot contribute additional performance because they are only fully activated when the primary component fails.
Focusing on device-level hardware reliability might result in overlooking a number of other failure mechanisms. Network elements are not standalone devices but components of a network system, in which internal operations and system-level interactions are governed by configuration parameters and software.
Providing Redundancy in the Network Topology
A complementary way to build highly available networks is to provide reliability through redundancy in the network topology rather than primarily within the network devices themselves. In the campus network design shown in Figure 5-2, a backup exists for every link and every network device in the path between the client and server.
Figure 5-2 Campus Network with Redundant Paths, Links, and Devices
Provisioning redundant devices, links, and paths increases media costs and can make the network more difficult to manage and troubleshoot, but this approach offers the following advantages:
The network elements providing redundancy need not be co-located with the primary network elements. This reduces the probability that problems with the physical environment will interrupt service.
Problems with software bugs and upgrades or configuration errors and changes can be dealt with separately in the primary and secondary forwarding paths without completely interrupting service. Therefore, network-level redundancy can also reduce the impact of nonhardware failure scenarios.
With the redundancy provided by the network, each network device no longer needs to be configured for optimal standalone fault tolerance. Device-level fault tolerance can be concentrated in the Campus Backbone and Building Distribution submodules of the network, where a hardware failure would affect a larger number of users. By partially relaxing the requirement for device-level fault tolerance, the cost per network device is reduced, to some degree offsetting the requirement for more devices.
With carefully designed and implemented resiliency features, you can share the traffic load between the primary and secondary forwarding paths at the respective layers of the network topology (that is, the Building Access and Building Distribution submodules). Therefore, network-level redundancy can also provide increased aggregate performance and capacity.
You can configure redundant networks to fail over automatically from primary to secondary facilities without operator intervention. The duration of service interruption equals the time the failover takes; failover times as low as a few seconds are possible. Fast and Gigabit Ethernet channeling technologies allow grouping a number of Fast or Gigabit Ethernet links into fault-tolerant, high-speed bundles between network devices, with recovery times of a few milliseconds or better. Finally, as a data link layer feature, deterministic load distribution (DLD) adds reliability and predictable packet delivery while load balancing across multiple links.
Route Processor Redundancy
Route Processor Redundancy (RPR) is a high-availability feature for some Cisco switches and routers. In the event of a failure of the active Route Switch Processor (RSP), the system can reset and use a standby RSP. RPR reduces unplanned downtime and enables a quicker switchover between the active and standby RSPs after a fatal error on the active RSP. When you configure RPR, the standby RSP loads a Cisco IOS image upon bootup and initializes itself in standby mode (but the MSFC and PFC are not operational). In the event of a fatal error on the active RSP, the system switches to the standby RSP, which reinitializes itself as the active RSP, reloads all the line cards, and restarts the system; switchover takes 2 to 4 minutes. (Note that the 2- to 4-minute recovery is only possible without a core dump; if a core dump is performed, recovery might take up to XX minutes.)
NOTE
MSFC (Multilayer Switch Feature Card) is an optional supervisor daughter card for 6xxx Catalyst switches, and it provides routing and multilayer switching functionalities. PFC (Policy Feature Card) is also an optional supervisor daughter card for 6xxx Catalyst switches, and it adds support for access lists, quality of service (QoS), and accounting to the capabilities furnished by MSFC.
RPR+ allows a failover to occur without reloading the line cards. The standby route processor takes over without affecting other processes and subsystems; the switchover takes 30 to 60 seconds (if core dump upon failure is disabled). A minimal configuration sketch follows the list below. In addition, the RPR+ feature ensures that
The redundant processor is fully booted and the configuration is parsed (MSFC and PFC are operational).
The IOS running configuration is synchronized between active and standby route processors.
No link flaps occur during failover to the secondary route processor.
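For reference, here is a minimal sketch of enabling RPR+ on a Catalyst 6500 with redundant supervisors running Cisco IOS; command syntax varies by platform and software release, so treat this as illustrative rather than definitive.

redundancy
 mode rpr-plus
!
! Verify the standby processor state from the exec prompt:
! show redundancy states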
The Cisco Catalyst 6500 offers software redundancy features that include Dual Router Mode (DRM) and Single Router Mode (SRM). These features provide redundancy between MSFCs within the device.
Network Interface Card Redundancy
Dual-homing end systems is another option to consider. Most network interface cards (NICs) operate in an active-standby mode, with a mechanism for MAC address portability between them. During a failure, the standby NIC becomes active on the new access switch. Other end-system redundancy options include NICs operating in active-active mode, in which each host is reachable through multiple IP addresses. Table 5-2 contrasts various aspects of active-standby NIC redundancy with its active-active counterpart.
Table 5-2 Comparison Between NIC Redundancy Methods
Aspect | Active-Active | Active-Standby
Predictable Traffic Path | Many | One
Predictable Failover Behavior | More complex | Simple
Supportability | Complex | Simple
Ease of Troubleshooting | Complex | Simple
Performance | Marginally higher | Same as single switch
Scalability | Switch architecture dependent | Same as single switch
Either end-system redundancy mode requires more ports at the Building Access submodule. Active-active redundancy implies that two redundant switches in a high-availability pair concurrently load balance traffic to server farms. Because both switches are active, you can support the same virtual IP (VIP) address on each switch at the same time; this is known as a shared VIP address. However, the use of active-active schemes supporting shared VIP configurations is not recommended.
Active-standby redundancy implies an active switch and a standby switch. The standby switch does not forward or load balance any traffic; it participates only in the peering process that determines which switch is active and which is standby. The peering process is controlled by the redundancy protocol used by the content switches.
Options for Layer 3 Redundancy
HSRP and Virtual Router Redundancy Protocol (VRRP) enable a set of routers to work together to present the appearance of a single virtual router, or default gateway, to the hosts on a LAN. HSRP is a Cisco proprietary protocol and was introduced before its standards-based counterpart, VRRP. Router redundancy protocols allow one router to automatically and transparently assume the function of another router should that router fail.
HSRP is particularly useful in environments where critical applications are running and fault-tolerant networks have been designed. From among a group of routers (their interfaces, to be exact) configured to belong to a common HSRP group, one is elected as the active router and assumes responsibility for a virtual IP and MAC address. If this router (or its interface) fails, another router in the group (in fact, its interface) takes over the active router's role and becomes responsible for the virtual IP and MAC addresses. This enables hosts on a LAN to continue forwarding IP packets to a consistent IP and MAC address, making the changeover of routing devices transparent to them and their sessions.
Each router (its interface) participating in an HSRP group can be given a priority for the purpose of competing for the active or standby router role. Of the routers in each group, one is selected as the active forwarder and one as the standby router; the other routers in the group monitor the active and standby routers' status to provide further fault tolerance. All HSRP routers participating in a standby group watch for hello packets from the active and standby routers; from the active router, they all learn the hello and dead timers as well as the standby IP address to be shared. If the active router becomes unavailable because of an interface or link failure, scheduled maintenance, power failure, or other reasons, the standby router promptly takes over the virtual addresses and responsibility. An active router's failure is detected when its periodic hello packets fail to appear for a period equal to the dead interval (timer).
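As an illustration, the following is a minimal HSRP sketch for two routers sharing the virtual gateway address 10.1.10.1 on VLAN 10; the addresses, group number, and priority values are hypothetical.

! Router A: higher priority, intended active router
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Router B: default priority (100), standby router
interface Vlan10
 ip address 10.1.10.3 255.255.255.0
 standby 10 ip 10.1.10.1

Hosts on VLAN 10 use 10.1.10.1 as their default gateway; if Router A fails, Router B assumes the virtual IP and MAC addresses.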
Multigroup HSRP (MHSRP) is an extension of HSRP that allows a single router interface to belong to more than one hot standby group. MHSRP requires the use of Cisco IOS Software Release 10.3 or later and is supported only on routers that have special hardware that allows them to associate an Ethernet interface with multiple unicast MAC addresses, such as the Cisco 7000 series.
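To sketch the MHSRP idea, the following hypothetical configuration places one interface in two standby groups, so this router is active for group 10 and standby for group 20 (the peer router mirrors the priorities). Half the hosts would point at 10.1.10.1 and the other half at 10.1.10.254, sharing the load between the two routers.

interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
 standby 20 ip 10.1.10.254
 standby 20 priority 90
 standby 20 preempt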
VRRP defines a standard mechanism that enables a pair of redundant (1 + 1) devices on the network to negotiate ownership of a virtual IP address (and MAC address). The virtual address can, in fact, belong to one of the routers in the pair; in that case, the router whose IP address is used as the virtual address must, and will, become the active virtual router. If a third IP address is chosen, one device is elected active based on a configurable priority value and the other serves as the standby; if the active device fails, the backup takes over. One advantage of VRRP is that it is standards based; another is its simplicity. However, this scheme works only for n = 1 capacity and k = 1 redundancy; it does not scale above 1 + 1. RFC 2338 describes VRRP.
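A comparable VRRP sketch, with illustrative addressing, might look like the following; here the virtual address 10.1.20.1 is a third address not owned by either router.

! Master candidate (higher priority)
interface Vlan20
 ip address 10.1.20.2 255.255.255.0
 vrrp 20 ip 10.1.20.1
 vrrp 20 priority 110
!
! Backup (default priority 100) on the peer router
interface Vlan20
 ip address 10.1.20.3 255.255.255.0
 vrrp 20 ip 10.1.20.1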
In addition to HSRP and VRRP, Cisco IOS Software provides additional network redundancy features:
Fast routing protocol convergence with IS-IS, OSPF, or EIGRP: EIGRP provides superior convergence properties and operating efficiency, with Layer 3 load balancing and backup across redundant links and Cisco IOS devices to minimize congestion. OSPF and IS-IS, unlike EIGRP, are nonproprietary link-state routing protocols based on Dijkstra's Shortest Path First algorithm. Both support large-scale networks, hierarchical addressing and architectures, and classless interdomain routing, and both provide fast IP routing convergence.
EtherChannel technology: Uses multiple Fast or Gigabit Ethernet links to scale bandwidth between switches, routers, and servers (see the sketch following this list). Channeling a group of Ethernet ports also eliminates loops, simplifying the spanning-tree topology and reducing the number of STP blocking (discarding) ports.
Load sharing: Provided across equal-cost Layer 3 paths and across spanning trees (for Layer 2-based networks, through PVST+ or MST).
Cisco Express Forwarding (CEF): A topology-driven route-caching technology that, unlike its traffic-driven route-caching predecessors, does not need to perform multiple lookups and incurs less maintenance overhead. CEF is the main prerequisite feature for Multiprotocol Label Switching (MPLS).
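As referenced in the list above, here is a minimal EtherChannel sketch for a Cisco IOS switch; the interface numbers and the PAgP desirable mode are illustrative choices.

interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode desirable
! The two physical links now act as one logical interface, Port-channel 1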
NOTE
Gateway Load Balancing Protocol (GLBP) is a newer Cisco alternative to HSRP. The main advantage of GLBP over its predecessors (HSRP and VRRP) is its ease of configuration and its built-in capability for load sharing among the participating routers.
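A minimal GLBP sketch, with hypothetical addresses, follows; unlike HSRP, all routers in the group can forward traffic on behalf of the single virtual gateway address.

interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 glbp 10 ip 10.1.10.1
 glbp 10 priority 110
 glbp 10 load-balancing round-robin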
Redundancy and Spanning Tree Protocol
The Spanning Tree Protocol (STP) was designed to prevent loops. The Cisco spanning-tree implementation provides a separate spanning-tree domain for each VLAN; hence, it is called Per-VLAN Spanning Tree (PVST). PVST localizes bridge control traffic within each VLAN and supports configurations in which traffic between the access and distribution layers can be load balanced over redundant connections. Cisco supports PVST over both Inter-Switch Link (ISL) and 802.1Q trunks. Figure 5-3 depicts a campus model with Layer 2 access switches and multilayer distribution switches running Cisco PVST. One distribution switch is the root for the odd VLANs' spanning trees, and the other is the root for the even VLANs'. The distribution switches are multilayer switches and belong to a common HSRP group in each VLAN. On odd VLANs, one distribution switch is made the active HSRP router and the other is configured as the standby; the standby router on odd VLANs is configured as the active HSRP router on even VLANs, and vice versa.
Figure 5-3 PVST and HSRP in Campus Networks
ISL and 802.1Q VLAN tagging also play an important role in load sharing across redundant links. All the uplink connections between Building Access and Building Distribution switches are configured as trunks for all the access VLANs. Each uplink interface/port of an access switch is in the forwarding state for half of the VLANs and in the blocking (discarding) state for the other half; alternatively, a link might be forwarding for all VLANs (see Figure 5-3). If one of the uplinks or distribution switches fails, the other uplink starts forwarding the traffic of all VLANs. Workgroup servers might be connected with dual high-speed trunk connections to both distribution switches. (The servers, however, should not bridge traffic across their redundant links.) A configuration sketch follows.
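The following sketch shows how the STP root and the HSRP active router might be aligned on distribution switch A for an odd VLAN (VLAN 11 here is illustrative); switch B would mirror the configuration, acting as root and HSRP active for the even VLANs.

! Distribution switch A: spanning-tree root and HSRP active for odd VLANs
spanning-tree vlan 11 root primary
spanning-tree vlan 12 root secondary
!
interface Vlan11
 ip address 10.1.11.2 255.255.255.0
 standby 11 ip 10.1.11.1
 standby 11 priority 110
 standby 11 preempt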
Rapid Spanning Tree Protocol (RSTP), specified in IEEE 802.1w, supersedes the STP specified in 802.1D but remains compatible with it. RSTP converges significantly faster than traditional STP. RSTP's advantage is most evident when the inter-switch links are full-duplex (dedicated/point-to-point) and the access ports connecting to workstations are in PortFast mode. On segments where older spanning-tree bridge protocol data units (BPDUs) are seen, Cisco devices fall back to traditional STP.
Multiple Spanning Tree (MST), specified in IEEE 802.1s, allows you to map several VLANs to a reduced number of spanning-tree instances, because most networks do not need more than a few logical topologies. Figure 5-4 shows a topology with only two distinct final logical topologies, so only two spanning-tree instances are really necessary; there is no need to run 1000 instances. If you map each half of the 1000 VLANs to a different spanning-tree instance, as shown in the figure, the following is true:
The desired load-balancing scheme is realized, because each half of the VLANs follows a separate instance.
The CPU is spared by only computing two instances.
Figure 5-4 Multiple Spanning Tree Example
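The following is a minimal MST sketch for the kind of mapping shown in Figure 5-4, with an illustrative region name and VLAN split; every switch in an MST region must share the same name, revision, and instance-to-VLAN mapping.

spanning-tree mode mst
spanning-tree mst configuration
 name CAMPUS
 revision 1
 instance 1 vlan 1-500
 instance 2 vlan 501-1000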
From a technical standpoint, MST is the best solution. From the network engineer's perspective, the drawbacks associated with migrating to MST stem mainly from the fact that MST is a new protocol; the following issues arise:
The protocol is more complex than the traditional CST (or the Cisco PVST+) and requires additional training of the staff.
Interaction with legacy bridges is sometimes challenging.
PortFast and UplinkFast
STP (802.1D) was designed for robust, plug-and-play operation in bridged networks with arbitrary connectivity (loops) and almost unlimited flatness. To improve spanning-tree convergence, Cisco offers a number of features, including PortFast and UplinkFast.
PortFast is a feature that you can enable on Catalyst switch ports dedicated to connecting single servers or workstations. PortFast allows the switch port to begin forwarding as soon as the end system is connected, bypassing the listening and learning states and eliminating up to 30 seconds of delay before the end system can begin sending and receiving traffic. PortFast is used when an end system is initially connected to the network or when the primary link of a dual-homed end system or server is reactivated after a failover to the secondary link. Because only one station is connected to the segment, there is no risk of PortFast creating network loops.
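A minimal PortFast sketch follows; the interface number is illustrative, and the feature should be enabled only on ports connecting single end systems.

interface FastEthernet0/5
 ! End-station port; never enable PortFast on switch-to-switch links
 spanning-tree portfast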
In the event of a failure of a directly connected uplink between a Building Access switch and a Building Distribution switch, you can increase the speed of spanning-tree convergence by enabling the UplinkFast feature on the Building Access switch. With UplinkFast, each VLAN is configured with an uplink group of ports, including the root port (the primary forwarding path to the VLAN's designated root bridge) and one or more secondary ports that are blocked. When a direct uplink fails, UplinkFast unblocks the highest-priority secondary link and begins forwarding traffic without going through the spanning-tree listening and learning states. Bypassing listening and learning reduces the failover time after an uplink failure to approximately the BPDU hello interval (1 to 5 seconds); with the default configuration of standard STP, convergence after an uplink failure can take up to 30 seconds.
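UplinkFast is enabled globally on the Building Access switch; a minimal sketch follows.

! Enable on access switches only, not on distribution or backbone switches
spanning-tree uplinkfast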