3.12 Availability
In this section we review the basic concept of system availability. In the design of the book, the overall material on availability is split between this section, which gives a generic introductory treatment, and Chapter 8, where the topic of availability is developed further in the specific context of determining the availability of paths through mesh-restorable networks.
To appreciate how availability differs from reliability, notice the "mission-oriented" nature of the reliability definition above. Reliability is concerned with how likely it is that the system will operate for a certain time without a service-affecting failure. There is no presumption of external repair or maintenance to recover from failures. A pump in the shuttle booster engine must run perfectly for three minutes during launch; there is no chance of repair, so the most relevant measure is reliability: what is the probability of operating flawlessly for three minutes?
This is a different orientation than that required to characterize continuously operating systems which are subject to repair when failures occur. An Internet user does not care, for instance, when the ISP had its first server or router failure, nor is he directly concerned with how often such failures occur. If such failures are promptly repaired, the user will find the probability of being able to get access at any time high enough to be satisfactory. In other words, availability can be high even if failures are frequent. Availability is concerned with the steady-state probability of finding the system in an operating state at any time we want its service. We are not concerned with whether its history of operation has been flawless up to that point.
Availability is the probability of the system being found in the operating state at some time t in the future given that the system started in the operating state at time t=0. Failures and down states occur but maintenance or repair actions always return the system to an operating state. [BiAl92]
Note that finding a system in the up state at time t1 is quite different from requiring or expecting that it has stayed continuously in the operating state from t = 0 to t1. When considering the availability of repairable systems, a statistical equilibrium is reached between the failure arrival process and the repair process, each characterized by its respective rate, resulting in a fraction of total time that is "up" time. The fraction of all time that is up time is the system availability, more particularly the steady-state availability. In general a system is biased toward being in the up state for a short time after a known start in that state. With time, however, failures arise, are repaired, and the system reaches its steady-state equilibrium. Figure 3-17 illustrates these relationships for the case of a single repairable component with constant failure and repair rates. It is not the case that reliability is undefined for a repairable system: it continues to be the probability that the system operates failure-free for the interval [0,t]. Whether or not the system is repaired at the point of first failure makes a difference only to the availability. Without repair, reliability and availability are identical and both trend monotonically to zero with time. In practice it is by far the steady-state availability that is of interest. Nonetheless, touching on the concepts of time-dependent and steady-state availability helps clarify the nature of the phenomenon and its relationship to the reliability of the same system.
Figure 3-17. Relationship between reliability and steady-state and time-dependent availability for a single (non-redundant) repairable component or system.
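As a supplementary illustration of the curves in Figure 3-17 (this is the standard two-state Markov result for a single repairable element, stated here using the failure rate λ and repair rate μ defined later in this section), the time-dependent availability of an element that starts in the up state at t = 0 is

A(t) = μ/(λ + μ) + [λ/(λ + μ)] e^{-(λ+μ)t}

which begins at A(0) = 1 and settles to the steady-state value μ/(λ + μ), whereas the reliability R(t) = e^{-λt} of the same element continues monotonically toward zero.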
Fundamentally, the steady-state (as opposed to time-dependent) availability is

A = lim (total up time within Tobs) / Tobs   as Tobs → ∞

where Tobs is the total observation time. Henceforth, we refer to this simply as the system availability. It is the proportion of time that a system is providing its intended service or function, observed in the limit as the observation time increases toward infinity.
The most widely known equation for this limit is based on the intuitive picture of the life of a repairable system as a succession of cycles around a loop of operating-failure-repair-operating states. If we assume that each repair episode restores the system to its fully nominal operating condition, the time t in the reliability function is effectively reset to t = 0. Consequently the expected time to the next failure is the MTTF (Equation 3.15): on average, another failure occurs at t = MTTF following each repair. This is followed by another repair time whose average duration we denote MTTR. Thus the long-term life of a repairable system is comprised of repeated cycles as shown in Figure 3-18. The actual times to failure and actual repair times in each individual cycle are random variables, but for the limiting case we need to know only their averages. In other words, the failure-repair-operating cycle is not as regular and repeatably timed as the diagram might suggest; rather, it is the conceptual limiting average failure cycle. Once this mental image of the "failure cycle" is obtained, it is easy to remember or derive the most widely used expression for availability whenever it is needed. This is:

A = MTTF / (MTTF + MTTR)     (Equation 3.18)
Figure 3-18. Time-line illustration of the failure cycle of a repairable maintained system.
Often the term MTBF ("mean time between failures") appears in this expression instead of MTTF. As mentioned in the footnote on page 155, this rarely makes a significant numerical difference, but conceptually MTTF is the correct form. If repair times are a significant fraction of average operating times, however, the distinction can become important.
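To make the distinction concrete (using the usual convention that MTBF is the mean time between successive failures of a repairable system, a terminological assumption rather than a statement from the text):

MTBF = MTTF + MTTR

so that A = MTTF/(MTTF + MTTR) = MTTF/MTBF. When MTTR << MTTF the two quantities are numerically almost identical, but writing A = MTBF/(MTBF + MTTR) slightly overstates the availability when repair times are not negligible.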
Note that Equation 3.18 depends only on the means of the time-to-failure and repair-time distributions, regardless of their form, as long as the system is statistically stationary (expectations and other moments are not time-varying). Under these conditions Equation 3.18 is also equivalent to

A = μ / (λ + μ)

where μ = 1/MTTR is the "repair rate" and λ = 1/MTTF is the "failure rate."
3.12.1 Concept of "Unavailability"
The probabilistic complement of availability A is the unavailability U:

U = 1 - A     (Equation 3.20)
In much of the availability analysis for communication networks, we work with unavailability quantities or expressions because of some simplifying numerical assumptions which we will now examine. These assumptions often make the analysis or modeling of complex systems feasible, with acceptable numerical accuracy, in cases where the exact availability model would be intractably complex. The most enabling simplification is the concept of adding unavailabilities, instead of multiplying availabilities, for elements in series. As long as we are considering subsystems or components that each have relatively high absolute availability, then from Equation 3.18 and Equation 3.20 it follows that

U = 1 - A = MTTR / (MTTF + MTTR)

from which, for the many practical cases of interest in which MTTF » MTTR (e.g., years versus hours is typical), we can write

U ≈ MTTR / MTTF = λ · MTTR

In other words, unavailability is approximated as simply the repair time multiplied by the frequency of failure, that is, by the failure rate expressed in the appropriate inverse-time units.
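As a hypothetical worked example of this approximation (the numbers here are purely illustrative): a circuit pack with MTTF = 10 years ≈ 87,660 hours and MTTR = 4 hours has

U ≈ MTTR/MTTF = 4 / 87,660 ≈ 4.6 × 10^-5

i.e., roughly 24 minutes of expected downtime per year; the exact value MTTR/(MTTF + MTTR) differs only in the fifth significant figure.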
FITs
The FIT is an internationally used unit for measuring or specifying failure rates. Because individual components or subsystems are generally highly reliable in their own right, the convention has arisen of using a period of 10^9 hours as the time scale on which to quantify failure rates (or, conversely, MTTFs):

1 FIT = 1 failure per 10^9 hours

Thus,

MTTF (hours) = 10^9 / (failure rate in FITs)

gives the MTTF in hours if the FIT rate is known, and

U ≈ λ · MTTR = (FITs × MTTR (hours)) / 10^9

is the unavailability if the MTTR is given in hours and the failure rate is given in FITs. The following examples give a feel for some typical failure rates, MTTRs, and common "constants" involved in typical communication network unavailability analyses:
- 1 year = 8766 hours
- 1 failure/year = 114,155 FITs
- 1 FIT = 1 failure in 114,155 years
- Typical FITs for a logic circuit pack of medium complexity = 1500, i.e., MTTF = 76 years
- FITs for an optical Tx circuit pack = 10,867 => MTTF = 10.5 years
- FITs for an optical receiver circuit pack = 4311
- "Three nines" availability: U = 10^-3 -> A = 0.999 => 8.76 hours per year of outage
- "Five nines" availability: U = 10^-5 -> A = 0.99999 => 5.26 minutes per year of outage
- Typical cable cutting (failure) rate = 4.39 cuts/year/1000 sheath-miles => approximately 500 FITs/mile
- Typical cable physical repair MTTR = 14.4 hours
- Typical plug-replacement equipment MTTR = 2 hours
Note that while failure rates have "units" of FITs, A and U are inherently dimensionless as both are just time fractions or probabilities of finding the system up or down, respectively.
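These conversions are simple enough to script. The following minimal Python sketch (the function names and the use of 8766 hours/year are this example's own choices, not taken from the text) reproduces a few of the figures listed above:

```python
# Conversions between FITs, MTTF and unavailability, following the relations above.

HOURS_PER_YEAR = 8766.0   # 365.25 days x 24 hours, as in the list above
FIT_HOURS = 1e9           # 1 FIT = 1 failure per 10^9 hours

def fits_to_mttf_hours(fits: float) -> float:
    """MTTF in hours for a failure rate given in FITs."""
    return FIT_HOURS / fits

def unavailability(fits: float, mttr_hours: float) -> float:
    """Approximate unavailability U ~ lambda * MTTR, valid when MTTF >> MTTR."""
    return fits * mttr_hours / FIT_HOURS

if __name__ == "__main__":
    # Logic circuit pack of medium complexity: 1500 FITs -> MTTF of about 76 years
    print(fits_to_mttf_hours(1500) / HOURS_PER_YEAR)   # ~76

    # Optical Tx circuit pack (10,867 FITs) with a 2-hour plug-replacement MTTR
    print(unavailability(10_867, 2.0))                 # ~2.2e-5

    # Annual downtime implied by "three nines" (U = 1e-3)
    print(1e-3 * HOURS_PER_YEAR)                       # ~8.8 hours per year
```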
3.12.2 Series Unavailability Relationships
If a system is comprised of n components (or subsystems) in a "series" availability relationship then (like the proverbial Christmas tree lights) all components must be operating for the system to be available. Figure 0-1 shows the availability block diagram for elements in a pure series relationship. For elements in series the overall reliability function becomes

R_series(t) = R_1(t) × R_2(t) × ... × R_n(t)
Figure 0-1. Elements in a series availability relationship.
and the exact forms for the availability and unavailability are

A_series = A_1 × A_2 × ... × A_n = (1 - U_1)(1 - U_2)···(1 - U_n)

U_series = 1 - (1 - U_1)(1 - U_2)···(1 - U_n)
As the term "series" implies, the reliability reflects the product of probabilities that any one might fail by time t, and the availability of the whole requires that every series element must also be available.
Adding Series Unavailabilities
Let us now show why, as mentioned above, one can numerically approximate Equation 3.27 by a corresponding sum of unavailability values. Specifically, the assertion is that:

A_series = A_1 × A_2 × ... × A_N ≈ 1 - (U_1 + U_2 + ... + U_N)     (Equation 3.28)

where A_i is the availability of the ith of the N elements in the series relationship, and U_i is the corresponding unavailability, 1 - A_i. A simple example with two elements in series shows the basis of this useful numerical approximation. Consider two identical elements A and B in a series availability relationship, each with elemental unavailability U. If we assume A and B fail independently, the exact result is A_series = P(A up) · P(B up) = (1 - U)(1 - U) = 1 - 2U + U^2. In contrast, Equation 3.28 gives A_series = 1 - 2U. Thus, in "adding unavailabilities" of two elements in series we are ignoring only the square of an already small number, U^2. Moreover, the error in the approximation is conservative in an engineering sense: to the extent we err numerically, it is in the direction of underestimating the actual availability. The typically high accuracy of this approximation is illustrated in [Free96b] with an example of six elements in series with U_i ranging from 10^-5 to 10^-3. The accuracy of the approximation is better than 0.5%, which Freeman notes is "typically far more precise than the estimates of element A_i's" themselves.
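A quick numerical check of this behavior is easy to run. The sketch below uses made-up element unavailabilities (not the values from [Free96b]) to compare the exact product of availabilities with the sum-of-unavailabilities approximation:

```python
from math import prod

# Hypothetical unavailabilities for six elements in series
U = [1e-5, 5e-5, 1e-4, 2e-4, 5e-4, 1e-3]

A_exact = prod(1.0 - u for u in U)    # availabilities multiply (exact, independent failures)
A_approx = 1.0 - sum(U)               # "adding unavailabilities" (Equation 3.28)

print(A_exact)             # ~0.9981411
print(A_approx)            # 0.99814
print(A_exact - A_approx)  # ~1.1e-6: the approximation slightly underestimates availability
```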
3.12.3 Parallel Unavailability Relationships
To say that n elements or subsystems are arranged in a parallel availability relationship means that only one of n has to be working for the system to be available. As long as one element is working, the system as a whole is providing its intended service or function. Figure 3-19 shows the availability block diagram for elements in a pure parallel relationship.
Figure 3-19. Elements in a parallel availability relationship.
For elements in parallel the overall reliability function becomes

R_parallel(t) = 1 - (1 - R_1(t))(1 - R_2(t))···(1 - R_n(t))
and the exact form for the unavailability is

U_parallel = U_1 × U_2 × ... × U_n
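For instance (an illustrative calculation rather than a value from the text), two independent parallel elements, each with U_i = 10^-3, give U_parallel = 10^-3 × 10^-3 = 10^-6, i.e., roughly half a minute of expected outage per year, compared with about 8.8 hours per year for either element on its own.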
In summary, two useful basics for simplified availability analysis using the unavailability orientation are:

- Unavailabilities add for elements in series. This is an approximation (but a numerically conservative one) that is quite accurate for U_i << 1.

- Unavailabilities multiply for elements in parallel. This is exact for independently failing elements.
3.12.4 Series-Parallel Reductions
The first step in more detailed evaluation of system availability is often to apply repeated series-parallel reductions to the availability block diagram of the system. This involves repeated application of the two main rules: unavailabilities add in series, unavailabilities multiply in parallel. For relatively simple problems, a suitable series of series-parallel reduction steps can completely solve the problem of computing system availability. Figure 3-20 shows an example of this type. As a convenient shorthand in Figure 3-20 we denote element unavailabilities simply by the element numbers and "A + B" means the addition of the unavailabilities of blocks A and B. Similarly the notation "A || B" means the unavailability of element A in parallel with element B, i.e., the product of their unavailabilities. In the example, three stages of reduction lead directly to a simple algebraic expression for the system unavailability as a function of the elemental unavailabilities. The problem is simple because there are no cross-coupling paths in the availability model. The approach to calculation of availability for more complex system availability block diagrams is facilitated by first introducing the topic of network reliability.
Figure 3-20. Example of series-parallel reductions.
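The two reduction rules are also easy to capture in code. The short Python sketch below applies them to a hypothetical block diagram with assumed element unavailabilities; it is not the specific example of Figure 3-20, only an illustration of the reduction process:

```python
from math import prod

def series(*unavailabilities: float) -> float:
    """Elements in series: unavailabilities add (conservative approximation for Ui << 1)."""
    return sum(unavailabilities)

def parallel(*unavailabilities: float) -> float:
    """Elements in parallel: unavailabilities multiply (exact for independent failures)."""
    return prod(unavailabilities)

# Hypothetical element unavailabilities
U1, U2, U3, U4, U5 = 1e-4, 2e-4, 5e-4, 5e-4, 1e-3

# Element 1 in series with two parallel branches: (2 then 3) and (4 then 5)
U_system = series(U1, parallel(series(U2, U3), series(U4, U5)))
print(U_system)   # 1e-4 + (7e-4 * 1.5e-3) = 1.0105e-4
```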