- MPLS TE Introduction
- Basic Operation of MPLS TE
- DiffServ-Aware Traffic Engineering
- Fast Reroute
- Summary
- References
DiffServ-Aware Traffic Engineering
MPLS DS-TE enables per-class TE across an MPLS network. DS-TE provides more granular control to minimize network congestion and improve network performance. DS-TE retains the overall operational framework of MPLS TE (link information distribution, path computation, signaling, and traffic selection). However, it introduces extensions to support multiple classes and to make per-class constraint-based routing possible. These enhancements help control the proportion of traffic of different classes on network links. RFC 4124 defines the DS-TE protocol extensions.
Both DS-TE and DiffServ control the per-class bandwidth allocation on network links. DS-TE acts as a control-plane mechanism, while DiffServ acts in the forwarding plane. In general, the configuration in both planes will have a close relationship. However, they do not have to be identical. They can use a different number of classes and different relative bandwidth allocations to satisfy the requirements of particular network designs. Figure 2-5 shows an example of bandwidth allocation in DiffServ and DS-TE for a particular link. In this case, the link rate equals the maximum reservable bandwidth for TE. Each class receives a fraction of the total bandwidth amount in the control and forwarding planes. However, the bandwidth proportions between classes differ slightly in this case.
Figure 2-5 Bandwidth Allocation in DiffServ and DS-TE
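To make the relationship between the two planes concrete, the following Python sketch contrasts a control-plane (DS-TE) and a forwarding-plane (DiffServ) bandwidth allocation for one link. The link rate, class names, and percentages are illustrative assumptions in the spirit of Figure 2-5, not values taken from the figure.

```python
# Hypothetical per-class bandwidth allocation for one link, in the spirit of
# Figure 2-5. The link rate, class names, and percentages are assumptions.

LINK_RATE_MBPS = 1000            # forwarding-plane link rate
MAX_RESERVABLE_MBPS = 1000       # control-plane maximum reservable bandwidth (equal here)

# DS-TE (control plane): reservable bandwidth share per Class-Type
ds_te_share = {"CT2": 0.20, "CT1": 0.30, "CT0": 0.50}

# DiffServ (forwarding plane): scheduler bandwidth share per PHB scheduling class
diffserv_share = {"EF": 0.25, "AF1": 0.30, "BE": 0.45}

for ct, share in ds_te_share.items():
    print(f"{ct}: {share * MAX_RESERVABLE_MBPS:.0f} Mbps reservable (DS-TE, control plane)")
for psc, share in diffserv_share.items():
    print(f"{psc}: {share * LINK_RATE_MBPS:.0f} Mbps scheduled (DiffServ, forwarding plane)")
```

The per-class proportions in the two planes are related but intentionally not identical, which is the point the figure illustrates.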
Class-Types and TE-Classes
DS-TE uses the concept of Class-Type (CT) for the purposes of link bandwidth allocation, constraint-based routing, and admission control. A network can use up to eight CTs (CT0 through CT7). DS-TE retains support for TE LSP preemption, which can operate within a CT or across CTs. TE LSPs can have different preemption priorities regardless of their CT. CTs represent the concept of a class for DS-TE in a similar way that per-hop behavior (PHB) scheduling class (PSC) represents it for DiffServ. Note that flexible mappings between CTs and PSCs are possible. You can define a one-to-one mapping between CTs and PSCs. Alternatively, a CT can map to several PSCs, or several CTs can map to one PSC.
DS-TE provides flexible definition of preemption priorities while retaining the same mechanism for distribution of unreserved bandwidth on network links. DS-TE redefines the meaning of the unreserved bandwidth attribute discussed in the section "Link Information Distribution" without modifying its format. When DS-TE is in use, this attribute represents the unreserved bandwidth for eight TE-Classes. A TE-Class defines a combination of a CT and a corresponding preemption priority value. A network can use any 8 such combinations (TE-Classes) out of the 64 possible ones (8 CTs times 8 priorities). No relative ordering exists between TE-Classes, and a network can define fewer than 8 of them. However, the TE-Class definitions must be consistent across the DS-TE network.
Tables 2-3 through 2-6 include examples of four different TE-Class definitions:
Table 2-3. TE-Class Definition Backward Compatible with Aggregate MPLS TE
| TE-Class | CT | Priority |
| --- | --- | --- |
| 0 | 0 | 0 |
| 1 | 0 | 1 |
| 2 | 0 | 2 |
| 3 | 0 | 3 |
| 4 | 0 | 4 |
| 5 | 0 | 5 |
| 6 | 0 | 6 |
| 7 | 0 | 7 |
Table 2-4. TE-Class Definition with Four CTs and Eight Preemption Priorities

| TE-Class | CT | Priority |
| --- | --- | --- |
| 0 | 0 | 7 |
| 1 | 0 | 6 |
| 2 | 1 | 5 |
| 3 | 1 | 4 |
| 4 | 2 | 3 |
| 5 | 2 | 2 |
| 6 | 3 | 1 |
| 7 | 3 | 0 |
Table 2-5. TE-Class Definition with Two CTs and Two Preemption Priorities
| TE-Class | CT | Priority |
| --- | --- | --- |
| 0 | 0 | 7 |
| 1 | 1 | 7 |
| 2 | Unused | Unused |
| 3 | Unused | Unused |
| 4 | 0 | 0 |
| 5 | 1 | 0 |
| 6 | Unused | Unused |
| 7 | Unused | Unused |
- Table 2-3 illustrates a TE-Class definition that is backward compatible with aggregate MPLS TE. In this example, all TE-Classes support only CT0, with 8 different preemption priorities ranging from 0 through 7.
- Table 2-4 presents a second example in which the TE-Class definition uses four CTs (CT0 through CT3), each with two preemption priority levels, so that all eight priority levels (0 through 7) are in use. This definition makes preemption possible within CTs but not across CTs.
- Table 2-5 contains a TE-Class definition with two CTs (CT0 and CT1) and two preemption priority levels (0 and 7). This third example defines some TE-Classes as unused. With this design, preemption is possible within and across CTs, but you can signal CT1 TE LSPs (using priority 0) that no other TE LSP can preempt. (The sketch that follows this list encodes this definition.)
- Table 2-6 shows a fourth example that also uses two CTs (CT0 and CT1) but keeps all eight preemption priority levels (0 through 7) in use, with each CT taking four of the eight levels.
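The following Python sketch (an illustration only; the data structures and names are assumptions, not part of any DS-TE implementation or RFC) encodes the TE-Class definition of Table 2-5. It shows how the eight-entry unreserved bandwidth attribute is indexed by TE-Class rather than by preemption priority alone, and how a (CT, priority) combination that does not appear in the definition cannot be signaled.

```python
# Illustrative sketch (names and structure are assumptions, not a real DS-TE API).
# TE-Class definition from Table 2-5: (CT, preemption priority) pairs; None = unused.
TE_CLASSES = [
    (0, 7),   # TE-Class 0
    (1, 7),   # TE-Class 1
    None,     # TE-Class 2 (unused)
    None,     # TE-Class 3 (unused)
    (0, 0),   # TE-Class 4
    (1, 0),   # TE-Class 5
    None,     # TE-Class 6 (unused)
    None,     # TE-Class 7 (unused)
]

# The IGP unreserved-bandwidth attribute keeps its eight slots, but each slot now
# refers to a TE-Class instead of a preemption priority (values in Mbps, made up).
unreserved_mbps = [400, 300, 0, 0, 250, 150, 0, 0]


def te_class_index(ct: int, priority: int):
    """Return the TE-Class index for a (CT, priority) pair, or None if not signalable."""
    for index, entry in enumerate(TE_CLASSES):
        if entry == (ct, priority):
            return index
    return None


# A CT1 TE LSP at priority 0 maps to TE-Class 5; a CT1 LSP at priority 3 is rejected.
for ct, prio in [(1, 0), (1, 3)]:
    idx = te_class_index(ct, prio)
    if idx is None:
        print(f"CT{ct}, priority {prio}: not a configured TE-Class, cannot be signaled")
    else:
        print(f"CT{ct}, priority {prio}: TE-Class {idx}, "
              f"{unreserved_mbps[idx]} Mbps unreserved")
```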
DS-TE introduces a new CLASSTYPE RSVP object. This object specifies the CT associated with the TE LSP and can take a value ranging from 1 to 7. DS-TE nodes must support this new object and include it in Path messages, with the exception of CT0 TE LSPs. The Path messages associated with those LSPs must not use the CLASSTYPE object to allow non-DS-TE nodes to interoperate with DS-TE nodes. Table 2-7 summarizes the CLASSTYPE object.
Table 2-6. TE-Class Definition with Two CTs and Eight Preemption Priorities
| TE-Class | CT | Priority |
| --- | --- | --- |
| 0 | 0 | 7 |
| 1 | 1 | 6 |
| 2 | 0 | 5 |
| 3 | 1 | 4 |
| 4 | 0 | 3 |
| 5 | 1 | 2 |
| 6 | 0 | 1 |
| 7 | 1 | 0 |
Table 2-7. New RSVP Object for DS-TE
| RSVP Object | RSVP Message | Function |
| --- | --- | --- |
| CLASSTYPE | Path | CT associated with the TE LSP. Not used for CT0 for backward compatibility with non-DS-TE nodes. |
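The following Python fragment is a minimal, hypothetical sketch of the rule that Table 2-7 summarizes: a Path message carries the CLASSTYPE object only for CT1 through CT7 and omits it for CT0. The object list is illustrative and does not model a complete RSVP-TE message.

```python
# Minimal sketch of the CLASSTYPE signaling rule (data structures are assumptions,
# not an RSVP-TE implementation).

def path_message_objects(ct: int) -> list:
    """Return an illustrative list of object names for a Path message of a TE LSP in CT ct."""
    objects = ["SESSION", "SENDER_TEMPLATE", "SENDER_TSPEC", "LABEL_REQUEST",
               "SESSION_ATTRIBUTE", "EXPLICIT_ROUTE"]
    if ct != 0:
        # CLASSTYPE carries a value of 1 through 7; CT0 LSPs omit the object so that
        # non-DS-TE nodes can still process the Path message.
        objects.append(f"CLASSTYPE={ct}")
    return objects


print(path_message_objects(0))   # no CLASSTYPE object: backward compatible
print(path_message_objects(2))   # includes CLASSTYPE=2
```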
Bandwidth Constraints
A set of bandwidth constraints (BC) defines the rules that a node uses to allocate bandwidth to different CTs. Each link in the DS-TE network has a set of BCs that applies to the CTs in use. This set may contain up to eight BCs. When a node using DS-TE admits a new TE LSP on a link, that node uses the BC rules to update the amount of unreserved bandwidth for each TE-Class. One or more BCs may apply to a CT depending on the model.
DS-TE can support different BC models. The IETF has primarily defined two BC models: maximum allocation model (MAM) and Russian dolls model (RDM). These are discussed in the following subsections of this chapter.
DS-TE also defines a BC extension for IGP link advertisements. This extension complements the link attributes that Table 2-1 already described and applies equally to OSPF and IS-IS. Network nodes do not need this BC information to perform path computation. They rely on the unreserved bandwidth information for that purpose. However, they can optionally use it to verify DS-TE configuration consistency throughout the network or as a path computation heuristic (for instance, as a tie breaker for CSPF). A DS-TE deployment could use different BC models throughout the network. However, the simultaneous use of different models increases operational complexity and can adversely impact bandwidth optimization. Table 2-8 summarizes the BC link attribute that DS-TE uses.
Table 2-8. Optional BC Link Attribute Distributed for DS-TE
| Link Attribute | Description |
| --- | --- |
| BCs | BC model ID and BCs (BC0 through BCn) that the link uses for DS-TE |
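As noted above, nodes can optionally use the advertised BC information to verify DS-TE configuration consistency. The following sketch shows one such check over hypothetical advertisements; the data layout and link names are assumptions and do not model the actual IGP encodings.

```python
# Minimal sketch of a consistency check over advertised BC link attributes
# (the data layout is an assumption; real IGP TLV formats are not modeled here).

advertised = {
    "link-1": {"model": "RDM", "bc": [100, 70, 30]},
    "link-2": {"model": "RDM", "bc": [100, 70, 30]},
    "link-3": {"model": "MAM", "bc": [15, 50, 10]},
}

models = {attrs["model"] for attrs in advertised.values()}
if len(models) > 1:
    # Mixing BC models is allowed but adds operational complexity and can hurt
    # bandwidth optimization, so flag it.
    print(f"Warning: multiple BC models in use: {sorted(models)}")
else:
    print(f"Consistent BC model across links: {models.pop()}")
```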
Maximum Allocation Model
The MAM defines a one-to-one relationship between BCs and Class-Types. BCn defines the maximum amount of reservable bandwidth for CTn, as Table 2-9 shows. The use of preemption does not affect the amount of bandwidth that a CT receives. MAM offers limited bandwidth sharing between CTs: a CT cannot make use of the bandwidth left unused by another CT. Bandwidth sharing instead relies on the packet schedulers that manage congestion in the forwarding plane. To improve bandwidth sharing with MAM, you may make the sum of all BCs greater than the maximum reservable bandwidth. However, the total reserved bandwidth for all CTs cannot exceed the maximum reservable bandwidth at any time. RFC 4125 defines MAM.
Table 2-9. MAM Bandwidth Constraints for Eight CTs
| Bandwidth Constraint | Maximum Bandwidth Allocation For |
| --- | --- |
| BC7 | CT7 |
| BC6 | CT6 |
| BC5 | CT5 |
| BC4 | CT4 |
| BC3 | CT3 |
| BC2 | CT2 |
| BC1 | CT1 |
| BC0 | CT0 |
Figure 2-6 shows an example of a set of BCs using MAM. This DS-TE configuration uses three CTs with their corresponding BCs. In this case, BC0 limits CT0 bandwidth to 15 percent of the maximum reservable bandwidth. BC1 limits CT1 to 50 percent, and BC2 limits CT2 to 10 percent. The sum of BCs on this link is less than its maximum reservable bandwidth. Each CT will always receive its bandwidth share without the need for preemption. Preemption will not have an effect on the bandwidth that a CT can use. This predictability comes at the cost of no bandwidth sharing between CTs. The lack of bandwidth sharing can force some TE LSPs to follow longer paths than necessary.
Figure 2-6 MAM Constraint Model Example
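The following Python sketch applies the MAM rules to the constraints of Figure 2-6. The 100-Mbps maximum reservable bandwidth is an assumed value (the figure only specifies percentages), and preemption is not modeled.

```python
# MAM bandwidth accounting sketch using the Figure 2-6 constraints.
# The 100-Mbps maximum reservable bandwidth is an assumption.

MAX_RESERVABLE = 100.0                     # Mbps
BC = {0: 0.15 * MAX_RESERVABLE,            # BC0 caps CT0 at 15 percent
      1: 0.50 * MAX_RESERVABLE,            # BC1 caps CT1 at 50 percent
      2: 0.10 * MAX_RESERVABLE}            # BC2 caps CT2 at 10 percent

reserved = {0: 0.0, 1: 0.0, 2: 0.0}        # bandwidth currently reserved per CT


def admit_mam(ct: int, mbps: float) -> bool:
    """Admit a TE LSP of class-type ct if both MAM conditions hold."""
    within_bc = reserved[ct] + mbps <= BC[ct]                      # per-CT constraint
    within_link = sum(reserved.values()) + mbps <= MAX_RESERVABLE  # aggregate constraint
    if within_bc and within_link:
        reserved[ct] += mbps
        return True
    return False


print(admit_mam(1, 40))   # True: CT1 stays under its 50-Mbps BC1
print(admit_mam(1, 20))   # False: would push CT1 to 60 Mbps, above BC1
print(admit_mam(0, 20))   # False: CT0 cannot borrow CT1's unused bandwidth under MAM
```

The last call illustrates the lack of sharing: CT0 is rejected even though plenty of link bandwidth remains unreserved.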
Russian Dolls Model
The RDM defines a cumulative set of constraints that group CTs. The highest BC defines the maximum bandwidth allocation for the highest CT alone, and each successively lower BC defines the total bandwidth allocation for its CT plus all higher CTs. BC0 always defines the maximum bandwidth allocation across all CTs and equals the maximum reservable bandwidth of the link.
Table 2-10 shows the RDM BCs for a DS-TE implementation with eight CTs. The recursive definition of BCs improves bandwidth sharing between CTs. A particular CT can benefit from bandwidth left unused by higher CTs. A DS-TE network using RDM can rely on TE LSP preemption to guarantee that each CT gets a fair share of the bandwidth. RFC 4127 defines RDM.
Table 2-10. RDM Bandwidth Constraints for Eight CTs

| Bandwidth Constraint | Maximum Bandwidth Allocation For |
| --- | --- |
| BC7 | CT7 |
| BC6 | CT7+CT6 |
| BC5 | CT7+CT6+CT5 |
| BC4 | CT7+CT6+CT5+CT4 |
| BC3 | CT7+CT6+CT5+CT4+CT3 |
| BC2 | CT7+CT6+CT5+CT4+CT3+CT2 |
| BC1 | CT7+CT6+CT5+CT4+CT3+CT2+CT1 |
| BC0 = Maximum reservable bandwidth | CT7+CT6+CT5+CT4+CT3+CT2+CT1+CT0 |
Figure 2-7 shows an example of a set of BCs using RDM. This DS-TE implementation uses three CTs with their corresponding BCs. In this case, BC2 limits CT2 to 30 percent of the maximum reservable bandwidth. BC1 limits CT2+CT1 to 70 percent. BC0 limits CT2+CT1+CT0 to 100 percent of the maximum reservable bandwidth, as is always the case with RDM. CT0 can use up to 100 percent of the bandwidth in the absence of CT2 and CT1 TE LSPs. Similarly, CT1 can use up to 70 percent of the bandwidth in the absence of TE LSPs of the other two CTs. CT2 is always limited to 30 percent, even when no CT0 or CT1 TE LSPs exist. The maximum bandwidth that a CT receives on a particular link depends on the previously signaled TE LSPs, their CTs, and the preemption priorities of all TE LSPs. Table 2-11 compares MAM and RDM.
Figure 2-7 RDM Constraint Model Example
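For comparison, the following sketch applies the RDM rules to the constraints of Figure 2-7, again assuming a 100-Mbps maximum reservable bandwidth (an assumption; the figure gives percentages) and ignoring preemption. A reservation in a given CT must fit within its own BC and within every lower ("outer") BC down to BC0.

```python
# RDM bandwidth accounting sketch using the Figure 2-7 constraints.
# The 100-Mbps maximum reservable bandwidth is an assumption; preemption is not modeled.

MAX_RESERVABLE = 100.0                     # Mbps
BC = {0: 1.00 * MAX_RESERVABLE,            # BC0: CT2+CT1+CT0, always the max reservable
      1: 0.70 * MAX_RESERVABLE,            # BC1: CT2+CT1
      2: 0.30 * MAX_RESERVABLE}            # BC2: CT2 alone

reserved = {0: 0.0, 1: 0.0, 2: 0.0}        # bandwidth currently reserved per CT


def admit_rdm(ct: int, mbps: float) -> bool:
    """Admit a TE LSP of class-type ct if every 'Russian doll' it falls inside still fits."""
    for b in range(ct + 1):                       # constraints BC0 .. BCct apply
        inner = sum(bw for c, bw in reserved.items() if c >= b)
        if inner + mbps > BC[b]:
            return False
    reserved[ct] += mbps
    return True


print(admit_rdm(1, 70))   # True: CT1 alone may use up to 70 Mbps (BC1) when CT2 is idle
print(admit_rdm(2, 20))   # False: CT2+CT1 would reach 90 Mbps, above BC1's 70 Mbps
print(admit_rdm(0, 30))   # True: CT0 fills the remaining headroom under BC0
```

Unlike the MAM sketch, CT1 here benefits from bandwidth left unused by CT2, which is the sharing behavior the nested constraints provide.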
Table 2-11. Comparing MAM and RDM BC Models
| MAM | RDM |
| --- | --- |
| One BC per CT. | One or more CTs per BC. |
| Sum of all BCs may exceed the maximum reservable bandwidth. | BC0 always equals the maximum reservable bandwidth. |
| Preemption not required to provide bandwidth guarantees per CT. | Preemption required to provide bandwidth guarantees per CT. |
| Bandwidth efficiency and protection against QoS degradation are mutually exclusive. | Provides bandwidth efficiency and protection against QoS degradation simultaneously. |