MPLS Network Design Elements
We can say that three broad areas affect MPLS network design:
- Deployed services (service portfolio items sold to customers)
- Users’ special requirements (service level agreements)
- Specifics of the network (such as bandwidth restrictions)
The finished design uses the MPLS technologies to fulfill these requirements.
The "Generic" MPLS Core
The main assumption I’ll use is that the networks of interest have a generic MPLS-based core. This is made up of a large mix of switches and routers. Some of these devices may have been purchased as MPLS nodes, while others may be legacy ATM switches that have been upgraded to support MPLS.
The generic MPLS service provider core is illustrated in Figure 2, with the attached customers connected at its boundary. The magic of MPLS is that the protocol specifics of the traffic passed into the core are of no interest to the core devices. This is because all such traffic is tagged with MPLS labels, which are used to direct the traffic through the core. This is termed the separation between forwarding and control.
Figure 2 The MPLS network core.
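To make that forwarding/control separation concrete, here is a minimal toy sketch (not any vendor's implementation) of how a core label-switching router forwards purely on the incoming MPLS label, without ever inspecting the encapsulated payload. The label values and interface names are invented for the example.

```python
# Toy illustration of label switching in a core (P) device: the forwarding
# decision uses only the incoming MPLS label, never the payload protocol.
# Label values and interface names here are made up for the example.

LFIB = {
    # incoming label: (outgoing interface, outgoing label or None for pop)
    100: ("ge-0/0/1", 200),   # swap 100 -> 200, forward out ge-0/0/1
    101: ("ge-0/0/2", 201),   # swap 101 -> 201, forward out ge-0/0/2
    102: ("ge-0/0/3", None),  # penultimate-hop pop: strip the label
}

def forward(in_label: int, payload: bytes):
    """Look up the incoming label and return the forwarding action."""
    out_if, out_label = LFIB[in_label]
    if out_label is None:
        return out_if, payload                  # label popped
    return out_if, (out_label, payload)         # label swapped

print(forward(100, b"IPv4, IPv6 or L2 frame - the core doesn't care"))
```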
Remember the customer option of applying DiffServ marking to the traffic? Those markings are made in the IP packet header, in the Differentiated Services (DS) field illustrated in Figure 3. The DiffServ data is then carried in the MPLS label and used as the traffic passes through the network core.
Figure 3 DiffServ codepoint marking: Adding customer value.
The value in the DS field is then copied into the EXP field of the MPLS label, as shown in Figure 4. The MPLS-encapsulated packet is then pushed into the core by the PE device.
Figure 4 Copying the DiffServ setting into the MPLS EXP label field.
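As a rough illustration of the copy operation in Figure 4, the sketch below extracts the six-bit DS field from an IPv4 ToS byte, maps it to the three-bit EXP field, and packs a 32-bit MPLS label stack entry. The DSCP-to-EXP mapping shown (copying the three high-order DSCP bits) is a common convention rather than a mandated one, and the label and TTL values are made up.

```python
def dscp_from_tos(tos_byte: int) -> int:
    """The DS field is the six high-order bits of the old IPv4 ToS byte."""
    return (tos_byte >> 2) & 0x3F

def exp_from_dscp(dscp: int) -> int:
    """Common (but not universal) mapping: copy the three high-order DSCP
    bits (the class-selector/precedence bits) into the 3-bit EXP field."""
    return (dscp >> 3) & 0x07

def mpls_shim(label: int, exp: int, bottom_of_stack: bool, ttl: int) -> int:
    """Pack a 32-bit MPLS label stack entry:
    20-bit label | 3-bit EXP | 1-bit bottom-of-stack | 8-bit TTL."""
    return (label << 12) | (exp << 9) | (int(bottom_of_stack) << 8) | ttl

dscp = 46                                    # Expedited Forwarding (EF)
entry = mpls_shim(label=100, exp=exp_from_dscp(dscp), bottom_of_stack=True, ttl=64)
print(f"EXP={exp_from_dscp(dscp)}, shim=0x{entry:08x}")   # EXP=5, shim=0x00064b40
```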
What’s so great about a generic MPLS core? Why not just run many different technologies in the core? The answer to both questions is that a single core technology reduces operational and capital costs and facilitates a richer service portfolio. This principle applies on both the supply side (provider) and the demand side (network user): the provider needs less technology in the network, and the customer does not have to manage complex services such as VPNs; the customer view is basically a pipe that interconnects sites. In short, a generic MPLS core means that the provider reduces its network costs and the customer has easy access to the supported services.
The simple core depicted in Figures 1 and 2 contains all the MPLS protocols and technologies required to run the requisite customer services. The physical point where the customers and the provider network meet is called the edge. This is the boundary in the network where the customer traffic passes into and out of the network. In some cases, the service provider may manage the edge, or this task may be left to the customer. In either case, as shown earlier, the use of DiffServ technology allows value to be added to this important section of the network.
Network Requirements
The major requirement for a network is the service level agreement (SLA). The SLA and its fulfillment form the basis for judging the network! Let’s look at the elements of a typical SLA.
SLA Design
Customers use the service provider network as an integral part of their business processes. Along with the service, the customer can purchase an SLA that reflects the business criticality of the network services. If the SLA is not fulfilled, then the customer is due a rebate or service credits. So it’s in the interests of both parties to get the SLA right. The following list shows some generic SLA elements:
- Network availability. This can be on the order of 99.9%; the downtime arithmetic behind such a figure is sketched after this list.
- Mean time to repair (MTTR). The time required by the provider to fix a service-affecting problem; for instance, 10 hours MTTR.
- Round trip time. From CE to CE; for example, in Figure 1 from HQ Site 1 to Branch Office Site 2.
- PE-to-PE delivery ratio. The percentage of packets delivered across the core; equivalently, a limit on the percentage of dropped packets.
- Traffic engineering. Moving traffic across predetermined paths; for example, LSP A in Figure 1.
- Quality of Service (QoS). The available bandwidth in a particular path may be restricted; for instance, LSP A in Figure 1 might have 10 Mbps while LSP B has 15 Mbps.
- Failure recovery. The availability of alternative network paths in the event of failure. In Figure 1, LSP A could have a backup in the form of LSP B.
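As promised above, here is some rough arithmetic behind two of the SLA figures: the downtime budget implied by an availability percentage, and a packet delivery ratio. The thresholds are the illustrative values from the list; a real SLA would also spell out measurement windows and exclusions.

```python
# Rough arithmetic behind two of the SLA elements listed above.
MINUTES_PER_YEAR = 365 * 24 * 60

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Yearly downtime budget implied by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

def delivery_ratio(delivered: int, offered: int) -> float:
    """PE-to-PE delivery ratio as a percentage of offered packets."""
    return 100.0 * delivered / offered

print(f"{allowed_downtime_minutes(99.9):.0f} minutes/year")    # ~526
print(f"{delivery_ratio(998_750, 1_000_000):.2f}% delivered")  # 99.88%
```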
An interesting aspect of these design considerations is that they overlap with each other. The traffic engineering design can provide the basis for the failure recovery design; likewise, the traffic engineering design is likely to be constrained by the QoS considerations.
Network Protocols
The configuration of the protocols is one of the key steps in the MPLS design. Typically, this configuration is carried out using network management software. I haven’t yet mentioned the important area of modeling. This is a software-assisted discipline that gives network designers the freedom to build virtual networks (such as the one in Figure 1) and then carry out what-if analysis, such as loading up LSP A with traffic and seeing what happens to the SLA. Similar exercises can be carried out to determine failure recovery, perhaps by breaking one of the links traversed by LSP A.
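The following toy sketch mimics the kind of what-if check a modeling tool automates: load an LSP with traffic and see whether any link on its path becomes oversubscribed. The topology, link capacities, and LSP path are invented for the example and only loosely echo Figure 1.

```python
# Toy what-if analysis: add an LSP's demand to each link it crosses and
# report any link whose capacity is exceeded. All figures are invented.

link_capacity_mbps = {("PE1", "P1"): 100, ("P1", "P2"): 50, ("P2", "PE2"): 100}
link_load_mbps = {link: 0.0 for link in link_capacity_mbps}

def load_lsp(path, demand_mbps):
    """Add an LSP's demand to every link on its path, then report violations."""
    for hop in zip(path, path[1:]):
        link_load_mbps[hop] += demand_mbps
    return [hop for hop in zip(path, path[1:])
            if link_load_mbps[hop] > link_capacity_mbps[hop]]

# "LSP A" carrying 60 Mbps from PE1 to PE2
violations = load_lsp(["PE1", "P1", "P2", "PE2"], 60)
print(violations)   # [('P1', 'P2')] - the 50 Mbps link is oversubscribed
```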
Protocols must be configured on all of the devices in Figure 1, for example:
- The CE devices might run OSPF, IS-IS, and/or BGP.
- The PE devices might run OSPF, IS-IS, and/or BGP, plus RSVP-TE and/or LDP.
- The P devices might run OSPF, IS-IS, and/or BGP, plus RSVP-TE and/or LDP.
OSPF, IS-IS, and BGP are routing protocols. OSPF and IS-IS have been extended to include traffic engineering information (such as available link bandwidth). This information can then be used by MPLS signaling protocols, such as RSVP-TE, to create LSPs such as those in Figure 1.
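To show how that traffic engineering information gets used, here is a minimal constrained-path computation of the sort an RSVP-TE head end performs: prune the links whose advertised available bandwidth is below the LSP's demand, then run a shortest-path search over what remains. The topology, metrics, and bandwidth figures are hypothetical.

```python
# Minimal constrained-path (CSPF-style) computation: filter links by the
# bandwidth advertised via the OSPF-TE/IS-IS-TE extensions, then Dijkstra.
# The topology and bandwidth figures are invented for the example.
import heapq

# link: (available bandwidth in Mbps, metric)
links = {
    ("PE1", "P1"): (80, 10), ("P1", "PE2"): (40, 10),
    ("P1", "P2"): (90, 10), ("P2", "PE2"): (90, 10),
}

def cspf(src, dst, demand_mbps):
    """Shortest path over the subgraph of links with enough spare bandwidth."""
    graph = {}
    for (a, b), (bw, metric) in links.items():
        if bw >= demand_mbps:
            graph.setdefault(a, []).append((b, metric))
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metric in graph.get(node, []):
            heapq.heappush(queue, (cost + metric, nxt, path + [nxt]))
    return None

# A 60 Mbps LSP cannot use the 40 Mbps P1-PE2 link, so it is routed via P2.
print(cspf("PE1", "PE2", 60))   # (30, ['PE1', 'P1', 'P2', 'PE2'])
```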
An important MPLS protocol issue is that of label distribution. This is normally performed automatically by a signaling protocol such as LDP or RSVP-TE, though labels can be assigned manually if required.
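For a feel of what automatic label distribution involves, the sketch below shows a downstream LSR allocating a local label for a forwarding equivalence class (here an IP prefix) and producing the binding it would advertise to an upstream neighbor. The prefix and label numbers are made up; label values 0 through 15 are reserved in MPLS, so allocation starts at 16.

```python
# Toy view of a downstream label binding of the sort LDP advertises
# automatically: allocate a local label per FEC (here an IP prefix) and
# hand the binding to the upstream neighbor. Values are made up.
from itertools import count

_labels = count(start=16)            # labels 0-15 are reserved in MPLS

def allocate_binding(fec_prefix: str) -> dict:
    """Allocate a local label for a FEC and return the advertisement."""
    return {"fec": fec_prefix, "label": next(_labels)}

advert = allocate_binding("10.1.2.0/24")
print(advert)    # {'fec': '10.1.2.0/24', 'label': 16}
```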