Introducing MPLS
The Multiprotocol Label Switching (MPLS) working group was established in early 1997, with its first meeting held at the 38th IETF in April of that year. The goal of the working group was to standardize protocols that use label-swapping forwarding techniques to support unicast and multicast routing. Although support for multiple network-layer protocols was feasible, much of the effort focused on IP.
At the time, few people realized that the rapid advances in semiconductor component technologies would soon make it possible to perform wire speed IPv4 routing without resorting to label-swapping techniques. As a result, the initial aim of MPLS was quickly rendered irrelevant.
However, label-swapping techniques have a powerful advantage: they separate the routing problem from the forwarding problem. Routing is a global networking problem that requires the cooperation of all participating routers. Forwarding, on the other hand, is a local problem; each router or switch decides entirely on its own which output path a packet takes.
Another advantage MPLS offers is the reintroduction of connection state into IP data flows. Connection state is imperative for the application of various policies. For example, if a certain bandwidth guarantee needs to be provided to a traffic flow, the router must remember how much bandwidth that flow has already used. A stateless protocol cannot provide such a service because it forgets everything about a packet once that packet has been processed.
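The sketch below (Python, purely illustrative) shows the kind of per-connection state such a policy implies: a token bucket kept per LSP, whose token count and refill timestamp must survive from one packet to the next. The class name, field names, and the per-label table are hypothetical, not taken from any particular implementation.

```python
import time

class TokenBucket:
    """Illustrative per-LSP bandwidth accounting: a token bucket whose
    state (token count, last refill time) must persist between packets."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes     # maximum burst size in bytes
        self.tokens = burst_bytes       # current credit
        self.last = time.monotonic()

    def allow(self, packet_len: int) -> bool:
        """Return True if the packet conforms to the configured rate."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True
        return False

# Hypothetical use: one bucket of state per label-switched path.
guarantees = {1001: TokenBucket(rate_bps=2_000_000, burst_bytes=16_000)}
```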
Quality of Service (QoS)
It was logical to apply MPLS technology to the QoS problem in IP networks. Indeed, several Internet drafts were developed to support QoS in MPLS-based networks. The 3-bit experimental field in the MPLS shim header is reused to support differentiated services, and the Resource Reservation Protocol (RSVP) is extended to signal the establishment of a label-switched path (LSP) with a defined traffic specification.
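For reference, each MPLS shim header entry is a single 32-bit word: a 20-bit label, the 3-bit experimental field, a 1-bit bottom-of-stack flag, and an 8-bit TTL. The minimal Python sketch below packs and unpacks that layout; the function names and example values are illustrative only.

```python
import struct

def pack_shim(label: int, exp: int, s: int, ttl: int) -> bytes:
    """Encode one 32-bit MPLS shim header entry:
    20-bit label | 3-bit experimental field | 1-bit bottom-of-stack | 8-bit TTL."""
    word = (label & 0xFFFFF) << 12 | (exp & 0x7) << 9 | (s & 0x1) << 8 | (ttl & 0xFF)
    return struct.pack("!I", word)

def unpack_shim(data: bytes) -> dict:
    (word,) = struct.unpack("!I", data[:4])
    return {
        "label": word >> 12,
        "exp":   (word >> 9) & 0x7,   # the 3 bits reused for differentiated services
        "s":     (word >> 8) & 0x1,
        "ttl":   word & 0xFF,
    }

# Example: label 1001, EXP class 5, bottom of stack, TTL 64.
hdr = pack_shim(1001, 5, 1, 64)
assert unpack_shim(hdr)["label"] == 1001
```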
However, although MPLS provides an LSP on which QoS policy can be applied, MPLS alone cannot guarantee any quality of service; additional mechanisms are still required.
First and foremost, the router/switch must have a sophisticated queuing strategy that processes each packet according to its QoS requirements (a toy sketch of class-based queuing appears below). Implementing a set of queuing strategies that can provide all kinds of QoS guarantees on an ultra-high-speed router/switch is not trivial, and has yet to be fully realized in today's high-speed routers/switches.
Second, to guarantee QoS requires the cooperation of all routers/switches along the transit path of the packet. If one router along the transit path cannot guarantee the QoS for the packet, all the guarantees from other routers are wasted. This aspect makes the full support of QoS especially difficult when the packet traverses multiple domains with different administrations.
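To make the first point concrete, the toy Python sketch below implements the simplest possible class-based queuing discipline: strict priority across per-class FIFOs, one queue per EXP value. Real router queuing combines this with weighted fair queuing, policing, and shaping; the class and method names here are purely illustrative.

```python
from collections import deque

class PriorityScheduler:
    """Toy strict-priority scheduler: one FIFO per traffic class,
    always serving the highest-priority non-empty queue."""

    def __init__(self, num_classes: int = 8):   # e.g., one queue per 3-bit EXP value
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, traffic_class: int):
        self.queues[traffic_class].append(packet)

    def dequeue(self):
        for q in reversed(self.queues):          # class 7 is served before class 0
            if q:
                return q.popleft()
        return None
```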
For these reasons, the support of fine-grained QoS in the IP network has not materialized. Most service providers today offer only the simplest service level agreement (SLA) to their customers: the average transit delay between various nodes within the network.
Traffic Engineering
The departure from the destination-based routing paradigm also gives rise to another important application: traffic engineering. Traffic engineering seeks to control the traffic flow and network resources within a communications network, so that a predefined objective can be met. Usually the objective is to optimize the utilization of resources. Traffic engineering can also include QoS, when the predefined objective is the guarantee of QoS for certain traffic.
Traffic engineering has long been performed in the voice telephone network; traffic patterns are carefully analyzed and the network itself is engineered to accommodate the anticipated voice call traffic. The famous Erlang Formula was developed in 1917 by A. K. Erlang, and is still applied to the telephone network today.
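For reference, the Erlang B formula gives the probability that a call is blocked when an offered load of E erlangs is presented to m circuits, and it can be evaluated with a simple recurrence. The short Python sketch below computes it; the figures in the example are illustrative.

```python
def erlang_b(offered_load: float, circuits: int) -> float:
    """Erlang B blocking probability for `offered_load` erlangs on `circuits`
    trunks, computed with the standard numerically stable recurrence."""
    b = 1.0
    for k in range(1, circuits + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

# Example: 10 erlangs of offered voice traffic on 15 circuits.
print(f"Blocking probability: {erlang_b(10.0, 15):.4f}")
```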
However, the well-established traffic engineering theories for voice networks are no longer valid for Internet traffic. For all practical purposes, Internet traffic doesn't follow a well-defined pattern. It is bursty and "fractal": regardless of the scale in time and space, the traffic pattern exhibits similar behavior.
Such dynamic behavior makes the task of traffic engineering for the Internet difficult. In fact, adjusting network capacity to fit a certain traffic pattern may well change the traffic pattern itself, because TCP by nature adapts dynamically to network conditions.
Because Internet routing is topology-driven, traditional Internet traffic engineering relies on Layer 2 circuits (ATM, frame relay) to create a virtual topology, with the virtual topology designed to accommodate the traffic pattern within the network. When MPLS is applied, the Layer 2 circuits are replaced by LSPs. A set of protocols and tools is designed to measure traffic within the LSPs and provide feedback to the routing mechanisms so that traffic can be adjusted.
The advantage of MPLS with respect to traditional Layer 2 traffic engineering approaches lies in its closer integration with the IP networking stack. For example, the OSPF extensions for traffic engineering are designed with the MPLS LSP in mind. Nevertheless, it's important to keep in mind that MPLS doesn't compensate for insufficient network capacity or imprudent topology design. After all, the virtual topology is still bounded by the physical topology of the network.
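A common way to choose LSP paths against such a virtual topology is constrained shortest path first (CSPF): prune links that cannot carry the LSP's bandwidth demand, then run an ordinary shortest-path computation over what remains. The Python sketch below illustrates the idea under assumed data structures; the link-attribute names are hypothetical and not any router's actual traffic engineering database format.

```python
import heapq

def cspf(links, src, dst, demand_mbps):
    """Constrained SPF sketch: drop links whose unreserved bandwidth is below
    the LSP's demand, then run ordinary Dijkstra on the remaining topology.
    `links` maps (node_a, node_b) -> {"metric": int, "avail_mbps": float}."""
    adj = {}
    for (a, b), attrs in links.items():
        if attrs["avail_mbps"] >= demand_mbps:       # constraint pruning
            adj.setdefault(a, []).append((b, attrs["metric"]))
            adj.setdefault(b, []).append((a, attrs["metric"]))

    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, metric in adj.get(node, []):
            nd = d + metric
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))

    if dst not in dist:
        return None                                  # no feasible path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```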
VPN
Yet another important application that could make use of the MPLS technology is virtual private networks (VPNs). The addition of a "shim" header in front of the native IP packet satisfies an important requirement for VPNs: the separation of traffic belonging to different VPNs. In addition, the establishment of LSP tunnels satisfies another criterion: the formation of a virtual topology.
Various approaches for MPLS-based VPNs were proposed in the IETF shortly after the establishment of the working group. Best known among them is the RFC 2547 approach, in which multiprotocol BGP is used as the principal mechanism for managing VPN routing and membership information. Separate approaches using virtual routers were proposed later. Currently, there is a proliferation of proposals on how to use MPLS to establish both Layer 3 and Layer 2 VPNs across service provider networks. A new working group, Provider Provisioned VPN (PPVPN), was established in early 2001 to coordinate these efforts.
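In the RFC 2547 approach, overlapping customer address spaces are kept distinct by prepending an 8-byte route distinguisher (RD) to each IPv4 prefix before it is carried in multiprotocol BGP. The small Python sketch below builds such a VPN-IPv4 prefix for a type-0 RD; the autonomous system and assigned numbers in the example are made up.

```python
import socket
import struct

def vpn_ipv4_prefix(asn: int, assigned: int, prefix: str) -> bytes:
    """Build a VPN-IPv4 address in the spirit of RFC 2547: an 8-byte route
    distinguisher (type 0: 2-byte type, 2-byte ASN, 4-byte assigned number)
    prepended to the 4-byte IPv4 prefix, so identical customer prefixes
    from different VPNs remain distinct routes."""
    rd = struct.pack("!HHI", 0, asn, assigned)
    return rd + socket.inet_aton(prefix)

# Two customers both using 10.1.0.0 stay distinguishable:
a = vpn_ipv4_prefix(65000, 1, "10.1.0.0")
b = vpn_ipv4_prefix(65000, 2, "10.1.0.0")
assert a != b
```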
The principal advantage of using MPLS over Layer 2 circuits is, again, its close integration with the IP networking stack. In addition, MPLS can run over many different Layer 2 technologies, so VPN provisioning and management need to deal only with MPLS rather than with multiple Layer 2 technologies at the same time. One drawback of MPLS is its lack of data authentication and encryption capabilities.
It's also interesting to observe that neither traffic engineering nor VPN support was among the stated objectives of the IETF MPLS working group.
Generalized MPLS
A further departure from the original stated goal of MPLS is the creation of generalized MPLS (GMPLS). Here, the MPLS control plane protocols are generalized to control many types of switches, in particular optical cross-connect switches. When the optical cross-connect is accomplished by wavelength switching, this is sometimes called Multiprotocol Lambda Switching. To support the different switching methods, the label definition is expanded to include time slots, wavelengths, and port numbers, and the signaling protocols (RSVP and CR-LDP) are extended accordingly. The idea behind GMPLS is to produce a unified control mechanism for many kinds of data-switching methods.
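A minimal sketch of that generalization, in Python: a label becomes a tagged value whose interpretation depends on the switching capability of the device, whether a packet label, a TDM time slot, a wavelength, or a whole port. The type names and fields below are illustrative, not the actual GMPLS encodings.

```python
from dataclasses import dataclass
from enum import Enum, auto

class LabelKind(Enum):
    """Switching methods a generalized label can describe (illustrative subset)."""
    PACKET     = auto()   # 20-bit MPLS label
    TIMESLOT   = auto()   # TDM channel, e.g., a SONET/SDH time slot
    WAVELENGTH = auto()   # a lambda in an optical cross-connect
    PORT       = auto()   # an entire fiber or port

@dataclass
class GeneralizedLabel:
    """Sketch of the idea behind the generalized label: one signaling object
    whose meaning depends on the switching capability of the device."""
    kind: LabelKind
    value: int

# The same control plane can then signal very different data planes:
lsp_hops = [
    GeneralizedLabel(LabelKind.PACKET, 1001),
    GeneralizedLabel(LabelKind.WAVELENGTH, 1550),   # e.g., a 1550 nm lambda
    GeneralizedLabel(LabelKind.PORT, 3),
]
```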