Java Patterns for MPLS Network Management, Part 1
- MPLS Nuts and Bolts
- MPLS Network Management
- FCAPS
- SNMP
- MIBs
- NMS
- Java Patterns and MPLS Network Management
- Conclusion
- References
The networking industry often reminds me of the 1980s pre-IBM PC software sector—you can have anything as long as it’s a proprietary solution! Characterized by many competing vendors, the networking industry now labors under the burden of non-standard, multivendor architectures. This is seen in service provider and enterprise networks in the form of an overly rich mix of software and hardware cobbled together to provide a growing range of services. Traditional service revenues are shrinking even as demand for bandwidth and for new real-time services grows.
Cisco Systems is emerging as the dominant vendor, but its products remain de facto rather than open standards. The lack of a standard platform is complicating the migration to converged IP-based networks. As in the 1980s software industry, the need is for a convergence technology that provides a standard platform (just as the IBM PC and the DOS operating system did back then).
Today, MPLS has moved beyond the hype and is still a good candidate for providing such a platform; MPLS is being deployed worldwide by hundreds of service providers. So, why is MPLS so special compared to its predecessors ATM and Frame Relay (FR)? In a nutshell, ATM and FR have scalability problems and they don’t provide easy integration with IP. MPLS succeeds by leveraging proven IP protocols and separating control and forwarding into distinct components.
Componentizing control and forwarding means that the former can be made arbitrarily complex without compromising the packet forwarding mechanism. The control component can perform complex algorithms on incoming IP traffic, such as queue assignment and path selection, while leaving the forwarding component untouched. This separation means that forwarding can be performed in hardware if required. Let’s now take the dime tour of MPLS.
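To make the separation concrete, here is a minimal Java sketch (the names ControlComponent, ForwardingComponent, and SimpleLsr are my own illustrative choices, not a standard API): the control interface installs label bindings, while the forwarding interface does nothing but look them up and swap labels.

import java.util.HashMap;
import java.util.Map;

/** Next-hop entry installed by the control component. */
class LabelEntry {
    final int outLabel;
    final String outInterface;

    LabelEntry(int outLabel, String outInterface) {
        this.outLabel = outLabel;
        this.outInterface = outInterface;
    }
}

/** Control component: runs routing/signaling logic and installs label bindings. */
interface ControlComponent {
    void installBinding(int inLabel, LabelEntry entry);
}

/** Forwarding component: a constant-time label swap, simple enough for hardware. */
interface ForwardingComponent {
    LabelEntry swap(int inLabel);
}

/** One node playing both roles, but keeping them behind separate interfaces. */
class SimpleLsr implements ControlComponent, ForwardingComponent {
    private final Map<Integer, LabelEntry> labelTable = new HashMap<>();

    public void installBinding(int inLabel, LabelEntry entry) {
        labelTable.put(inLabel, entry);
    }

    public LabelEntry swap(int inLabel) {
        return labelTable.get(inLabel);
    }
}

Because the forwarding interface is so small, its implementation can be swapped for a hardware-assisted one without touching the control logic.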
MPLS Nuts and Bolts
MPLS provides the following major elements:
- A virtual circuit-based model (rather than IP’s hop-by-hop forwarding); the virtual circuits are called label switched paths (LSPs). One of the Java patterns I use illustrates virtual circuits (see the sketch after this list).
- Nodes that understand IP and MPLS are typically called label edge routers (LERs). LERs encapsulate traffic from the outer domain. This traffic can be either layer 2 (Ethernet, ATM, FR, etc.) or layer 3 (IP).
- Core nodes inside the MPLS domain are called label switching routers (LSRs).
- Traffic engineering (TE) allows traffic to be explicitly directed through the core.
- Quality of service (QoS) allows resource reservation for different traffic types—e.g., bandwidth, queues, colors, etc. IP offers just one QoS level: Best Effort.
- A migration path from legacy technologies, such as ATM and FR.
- Differentiated Services allows specific traffic to enjoy better service—e.g., real-time voice packets versus email packets.
- Deployment of IP-based services, such as layer 2 and layer 3 VPNs.
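Since the virtual-circuit model underpins most of what follows, here is an illustrative Java class (my own sketch, not a production or standard type) that models an LSP as a named virtual circuit: an ordered list of hops plus an optional bandwidth reservation, where zero means Best Effort.

import java.util.Collections;
import java.util.List;

/** Illustrative model of an LSP: a named virtual circuit with an ordered hop list. */
public class LabelSwitchedPath {
    private final String name;
    private final List<String> hops;     // ingress interface first, egress interface last
    private long reservedBitsPerSecond;  // 0 means no reservation, i.e., Best Effort

    public LabelSwitchedPath(String name, List<String> hops) {
        this.name = name;
        this.hops = Collections.unmodifiableList(hops);
    }

    public void reserveBandwidth(long bitsPerSecond) {
        this.reservedBitsPerSecond = bitsPerSecond;
    }

    public String getName() { return name; }
    public List<String> getHops() { return hops; }
    public long getReservedBandwidth() { return reservedBitsPerSecond; }
}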
We’ll see most of these in the following discussion. Figure 1 illustrates a corporate HQ with a remote branch office interconnected by a service provider network. The HQ site enterprise architecture supports a range of applications, including voice-over-IP (VoIP), video-over-IP, email, etc. Access to these applications is available over the MPLS-based service provider network.
Figure 1 illustrates two LSPs (LSP 1 and LSP 2). Both LSPs have been configured with explicit route objects (EROs): LSP 1 follows the path made up of the interfaces { d, e, f, g, h, i } on nodes { LER A, LSR A, LSR B, LER B }.
LSP 2 follows the path made up of the interfaces { c, j, k, l } on nodes { LER A, LSR C, LER B }. Typically, the above interfaces would be recorded as IP addresses (e.g., d = 10.81.1.1)—I use symbols just for simplicity. Selecting paths that optimize network resource utilization in advance of circuit creation is called traffic engineering. One of the Java patterns I’ll use illustrates TE.
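As a first taste of the TE pattern, the following sketch (building on the illustrative LabelSwitchedPath class above; the hop symbols are taken straight from Figure 1 and stand in for interface IP addresses) records the two explicit routes:

import java.util.Arrays;

public class TrafficEngineeringDemo {
    public static void main(String[] args) {
        // Explicit routes from Figure 1; symbols stand in for interface IP addresses.
        LabelSwitchedPath lsp1 = new LabelSwitchedPath("LSP 1",
                Arrays.asList("d", "e", "f", "g", "h", "i"));
        LabelSwitchedPath lsp2 = new LabelSwitchedPath("LSP 2",
                Arrays.asList("c", "j", "k", "l"));

        System.out.println(lsp1.getName() + " explicit route: " + lsp1.getHops());
        System.out.println(lsp2.getName() + " explicit route: " + lsp2.getHops());
    }
}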
Figure 1 Multisite enterprise using an IP/MPLS service provider.
LSP 1 has also been configured to reserve 2Mbps (i.e., 2 million bits/second) of bandwidth along its path, in a process called QoS provisioning. This means that the real-time VoIP and video-over-IP traffic can be MPLS-encapsulated and pushed onto this path. LSP 1 terminates on LER B, where any MPLS information is stripped from the packets. At this point, a normal IP lookup occurs, and the real-time traffic is forwarded to either the adjacent transit service provider or the branch office via CE2.
LSP 2 has no bandwidth resources reserved; it offers a Best Effort (or standard IP) QoS. This LSP is used to forward the SMTP (email) traffic across the core to LER B. Again, at LER B, the MPLS information is stripped away and a normal IP lookup occurs. The traffic is then forwarded to CE Router 2 toward the branch office site.
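Again using the illustrative LabelSwitchedPath class, here is a sketch of the QoS provisioning just described: LSP 1 reserves 2Mbps for the real-time traffic, LSP 2 is left at Best Effort, and each traffic type is mapped onto the appropriate LSP.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class QosProvisioningDemo {
    public static void main(String[] args) {
        LabelSwitchedPath lsp1 = new LabelSwitchedPath("LSP 1",
                Arrays.asList("d", "e", "f", "g", "h", "i"));
        LabelSwitchedPath lsp2 = new LabelSwitchedPath("LSP 2",
                Arrays.asList("c", "j", "k", "l"));

        // LSP 1 carries the real-time traffic and reserves 2Mbps; LSP 2 stays Best Effort.
        lsp1.reserveBandwidth(2_000_000L);

        // Map traffic classes onto the two LSPs, as described in the text.
        Map<String, LabelSwitchedPath> trafficToLsp = new HashMap<>();
        trafficToLsp.put("VoIP", lsp1);
        trafficToLsp.put("video-over-IP", lsp1);
        trafficToLsp.put("SMTP", lsp2);

        for (Map.Entry<String, LabelSwitchedPath> entry : trafficToLsp.entrySet()) {
            LabelSwitchedPath lsp = entry.getValue();
            System.out.println(entry.getKey() + " -> " + lsp.getName()
                    + " (reserved " + lsp.getReservedBandwidth() + " bits/s)");
        }
    }
}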
Figure 1 illustrates three different types of nodes: customer edge (CE), provider edge (PE), and provider core (P). CEs reside on the customer premises and can be basic IP routers. PEs reside at the edge or point of ingress of the provider network, and function as an on-ramp to the MPLS core. Ps are found inside the core and may be basic ATM/FR switches that are running MPLS protocols.
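For later reference, the three node types can be captured in a simple enum (again, an illustrative sketch of my own rather than a standard type):

/** The three node roles of Figure 1, as described in the text. */
public enum NodeRole {
    CE("Customer edge: on the customer premises; can be a basic IP router"),
    PE("Provider edge: the ingress of the provider network, an on-ramp to the MPLS core"),
    P("Provider core: inside the core; may be a basic ATM/FR switch running MPLS protocols");

    private final String description;

    NodeRole(String description) {
        this.description = description;
    }

    public String getDescription() {
        return description;
    }
}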
A major strength of MPLS is that it uses proven IP protocols to replace existing legacy technologies, such as ATM and Frame Relay. Network management (NM) is a key element of this evolution.