- Bringing the Managed Data to the Code
- Scalability: Today's Network Is Tomorrow's NE
- MIB Note: Scalability
- Light Reading Trials
- Large NEs
- Expensive (and Scarce) Development Skill Sets
- Linked Overviews
- Elements of NMS Development
- Expensive (and Scarce) Operational Skill Sets
- MPLS: Second Chunk
- MPLS and Scalability
- Summary
MPLS and Scalability
In the standard MPLS MIBs, the tunnels on a given NE reside in the mplsTunnelTable. Figure 3-6 illustrates an extract from the MPLS Traffic Engineering MIB [IETF-TE-MPLS].
Figure 3-6. The MPLS tunnel table.
Figure 3-6 shows the objects contained in the mplsTunnelTable. The mplsTunnelTable is made up of instances of MplsTunnelEntry, as indicated by arrow 1 in Figure 3-6.
Each object in an entry can be seen as a column in a row of the table; for example, mplsTunnelIndex can be considered a key value (in the relational database sense). This is depicted in Table 3-1, where some of the columns are arranged and assigned specific values. The exact meanings of the entries in Table 3-1 are explained in Chapter 8. For the moment, a short description is given.
Table 3-1. MPLS Tunnel Table Excerpt
| mplsTunnelIndex | mplsTunnelHopTableIndex | mplsTunnelIngressLSRId | mplsTunnelName |
|---|---|---|---|
| 1 | 1 | LER A | TETunnel_1 |
| 2 | 1 | LER A | TETunnel_2 |
| 3 | 1 | LER A | TETunnel_3 |
| 5 | 1 | LER A | TETunnel_5 |
The column mplsTunnelIndex provides a unique key value for each tunnel on the node in question, starting at 1 and increasing with each entry added to the table (tunnel instances are described in Chapter 8). The column mplsTunnelHopTableIndex provides an index into a hop table that describes the path taken through the MPLS cloud by the tunnel. The column mplsTunnelIngressLSRId designates the ingress node for the tunnel and has the value LER A for all the tunnels listed in Table 3-1. In practice this column would most likely hold an IP address or a router ID, but a name is used here for simplicity. The column mplsTunnelName is simply a text descriptor for the tunnel. One notable feature of Table 3-1 is that there is no entry for index 4. This can occur when the user deletes the fourth entry: the remaining table entries are not moved up to fill the gap.
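The layout of Table 3-1, including the gap left at index 4 after a deletion, can be sketched as follows. This is an illustrative model only (plain Python data structures, not real SNMP access code); the field names are shorthand for the MIB columns.

```python
# Model of the Table 3-1 excerpt: rows keyed by mplsTunnelIndex.
# Index 4 is absent because that tunnel was deleted; the remaining
# rows keep their original index values.
tunnel_table = {
    1: {"hop_table_index": 1, "ingress_lsr_id": "LER A", "name": "TETunnel_1"},
    2: {"hop_table_index": 1, "ingress_lsr_id": "LER A", "name": "TETunnel_2"},
    3: {"hop_table_index": 1, "ingress_lsr_id": "LER A", "name": "TETunnel_3"},
    5: {"hop_table_index": 1, "ingress_lsr_id": "LER A", "name": "TETunnel_5"},
}

def existing_indexes(table):
    """Return the sorted mplsTunnelIndex values actually present."""
    return sorted(table)

def index_gaps(table):
    """Return index values missing between the lowest and highest keys."""
    keys = sorted(table)
    return [i for i in range(keys[0], keys[-1] + 1) if i not in table]

print(existing_indexes(tunnel_table))  # [1, 2, 3, 5]
print(index_gaps(tunnel_table))        # [4]
```

A manager walking this table must therefore iterate over the index values that actually exist, rather than assuming a contiguous range.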
This table can typically include millions of rows (as mentioned earlier in the Light Reading Trials). Let's assume that each row is roughly 300 bytes in size. The overall size of the mplsTunnelTable for an SNMP agent containing 3 million LSPs is then 3,000,000 * 300 bytes, or roughly 900MB. This would assume a network containing possibly tens or hundreds of thousands of MPLS nodes. It is not practical to try to read or write an object of this size using SNMP. Unfortunately, such an operation might be necessary if a network is being initially commissioned or rebalanced after adding new hardware. Also, many NMS provide a connection discovery feature that must retrieve all virtual circuits (ATM/MPLS) from the network and figure out details for each circuit, such as traffic resource allocations and links traversed.

One way to improve scalability is to indicate in the MIB which objects have changed. One scheme would provide a second table, a tunnel-change table, linked to the tunnel table. The tunnel-change table could hold a summarized boolean entry for each block of tunnel table entries. Let's say we have 1,000,000 tunnels and we assign a block size of 10,000 to the tunnel-change table. Any change in the first 10,000 tunnels is then reflected in the first entry in the change table, any change in the next 10,000 tunnels is reflected in the second entry, and so on. With a block size of 10,000 we would have 100 entries in the change table, covering 100 * 10,000 = 1,000,000 tunnels. The NMS could then consult the change table to see which blocks in the tunnel table have changed. This would help avoid the problem of reading back all the tunnel table entries.
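The block arithmetic behind the tunnel-change table can be sketched as below. This is a hedged illustration of the scheme just described, not a standard MIB structure; the function names and the in-memory boolean list are assumptions made for the example.

```python
# Sketch of the tunnel-change table scheme: 1,000,000 tunnels split
# into blocks of 10,000, giving 100 boolean change-table entries.
# The NMS re-reads only the blocks whose entry is flagged True.

BLOCK_SIZE = 10_000
NUM_TUNNELS = 1_000_000
NUM_BLOCKS = NUM_TUNNELS // BLOCK_SIZE  # 100 change-table entries

change_table = [False] * NUM_BLOCKS

def block_of(tunnel_index):
    """Map a 1-based mplsTunnelIndex to its change-table block number."""
    return (tunnel_index - 1) // BLOCK_SIZE

def record_change(tunnel_index):
    """Agent side: flag the block containing a modified tunnel."""
    change_table[block_of(tunnel_index)] = True

def blocks_to_reread():
    """NMS side: only these block numbers need to be fetched again."""
    return [b for b, changed in enumerate(change_table) if changed]

record_change(7)        # falls in block 0 (tunnels 1-10,000)
record_change(25_000)   # falls in block 2 (tunnels 20,001-30,000)
print(blocks_to_reread())  # [0, 2]
```

With two tunnels modified, the NMS fetches 2 blocks of 10,000 rows instead of all 1,000,000, which is the saving the change table is designed to deliver.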