- Simplicity versus Flexibility versus Optimality
- Knowing the Problem You're Trying to Solve
- Overhead and Scaling
- Operation Above Capacity
- Compact IDs versus Object Identifiers
- Optimizing for the Most Common or Important Case
- Forward Compatibility
- Migration: Routing Algorithms and Addressing
- Parameters
- Making Multiprotocol Operation Possible
- Running over Layer 3 versus Layer 2
- Robustness
- Determinism versus Stability
- Performance for Correctness
- In Closing
18.8 Migration: Routing Algorithms and Addressing
It's hard enough to get one distributed algorithm right, but getting two of them to interoperate is far more difficult. One strategy, for example, when you mix RIP with a link state protocol such as OSPF or IS-IS, is for a router that connects to both a RIP neighbor and an OSPF neighbor to translate routes back and forth. This is very tricky because the maximum RIP metric is 15: metrics must be scaled down, and when routes are reintroduced into another portion of the network you can wind up with loops. It doesn't matter what a metric means, but the cost to a destination should always increase as you get farther away.
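The danger can be made concrete with a sketch. The function names and the scale factor below are illustrative assumptions, not taken from any protocol specification; the sketch only shows why compressing a wide metric range into RIP's 1 through 15 loses information, which is what breaks the "cost always increases with distance" property.

```python
# Hypothetical metric translation at a RIP/OSPF border router.
# The names and the scale factor are assumptions for illustration.

RIP_INFINITY = 16  # RIP treats 16 as "unreachable"

def ospf_to_rip(ospf_cost: int, scale: int = 100) -> int:
    """Compress a wide OSPF cost into RIP's 1..15 range.

    Many distinct OSPF costs collapse onto the same RIP hop count,
    so information is lost. When such a route is re-imported into
    OSPF elsewhere, the reconstructed cost may no longer increase
    monotonically with distance, which is how loops can arise.
    """
    if ospf_cost <= 0:
        return 1
    rip_metric = (ospf_cost + scale - 1) // scale  # round up
    return min(rip_metric, RIP_INFINITY)           # saturate at "unreachable"

def rip_to_ospf(rip_metric: int, scale: int = 100) -> int:
    """Expand a RIP hop count back into an OSPF-style cost."""
    return rip_metric * scale
```

Note that `ospf_to_rip(201)` and `ospf_to_rip(300)` produce the same RIP metric, so after a round trip two genuinely different costs become indistinguishable.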
Another strategy for deploying a new routing protocol in an existing network (in which most of the routers don't support the new protocol) is tunneling, as is done in DVMRP. People configure DVMRP routers to know about other DVMRP routers with which they should be neighbors. Because these DVMRP routers are not actually neighbors, they must communicate by tunneling their packets (routing messages as well as data messages) to each other.
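The mechanics of such a tunnel can be sketched as follows. This is a minimal model, not DVMRP's actual encapsulation format; the field names are assumptions. The point is simply that the inner multicast packet rides inside an ordinary unicast packet addressed to the configured far-end router, so the routers in between need to know nothing about the new protocol.

```python
# Illustrative sketch of a configured tunnel between two DVMRP
# routers separated by routers that don't speak DVMRP.
# Field and function names are assumptions for illustration.

def tunnel_send(inner_packet: bytes, local_ip: str, remote_ip: str) -> dict:
    """Encapsulate a multicast packet (data or routing message) in a
    unicast packet addressed to the configured tunnel endpoint, so
    intervening routers forward it as ordinary unicast traffic."""
    return {
        "src": local_ip,         # this DVMRP router
        "dst": remote_ip,        # configured "neighbor" at the far end
        "payload": inner_packet, # original packet, carried unmodified
    }

def tunnel_receive(outer: dict) -> bytes:
    """Strip the outer unicast header and hand the inner packet to the
    multicast protocol as if it came from a directly attached neighbor."""
    return outer["payload"]
```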
The ARPANET went from a distance vector protocol to a link state protocol by running the two protocols in parallel. There were four stages a router went through:
- Distance vector
- Distance vector + link state, but route based on distance vector database
- Distance vector + link state, but route based on link state database
- Link state only
So first the network was running distance vector only. One by one, routers started sending link state information but not actually using it for routing packets. After all routers were running both protocols, and the link state database seemed to compute the same routes as the distance vector protocol, the routers were configured, one by one, to switch over to using the link state database for forwarding. Only when all of them were using the link state database could the routers, one by one, be configured to turn off distance vector.
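The staged cutover can be modeled per router. The class and stage names below are illustrative assumptions; what the sketch captures is that each stage determines which protocols a router runs and which database it forwards from, and that a router only ever moves forward one stage at a time.

```python
# Sketch of the ARPANET-style four-stage migration for one router.
# Stage and method names are assumptions for illustration.

from enum import Enum, auto

class Stage(Enum):
    DV_ONLY = auto()        # distance vector only
    BOTH_ROUTE_DV = auto()  # run both, forward using the DV database
    BOTH_ROUTE_LS = auto()  # run both, forward using the LS database
    LS_ONLY = auto()        # link state only

class Router:
    def __init__(self):
        self.stage = Stage.DV_ONLY

    def runs_distance_vector(self) -> bool:
        return self.stage is not Stage.LS_ONLY

    def runs_link_state(self) -> bool:
        return self.stage is not Stage.DV_ONLY

    def forwards_with_link_state(self) -> bool:
        return self.stage in (Stage.BOTH_ROUTE_LS, Stage.LS_ONLY)

    def advance(self):
        """Move to the next stage. Operationally, every router in the
        network must finish a stage before any router enters the next."""
        order = list(Stage)
        self.stage = order[min(order.index(self.stage) + 1, len(order) - 1)]
```

The operational rule lives outside the code: routers are advanced one by one, and no router enters stage N+1 until all routers have completed stage N.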
Another strategy when running two protocols is to treat the network like two separate networks. This was done in bridging so that havoc was not created when transparent and source routing bridges were mixed. Transparent bridges were not allowed to forward data packets with the multicast bit in the source address set, and source routing bridges were allowed to forward only such packets.
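The partition rests on one bit. In 802-style MAC addresses the low-order bit of the first byte is the group (multicast) bit, which is normally meaningless in a source address, so source routing bridges could reuse it to flag their frames. The helper names below are assumptions; the bit test is the real mechanism.

```python
# Sketch of how the two bridge types split traffic by the "multicast"
# bit of the source MAC address. Function names are illustrative.

def has_source_route(frame_src: bytes) -> bool:
    """True if the group bit is set in the first byte of the source
    address, marking the frame as source-routed."""
    return bool(frame_src[0] & 0x01)

def transparent_bridge_forwards(frame_src: bytes) -> bool:
    # Transparent bridges ignore source-routed frames entirely.
    return not has_source_route(frame_src)

def source_routing_bridge_forwards(frame_src: bytes) -> bool:
    # Source routing bridges forward only source-routed frames.
    return has_source_route(frame_src)
```

Because every frame satisfies exactly one of the two predicates, the two bridge populations behave like two disjoint networks.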
The most complex strategy is to try to mix routers running different protocols. An example is multicast, where several routing protocols are deployed. Because none of the current protocols scales to interdomain use, the assumption is that a new interdomain protocol, compatible with all currently deployed protocols, must be designed. An alternative is to keep the protocols separate, as was done with the bridges, by somehow making sure that only one protocol handles any particular packet. This can be done by using a different packet format for each protocol or by assigning each protocol its own range of multicast addresses.
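The address-range approach amounts to a simple dispatch. The ranges and protocol names below are invented for the example (they are not assigned values anywhere); the property that matters is that the ranges are disjoint, so exactly one protocol is responsible for any given group.

```python
# Illustrative dispatch that keeps two multicast routing protocols
# separate by address range. Ranges and names are assumptions.

from ipaddress import ip_address, ip_network

# Hypothetical, disjoint assignment: one range per protocol.
PROTOCOL_RANGES = {
    "protocol_A": ip_network("224.0.0.0/5"),  # 224.0.0.0 - 231.255.255.255
    "protocol_B": ip_network("232.0.0.0/5"),  # 232.0.0.0 - 239.255.255.255
}

def protocol_for_group(group: str):
    """Return the one protocol responsible for this multicast group,
    or None if the address is not multicast at all."""
    addr = ip_address(group)
    for proto, net in PROTOCOL_RANGES.items():
        if addr in net:
            return proto
    return None
```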
There are similar issues with changing addresses. Originally the strategy when moving from DECnet Phase IV to DECnet Phase V addresses was to have Phase-IV-compatible Phase V addresses and translate between them. However, this is very complicated and yields none of the advantages of the larger address space, because all the Phase V nodes need Phase-IV-compatible addresses until all the Phase IV nodes go away. A far simpler strategy is dual stack, which means that you implement both: you talk to a Phase IV node with Phase IV, and you talk to a Phase V node with Phase V.
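The dual-stack selection rule fits in a few lines. This is a sketch under assumed names, not DECnet's actual negotiation; it just records the decision: speak the new protocol to any peer that supports it, and fall back to the old one otherwise.

```python
# Minimal sketch of dual-stack protocol selection per peer.
# Protocol labels are assumptions for illustration.

def choose_protocol(peer_supports: set) -> str:
    """Talk Phase V to a Phase V node and Phase IV to a Phase IV node,
    preferring the new protocol when the peer supports both."""
    if "phase_v" in peer_supports:
        return "phase_v"
    if "phase_iv" in peer_supports:
        return "phase_iv"
    raise ValueError("no protocol in common with peer")
```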
For IPv6, many complicated migration strategies were discussed before it was concluded that dual stack was the one most likely to work.