- Simplicity versus Flexibility versus Optimality
- Knowing the Problem You're Trying to Solve
- Overhead and Scaling
- Operation Above Capacity
- Compact IDs versus Object Identifiers
- Optimizing for the Most Common or Important Case
- Forward Compatibility
- Migration: Routing Algorithms and Addressing
- Parameters
- Making Multiprotocol Operation Possible
- Running over Layer 3 versus Layer 2
- Robustness
- Determinism versus Stability
- Performance for Correctness
- In Closing
18.3 Overhead and Scaling
We should calculate the overhead of an algorithm. For example, the bandwidth used by source route bridging increases exponentially with the number of ports per bridge and bridges per LAN. It is usually possible to choose an algorithm with less dramatic growth, but most algorithms have some limit on the size of network they can support. Set reasonable bounds on those limits, and publish them in the specification.
Sometimes there is no reason to scale beyond a certain point. For example, a protocol whose overhead grew as n² or even exponentially might be reasonable if it is known that no more than five nodes will ever participate.
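To make the point concrete, here is a small sketch (the message counts are illustrative, not taken from any particular protocol) comparing quadratic and exponential growth. At five nodes the two are both negligible; the difference only becomes ruinous as the network grows.

```python
# Illustrative comparison of overhead growth rates.
# These formulas are stand-ins, not measurements of any real protocol.

def quadratic(n):
    """O(n^2) overhead: e.g., every node exchanging state with every other node."""
    return n * (n - 1)

def exponential(n):
    """O(2^n) overhead: e.g., traffic proliferating along every subset of paths."""
    return 2 ** n

for n in (5, 10, 50):
    print(f"n={n:3d}  n(n-1)={quadratic(n):>10d}  2^n={exponential(n):>18d}")
```

At n = 5 the exponential protocol sends 32 units to the quadratic protocol's 20; at n = 50 the quadratic cost is 2,450 while the exponential cost exceeds 10¹⁵, which is why publishing the bound in the specification matters.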
Scaling also applies to the low end: the technology should make sense with very few nodes as well. For example, one of the strengths of Ethernet is that a network of only a few nodes does not require an expensive specialized box such as a router.
Real-World-Protocol
Post-toast wineglass clicking: This is an n² protocol. And some of us do not have arms long enough to reach across a banquet table to the people on the other side. Surely someone can invent something more efficient. I'd do it if I could figure out what problem is supposed to be solved by that protocol. My son claims the purpose is to make sure everyone drinks at the same time in case the wine is poisoned. Is that a good thing? And does this protocol accomplish that?
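The sidebar's n² claim is just pairwise combinatorics: if every one of n guests clinks with every other, the number of clinks is n choose 2. A one-line sketch:

```python
# Pairwise clinks among n guests: n choose 2 = n(n-1)/2.
def clinks(n):
    return n * (n - 1) // 2

print(clinks(10))  # a table of 10 requires 45 clinks
```

A broadcast alternative, such as everyone raising a glass once toward the center of the table, would bring the cost down to O(n), though whether it still proves the wine is unpoisoned is left to the reader.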