Background on Naming and Addressing
For many years, the problems of naming and addressing remained an interesting side issue for the Net, not a problem crucial to its survival. There weren't many places to learn about naming and addressing. In the early days of computer science, there was considerable emphasis on mathematical logic, the predicate calculus, and related subjects, and some aspects of naming are taken up there in some detail. As previously mentioned, some work had been done in the context of operating systems. The postal system and the telephone system solved this problem on a global scale, and although both are large systems, they are also simpler in significant ways. Most of the telephone network was hierarchical, and the part that wasn't was strongly geographical, with a single provider. They didn't have to consider multicast, migrating applications, multihoming, or, until recently, mobility.
Foundations of Mathematics and Naming
As we have said, the problems of naming and addressing have a tendency to get philosophical. What to name, the relation among various names and the objects they refer to, and the structure that such names should have and what constructs they can support are all issues to be considered. It doesn't take long before it can begin to sound like counting angels on the head of a pin. However, experience has shown that subtle distinctions can often make the difference between a simple but rich and efficient naming scheme and a scheme that becomes complex and cumbersome and may not even work. So, perhaps we should consider those aspects before we go too much further. Because we are concerned with naming and addressing in computers and networks of computers, we will not discuss the full scope of naming issues that have been taken up by philosophy. We will only provide a taste of these issues and limit ourselves to those aspects of the mathematics that apply most directly to our problem.
Modern considerations of naming derive from work on the foundations of mathematics and symbolic logic. This work got significant attention in the late 19th century with the interest in the foundations of mathematics and the work of Gottlob Frege, with major contributions coming from Bertrand Russell and Alfred North Whitehead, Ludwig Wittgenstein, Rudolf Carnap, and others of what became known as the Vienna Circle. Primarily, they were concerned with two problems: 1) creating a strictly axiomatic basis for all of mathematics, and 2) creating a purely logical language with which to describe the world. Both projects failed. The first failed because Kurt Gödel proved the "incompleteness theorem," in essence that "no matter where you start, there is some place you can't get to from here." The second failed because Wittgenstein, in his Tractatus Logico-Philosophicus, made it clear that most of what philosophy had been talking about for the past 2,000 years could not be stated with sufficient precision to prove any conclusions, and all those things that could be were tautologies, which say nothing. However, in the process of reaching these conclusions, considerable insights were gained into the nature of language, the foundations of mathematics, symbolic logic, and so on.
Much of this work related to constructing a precise logical language. Consequently, one of the major considerations was precisely determining the relation of names to their meanings and how these meanings came about. Frege, in his essay "On Sense and Meaning" (1892), defined a name as follows:
- A proper name (word, sign, sign combination, expression) expresses its sense, means or designates its meaning. By employing a sign we express its sense and designate its meaning.
Here and in the Basic Laws of Arithmetic (1893), Frege goes on to develop the concept of a name to correspond closely to what one intuitively thinks of as a noun clause. As alluded to in the definition, a name can be an expression. Frege also introduced variables into these expressions and the concept of bound and unbound variables, although the use of these terms did not come until later. Frege distinguishes simple and complex complete names. Simple names are what we would term constants; complex names are expressions. A complete name has all of its variables bound to constants. For Frege, an incomplete name (i.e., one with unbound terms) is a function. Frege uses these concepts and a unique notation in an attempt to derive the fundamental rules of arithmetic. However, he only came close. As his book went to press, Frege received what is now a famous letter from Russell advising him of a problem Russell had encountered in his own attempt with Whitehead to put mathematics on a completely logical footing (the set of all sets that do not contain themselves, leading to the Russell paradox). Frege had missed the paradox that stumped Russell for quite a while and whose solution is still debated by mathematicians. Although the damage was not irreparable, Frege never revised his book to fix the problem.
Twenty-some years later, the young Ludwig Wittgenstein took issue with Frege, and to some extent Russell, in the work that revolutionized mathematics and philosophy, the Tractatus Logico-Philosophicus (1922). We have already touched on the Tractatus in Chapter 1, "Foundations for Network Architecture," but here let's look more closely at what it says about names. Right off the bat, Wittgenstein takes issue with Frege:
- 3.142 Only facts can express a sense, a set of names cannot.
- 3.143 Although a propositional sign is a fact, this is obscured by the usual form of expression in writing or print. For in a printed proposition, for example, no essential difference is apparent between a propositional sign and a word. (This is what made it possible for Frege to call a proposition a composite name.)
- 3.144 Situations can be described but not given names.
An early 20th-century flame, W goes on to give a much more restricted definition of a name, which corresponds to what we will call here a primitive name:
- 3.202 The simple signs employed in propositions are called names.
- 3.203 A name means an object. The object is its meaning. ('A' is the same sign as 'A'.)
- 3.22 In a proposition a name is the representative of an object.
- 3.26 A name cannot be dissected any further by means of a definition: it is a primitive sign.
- 3.261 Every sign that has a definition signifies via the signs that serve to define it; and the definitions point the way.
- Two signs cannot signify in the same manner if one is primitive and the other is defined by means of primitive signs. Names cannot be anatomized by means of definitions. (This cannot be done to any sign that has a meaning independently and on its own.)
W is nailing things down pretty tight, defining a name as essentially a label for an object. This is a denotative approach to naming. He goes on to point out that names by themselves say very little:
- 3.3 Only propositions have sense; only in the nexus of a proposition does a name have meaning.
- 3.314 An expression has meaning only in a proposition. All variables can be construed as propositional variables. (Even variable names.)
- 3.3411 So one could say that the real name of an object was what all symbols that signified it had in common. Thus, one by one, all kinds of composition would prove to be unessential to a name.
- 4.0311 One name stands for one thing, another for another thing, and they are combined with one another. In this way the whole group—like a tableau vivant—presents a state of affairs.
- 4.23 It is only in the nexus of an elementary proposition that a name occurs in a proposition.
So, W comes full circle or would seem to. The meaning of a name can only be determined when it occurs in a proposition (i.e., in context). Further, all expressions must reduce to a primitive name, and these expressions do not affect the name. Where is W headed with all of this? Right here:
- 5.526 We can describe the world completely by means of fully generalized propositions, i.e., without first correlating any name with a particular object.
- 6.124 The propositions of logic describe the scaffolding of the world, or rather they represent it. They have no 'subject-matter'. They presuppose that names have meaning and elementary propositions sense; and that is their connection with the world. It is clear that something about the world must be indicated by the fact that certain combinations of symbols, whose essence involves the possession of a determinate character, are tautologies. This contains the decisive point. We have said that some things are arbitrary in the symbols that we use and that some things are not. In logic it is only the latter that express: but that means that logic is not a field in which we express what we wish with the help of signs, but rather one in which the nature of the natural and inevitable signs speaks for itself. If we know the logical syntax of any sign-language, then we have already been given all the propositions of logic.
The hope had always been that logic could resolve important questions in philosophy. What W has done here, and will wrap up between this point and the famous statement 7, is to say that names are arbitrary labels and that all statements in logic are tautologies: They say nothing about the real world.
Although there had been suspicions to the contrary before this point, mathematics had always been considered a science. There was a belief that it was a universal language with which the world could be completely and precisely described, which would in turn lead to answering many long-standing questions, including some outside the traditional realm of science and mathematics. After all, much of its use was in the service of science, and science made many statements and solved many problems about the real world with mathematics. W has now slammed the door on this view. Logic and, by the constructions of Frege and Russell, mathematics say nothing about the real world and can't. Mathematics is not a science. Mathematicians were operating in the world of Platonic ideals, believing that these truths that they derived were independent of human thought. Although refined by other logicians and mathematicians in the intervening 80 years, the structure and limitations erected by W have remained, circumscribing how far mathematics can go in answering questions that affect people.
But although this was a failure on one level, it was precisely what was required 30 years later, when it became possible to build logic machines and get the fledgling field of computer science off the ground. The concepts of primitive names, simple and complex names, and complete and incomplete names were precisely the foundations necessary for constructing the logical languages required for computers, where now these languages could be used in propositions that said something real about a virtual world. This work also provides the basis for a theory of naming for networks and distributed systems, but it provides little help with any fundamentals for addressing. We need a mathematical characterization of "locating" objects.
Naming and Addressing in Telephony
Addressing in the telephone system developed from the bottom up. Initially, telephone systems were isolated islands. Telephone numbers corresponded to numbers on the switchboard, which corresponded to the wires that ran to the phones. Enumeration worked again. The scope of the address space was limited to the island or central office, called an exchange; that is, telephones in different exchanges might have the same number. When a phone system outgrew what could be handled by a single central office, trunks were used to link central offices. Each exchange was given a unique identifier, and this number was tacked on the beginning of the number for the telephone: the beginning of hierarchical addressing. Connections between islands required an operator. With the advent of automatic dialing and long distance, it was necessary to add another level to the hierarchy, and area codes were created. But the fundamental semantics of the phone number never changed: It was the number of the wire that ran to the phone. There was really no attempt at structuring the assignment of numbers within an exchange; there might be some similarity among the exchanges used for a single city, but overall the structure of the address space was roughly geographical. This had more to do with conserving relay equipment than with attempting to logically structure the phone numbers.
Over time, as telephone engineers found ways to hack the system to provide specialized services, the semantics of the telephone number got confused. There are strong indications that the phone companies didn't quite understand what they were getting into. Although normal phone numbers were physical layer addresses, the label of a wire, the definition began to get confused: 800 numbers are application addresses, being location independent, whereas 411 and 911 are simply well-known names for specific applications. (Most in phone company circles did not realize this, of course; they were still just phone numbers.) Initially, cellular phone numbers were network addresses, a good unique identifier as the phone was handed off from cell tower to cell tower. But as soon as roaming was provided, they became application addresses (because they were now location independent). Customers had become accustomed to the fact that when they moved within a city, their phone number did not need to change. Although exchanges had begun as exclusively geographical, this began to break down over time with improvements in switches and customer demand. Roaming just served to convince customers that they could move anywhere in the country and not change phone numbers. Because 800 numbers and, initially, cell phones were such a small population, the mapping from the application address to a network or physical layer address could be treated as a special case. The deployment of Signaling System 7 in the 1980s enabled these changes, and during the 1990s the telephone system moved to rationalize its addressing architecture.
Naming in Operating Systems
Much more theoretical work has been done on naming than on addressing. As luck would have it, we are much more interested in addressing than naming. Almost everything in computer science is addressing of one form or another, not naming. There has been very little theoretical work done exploring the properties of addresses, no systematic exploration of addressing. Much of this was because computing systems were so resource constrained. Most of the work has been very pragmatic in the context of solving a specific problem. So, we have some idea of what works or under what conditions it works or what doesn't, but we have very little idea if this is the best we can do.
One of the few theoretical treatments of this subject tempered by implementation of a production system (i.e., it satisfies our philosophical triangulation) is the work of J. H. Saltzer on Name Binding in Computer Systems (1977). This is what university-level computer science should be and isn't much of the time. This work develops the theory of naming and addressing in operating systems and programming languages in a general and implementation-independent manner. It does the "algebra" first. Although space does not allow a detailed review of the paper, we do see that roughly three levels of naming are required in operating systems. Saltzer provides a framework for the sharing of data and programs in a computing environment. Although Saltzer does not consider the problems of naming and addressing in computer networks, many of the concepts that will be needed are discussed. These might be characterized as follows:
- A name space that allows sharing among independently running programs
- A name space that allows programs to logically refer to their variables regardless of where they are in memory
- A name space that represents the program in memory
- A path from the processor to the memory
The first has a "universal" scope of the whole computer system and encompasses all files (program or data) that are executing or may be executed on that system. This name space allows one to unambiguously refer to any program or data file on the computer and, in some systems such as Multics, to objects within them. The second provides a name space that allows the programmer to logically construct programs independent of memory size and location. This space creates a virtual environment that may assume resources that exceed those of the underlying real computer. This logical environment is then mapped to a real computer, where the operating system provides the facilities that create the illusion of the virtual environment. (For example, virtual memory provides location independence and the illusion of greater memory than actually exists, and processor scheduling gives the illusion of a multiprocessor system.) The hardware then provides a path from the processor to the appropriate memory location.
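To make these successive bindings concrete, here is a minimal sketch in Python; all the names, table contents, and sizes are invented for illustration, and the final hardware path is reduced to a simple page-table lookup.

```python
# A minimal sketch (not Saltzer's notation) of the successive bindings
# described above: shared file name -> virtual address -> physical memory.
# All names and table contents here are invented for illustration.

PAGE_SIZE = 4096

# 1) System-wide name space: file/program names shared across programs.
directory = {"/bin/editor": "segment-17"}

# 2) Per-program name space: symbols bound to virtual addresses.
symbol_table = {"segment-17": {"main": 0x0000, "buffer": 0x2010}}

# 3) Virtual-to-physical mapping maintained by the operating system.
page_table = {0: 42, 1: 7, 2: 113}   # virtual page -> physical frame

def resolve(file_name: str, symbol: str) -> int:
    """Follow the chain of bindings from a shared name to a physical address."""
    segment = directory[file_name]              # binding 1
    vaddr = symbol_table[segment][symbol]       # binding 2
    page, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset  # binding 3 (the "path")

print(hex(resolve("/bin/editor", "buffer")))  # virtual 0x2010 (page 2) -> frame 113, offset 0x10
```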
For the naming of files and programs, a hierarchical approach was adopted rather quickly, consisting of a root directory, subdirectories, and finally primitive names. This was called a pathname because it defined a path through the directory structure. If a file was moved in this structure, its primitive name remained the same, but its pathname changed.
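As a toy illustration in Python: moving a file rebinds its pathname, while its primitive name, modeled here as an inode-like number (our assumption, not the text's term), is untouched.

```python
# A toy illustration of pathname vs. primitive name. Modeling the
# primitive name as an inode-like number is an assumption for this sketch.
inode_of = {"/home/alice/paper.txt": 8812}

def move(old_path: str, new_path: str) -> None:
    """Moving a file rebinds the pathname; the primitive name is untouched."""
    inode_of[new_path] = inode_of.pop(old_path)

move("/home/alice/paper.txt", "/archive/paper.txt")
print(inode_of)   # {'/archive/paper.txt': 8812} -- same primitive name
```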
X.25 and the ITU
In the mid-1970s, the PTTs rushed to get into the packet-switching business, more to defend their turf (organizations that weren't telephone companies were building networks!) than because they thought it was a good business opportunity. After all, data traffic would never come close to the volume of voice traffic! The PTTs proposed a network design along the lines of the ARPANET or NPLnet, using a new protocol, X.25, as their answer to the research networks. X.25 addresses have the same semantics as telephone numbers (no surprise). The structure of an X.25 address is similar to that of a telephone number, consisting of a country code, followed by a network number and a DTE (host) number. But the allowances for growth were very small, allowing only ten networks per country. A distinct "group-id" field in the X.25 header identifies particular connections at this interface. The address is the name of the interface over which all connections with that DTE pass.
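A rough sketch in Python of this structure, following the X.121 numbering plan that X.25 used: a 3-digit country code, a single network digit (hence only ten networks per country), and the remainder identifying the DTE. The sample digits are invented.

```python
# A rough sketch of the X.121-style address structure described above.
# Field widths follow X.121; the sample digits are made up.

def parse_x121(address: str) -> dict:
    assert address.isdigit() and len(address) <= 14
    return {
        "country": address[:3],    # data country code
        "network": address[3],     # one digit: at most ten networks per country
        "dte": address[4:],        # the rest names the DTE's interface
    }

print(parse_x121("310290555123"))  # {'country': '310', 'network': '2', 'dte': '90555123'}
```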
The "East Coast elite" screwed up the ARPANET addressing because they were from Boston. In Boston, there is only one way to get anywhere, and so it is easy to confuse that a route and an address are the same thing. If they had been from the Midwest where everything is on a grid and there are many paths between two points, they would have known that a route and an address are two entirely different things.
It isn't true, but it makes a good story!
The Evolution of Addressing in the Internet: Early IP
As previously discussed, the origin of the Internet's convention that addresses name interfaces derives from the implementation of the original IMPs. Although this was common practice for the small data networks of the time, it is basically the same as the telephone company's. Using the telephone example was a reasonable first approximation, and it wasn't at all obvious how the street address example contributed anything to the solution (although there was a nagging sense that it should). Unlike telephone addresses, ARPANET addresses were only route dependent for the last hop. (In the phone system, there were multiple routes above the exchanges, although automatic rerouting is relatively recent.) It was clear that computers would have different requirements than telephones. We have already seen the problem of dual homing. But it was also realized that the problems of naming applications that had been seen in operating systems would be more complex in networks.
The development of TCP and IP began in the mid-1970s to fix problems with the original Host-to-Host Protocol. As far as addressing was concerned, the only immediate problem that had to be dealt with was that there weren't enough addresses. So, the IP specification expanded the address to 32 bits and slightly generalized its semantics so that it named the "interface" rather than an IMP port.
The problem continued to be discussed. John Shoch published an important paper (Shoch, 1978), which had been circulating within the ARPANET community for over a year before it appeared in print. Shoch recognized (something that is so often scoffed at) that
- Taxonomies and terminologies will not, by themselves, solve some of the difficult problems associated with the interconnection of computer networks; but carefully choosing our words can help us to avoid misunderstanding and refine our perceptions of the task.
Shoch posited that three distinct concepts were involved: names (of applications, which were location independent), which were "what we seek"; addresses (which were location dependent), which indicated "where it was"; and routes (which were clearly route dependent), which were "how to get there." Shoch made clear what many had been thinking but didn't know quite how to say. At the time, Shoch was working at Xerox PARC with Robert Metcalfe on the development of Ethernet and related projects. Shoch points out in his paper how naming in networks parallels what is found in computing systems: namely, that applications had names that were independent of memory location and made sense to human users, whereas programs used virtual memory addresses that allowed their code to be placed anywhere in memory and were mapped to the actual physical memory location (routing) by the hardware. It seemed to make a lot of sense.
A few years later (1982), the other most often cited paper on network addressing appeared, Jerry Saltzer's "On the Naming and Binding of Network Destinations" (later republished as RFC 1498). This is a most curious paper. Saltzer sets out to apply to networks the same principles he applied to operating systems and makes a major contribution to the problem. Saltzer notes that there are four things, not three, in networks that need to be named (just as there were in operating systems): services and users, nodes, network attachment points, and paths. Saltzer carefully lays out the theoretical framework, defining what he means by each of these. After noting some of the issues pertinent to the syntax of names, Saltzer observes:
- The second observation about the four types of network objects listed earlier is that most of the naming requirements in a network can simply and concisely be described in terms of bindings and changes of bindings among the four types of objects. To wit:
- A given service may run at one or more nodes, and may need to move from one node to another without losing its identity as a service.
- A given node may be connected to one or more network attachment points, and may need to move from one attachment point to another without losing its identity as a node.
- A given pair of attachment points may be connected by one or more paths, and those paths may need to change with time without affecting the identity of the attachment points.
It would appear that Saltzer is suggesting that we name the objects and track the mappings (i.e., the bindings) between them. Notice the parallel between this list and Saltzer's list for operating systems earlier in this chapter.
- Each of these three requirements includes the idea of preserving identity, whether of service, node or attachment point. To preserve an identity, one must arrange that the name used for identification not change during moves of the kind required. If the associations among services, nodes, attachment points and routes are maintained as lists of bindings this goal can easily be met.
Again, Saltzer is pointing out a very important property (i.e., that the names given to objects must be invariant with respect to some property across the appropriate scope). In particular, service or application names do not change with location, node names do not change for attachment points within the scope of their location, and attachment-point names do not change as the routes that terminate on them change.
This expands a bit on Saltzer's words, but it seems reasonable to assume that Saltzer recognized that names would not be assigned once and for all. And if they could change, there must be rules for when and how they could change. In fact, he states quite rightly that even if a name is made permanent, this "should not be allowed to confuse the question of what names and bindings are in principle present." He then reviews that "to send a data packet to a service one must discover three bindings" [given the name of a service]:
- Find a node on which the required service operates
- Find a network attachment point to which that node is connected
- Find a path from this attachment point to that attachment point
From Saltzer's description, there is a name for each of these four and tables that maintain the bindings between the names, as sketched below:
1. Service name resolution, to identify the nodes that run the service
2. Node name location, to identify attachment points that reach the nodes found in 1
3. Route service, to identify the paths that lead from the requestor's attachment point to the ones found in 2
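To make the three resolutions concrete, here is a minimal sketch in Python; the service, node, attachment-point, and path tables are all invented for illustration, and each step is simply a lookup in a list of bindings, as Saltzer describes.

```python
# A minimal sketch of Saltzer's three resolutions, with invented names.
# Each step is a lookup in a table of bindings.

services = {"printing": ["node-A", "node-B"]}            # 1) service -> nodes
attachments = {"node-A": ["ap-1"], "node-B": ["ap-2"]}   # 2) node -> attachment points
routes = {("ap-0", "ap-2"): [["ap-0", "ap-5", "ap-2"]]}  # 3) point pairs -> paths

def resolve_service(name: str, source_ap: str):
    for node in services[name]:                 # find a node running the service
        for ap in attachments[node]:            # find an attachment point for it
            for path in routes.get((source_ap, ap), []):
                return node, ap, path           # find a path to that point
    raise LookupError(f"no path to service {name!r}")

print(resolve_service("printing", "ap-0"))  # ('node-B', 'ap-2', ['ap-0', 'ap-5', 'ap-2'])
```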
Saltzer then illustrates his points with a couple of examples that for Saltzer present problems in applying his model. He then concludes that regardless of what one may think of his analysis, "it seems clear that there are more than three concepts involved, so more than three labels are needed...." And finally, in his summary, he points out there is a strong analog between what he has described and the concepts found in operating systems.
This seems to answer our first question of what has to be named: Applications require location-independent names. This is Shoch's what. This allows the application to be moved without changing its name. That name maps to a node address that indicates where the node is, and hence where the application can be found, with each router maintaining a forwarding table that maps an address to a "next hop" (i.e., next node address). But then Saltzer lumps the next step in with routing. He clearly knows that a point-of-attachment address is needed, but he doesn't clearly distinguish how it differs from a node address. As noted previously, it was obvious that the solution to the multihoming problem was that a logical address space was needed over the physical address space. But then Saltzer follows the operating system model too closely and notes that there is a mapping of applications to nodes, a mapping of nodes to points of attachment, and then a mapping to routes as a sequence of points of attachment and nodes.
Saltzer misses a case that is unique to networks and key to understanding: In networks, there can be multiple paths (links) between adjacent nodes. Saltzer can't be faulted for missing this. Multiple paths to the next hop were rare or nonexistent when he was writing. Let's supply the answer.
Routes are sequences of node addresses from which the next hop is selected. Then the router must know the mapping of node address to point of attachment address for all of its nearest neighbors (the line in Figure 5-2) so that it can select the path to the next hop.
Figure 5-2 Addressing for a network requires at least an application name, a node address, and a point of attachment address. Directory maps application names to node addresses, routes are sequences of node addresses, and multiple paths between adjacent nodes require mappings between node addresses and point of attachment addresses.
"Routing" is a two-step process. A route is a sequence of node addresses. The next hop is chosen to the next node address. Then the mapping of local point of attachment addresses to the point of attachments of nearest neighbors for the next hop is needed to select which path to the next hop is selected. Looking at the figure, we see these bindings:
1. Directory, the mapping of application names to node addresses to find where the application is. This is an example of the name-resolution or directory protocols discussed in Chapter 4, "Stalking the Upper-Layer Architecture."
2. Routes, the sequences of node addresses calculated by the routing algorithms to generate the next hop
3. Paths, selected from the mapping of node addresses to point-of-attachment addresses of the nearest neighbors (i.e., next hops)
Interesting! 1 and 3 are the same mapping! The path is also an example of a name-resolution service, just like the directory. The path database is smaller than the directory database, and the syntax of the names is a bit different, but it is the same mapping nonetheless. Both track name mappings that are "one hop" from each other (relative to their layer).
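A minimal sketch in Python of this two-step choice, with invented addresses: step 1 consults the forwarding table for the next-hop node; step 2 consults the neighbor mapping, the same kind of name-resolution mapping as the directory one layer up, to pick one of possibly several paths to that neighbor.

```python
# A sketch of the two-step routing choice described above, with invented
# addresses. Step 1 picks the next-hop node; step 2 picks one of possibly
# several paths (links) to that neighbor.

forwarding = {"node-9": "node-3"}                    # destination node -> next-hop node
neighbor_poas = {"node-3": ["poa-3a", "poa-3b"]}     # neighbor node -> its PoAs
local_links = {"poa-3a": "if-0", "poa-3b": "if-1"}   # which interface reaches which PoA

def forward(dest_node: str) -> str:
    next_hop = forwarding[dest_node]      # step 1: next node address on the route
    poa = min(neighbor_poas[next_hop])    # step 2: choose a PoA (any policy would do)
    return local_links[poa]               # send the PDU out this interface

print(forward("node-9"))   # 'if-0'
```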
It was clear that a network address (i.e., node address) needed to be location dependent and that application names should be able to be location independent. What about point-of-attachment (PoA) addresses? Traditionally, the PoA corresponds to the data link layer address. From the point of view of the nodes, it doesn't matter. All the nodes (routers) require is that the PoA addresses of nearest neighbors be unambiguous. All PoA addresses don't have to come from the same address space and probably won't. Different protocols in different layers of less scope are possible and allowable. Any two connected nearest neighbors will have addresses from the same address space. (They have to, because both ends of the communication use the same protocol, by definition.) But not all PoAs on the same router or host must be from the same address space. Whether a PoA address space will be flat or location dependent will depend on the protocols and scope of the PoA layers. Location dependence is a property that facilitates scaling within a layer by reducing the complexity and combinatorial properties of routing.
But what is curious about this paper is that Saltzer lays out the answer very clearly. When addressing is discussed in networking meetings, this paper is cited by almost everyone. The paper is almost revered. But the Internet architecture has no application names and no node addresses (a well-known socket is at best a suffix for a network address, and URLs show signs of backing into being a form of application name within HTTP). The Internet has only PoA names and routes. Saltzer says clearly that PoAs and routes are not enough. It is clear that the fundamental problem with Internet addressing is that it is missing half the necessary addressing architecture. Why then has the Internet not taken Saltzer's advice, especially given how clearly Saltzer lays out the principles?
The XNS architecture developed at Xerox PARC for networks of LANs, and later used by IPX for Novell's NetWare product, had a network address that named the system, not the interface. This was the first commercial architecture to fix the addressing problem created by the IMPs. But Xerox's decision to keep the specifications proprietary limited its early influence. At the same time, the decreasing cost and increasing power of hardware reduced the need to fix the problem in IP. Later, this same solution would be picked up and used by OSI.
The deployment of IP overcame the address space problems of NCP. Thirty-two bits of address space was more than enough. However, IP retained the semantics of the IMP port address and named the interface (see Figure 5-3). The primary reason for this is unclear. IP was first proposed in about 1975 and changed very little after that first draft. The only known problem at that time was with the semantics of the address, as exemplified by the dual-homing problem described earlier. The Saltzer analysis shows that multihoming isn't supported for routers, let alone hosts. But because the Net was small and multiple paths between adjacent nodes were rare, it wasn't a problem that Moore's law couldn't solve. (And when multiple paths did arise, they caused problems, but band-aids were found for them.) The problems of multicast and mobility were many years off. It was understood that a change would be necessary, and there was the repeated caution about the importance of getting addressing right. No one felt they really understood addressing well enough; we still didn't understand what location dependence meant in a network. It seemed prudent not to do anything until there was a better understanding of what to do. Even in the early 1980s, when NCP was removed and IP became the only network layer protocol, the Internet was still for the most part a network of universities and R&D organizations, so such a major change was still something that could be contemplated.
Figure 5-3 Mapping Saltzer's concepts to the Internet shows that half the required identifiers are missing (application names and node addresses) and one is named twice (point of attachment).
When IP was defined, some structure was imposed on IP addresses by dividing the address space into blocks of Class A, B, and C (Figure 5-4). (As other authors do, we will ignore the existence of Class D and E addresses for now.) The classes of IP addresses are intended to be assigned to networks with different numbers of hosts: Class A for the really big ones, Class B for the middle-size ones, and Class C for the really small ones. And of course, within a Class A network, Classes B and C can be used to provide a rudimentary form of location dependence.
Figure 5-4 IP address format.
But these were allocations of size, and although they might be used to impose location dependence within a given network, no consideration was given to doing it across networks. Blocks of IP addresses were for the most part handed out in the order requested: 128.89 might be on the East Coast of the United States, and 128.90 might be in Hong Kong. So, in fact, IP addresses were more like names than addresses. There was no structure or plan to assigning the network part of an IP address. It was assumed that addresses would be assigned in a location-dependent manner within the networks (an assumption made unnecessary by Moore's law) and that the number of networks would remain relatively small. There was no planning for tens of thousands of networks organized into tiers of providers.
As the problems of configuring networks for large organizations grew, subnetting was introduced. Subnetting takes part of the host-id portion of the address and uses it to represent subnets within the Class A or B address (or Class C, but they are pretty small for subnetting). This provides topologically dependent addresses within an organization; outside the organization, however, it is of no help.
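A short sketch in Python makes the classful structure and subnetting concrete; the /20 mask inside a Class B is our example, not anything mandated by the specifications.

```python
# A sketch of classful interpretation plus subnetting. The class is
# determined by the leading bits of the first octet; the /20 mask inside
# a Class B is an invented example.
import ipaddress

def ip_class(addr: str) -> str:
    first = int(addr.split(".")[0])
    if first < 128: return "A"   # 0xxxxxxx: 7-bit net, 24-bit host
    if first < 192: return "B"   # 10xxxxxx: 14-bit net, 16-bit host
    if first < 224: return "C"   # 110xxxxx: 21-bit net, 8-bit host
    return "D/E"                 # multicast and experimental

addr = ipaddress.ip_interface("128.89.16.7/20")   # a Class B, subnetted
print(ip_class("128.89.16.7"))                    # 'B'
print(addr.network)                               # 128.89.16.0/20: the subnet
```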
OSI and NSAPs
Using the experience from the ARPANET and early Internet, OSI made some major strides in working out the theory of naming and addressing, although it also made some major mistakes. (There are, nonetheless, several interesting aspects to the OSI addressing concepts.) The amount written on it is fairly voluminous and impenetrable. We will consider the basics as briefly as we can and only elaborate on concepts or lessons that we need to carry forward. First, let's dispense with what OSI got wrong: The Europeans were intent on making X.25 the OSI answer to the network layer and on not using any experience from the United States, even if it improved on the lessons learned in the Internet. Consequently, they forced into the OSI architecture fundamental constructs that reflect X.25. As an example, in OSI an (N)-connection is defined to be shared state among (N+1)-entities, not shared state among (N)-entities. But in spite of such fundamental problems, it was possible to resurrect the beginnings of a fairly reasonable addressing architecture, even if the errors did cause the definitions to get a bit convoluted at times.
OSI spent considerable time developing a theoretical framework for the architecture. This was not the "seven-layer model" but an earlier section of the reference model that defined the common elements all layers would have. The understanding was that there were common elements but different functions in each layer, in line with the Dijkstra concept of a layer. This effort was beneficial because it was an attempt at an "algebra" that clarified the nature of the problem and provided insight into the solutions. It is unfortunate that politics could not be kept out of it. However, it seldom helped those who tried to use the standards, because the standards seldom reflected the insights that had been gained. (The U.K. delegation insisted that any "tutorial material" not be included. It seemed that they were intent on making the documents as difficult to use as possible.) There are two aspects of this theory: the general architecture as it relates to addressing and the specifics of addressing in the network layer.
The general OSI architecture consists of (N)-layers. (Of course, in the specific architecture constructed from this theory, the maximum value of N was 7.) Each system in the network contains elements of these (N)-layers, from 1 to 7. The intersection of an (N)-layer with a system is called an (N)-subsystem. Within each (N)-subsystem, there are one or more (N)-entities (Figure 5-5). An (N)-entity is the protocol machine for that layer. An (N)-subsystem could contain more than one (N)-entity (e.g., for different groups of users) or (N)-entities of more than one kind (i.e., different protocols). In other words, an (N)-subsystem is all the modules in a system relating to a particular layer: protocol machines, management, buffer management, and so on. Having a term for everything in a system associated with a given layer proves to be quite useful.
Figure 5-5 Entities, service access points, and identifiers.
As mentioned, an (N)-connection was defined to be "an association requested by an (N+1)-entity for the transfer of data between two or more (N+1)-entities." In other words, an (N)-connection went from one (N+1)-entity (in an (N+1)-layer) down to an (N)-entity across to an (N)-entity in another system and up to the (N+1)-entity in the remote system. (Pushing this definition were the Europeans attempting to legislate the X.25 view.) This tightly binds the shared state in the (N)-entities to the shared state in the (N-1)-entities. But it is important that it be possible to decouple the two, so that the shared state at (N-1) can be lost without affecting the shared state at layer N. This definition makes that difficult.
Later, realizing that they needed a name for the relation between the (N)-entities (what the definition of a connection should have been), they defined an (N)-association as "a cooperative relationship among (N)-entity-invocations." Yes! In OSI, associations were connections, and connections were what associations should be. But then I have never known a standards organization yet whose arrogance didn't get it into this sort of doublespeak.
The (N)-connection crossed the boundary between an (N+1)-layer and an (N)-layer at an (N)-service access point, or (N)-SAP. An (N)-SAP-address identifies an (N)-SAP. (This is why one encounters the term SAP in other standards. Notice how a SAP tries to be a port or interface.) An (N)-SAP was bound to one and only one (N)-entity at a time. If an (N)-entity needed an identifier, it was called an (N)-entity-title. (The pedants said it couldn't be called a "name" because addresses were also names. An address was a location-dependent name, so the term title was used for location-independent names.) Associated with an (N)-SAP-address were one or more (N)-connection-endpoint-identifiers, whose scope was the (N)-subsystem. An (N)-CEP corresponded to a single connection to an (N)-entity. The (N)-SAP-address was supposed to be an X.25 DTE address. The (N)-CEP-identifier corresponds to what many protocols or IPC facilities call port-ids, whereas for the PTTs it was the X.25 group-id. (Group-ids are similar to ATM virtual path-ids or MPLS tags; all three derive from the same telephony lineage.) So, an (N)-SAP was really a port, an interface.
This constraint, along with the definition of connection, caused a number of problems. It implied that all the bindings between (N)-entities in a system had to be preallocated before a connection request was made. This, of course, makes dynamic assignment and resource allocation essentially impossible. By 1983, it was already believed that the reference model was too far along to be changed. So rather than simply fix the definition of connection and make the structure simpler, a level of indirection was created: An (N)-address was defined as a set of (N)-SAP-addresses. But worse, the OSI "address" also identifies the interface, the one thing that most were trying to avoid. (In a committee, consensus never means that issues are resolved, only that progress can continue until someone finds a reason to raise the issue again.)
Another problem was discovered in how we thought we would build addresses. Initially, it was assumed that an (N)-address would be formed from an (N-1)-address and an (N)-suffix, allowing addresses at a higher layer to be inferred from addresses at lower layers. This was a fairly common approach in operating systems. It can be found in early versions of the OSI reference model (see, for example, ISO TC97/SC16/N117 [1978] or N227 [1979]) and in the Internet today. It is a bad idea in networks, and why it is a bad idea is clear from its use in operating systems. Constructing names in this manner in operating systems has a name: They are called pathnames, and therein lies the problem. Such a name defines a single static path within the system and then to the application when, in fact, there may be multiple paths that it should be possible to choose dynamically. It can be done, but then essentially one must ignore that it has been done, recognizing that it is a lot of redundancy for very little gain and may compromise security. It works in an operating system because there is only one path within the operating system from one application to another. This is exactly what we wanted to avoid from our analysis of Saltzer. Hence, any addressing scheme that, for instance, creates a network address by embedding a MAC address in it has thwarted the purpose of the addressing architecture. There can be a relation, but the relation cannot be tied to the path. Yet this is still considered a quite normal approach to forming addresses.
However, all was not lost. Or more to the point, the problems in the network layer were much more complicated. The U.S. delegation was insistent that there would be a connectionless network protocol that built on the experience of IP, and the Europeans were intent that the future of networking would be a connection-mode protocol (i.e., X.25) and that connectionless would be as limited as possible. They attempted to work out an architecture of the network layer that could accommodate both. The resulting standard, called the Internal Organization of the Network Layer (IONL), shed considerable light on what the two warring factions wanted and provided technical insights (ISO 8648, 1987). Although the language of the document can be quite impenetrable to the uninitiated, every configuration described in it has since turned up in one form or another. The IONL was a very useful exercise in working out how real-world situations would be handled within an architecture. The Europeans had to admit that X.25 was only an interface to the network (after all, that was the title of the Recommendation) and as such only provided access to a subnetwork. It was finally worked out that the primary function of the network layer was to make the transition from subnetwork-dependent protocols to a service that was independent of the subnetwork technology. Doing so could require up to three sublayers, depending on the configuration and the underlying media:
- A Subnetwork Access Protocol (SNACP) operates under the constraints of a specific subnetwork. The service it provides may not coincide with the network layer service.
- A Subnetwork Dependent Convergence Protocol (SNDCP) operates over a Subnetwork Access Protocol and provides the capabilities assumed by the SNICP or the network layer service.
- A Subnetwork Independent Convergence Protocol (SNICP) operates to construct the OSI network layer service and need not be based on the characteristics of any particular subnetwork service.
Although a lot of this structure may seem (and was) politically motivated, there were several major technical insights. For our purposes, the most important was that there was a "subnetwork point of attachment" (an SNPA, or "the wire") that had an address whose scope had to span only the particular subnet. A system might have several SNPAs that mapped to an NSAP address. The NSAP address as constructed by the IONL was, in fact, the (N)-entity-title. The (N)-directory, or in this case the N-directory (N for network) (i.e., the routing information), maintained a mapping between the SNPA-addresses and the NSAP-address. This mapping provides a level of indirection between the physical addressing of the wire and the logical addressing of the network. This level of indirection provides the flexibility required for addressing to accommodate all the configurations and services necessary. This is repeated later, but it is worth observing now:
A network address architecture must have at least one level of indirection.
Like operating systems, networks need a transition between logical and physical addressing. As we have seen from our interpretation of Saltzer, in a network two transitions are required: one in the network layer between SNPAs and NSAPs, between route dependence and route independence (both location dependent); and again between NSAPs and application entity titles, between location dependence and location independence.
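Here is a minimal sketch in Python of that level of indirection; the NSAP and SNPA strings are invented. The point is that the binding to the wire can change without the logical address, and everything bound to it, changing.

```python
# A sketch of the one level of indirection: a node's NSAP (logical) is
# bound to one or more SNPAs (the wires). All addresses are invented.

n_directory = {"nsap-47.0005.a1": {"snpa-eth-02", "snpa-x25-77"}}

def rebind(nsap: str, old_snpa: str, new_snpa: str) -> None:
    """An attachment changes without the node losing its identity."""
    poas = n_directory[nsap]
    poas.discard(old_snpa)
    poas.add(new_snpa)

rebind("nsap-47.0005.a1", "snpa-x25-77", "snpa-x25-91")
print(n_directory)   # same NSAP, different wire
```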
The NSAP addressing structure attempted to solve two problems: accommodate a wide variety of existing address formats and set out a location-dependent address space. The address format of an NSAP is shown in Figure 5-6.
Figure 5-6 OSI NSAP format for the United States.
The address space is organized by countries. The country codes are assigned by an ISO standard. Each country is then allowed to organize its own space. In the United States, a rather elegant solution was found that avoids the need for an active centralized authority. There is an existing ANSI standard of organization identifiers. These are used after the country code. To get an assignment of NSAP addresses, one merely has to get an organization-id (which many companies would already have for other purposes); the organization-id goes after the country code, and the rest of the address space can be used by the organization. This creates a provider-independent address.
The AFI specifies the format of the IDI and the addressing authority responsible for the IDI. The AFI could select X.121, ISO DCC, F.69 (telex), E.163 (PSTN), E.164 (ISDN), ISO 6523-ICD, or Local. The DFI contains the country code; Org is the ANSI organization identifier. Routing Domain and Area are the topological routing information. The Reserved field was to allow for another level of the routing hierarchy if it was required. The System field is six octets so that an Ethernet address can be used; if this is interpreted too literally, it will force the NSAP to name the interface, not the network entity as intended. (Groan. In a committee, it is sometimes difficult to keep people from wanting to do it wrong.) Although this format incorporates location-dependent elements, it does not indicate where in the topological structure of the network the address is. It doesn't help determine "which way" to send a PDU or whether two destinations are "near" each other. This address is location dependent more in the sense of Boston than of Chicago!
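The following sketch in Python slices an NSAP according to this description. The field widths follow the commonly cited 20-octet U.S. GOSIP profile and should be treated as illustrative rather than normative; the sample values are made up.

```python
# A sketch that slices the U.S. NSAP layout described above. Field widths
# follow the commonly cited 20-octet U.S. GOSIP profile; illustrative only.

FIELDS = [   # (name, length in octets)
    ("AFI", 1), ("IDI", 2), ("DFI", 1), ("Org", 3),
    ("Reserved", 2), ("RoutingDomain", 2), ("Area", 2),
    ("System", 6), ("Selector", 1),
]

def parse_nsap(octets: bytes) -> dict:
    assert len(octets) == 20, "U.S. profile NSAPs are 20 octets"
    out, pos = {}, 0
    for name, length in FIELDS:
        out[name] = octets[pos:pos + length].hex()
        pos += length
    return out

sample = bytes.fromhex("47" "0005" "80" "005a00" "0000" "0001" "0002"
                       "02608c123456" "00")   # made-up values
print(parse_nsap(sample))   # note the 6-octet System field: room for a MAC address
```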
This address space reflects the growing understanding of addressing. The IP address space was mostly concerned with identifying networks and hosts, without much concern for their relative position in a topology. At this point, although it was understood that something analogous to a "Chicago address" would be useful, no one had any idea how to construct such a solution. It really wasn't understood that addresses needed to be topological (in the mathematical sense). With the NSAP address space, there is more concern that a topology be reflected in the address space, by including the DFI or country identifier and the organization identifier. However, this topology is not completely satisfactory either. The scheme assumes that routing domains are below the level of organizations. This would be the case for large companies but hardly for smaller ones. Similarly, there are cases where grouping several small countries under a single regional domain would be useful; conversely, breaking up larger countries into multiple domains would also be useful. Or was the address format the result of a compromise between the "X.25 faction" and the "IP faction"? This raises the question of the relation between provider-based addresses and provider-independent addresses. Clearly, provider-based addresses reflect the topology of the provider's network. What does a provider-independent address space reflect? The usual reaction is to leap immediately to a geographic approach. But is this the only one? Are there others that are not totally geographic in nature?
There were other minor problems: The format assumes that organizations are a proper subset of countries. (Although one could assume that a company's presence in another country simply has different values for these fields.) The only other problem with the address format is the selector field, which supposedly identifies the protocol in the layer above. The OSI Architecture group had taken the position that it was counter to the architecture for an (N)-protocol to identify an (N+1)-protocol: a horrid layer violation. At the time, this was seen as relating to addressing. So rather than a field in the PCI, the Network Layer group made it a field in the address. Neither solution actually can be used to identify the upper-layer protocol, regardless of whether it is a layer violation. Such a field can only identify one occurrence of a protocol in the layer above bound to that address. (Admittedly, this does not happen often, but as with many other "rare" events, when it does happen, it can make things cumbersome if the addressing has not been done right.) There are configurations where more than one instance of the same type of protocol bound to the same network address is necessary. As we saw in Chapter 3, "Patterns in Protocols," one could argue that we weren't seeing the problem correctly, that the field identifies the syntax of the protocol. However, we will find later that both interpretations are incorrect and such a field is unnecessary.
But all in all, OSI progressed the state of the art and tried to take Saltzer's advice, even if the ill-informed stuck a MAC address in the NSAP. It recognizes PoA addresses and node addresses and, as we shall see later, application names, extending Saltzer's scheme in an important way.
Communism is the longest, most torturous path from capitalism to capitalism.
—Joke that circulated in Eastern Europe at the end of the 1980s
Addressing in IPv6
So, let's consider the addressing architecture for this new IP in some detail. The IPv6 addressing specification is very emphatic: "IPv6 addresses of all types are assigned to interfaces, not nodes." However, it then observes that since any interface belongs to a single node, a "unicast address may be used as an identifier for the node"—a painful example of having heard the words but not understood their implication. We will assume that a node is synonymous with a system, and that an interface is generalized from the IMP port from which it originated; that is, an interface is the path from the bottom of the IP layer through any lower-layer protocols to the physical media connecting to another system.
One exception to this model is granted to allow multiple physical interfaces to be assigned the same address as long as the implementation treats these as a single interface when presenting it to the IP layer. In other words, parallel interfaces or spares can be treated as a single interface. This would seem to indicate that this is a degenerate form of anycast address—and another kludge to make up for not having node and PoA addresses.
The Various Address Types
Although IPv6 supports a number of address formats, the format we are most interested in is the Aggregatable Global Unicast Address. This is what most people will think of as an IPv6 address. But before we consider it, let's dispense with anycast and multicast addresses and a couple of other address types that are unique to IPv6, the link-local and site-local addresses.
There are three types of IPv6 addresses (RFC 2373, 1998):
- Unicast. An identifier for a single interface. A packet sent to a unicast address is delivered to the interface identified by that address.
- Anycast. An identifier for a set of interfaces (typically belonging to different nodes). A packet sent to an anycast address is delivered to one of the interfaces identified by that address.
- Multicast. An identifier for a set of interfaces (typically belonging to different nodes). A packet sent to a multicast address is delivered to all interfaces identified by that address.
Anycast addresses
Anycast addresses are syntactically indistinguishable from unicast addresses. According to RFC 2373, a unicast address is turned into an anycast address by having multiple interfaces assigned to it. This is not quite the case. The nodes to which the interfaces belong must be explicitly configured to be aware of this. So, in fact, it is not multiple assignment that makes it an anycast address, but configuring the nodes to know that it is multiply assigned (an enrollment phase function). The RFC imposes two constraints on the use of anycast addresses: They cannot appear as the source address in any IP packet (reasonable); and they cannot be assigned to hosts, only to routers (less so). This latter constraint is perhaps the most odd because considerable use could be made of anycast addresses in applications. The subnet prefix of an anycast address is the longest prefix that identifies the smallest topological region of the network to which all interfaces in the set belong.
How this is supposed to work is not quite clear. For different nodes to be configured to be aware that multiple interfaces have the same address requires a protocol exchange. No such protocol has yet been defined. Clearly, any use of this facility must be stateless, because successive uses may not yield PDUs being delivered to the same destination. This is another kludge to get around not having node and PoA addresses.
Multicast addresses
Multicast addresses include two subfields: A flags subfield that has 3 unused bits and a single bit that indicates whether this group address is permanently assigned; and a scope field that currently defines whether the scope of this group address is the local node, the local link, the local site, the local organization, or global. Permanently assigned multicast addresses have global scope; that is, the scope field is ignored. IPv6 defines a multicast address as "an identifier for a set of interfaces." There will be more to say on the nature of anycast and multicast "addresses" in Chapter 9, "Multihoming, Multicast, and Mobility."
Link- and site-local addresses
A link-local address essentially consists of the 10-bit format identifier in the high-order bits, a 64-bit interface identifier in the low-order bits, and 54 bits of nothing in the middle. This address form is for "local" use only. The RFC suggests that link-local addresses "are designed to be used for addressing on a single link for purposes such as auto-address configuration, neighbor discovery, or when no routers are present." The use of the term link implies that they are intended to be used on, for example, a single LAN segment (i.e., within a single subnet).
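A small sketch in Python of the layout just described (the interface identifier is made up):

```python
# A sketch of the link-local layout: the 10-bit prefix 1111111010,
# 54 zero bits, and a 64-bit interface identifier (invented here).
import ipaddress

def link_local(interface_id: int) -> ipaddress.IPv6Address:
    prefix = 0b1111111010 << 118           # 10-bit format prefix, then zeros
    return ipaddress.IPv6Address(prefix | (interface_id & ((1 << 64) - 1)))

print(link_local(0x0260_8CFF_FE12_3456))   # fe80::260:8cff:fe12:3456
```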
A site-local address, although similar to the link-local form, was to correspond to what private address space was in IPv4 (e.g., net 10). The subnet identifier distinguishes the multiple subnets within the same "site."
In 2003, there was a movement within the IPv6 working group, over considerable objections, to delete site-local addresses from the specification. There were strong feelings against the use of private address space within the IETF. Some believed that it "balkanized" the Internet, which it does, and contradicted some mythic ideal of the "spirit of the Internet." Engineering on belief rather than empiricism is always dangerous. As we have seen, NAT and private address space only break protocols in an incomplete architecture and primarily indicate bad design choices. Or to paraphrase Buckminster "Bucky" Fuller, NATs only break broken architectures. As it turns out, private address space is a natural part of any complete architecture, poses no dangers, and, in fact, has many benefits.
However, the removal of private address space from IPv6 would seem to represent a very large deterrent for corporate adoption. Although NATs do not provide complete security, they are an important element in securing and exercising control over a subnet. It is hard to imagine corporate IT directors giving up this simple measure to be replaced by elaborate and as yet unproven IPv6 security mechanisms. Once again, the IETF seems to have cut off its nose to spite its face.
In addition, address formats are defined for carrying NSAP and IPX addresses (although there is little expectation that these will ever be used).
IPv6 also allocates two special addresses: 0 and 1 (or to be precise, in IPv6 notation, 0:0:0:0:0:0:0:0 and 0:0:0:0:0:0:0:1). The unspecified address is 0 and "indicates the absence of an address." It can never be used as a destination but may appear as the source address of a sender that does not yet have an address. (It is not clear what one does with such a PDU, since it cannot be responded to, but that is not important.) The loopback address is 1 and is used by a system to send a PDU to itself. It may be used only as a destination address, and the PDU must then be sent back to the sender; it must never be relayed to an address other than the sender's, and the loopback address must not appear as a source address in a PDU.
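The two rules are easy to state as a check. This is a sketch (the function name is ours; Python's ipaddress module is used only for illustration):

```python
import ipaddress

def obeys_special_address_rules(src: str, dst: str) -> bool:
    """The unspecified address may never be a destination;
    the loopback address may never be a source."""
    s = ipaddress.IPv6Address(src)
    d = ipaddress.IPv6Address(dst)
    if d.is_unspecified:   # :: may appear only as a source
        return False
    if s.is_loopback:      # ::1 may appear only as a destination
        return False
    return True

print(obeys_special_address_rules("::", "ff02::1"))       # True: address-less sender
print(obeys_special_address_rules("::1", "2001:db8::1"))  # False
```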
IPv6 Unicast Addresses
It is the aggregatable unicast address over which there has been the greatest amount of debate. This debate has revolved around the decision that the IP address will continue to label an interface, complicated by the politics surrounding IP and OSI. By the time IPv6 was proposed, some had realized that addresses had to be topological, although they took topology to mean the graph of the network. Mainly, they were concerned that the addresses be aggregatable. As discussed in this chapter, the problem with the IPv4 address space is not so much the lack of address space as the growth of the routing tables. Reducing the number of routes that must be stored requires the ability to aggregate them. For example, the post office aggregates routes based on the hierarchy of the address (i.e., country, state/province, city, street, street number, and so on). When a letter is mailed, the first post office has to look at only the first couple of levels of the hierarchy to know where to send it. It does not need to figure out precisely where the destination is; it merely has to send the letter in the right direction. Similarly, some sort of hierarchy was required for IPv6 addresses. As we saw, CLNP adopted such a hierarchy based on countries and organizations within them.
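A minimal sketch of what aggregation buys (the prefixes are illustrative, drawn from the 2001:db8::/32 documentation block): a distant router needs to carry only the covering prefix, not a route per subnet.

```python
import ipaddress

# Three subnets inside one provider-assigned block.
subnets = [ipaddress.ip_network(n) for n in
           ("2001:db8:0:1::/64", "2001:db8:0:2::/64", "2001:db8:0:3::/64")]

# One aggregate covers them all.
aggregate = ipaddress.ip_network("2001:db8::/48")
assert all(s.subnet_of(aggregate) for s in subnets)

# Like the post office, the router needs only enough of the hierarchy
# to send the PDU in the right direction.
table = [aggregate]                          # one entry instead of three
dst = ipaddress.IPv6Address("2001:db8:0:2::10")
print(any(dst in route for route in table))  # True
```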
The Internet had the same problem that had faced OSI: a flawed architecture and a reactionary group of traditionalists who opposed any change to the concept that an address labels an interface. However, the Internet architecture was also weak in another area. It really covered only the network and transport layers (or in terms of the seven-layer model, the top third of the network layer, the SNIC, and transport) and had an address only for the bottom third. Above and below the network and transport layers there was not really any structure, so there were no conventions for names or routes, as proposed by Saltzer. This led to a tendency to try to solve everything in the network and transport layers.
The IPv6 effort determined the PDU header format and the size of the address field years before it determined what an address was to look like ("arithmetic before the algebra"). Also, most of the people involved in IPv6 were initially working under the misconception that the number of addresses was the major problem to be solved. There were some initial proposals similar to the NSAP address. But because the IPv6 address had to name an interface, to be aggregatable the addresses had to be provider-based. This had the unacceptable consequence that if one changed providers, all the hosts on one's network would have to be re-addressed. (It is significant that the term commonly used in Internet circles is renumbering rather than re-addressing, which indicates that they think of it as enumeration or naming rather than as addressing or location.)
As noted previously, a network architecture must make a transition from logical to physical at least once. The Internet architecture has no such transition. OSI had been "fortunate" enough that its traditionalist faction was X.25. That forced (or created the opportunity for) separating the physical address, or subnetwork PoA, from the network address. The Internet architecture did not really address the layers below the network layer, and there was no X.25 faction. (Its traditionalists hung on to the IP of the "good old days.") Furthermore, the political climate was such that if OSI had done something, the Internet would either not do it or do the opposite, and convince themselves there was a good technical reason to codify the old ways.11
This meant the possible solutions were severely limited. Any solution had to avoid the appearance of doing what was most reasonable (i.e., separating logical and physical into different layers). Even though the idea and the solution had originated during the early development of the Internet and had been used by the (at least politically correct) XNS, it had last been used by OSI and was therefore unacceptable. (And yes, there are many rationalizations why this was not the reason.)
The developers working on the Internet had realized for many years that something needed to be done. But in the Internet, the "host" had always been the focus of attention. There had been several proposals (Curran, 1992; Chiappa, 1995) to name "endpoints." Chiappa defined an endpoint to be "one participant of an end-to-end communication, i.e., the fundamental agent of end-to-end communication. It is the entity which is performing a reliable communication on an end-to-end basis." Chiappa et al. saw this as mapping fairly directly to the concept of "host." However, the use of one and an in the definition would seem to imply a single protocol machine rather than a collection of them. This was definitely on the right track. Replacing the traditional semantics of an IP address with the semantics of an endpoint in the protocol would have gone a long way toward solving the problems confronting IP. However, this did not meet with much acceptance, probably because the implications of continuing to name an interface with an aggregatable address had not yet dawned on many members of the Internet community, and replacing the semantics of an IP address with the semantics of an endpoint smacked too much of OSI. This situation persisted for several years, until Mike O'Dell (O'Dell, 1997) made a valiant effort to separate the IPv6 address into "routing goop," which would change when the host moved, and an invariant, globally unique "end system designator" that identified "a system invariant of its interfaces as in the XNS architecture" (emphasis added). This led to an addressing format (Figure 5-7) in which the interface-id was the end-system designator and the rest was the routing goop:
Where:

FP: The format prefix (3 bits)
TLA ID: Top-level aggregation identifier (13 bits)
Res: Reserved (8 bits)
NLA ID: Next-level aggregation identifier (24 bits)
SLA ID: Site-level aggregation identifier (16 bits)
Interface ID: Interface identifier (64 bits), probably an EUI-64 identifier

Figure 5-7 Format of an aggregatable IPv6 address.
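A sketch of slicing a 128-bit address into these fields (the function is ours; the sample address is drawn from the documentation prefix):

```python
import ipaddress

# Field widths from Figure 5-7: 3 + 13 + 8 + 24 + 16 + 64 = 128 bits.
FIELDS = (("FP", 3), ("TLA ID", 13), ("Res", 8),
          ("NLA ID", 24), ("SLA ID", 16), ("Interface ID", 64))

def parse_aggregatable(addr: str) -> dict:
    """Slice an aggregatable unicast address into its fields."""
    x = int(ipaddress.IPv6Address(addr))
    out, shift = {}, 128
    for name, width in FIELDS:
        shift -= width
        out[name] = (x >> shift) & ((1 << width) - 1)
    return out

print(parse_aggregatable("2001:db8:1:2:221:86ff:fe35:1a2b"))
```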
The TLA, NLA, and SLA form the routing hierarchy of the address down to the level of the subnet, and the interface-id is a completely independent, globally unambiguous identifier. But it does precisely what we found earlier that we didn't want to do: It makes the address a pathname.
This proposal came four years after the initial decision to develop IPv6. By this time, memories had faded, there had been considerable turnover in the people involved, and the ramifications of the decision had finally become clearer to many. So with a little artful prose that did not open old wounds, O'Dell's proposal was able to thread the needle between the technical requirements and the political climate, with only a moderate level of additional complexity. However, even this was unacceptable. The routing part of the IPv6 address is a path through a hierarchy of subnets, while the end-system designator has the same semantics as an IPv4 address: It names the interface (or to put it in other terms, the data link protocol machine). Here again, the IPv6 group found a way to take on the trappings of the solution without taking its substance. So although the form of O'Dell's proposal may be discernible in the IPv6 address format, the substance of it is not, and the problems remain.
At arm's length, an IPv6 address is similar in form to an NSAP. (...the longest, most tortuous path....) It was common with NSAPs to use an IEEE 802 MAC address as the system-id, analogous to the use of an EUI-64 address as the interface-id. This was a case where the OSI architecture figured something out, but the OSI network layer group, in a different committee, stayed with its intuitions. And as is so often the case in science, our intuitions were wrong. The NSAP format had four levels of hierarchy, whereas the IPv6 address has three. OSI did not require "endpoints" or anything like them because it had application names. Because the IETF had no common application naming, it had, or thought it had, to solve everything in either the network or the transport layer.
With IPv6, the routing part alone is not sufficient to distinguish a node; it can only distinguish the subnet and requires the interface-id to distinguish the node, whereas the interface-id alone can distinguish the interface. There are roughly 32 bits of redundancy in an IPv6 address (enough for a couple more levels in the routing hierarchy).
This approach will not support multihoming and mobility, for the same reasons that IPv4 does not, and it greatly exacerbates the scaling problems of IP. The impact of these problems has been known for a decade and a half, and now, at this writing, with IPv6 barely deployed, signs of problems somewhere between severe and catastrophic are already appearing. ("But she didn't do it and....")
Looking Back over IPv6
IPv6 has not instilled a lot of confidence among the cognoscenti. In fact, fear and trepidation are closer to the case. But deployment is beginning in fits and starts. There are still strong debates going on relating to the architecture of its addressing. For example, until very recently some still argued that multihoming was being overly stressed. They contended that only a few hosts would need it and that a solution to multihoming was not really required; or that because so few hosts need it, its cost should not be incurred by those who don't. This essentially ensures that any solution will be asymmetric and consequently will appear, and be, cumbersome and hence unacceptable.12
Superficially, it might appear that only a small percentage of all hosts require multihoming; there are many more individuals connected to the Net than servers. However, even a small percentage of a large number is a large number, and the real point is that the hosts that do need multihoming are very important to all the others. This is changing: The more companies come to rely on the Internet, the more they see multihoming as a necessity, and the more its absence becomes a problem. And why is there an assumption that a solution must cost more, when in fact it could cost less? Why should there be so much debate over not doing multihoming? Redundant connections to the network would seem to be an "apple pie" issue. Of course redundancy is a good thing, but not for the traditionalists: A simple solution to multihoming requires changing the semantics of the address, and if multihoming is not important, there is no need for the change. So the argument that multihoming is not important is actually more political than technical.
The concern over the addressing situation was sufficiently great that in 1999 the IAB created a working group in the Internet Research Task Force (IRTF, the research side of the IETF), independent of the IPv6 work, to consider namespace issues. This group met several times. There was a lot of discussion of endpoints as opposed to naming, but without a strong architectural model it was impossible to establish precisely what was required. Consequently, there was no consensus on the conclusions. But this effort did seem to focus the discussion on what has become known as the locator/identifier split. Many see the problem with the IP address as being that its semantics have been overloaded with both locator and identifier meaning, and believe that if we simply separate them, all the problems will be solved. Notice that they do not see that an IP address naming the interface names the same thing the MAC address does, even while they rely on the fact that the MAC address has greater scope than the IP address to make certain mobility-related capabilities work.
However, referring back to the Saltzer paper, this approach gives us an application name and a PoA address. Once again, it addresses the symptom but not the problem. The Internet's focus on the transport and network layers has led to attempts to solve these problems in one of those two places. But there is no such thing as a transport address. This is creating a "beads-on-a-string in disguise" model, not an operating system or distributed-systems model. Consequently, efforts such as the Host Identity Protocol (HIP) (RFC 4423) and SHIM6 (Nordmark and Bagnulo, 2006) are simply more stopgaps that fail to address the whole problem and apply yet another band-aid to one aspect of it. As many in the Internet rightly realize, all of these myopic band-aids are creating a system that is more and more unwieldy.
Many prominent members of the Internet technical community have not expected wide deployment of IPv6. The biggest problem is that IPv6 offers very little to those who have to pay for its adoption. The removal of site-local (private) addresses provides one more reason not to adopt IPv6 in the enterprise, but only to use it externally. All new facilities, such as security, multicast, QoS-related developments, and so on, are designed to work equally well with IPv4 or IPv6. Thus, all statements in the recent trade press that IPv6 is necessary and has better QoS, security, and such are simply spin. The only new capability provided by IPv6 is a longer address, and that in and of itself may create more problems than it solves. In early 2003, figures were published showing that around 50% of the IPv4 address space had been assigned and less than 29% was actually being used (Huston, 2003). A cursory inspection shows that between 25 and 30 Class A address blocks could and should be reclaimed. This would seem to indicate (and is supported by recent government reports) that there is no rush to move to IPv6.
The only advantages to IPv6 are the bigger address space, the loss of isolation with no equivalent to private addresses, and the knowledge that you are a good network citizen: hardly the basis for a large capital expense. This is not going to impress corporate budget committees. However, the possibility of IPv6 failing to be adopted has so alarmed certain factions that an immense PR campaign has been initiated to drum up interest. (The possibility that IPv6 may fail for technical reasons does not seem to bother them.) An IPv6 Forum was created, and many trade-journal articles have been written advocating advantages of IPv6 for security, QoS, and so on that are, in fact, unrelated to IPv6. Trade journals go out of their way to put a positive spin on even the bad news. The European Union and the U.S. government have endorsed IPv6 in much the same way they endorsed OSI two decades earlier. IPv6 advocates point to this as proof of IPv6's pending success, just as they ridiculed the same statements by OSI advocates. Others see this as the kiss of death, as it was for OSI. India, Japan, and China have embraced IPv6 mostly because they cannot get large IPv4 address blocks from IANA to support their huge populations; however, as we have seen, more than enough v4 address space exists. IPv6 may happen as much because the IETF has not been able to come up with anything that solves real problems as on its own merits. This does not bode well.
But what contribution can we say that IPv6 has brought to our problem of trying to gain a deeper understanding of the nature of addressing? Unfortunately, not much. There is really nothing new here that has not been done before. As we have seen, IPv6 is simply a more cumbersome form of IPv4.
However, it does provide further confirmation of the social behavior of standards committees (OSI provides earlier confirmation). It is another example of how a vocal, conservative (dare I say ill-informed) faction can slow progress, and of the lengths to which a minority with greater technical understanding must go to bend the position of the conservatives toward some sort of solution that solves real problems.13 Not to mention that this direction benefits the vendors: Not only does the iterative increase in complexity keep a steady stream of new products to buy, but it also serves as a barrier to entry to new competitors and keeps customers tied to the vendor, because their personnel can't understand the interactions of all the incremental improvements. CLNP had been only a slight improvement over IPv4, but it was a bigger step than IPv6 represents, and at least a move in the right direction. All of this contributes to the feeling that the concepts had run out of steam. After about 1975, there was very little new or innovative thinking going on. The only significant development one can point to is link-state routing, which was done primarily in OSI and stimulated similar efforts in the IETF.
If there is anything to learn from the IPv6 experience, it probably has more to do with the dynamics (or lack thereof) of consensus. It was James Madison (1787) who was the first to realize the inherently conservative nature of such groups. And human nature hasn't changed in 200 years. In his case, it led to the creation of mechanisms to stabilize an otherwise unstable system. In this environment, the lack of understanding of this dynamic has merely undermined innovation in a fast-moving technology. OSI started out as a "revolutionary" group intending to promulgate the packet network connectionless model. But the European tendency toward centralism and fear of the PTTs expanded the participation in the effort to include the opposition that saw X.25 as the answer to all network layer issues. This irresolvable conflict so severely split the OSI attempt that it ultimately failed. We have already discussed how the minority had to contort that architecture to achieve a semblance of a reasonable addressing architecture for the network layer, only to have it botched by the implementers. The fundamental lesson here is that the old paradigm can never be invited to collaborate with the new paradigm.
In the IETF, the conservatives have been a similar drag on innovation and good engineering. But here the stakes are much higher: OSI never had wide deployment, whereas businesses the world over now depend on the Internet. The IETF is now more concerned that the Internet architecture not deviate from the old ways, as if the architecture of 1972 had been handed down on stone tablets from on high, when in reality it was done by a group of engineers struggling to understand a new field and just to get something that worked. The conservatives now read deep meaning into what were expedient hacks, whose authors knew they were hacks and knew they would need to be replaced "when there was time." The keepers of the flame are protecting an unfinished demo, rather than finishing it in the spirit in which it was started.
So if we have learned anything from IPv6, it is that all committees behave pretty much the same and will try to avoid deviating from the status quo. The problem within the IETF is compounded by the "demokratic" organization, rather than a "representative" or republican organization. It has been well understood for 250 years that democracies don't work and are susceptible to just this kind of long-term behavior. But, mechanisms can be created in a republican form of organization that will work; this was Madison's innovative discovery in system design. Representative forms have the potential to adopt new results not yet fully understood by the larger group. However, it remains that the only time a committee will do something innovative is when the majority perceives it as unimportant. Not exactly a result that is terribly helpful or encouraging.
"Upper-Layer" or Application Addressing in OSI
From our previous discussion, we would expect addressing for upper layers to involve some unique problems. According to Shoch and Saltzer, applications are supposed to have names, whereas lower-layer protocols have addresses. We must consider the problem of naming applications and relating that to addressing. Let's consider how the Internet and OSI dealt with upper-layer addressing.
As noted earlier, the early ARPANET had its hands full demonstrating a resource-sharing network and created "well-known sockets" as a stopgap so that it could demonstrate the usefulness of the network. The need for a directory was well understood at the time, but there were other priorities. Because there were no new applications in the Internet for another 20 years, there was no reason to change. (And by this time, there was a new generation of engineers who now argued that well-known sockets were a gift from the gods, divine insight, not a kludge that should be fixed.)
The first impetus for change came not from applications and all the resource sharing that had been expected, but from the proliferation of hosts. Since the beginning, each host had maintained its own table of hostnames and their corresponding network addresses (NCP or IP). Only a few hosts might be added per month, and not all hosts found it necessary to keep a complete table. However, as the rate of new hosts increased in the late 1970s, this fairly informal approach was no longer practical. The result was the development of the Domain Name System (DNS) (RFC 881, 882). DNS defined a database structure not only for mapping hostnames to addresses, but also for distributing the database to servers around the network. Later, DNS was also used in resolving URLs for HTTP.
URLs are not the same as well-known sockets. A well-known socket identifies a special transport layer port identifier that has a particular application protocol bound to it. There is an implicit assumption that there is only one instance of this protocol per host. A connection to a well-known socket will create a distinct connection or flow to the requestor. A URL identifies an application (i.e., a particular Web page that uses that protocol [HTTP]), and an arbitrary instance of that application is created. We must be careful when talking about URLs. What they were defined for and how they are used in combination with other conventions make them several things at once. This is fine and perhaps even advantageous for human use, but for architecture we need to understand the different objects being named and their relation.
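A sketch of the distinction (the hostname and page path are illustrative): DNS resolves the name, the well-known port implies the protocol, and the URL names the application reached through it.

```python
import socket
from urllib.parse import urlsplit

# A well-known socket: port 80 implies the protocol (HTTP), and only one
# instance of that protocol can be bound to it per host.
info = socket.getaddrinfo("www.example.com", 80, proto=socket.IPPROTO_TCP)
print(info[0][4])    # (ip, 80): a transport-layer point of attachment

# A URL names an application (a particular page served via HTTP);
# the server creates an arbitrary instance of it per request.
url = urlsplit("http://www.example.com/reservations/today")
print(url.hostname, url.port or 80, url.path)
```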
As discussed in Chapter 4, OSI created problems for itself by getting the upper layers upside down. Applications sat on top of two layers (session and presentation) that had addressing (a general property of a layer). These layers were constrained to not allow mapping between connection and connectionless and to have no multiplexing. Consequently, mappings between two layers were required to be one-to-one. There was no need for addressing in these two layers. Another indication that these were not layers.
We saw that for the lower layers it was not a good idea to create addresses for a layer by concatenating the address of the layer below, because that formed a pathname. For the upper layers of OSI, there was no multiplexing and, hence, no multiple paths; however, concatenation would still create very long addresses with considerable redundant information as one moved up from the network layer. For example, because a transport address would be NetAddr.suffixT, the session address to be carried in protocol would be TrptAddr.suffixS, or NetAddr.suffixT.suffixS, and the presentation address would be NetAddr.suffixT.suffixS.suffixP. This creates a lot of unnecessary overhead in the PDUs. To avoid this, an (N)-address for the transport, session, and presentation layers was defined as a tuple consisting of a network address and the appropriate number of (N)-selectors. Thus, a presentation address was defined as follows:
- (Network address, T-sel, S-sel, P-sel)
The PCI in each layer above the network layer carried only the selector. If an implementer was smart, the P-selector and S-selector were null. Consequently, the only addressing above the network layer was that the transport protocol had to carry a T-sel of 16 bits.14
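A sketch of the tuple form (the type and field names are ours, and the NSAP value is a placeholder, not a real address):

```python
from typing import NamedTuple, Optional

class PresentationAddress(NamedTuple):
    """(Network address, T-sel, S-sel, P-sel), per the definition above."""
    net_addr: bytes          # the network (NSAP) address
    t_sel: Optional[bytes]   # carried in transport PCI (16 bits)
    s_sel: Optional[bytes]   # null, if the implementer was smart
    p_sel: Optional[bytes]   # likewise

# Only the selector travels in each layer's PCI; the full
# NetAddr.suffixT.suffixS.suffixP pathname never appears on the wire.
paddr = PresentationAddress(bytes.fromhex("490001aabbccddeeff00"),
                            t_sel=b"\x00\x01", s_sel=None, p_sel=None)
print(paddr)
```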
Because there was no addressing in the session and presentation layers, the interesting aspect of OSI addressing for the upper layers was the addressing architecture of the application layer. In Chapter 4, we saw how the distinction between the application process and application entity came about. Now we have to consider how the naming of them works.
To recap from Chapter 4, OSI distinguished the "application entity" (AE), which was within the OSI architecture and consisted of the application protocols. Databases, file systems, the rest of the application, and so on were outside of OSI. (This was somewhat political because it meant that the OSI committee was not going to tread on the turf of other committees.) Thus, the protocols an application used were part of the network architecture but everything else was outside. This is exactly the distinction we noted in the Web page example earlier. The application that constitutes the Web page and everything it needs is outside the communication architecture, but the HTTP protocol (and any other application protocols it uses, such as FTP or a remote query protocol) is within the architecture.
Thus, the Web application is an AP, and HTTP is the AE; and in this case, the AP may have several AE instances for the simultaneous HTTP connections. Each must be distinctly identifiable. An application could have multiple protocols associated with it. For example, a hotel reservation application might use HTTP to talk to the customer and a remote database protocol to make the reservation. Similarly, an application could have multiple instances of each protocol and different dialogs with different customers. So there could be multiple application-entity instances. Of course, the designer might choose to instantiate a different process for each customer, so that there are multiple instances of the application process but single instances of the AEs. Clearly, there could be applications with instances of both processes and entities. The AEs were the only part of the application process inside the OSI architecture.
We can see in hindsight that the early Internet applications were special cases and hence not good examples to generalize from. Not only were the protocol and the application essentially synonymous, but there was only one per system. This is where our operating system experience was not sufficiently rich and we needed insight from the users' world. Our first real-life example of this application structure was the Web.
Once this structure was recognized, the application naming architecture was straightforward. OSI defined naming that allowed AEs and their instances as well as APs and their instances to be addressed. Addressing in the lower layers had never bothered to address to the level of instances. There is no reason to connect to a specific transport or TCP connection. They are all the same. However, for applications this is not the case. Recovery and other mechanisms would need to be able to establish or reestablish communication to an existing invocation of a protocol (AE) or to the invocation of an application (AP) using it. This leads to the addressing structure shown in Table 5-1.
Table 5-1. Summary of OSI Application Naming
Item (identified by)    | APT | APII | AEQ | AEII
Appl Process            |  +  |      |     |
Appl Process Invocation |  +  |  +   |     |
Appl Entity             |  +  |      |  +  |
Appl Entity Invocation  |  +  |  +   |  +  |  +

APT = Application-Process-Title (scope: application layer)
APII = Application-Process-Invocation-Identifier (scope: application process)
AEQ = Application-Entity-Qualifier (scope: application process)
AEII = Application-Entity-Invocation-Identifier (scope: (API, AE))
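A sketch of Table 5-1 as a data type (the names and fields are ours, illustrative only, not any OSI notation), using the hotel-reservation example from earlier:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ApplicationName:
    """Which identifiers name which object, per Table 5-1."""
    apt: str                    # Application-Process-Title
    apii: Optional[int] = None  # AP-invocation id; unambiguous within the AP
    aeq: Optional[str] = None   # AE qualifier; unambiguous within the AP
    aeii: Optional[int] = None  # AE-invocation id, within (invocation, AE)

ap     = ApplicationName("hotel-reservations")                 # the AP
ap_inv = ApplicationName("hotel-reservations", apii=42)        # one invocation
ae     = ApplicationName("hotel-reservations", aeq="HTTP")     # one protocol (AE)
ae_inv = ApplicationName("hotel-reservations", apii=42,
                         aeq="HTTP", aeii=7)                   # one AE invocation
# Recovery can rebind to a specific invocation, something lower-layer
# addressing never needed to do.
```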
Before one balks too much at the apparent complexity of this naming structure, a couple of things need to be observed. First of all, most applications don't need most of this. But the ones that do, really need it. Second, the complex forms, when they are needed, are generally needed by processes, not humans. Third, it is not at all clear that any "naming" at this level should be intended for human use. In the days of command language–driven operating systems, application names and filenames were intended for human use. However, today this is much less clear. What we used to think of as "user-friendly" (e.g., www.cnn.com) is not considered so today.
In the early days of networking, it was believed that applications had names and hosts had addresses. But this was an artifact of the implementation (and sloppy thinking); it turns out that when one carefully analyzes the problem, the host never appears (another surprise). Processes on a host appear but not the host. As we saw, this concept was brought over from operating systems. As understanding improved, it became clear that the important property of addresses is that they are used to "locate" objects; that is, that they be topologically significant. But application "names" are not just labels. They are used to locate applications and are just as topological as addresses, although admittedly in a very different topology. The structure of application names is used just as much to locate the application in the space of applications as the structure of network addresses locates in the space of network nodes. (This might be close to what some call the "semantic Web.")
In most incarnations, this leads to proposals for a hierarchical name structure. More recently, however, this has been challenged by a more brute-force approach relying on searching. The role that many in the 1980s and early 1990s saw a system like the X.500 Directory or URNs playing now seems to be supplanted by Google, Yahoo!, and so on. Even within our systems, we have relied on search rather than richer structures. It remains to be seen whether searching can scale or whether other mnemonic or more structured methods will be necessary. But the fact remains that some form of common name that humans can exchange among themselves for use with computers is needed. How do we make this user friendly when a Macintosh might be a red apple, a computer, a stereo amplifier, or a raincoat? Or do the humans have to learn how to be friendly with the names computers use? For our purposes, we are less concerned with how these interface to people and more concerned with what needs to be named, the properties of the names, and their relations.
URI, URL, URN, and So On: Upper-Layer Addressing in the Internet
As noted in Chapter 4, there has been very little work in the Internet space on upper-layer architecture and, consequently, on naming and addressing issues in the upper layers. Everything derives from the host-naming convention. Originally, the convention was simply <hostname>; as the number of hosts grew, it became necessary to move to a multilevel structure:
- <local domain-id>.<host/site name>.<TL-domain>
This structure was generally used to name hosts within a site or subnet. In fact, if one looks closely at the URL syntax, one gets the impression that it is more a character-oriented syntax for specifying network layer constructs or a one-line macro facility not unlike the UNIX or Multics command line.
The work on the Uniform Resource Name (URN) moves to a more sophisticated level of directory functions but does not really give us any insight into the architecture of application-naming requirements. The URN work in essence defines a syntax for the names of resources and its interaction with a database, defining various mechanisms to search the database and return a record. What the record contains is left to the designer of the specific URN. The URN syntax defines the top level of a hierarchy and conventions of notation and then allows specific communities to define the specific syntax to fit their application.
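A sketch of that top-level split, urn:<namespace-id>:<namespace-specific-string> (the example URN is invented for illustration):

```python
def parse_urn(urn: str) -> tuple:
    """Split a URN into its namespace identifier (NID) and
    namespace-specific string (NSS). Only the top level is defined
    globally; the NSS syntax is left to each community."""
    scheme, nid, nss = urn.split(":", 2)
    if scheme.lower() != "urn":
        raise ValueError("not a URN")
    return nid, nss

print(parse_urn("urn:example:reservations:hotel-42"))
# -> ('example', 'reservations:hotel-42')
```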
This would lead us to look at the applications to perhaps find some insights into application architecture naming issues. Unfortunately, most applications have not reached a level of complexity that requires more structure than a simple pathname hierarchy.