The Internet Business
While the telephone companies of the world were busy selling service and incorporating incremental technological innovations into their networks, the Internet and its new technology were born and hardened in a very different environment. ARPANET (Advanced Research Projects Agency Network) was conceived in the late 1960s and deployed through the 1970s across colleges, universities, and research institutions. The culture of the Internet was one of innovation and research, with the overall attitude that its participants were playing in the world's greatest sandbox. Some of the craziest technical ideas imaginable have been attempted on the Internet, and many, perhaps most, have been successful in one way or another. Built on the concept that "all ideas are good ideas until proven otherwise" and on open deliberation of technical concepts through the Internet Engineering Task Force (IETF), the Internet grew up in a collegial and entrepreneurial incubation think tank.
Two key facets of the Internet define its operational properties: it has a single unit of delivery, the packet, and it makes no guarantees whatsoever about any aspect of packet delivery. This is in stark contrast to the thinking of Bell System engineers, who sought every means possible to ensure that information would arrive in a timely and uncorrupted manner. These two fundamentally different perspectives follow from the nature of the media being transported: voice can tolerate distortion and noise but not delay, whereas data has precisely the opposite requirements. Internet engineers believed that the lowest-level systems should be as simple as possible and that applications should not depend on capabilities built into the network itself. To this end, they restricted the packet-switching function primarily to the straightforward task of forwarding packets to the next appropriate location based on an address carried in the packet. This scheme, called connectionless routing, provides a single, low-cost transport mechanism with no guarantees about quality of service. Any quality mechanisms, such as guaranteed delivery of packets, must be implemented outside the network in the user's equipment or applications.
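To make the forwarding model concrete, here is a toy sketch, not any real router's code; the prefixes and next-hop names are invented for illustration. It shows the connectionless forwarding function reduced to its essence: a single longest-prefix lookup from destination address to next hop, with no per-conversation state and no promise beyond a best effort.

```python
import ipaddress

# Invented routing table: destination prefix -> next hop.
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "router-a",
    ipaddress.ip_network("10.0.0.0/8"): "router-b",  # less specific fallback
}

def next_hop(dst: str) -> str | None:
    """Pick a next hop for a packet's destination address, or None."""
    addr = ipaddress.ip_address(dst)
    candidates = [net for net in ROUTES if addr in net]
    if not candidates:
        return None  # no route: a connectionless network may simply drop the packet
    best = max(candidates, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTES[best]

print(next_hop("10.1.5.9"))   # -> router-a (the more specific /16 route)
print(next_hop("192.0.2.1"))  # -> None (dropped; no guarantee was ever made)
```

Everything beyond this lookup, such as acknowledgment, retransmission, and ordering, is left to the endpoints.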
Restricting the network to such a simple lowest common denominator has allowed many different protocols and applications to be implemented and tested. All that was required was a common definition of the packet and its addressing. A computer could simply drop a packet addressed to another computer into the network, and the network would deliver it. An application that wanted to poll some device for a piece of information could just send a request message to the desired device and start a timer. The polled device would return a message (the sender's address is also carried in the packet) containing the requested information. And if the network did not deliver one or the other of these messages, or delivered it too slowly, the application's timer would expire and the application could decide what to do: send a second request, wait longer, or give up and tell the user the information cannot be retrieved.
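The poll-and-timer pattern just described fits in a few lines. The following sketch uses UDP, the Internet's connectionless datagram service; the server address, request format, and timeout values are hypothetical, chosen only to show that the timer and the retry decision live in the application, not the network.

```python
import socket

SERVER = ("198.51.100.7", 9999)  # hypothetical polled device (TEST-NET-2 address)
TIMEOUT_SECONDS = 2.0            # the application's timer, not the network's
MAX_ATTEMPTS = 3

def poll() -> bytes | None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_SECONDS)
    try:
        for _attempt in range(MAX_ATTEMPTS):
            sock.sendto(b"GET status", SERVER)  # fire-and-forget request
            try:
                reply, _addr = sock.recvfrom(4096)
                return reply  # the reply packet carried the requested information
            except socket.timeout:
                continue      # lost or slow: the application chooses to retry
        return None           # give up; the caller can inform the user
    finally:
        sock.close()

result = poll()
print(result if result is not None else "information cannot be retrieved")
```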
Notice that even in this simple application, it is the application or the user that makes these choices, not the network, and no particular outcome is mandated by the network's operation. Internet engineers consider this the most appropriate way of dealing with problems; after all, it is the application that knows what is being done, not the network. If the application can afford the time to retransmit the information, it does so. If it has some critical real-time need that does not allow retransmission, it can take some other, more appropriate action. This approach enables a broad range of applications to be formulated and used without placing any additional requirements on the network. This is one of the great powers of the Internet.
In order for the network to route packets to the proper destination, a series of protocols has been defined that enable the automatic discovery of systems attached to the network and their addresses. This has allowed the network's user base to expand rapidly without substantial human intervention in databases or routing procedures; indeed, if the addressing and routing of the Internet had to be administered manually, growth would be very slow. Because of the assumption that nothing can be trusted, each routing system periodically re-examines its routing tables and discovers alternate routes to use when a primary route is unavailable because of an equipment failure, or is performing poorly under a heavy traffic load.
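This failover behavior can be illustrated with a toy model; the route names and the liveness probe are invented, and real routing protocols such as RIP, OSPF, or BGP learn this information by exchanging messages between routers. The idea is simply that each destination keeps a ranked list of candidate routes, and the first one currently reachable is used.

```python
# Invented example data: ranked candidate next hops per destination network.
ROUTES = {
    "net-x": ["router-a", "router-b", "router-c"],  # preferred route first
}
FAILED = {"router-a"}  # pretend the primary next hop has gone down

def is_alive(hop: str) -> bool:
    # Stand-in for a real reachability check learned from routing updates.
    return hop not in FAILED

def choose_route(dest: str) -> str | None:
    for hop in ROUTES.get(dest, []):
        if is_alive(hop):
            return hop  # first live candidate wins
    return None         # destination currently unreachable

print(choose_route("net-x"))  # -> "router-b": traffic shifts to the alternate route
```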
As any user would say, the early Internet had very low quality standards. It was subject to frequent and lengthy outages and proved to be quite unreliable. These problems were not due to the underlying protocol for delivering packets; they were due to the low reliability of the computers doing the forwarding. The early Internet made use of ordinary computers in university laboratories, many of which were also used for other experiments and studies that made them unavailable to the Internet. Early computer systems were also not especially reliable and suffered frequent hardware and software problems. Sometimes it took days for a message to travel from coast to coast, and often it would not arrive at all. But for applications like email, an early and continuing Internet success, these delays were not of major importance: the email application would just keep re-sending the message and would inform the user if it eventually gave up.
So, over time, the software-based computer version of the Internet evolved into far more reliable hardware-based systems. As performance increased, so did the number of users and the applications they attempted over the Internet. Because these concepts were conceived and executed by university faculty and graduate-student researchers, many new state-of-the-art concepts in networking and applications were reduced to practice. Industrial labs were involved, some from the very outset, and these brought concrete practical ideas to the implementations as well as some interest in moving to commercial systems. This blend of ARPA-funded university and industry research brought the Internet to its current state of success.
Once the World Wide Web and browser technology were defined, commercial use of the Internet began to increase dramatically and involvement by non-experts rapidly expanded. The WWW made it possible for users to "surf the net" and for companies to put commercial ideas before a large base of potential customers. The Internet took off like a rocket, initially doubling in capacity every 100 days, the so-called "Internet Year," and it continues to grow at a very high rate. Modern, ultra-high-speed, large-scale routing systems now dominate the backbone of the Internet, and as the user base has grown, more high-speed modern computers have been connected at each user's site. Electronic commerce has pulled corporations onto the Internet, and to accomplish this, their large computing base has had to modernize and integrate with online Internet systems. All in all, the push for online electronic commerce has required modernization of information technology at every level. The consequence has been rapid evolution of all parts of the Internet itself and of the client and host computers at its edges.
The companies now participating most completely in the Internet as operators include Internet Service Providers (ISPs), Backbone Providers (BPs), and Network Access Providers (NAPs). The companies in these segments should be considered apart from the familiar "dotcom" companies traded on various stock exchanges. These companies, the Netcos, are the providers of the Internet, just as the Telcos are the providers of the PSTN. They are the companies that enable the Internet and the candidates for taking over the voice business, just as the companies that make up the Telco segment wish to become the new owners of the Internet. They currently have small but rapidly growing revenues, and there is enormous growth potential in their future, especially if they successfully capture the revenues of voice telephony.
Today's successful Internet carriers may not be the ultimate winners. The Internet has taken on a life of its own, and its open nature makes it possible for nearly anyone to become successful. An acquisition strategy executed by a party with deep financial strength could make that party a leading carrier overnight.