1.2 Types of Packet-Switched Networks
Packet-switched networks are classified as connectionless networks and connection-oriented networks, depending on the technique used for transferring information. The simplest form of a network service is based on the connectionless protocol that does not require a call setup prior to transmission of packets. A related, though more complex, service is the connection-oriented protocol in which packets are transferred through an established virtual circuit between a source and a destination.
1.2.1 Connectionless Networks
Connectionless networks, or datagram networks, achieve high throughput at the cost of additional queuing delay. In this networking approach, a large piece of data is normally fragmented into smaller pieces, and each piece is then encapsulated with a formatted header, resulting in the basic Internet transmission unit: the packet, or datagram. We use the terms packet and datagram interchangeably for connectionless networks. Packets from a source are routed independently of one another. In this type of network, a user can transmit a packet at any time, without notifying the network layer. Each packet is then sent over the network, and every router that receives it forwards it to the best next router it knows of, until the packet reaches the destination.
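To make the fragmentation and encapsulation step concrete, the following minimal Python sketch (an illustration, not part of the text; the header fields and the 1,000-byte payload size are assumptions) splits a message into datagrams, each carrying its own header so that routers can forward it independently:

```python
# Illustrative sketch: fragment a message into datagrams, each with its own
# header. Header fields and payload size are assumptions for illustration.

MAX_PAYLOAD = 1000  # bytes of data per datagram (assumed)

def fragment(message: bytes, src: str, dst: str):
    datagrams = []
    for seq, offset in enumerate(range(0, len(message), MAX_PAYLOAD)):
        payload = message[offset:offset + MAX_PAYLOAD]
        header = {"src": src, "dst": dst, "seq": seq, "length": len(payload)}
        datagrams.append((header, payload))   # header + payload = one datagram
    return datagrams

packets = fragment(b"x" * 2500, src="A", dst="B")
print(len(packets))      # 3 datagrams
print(packets[0][0])     # {'src': 'A', 'dst': 'B', 'seq': 0, 'length': 1000}
```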
The connectionless networking approach does not require a call setup to transfer packets, yet it still provides error-detection capability. The main advantage of this scheme is its ability to route packets over an alternative path when a fault is present on the preferred transmission link. On the flip side, since packets belonging to the same source may be routed independently over different paths, they may arrive out of sequence; in such a case, the misordered packets are resequenced and then delivered to the destination.
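The resequencing step can be sketched as a small buffer at the destination that holds early arrivals and releases packets in order. The Python sketch below is an illustration only (it reuses the seq field assumed above; it is not the text's algorithm):

```python
# Illustrative resequencing buffer at the destination: hold out-of-order
# datagrams and deliver them in sequence-number order.

class Resequencer:
    def __init__(self):
        self.expected = 0    # next sequence number to deliver
        self.buffer = {}     # seq -> datagram, held until its turn comes

    def receive(self, seq, datagram):
        self.buffer[seq] = datagram
        delivered = []
        while self.expected in self.buffer:       # release any in-order run
            delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return delivered

r = Resequencer()
print(len(r.receive(1, "pkt1")))   # 0 -> held, still waiting for packet 0
print(len(r.receive(0, "pkt0")))   # 2 -> packets 0 and 1 delivered in order
```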
Figure 1.6 (a) shows the routing of three packets, packets 1, 2, and 3, in a connectionless network from point A to point B. The packets traverse the intermediate nodes in a store-and-forward fashion, whereby packets are received and stored at a node on the route; when the desired output port of the node becomes free for a packet, that packet is forwarded to the next node. In other words, on receipt at a node, a packet must wait in a queue for its turn to be transmitted. Nevertheless, packet loss may still occur if a node’s buffer becomes full. The node determines the next hop from information read from the packet header. In this figure, the first two packets move along the path A, D, C, and B, whereas the third packet moves on a separate path, owing to congestion on path A–D.
Figure 1.6 Two models of packet-switched networks: (a) a connectionless network and (b) a connection-oriented network
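The per-node store-and-forward behavior just described can be sketched in a few lines of Python. This is an illustration only; the forwarding-table contents and the class layout are assumptions, not the text's design:

```python
# Illustrative store-and-forward node: received packets wait in a FIFO queue,
# and the next hop is read from a forwarding table keyed by the destination
# carried in the packet header.
from collections import deque

class Node:
    def __init__(self, name, forwarding_table):
        self.name = name
        self.table = forwarding_table   # destination -> next-hop node name
        self.queue = deque()            # packets stored until the output is free

    def receive(self, packet):
        self.queue.append(packet)       # store

    def forward_one(self):
        if not self.queue:
            return None
        header, _payload = self.queue.popleft()   # forward when the port is free
        return self.table[header["dst"]]          # next hop chosen from the header

# Node D from Figure 1.6 (a), with an assumed forwarding table.
d = Node("D", {"B": "C"})
d.receive(({"src": "A", "dst": "B", "seq": 0}, b"payload"))
print(d.forward_one())   # 'C' -- the packet's next hop toward B
```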
The delay model of the three packets discussed earlier is shown in Figure 1.7. The total transmission delay for a message three packets long traversing from the source node A to the destination node B can be approximately determined. Let tp be the propagation delay between each pair of adjacent nodes, tf be the time it takes to inject a packet onto a link, and tr be the total processing delay of a packet at each node. A packet is processed once it is received at a node. The total transmission delay, Dp, for nb nodes and np packets can then be estimated in general terms.
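One closed form consistent with the worked example below, taking nb to be the total number of nodes on the path (the source, the intermediate nodes, and the destination) and assuming that each packet incurs the processing delay tr once at every node it visits, with tr no larger than tf so that processing overlaps transmission at upstream nodes, is

Dp = (nb − 1)tp + (nb + np − 2)tf + (nb)tr

With nb = 4 and np = 3, as for path A, D, C, B in Figure 1.6 (a), this expression gives Dp = 3tp + 5tf + 4tr, the value obtained in the example that follows.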
Figure 1.7 Signaling delay in a connectionless network
In this equation, tr includes a crucial delay component, the packet-queueing delay, plus some delay due to finding a route for the packet. At this point, we focus only on tp and tf and assume tr is known or given; the queueing delay and the other components of tr are discussed in later chapters, especially in Chapter 11.
Example. Figure 1.7 shows a timing diagram for the transmission of three (instead of two) packets on path A, D, C, B in Figure 1.6 (a). Determine the total delay for transferring these three packets from node A to node B.
Solution. Assume that the first packet is transmitted from the source, node A, to the next hop, node D. The total delay for this transfer is tp + tf + tr. Next, the packet is similarly transferred from node D to the next node, and so on, until it reaches node B. The delay for each of these hops is also tp + tf + tr. However, once all three packets have been released from node A, multiple simultaneous transmissions of packets become possible. This means, for example, that while packet 2 is being processed at node D, packet 3 is being processed at node A. Figure 1.7 clearly shows this parallel processing of packets. Thus, the total delay for all three packets to travel from the source to the destination via the two intermediate nodes is Dp = 3tp + 5tf + 4tr.
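To make the pipelining concrete, the short Python sketch below (an illustration, not part of the text) computes the delivery time of the last packet under the same assumptions used for the formula above: every packet is processed once, for tr, at each node it visits, including the source and the destination, and tr does not exceed tf. Under those assumptions it reproduces Dp = 3tp + 5tf + 4tr:

```python
# Illustrative check of the example's delay, under the stated assumptions.

def last_packet_delay(num_packets, num_links, tp, tf, tr):
    # Time each packet becomes ready at the source (processed one after another).
    ready = [(k + 1) * tr for k in range(num_packets)]
    for _ in range(num_links):
        tx_end = []
        for k in range(num_packets):
            # A packet is transmitted once it is ready and the link is free.
            start = ready[k] if k == 0 else max(ready[k], tx_end[k - 1])
            tx_end.append(start + tf)
        # Propagate over the link, then get processed at the receiving node.
        ready = [t + tp + tr for t in tx_end]
    return ready[-1]   # delivery time of the last packet at the destination

tp, tf, tr = 5.0, 2.0, 1.0                    # sample values with tr <= tf
print(last_packet_delay(3, 3, tp, tf, tr))    # 29.0
print(3 * tp + 5 * tf + 4 * tr)               # 29.0, i.e., Dp = 3tp + 5tf + 4tr
```

The outer loop advances all packets one link at a time, so the overlap between transmission at one node and processing at the next is captured directly.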
Connectionless networks demonstrate the inefficiency of transmitting a large message as a whole, especially in noisy environments, where the error rate is high. It is obvious that the large message should be split into packets. Doing so also helps reduce the maximum delay imposed by a single packet on other packets. In fact, this realization resulted in the advent of connectionless packet switching.
1.2.2 Connection-Oriented Networks
In connection-oriented networks, or virtual-circuit networks, a route setup between a source and a destination is required prior to data transfer, as in the case of conventional telephone networks. In this networking scheme, once a connection or a path is initially set up, network resources are reserved for the communication duration, and all packets belonging to the same source are routed over the established connection. After the communication between a source and a destination is finished, the connection is terminated using a connection-termination procedure. During the call setup, the network can offer a selection of options, such as best-effort service, reliable service, guaranteed delay service, and guaranteed bandwidth service, as explained in various sections of upcoming chapters.
Figure 1.6 (b) shows a connection-oriented network. In this figure, the three packets move along path A, D, C, and B after a prior connection establishment. During the connection setup process, a virtual path is dedicated, and the forwarding tables are updated at each node on the route. Figure 1.6 (b) also shows acknowledgement packets, which are initiated by destination node B and sent to source node A to acknowledge the receipt of previously delivered packets. Such an acknowledgement mechanism is not typically used in connectionless networks. Connection-oriented packet switching typically reserves network resources, such as buffer capacity and link bandwidth, to provide guaranteed quality of service and delay. The main disadvantage of connection-oriented packet-switched networks is that, in case of a link or switch failure, the call setup process has to be repeated for all the affected routes. Also, each switch needs to store information about all the flows routed through it.
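To illustrate the kind of per-node state a connection setup installs, the Python sketch below (an illustration only; the virtual-circuit identifier, the path, and the table layout are assumptions, not the text's design) adds a forwarding entry at every node on the path, after which data packets are forwarded by their circuit label alone:

```python
# Illustrative virtual-circuit forwarding state. During connection setup a
# (vc_id -> next hop) entry is installed at every node on the path; data
# packets then carry only the short VC label instead of full routing data.

def set_up_connection(path, vc_id, tables):
    """Install forwarding entries for one virtual circuit along `path`."""
    for here, nxt in zip(path, path[1:]):
        tables.setdefault(here, {})[vc_id] = nxt   # at `here`: vc_id -> next hop

def forward(node, vc_id, tables):
    return tables[node][vc_id]                     # next hop looked up by label

tables = {}
set_up_connection(["A", "D", "C", "B"], vc_id=7, tables=tables)
print(forward("A", 7, tables))   # 'D'
print(forward("D", 7, tables))   # 'C'
print(forward("C", 7, tables))   # 'B'
# On a link or switch failure, these entries must be rebuilt by a new setup.
```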
The total delay in transmitting a packet in connection-oriented packet switching is the sum of the connection setup time and the data-transfer time. The data-transfer time is the same as the delay obtained in connectionless packet switching. Figure 1.8 shows the overall delay for the three packets presented in the previous example. The transmission of the three packets starts with a connection request packet followed by a connection accept packet. At this point, a circuit is established, and part of the path bandwidth is reserved for this connection. Then, the three packets are transmitted. At the end, a connection release packet clears and removes the established path.
Figure 1.8 Signaling delay in a connection-oriented packet-switched network
The estimation of the total delay time, Dt, to transmit np packets is similar to that presented for connectionless networks. For connection-oriented networks, the total time consists of two components: Dp, which represents the time to transmit the packets, and Dc, which represents the time for the control packets. The control packets’ time includes the transmission delay for the connection request packet, the connection accept packet, and the connection release packet. Thus,

Dt = Dp + Dc
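As a rough estimate, if each of the three control packets is handled store-and-forward over the same nb − 1 links and is assumed, for simplicity, to take the same tf, tp, and tr per hop as a data packet (an illustrative assumption; control packets are typically much shorter than data packets), then Dc ≈ 3(nb − 1)(tf + tp + tr).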
Another feature, called cut-through switching, can significantly reduce the delay. In this scheme, a packet is forwarded to the next hop as soon as its header has been received and the destination address parsed. The delay is thus reduced to the aggregate of the propagation times over all hops plus the transfer time of a single hop. This scheme is used in applications in which retransmissions are not necessary. Optical fiber transmission, for example, has a very low loss rate and hence can use cut-through switching to reduce the delay in transmitting a packet. We will further explain the concept of cut-through switching and its associated devices in Chapters 2 and 12.
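In the notation used earlier, and ignoring the per-node time needed to parse the header (an assumption made here for simplicity), the cut-through delay for a single packet crossing a path of nb nodes is approximately (nb − 1)tp + tf, compared with roughly (nb − 1)(tp + tf + tr) when the same packet is stored and forwarded in full at every node.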