- Objectives
- Key Terms
- Introduction (1.0.1.1)
- LAN Design (1.1)
- The Switched Environment (1.2)
- Summary (1.3)
- Practice
- Class Activities
- Packet Tracer Activities
- Check Your Understanding Questions
The Switched Environment (1.2)
One of the most dynamic areas of networking is the switched environment, because businesses are constantly adding devices to the wired network, and those devices connect through a switch. Learning how switches operate is important to anyone entering the networking profession.
Frame Forwarding (1.2.1)
On Ethernet networks, frames contain a source MAC address and a destination MAC address. Switches receive a frame from the source device and quickly forward it toward the destination device.
Switching as a General Concept in Networking and Telecommunications (1.2.1.1)
The concept of switching and forwarding frames is universal in networking and telecommunications. Various types of switches are used in LANs, WANs, and the public switched telephone network (PSTN). The fundamental concept of switching refers to a device making a decision based on two criteria:
- Ingress port
- Destination address
The decision on how a switch forwards traffic is made in relation to the flow of that traffic. The term ingress is used to describe a frame entering a device on a specific port. The term egress is used to describe frames leaving the device through a particular port.
When a switch makes a decision, it is based on the ingress port and the destination address of the message.
A LAN switch maintains a table that it uses to determine how to forward traffic through the switch.
In the animated example:
- If a message enters switch port 1 and has a destination address of EA, then the switch forwards the traffic out port 4.
- If a message enters switch port 5 and has a destination address of EE, then the switch forwards the traffic out port 1.
- If a message enters switch port 3 and has a destination address of AB, then the switch forwards the traffic out port 6.
The only intelligence of the LAN switch is its capability to use its table to forward traffic based on the ingress port and the destination address of a message. With a LAN switch, there is only one master switching table that describes a strict association between addresses and ports; therefore, a message with a given destination address always exits the same egress port, regardless of the ingress port it enters.
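The following is a minimal Python sketch of the table lookup just described, using the ports and addresses from the example above. The table contents and the function name are illustrative only; this is a conceptual model, not an actual switch implementation.

```python
# A minimal sketch of the example switching table: each destination
# address maps to exactly one egress port, regardless of ingress port.
switching_table = {
    "EA": 4,   # messages addressed to EA exit port 4
    "EE": 1,   # messages addressed to EE exit port 1
    "AB": 6,   # messages addressed to AB exit port 6
}

def egress_port(ingress_port, destination_address):
    """Return the egress port for a message based on its destination address."""
    # The ingress port is part of the decision (a frame is never sent back
    # out the port it arrived on), but the egress port itself comes from
    # the destination address in the table.
    port = switching_table[destination_address]
    assert port != ingress_port, "a frame is not forwarded out its ingress port"
    return port

print(egress_port(1, "EA"))  # 4
print(egress_port(5, "EE"))  # 1
print(egress_port(3, "AB"))  # 6
```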
Cisco LAN switches forward Ethernet frames based on the destination MAC address of the frames.
Dynamically Populating a Switch MAC Address Table (1.2.1.2)
Switches use MAC addresses to direct network communications through the switch to the appropriate outbound port toward the destination. A switch is made up of integrated circuits and accompanying software that controls the data paths through the switch. For a switch to know which port to use to transmit a frame, it must first learn which devices exist on each port. As the switch learns the relationship of ports to devices, it builds a table called a MAC address table, or content addressable memory (CAM) table. CAM is a special type of memory used in high-speed searching applications.
LAN switches determine how to handle incoming data frames by maintaining the MAC address table. A switch builds its MAC address table by recording the MAC address of each device connected to each of its ports. The switch then uses the information in the MAC address table to send frames destined for a specific device out the port that has been assigned to that device.
An easy way to remember how a switch operates is the following saying: A switch learns on “source” and forwards based on “destination.” This means that a switch populates the MAC address table based on source MAC addresses. As frames enter the switch, the switch “learns” the source MAC address of the received frame and adds the MAC address to the MAC address table or refreshes the age timer of an existing MAC address table entry.
To forward the frame, the switch examines the destination MAC address and compares it to addresses found in the MAC address table. If the address is in the table, the frame is forwarded out the port associated with the MAC address in the table. When the destination MAC address is not found in the MAC address table, the switch forwards the frame out of all ports (flooding) except for the ingress port of the frame. In networks with multiple interconnected switches, the MAC address table can contain multiple MAC addresses for a single port, because that port connects to another switch.
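The learn-on-source, forward-on-destination logic described above can be summarized in a short Python sketch. This is a conceptual model only (real switches implement this in CAM hardware); the class and method names are illustrative, and the five-minute aging value follows the text in this section.

```python
import time

AGING_TIME = 300  # seconds; an entry is typically kept for five minutes

class MacAddressTable:
    """Conceptual model of a switch MAC address table (CAM table)."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # MAC address -> (port, timestamp of last refresh)

    def receive(self, ingress_port, src_mac, dst_mac):
        """Learn on source, forward based on destination; return the egress ports."""
        # Learn: add the source MAC, or refresh the aging timer of an existing entry.
        self.table[src_mac] = (ingress_port, time.time())

        # Age out entries that have not been refreshed within the aging time.
        now = time.time()
        self.table = {mac: (port, ts) for mac, (port, ts) in self.table.items()
                      if now - ts < AGING_TIME}

        # Forward: a known unicast address goes out one port; an unknown unicast
        # or a broadcast (ff:ff:ff:ff:ff:ff) is flooded out all ports except
        # the ingress port.
        if dst_mac != "ff:ff:ff:ff:ff:ff" and dst_mac in self.table:
            return {self.table[dst_mac][0]}
        return self.ports - {ingress_port}
```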
The following steps describe the process of building the MAC address table:
Step 1. The switch receives a frame from PC 1 on Port 1 (Figure 1-13).
Figure 1-13 Building a MAC Address Table: PC1 Sends Frame to Port 1
Step 2. The switch examines the source MAC address and compares it to the MAC address table.
- If the address is not in the MAC address table, it associates the source MAC address of PC 1 with the ingress port (Port 1) in the MAC address table (Figure 1-14).
Figure 1-14 Building a MAC Address Table: S1 Adds MAC Address Heard Through Port 1
- If the MAC address table already has an entry for that source address, it resets the aging timer. An entry for a MAC address is typically kept for five minutes.
Step 3. After the switch has recorded the source address information, the switch examines the destination MAC address.
If the destination address is not in the MAC address table, or if it is a broadcast MAC address (indicated by all Fs), the switch floods the frame to all ports except the ingress port (Figure 1-15).
Figure 1-15 Building a MAC Address Table: S1 Broadcasts the Frame
Step 4. The destination device (PC 3) replies to the frame with a unicast frame addressed to PC 1 (Figure 1-16).
Figure 1-16 Building a MAC Address Table: PC3 Sends a Reply Frame
Step 5. The switch enters the source MAC address of PC 3 and the port number of the ingress port into the address table. The destination address of the frame and its associated egress port are found in the MAC address table (Figure 1-17).
Figure 1-17 Building a MAC Address Table: S1 Adds the MAC Address for PC3
Step 6. The switch can now forward frames between these source and destination devices without flooding because it has entries in the address table that identify the associated ports (Figure 1-18).
Figure 1-18 Building a MAC Address Table: S1 Sends the Frame to Port 1
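Using the MacAddressTable sketch shown earlier, the six steps play out as follows. The MAC addresses for PC1 and PC3 and the four-port switch are hypothetical placeholders for this walkthrough.

```python
# Hypothetical MACs for PC1 and PC3; S1 has ports 1-4 in this sketch.
PC1, PC3 = "00:00:0c:11:11:11", "00:00:0c:33:33:33"
s1 = MacAddressTable(ports=[1, 2, 3, 4])

# Steps 1-3: PC1's frame arrives on Port 1; S1 learns PC1's MAC on Port 1,
# does not yet know PC3, and floods out every port except Port 1.
print(s1.receive(ingress_port=1, src_mac=PC1, dst_mac=PC3))  # {2, 3, 4}

# Steps 4-5: PC3 replies from Port 3; S1 learns PC3's MAC on Port 3 and,
# because PC1 is already in the table, forwards only out Port 1.
print(s1.receive(ingress_port=3, src_mac=PC3, dst_mac=PC1))  # {1}

# Step 6: further frames between PC1 and PC3 are forwarded without flooding.
print(s1.receive(ingress_port=1, src_mac=PC1, dst_mac=PC3))  # {3}
```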
Switch Forwarding Methods (1.2.1.3)
In earlier networks, as they grew, enterprises commonly began to experience slower network performance. Ethernet bridges (an early version of a switch) were added to networks to limit the size of the collision domains. In the 1990s, advancements in integrated circuit technologies allowed LAN switches to replace Ethernet bridges. These LAN switches were able to move the Layer 2 forwarding decisions from software to application-specific integrated circuits (ASICs). ASICs reduce the packet-handling time within the device and allow the device to handle an increased number of ports without degrading performance. This method of forwarding data frames at Layer 2 was referred to as store-and-forward switching. This term distinguished it from cut-through switching.
As shown in the online video, the store-and-forward method makes a forwarding decision on a frame after it has received the entire frame and then checked the frame for errors.
By contrast, the cut-through switching method, as shown in the online video, begins the forwarding process after the destination MAC address of an incoming frame and the egress port have been determined.
Store-and-Forward Switching (1.2.1.4)
Store-and-forward switching has two primary characteristics that distinguish it from cut-through: error checking and automatic buffering.
Error Checking
A switch using store-and-forward switching performs an error check on an incoming frame. After receiving the entire frame on the ingress port, as shown in Figure 1-19, the switch compares the frame check sequence (FCS) value in the last field of the frame against its own FCS calculation. The FCS is an error-checking process that helps to ensure that the frame is free of physical and data-link errors. If the frame is error-free, the switch forwards the frame. Otherwise, the frame is dropped.
Figure 1-19 Store-and-Forward Switching
Automatic Buffering
The ingress port buffering process used by store-and-forward switches provides the flexibility to support any mix of Ethernet speeds. For example, handling an incoming frame traveling into a 100 Mb/s Ethernet port that must be sent out a 1 Gb/s interface would require using the store-and-forward method. With any mismatch in speeds between the ingress and egress ports, the switch stores the entire frame in a buffer, performs the FCS check, forwards the frame to the egress port buffer, and then sends the frame.
Store-and-forward switching is Cisco’s primary LAN switching method.
A store-and-forward switch drops frames that do not pass the FCS check; therefore, it does not forward invalid frames. By contrast, a cut-through switch may forward invalid frames because no FCS check is performed.
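The following is a simplified Python sketch of the store-and-forward behavior, assuming the entire frame has already been buffered as bytes. Python's zlib.crc32 is used here only as a stand-in for the Ethernet FCS calculation (the real FCS is a CRC-32 computed with specific bit ordering and complementing), so the sketch illustrates the decision logic rather than the exact arithmetic; the function names are illustrative.

```python
import zlib

def store_and_forward(frame: bytes):
    """Buffer the entire frame, verify the FCS, then forward or drop."""
    payload, received_fcs = frame[:-4], frame[-4:]            # FCS is the last 4 bytes
    computed_fcs = zlib.crc32(payload).to_bytes(4, "little")  # stand-in for the Ethernet CRC-32
    if computed_fcs != received_fcs:
        return None            # frame fails the FCS check: drop it, never forward it
    return forward(payload)    # frame is error-free: look up the egress port and send

def forward(payload: bytes):
    # Placeholder for the MAC address table lookup and egress-port queuing
    # described earlier in this section.
    return payload
```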
Cut-Through Switching (1.2.1.5)
An advantage of cut-through switching is that the switch can start forwarding a frame earlier than it can with store-and-forward switching. There are two primary characteristics of cut-through switching: rapid frame forwarding and invalid frame processing.
Rapid Frame Forwarding
As indicated in Figure 1-20, a switch using the cut-through method can make a forwarding decision as soon as it has looked up the destination MAC address of the frame in its MAC address table. The switch does not have to wait for the rest of the frame to enter the ingress port before making its forwarding decision.
Figure 1-20 Cut-Through Switching
With today’s MAC controllers and ASICs, a switch using the cut-through method can quickly decide whether it needs to examine a larger portion of a frame’s headers for additional filtering purposes. For example, the switch can analyze past the first 14 bytes (the destination MAC address, source MAC address, and EtherType fields) and examine an additional 40 bytes in order to perform more sophisticated functions related to IPv4 at Layers 3 and 4.
Because the cut-through switching method performs no error check, it does not drop most invalid frames; frames with errors are forwarded to other segments of the network. If there is a high error rate (many invalid frames) in the network, cut-through switching can have a negative impact on bandwidth, clogging it with damaged and invalid frames.
Fragment Free
Fragment free switching is a modified form of cut-through switching in which the switch waits for the collision window (64 bytes) to pass before forwarding the frame. This means the switch checks each frame up through the start of the data field to make sure that it is not a fragment left over from a collision. Fragment free mode provides better error checking than cut-through, with practically no increase in latency.
Because of its lower-latency advantage, cut-through switching is more appropriate for extremely demanding, high-performance computing (HPC) applications that require process-to-process latencies of 10 microseconds or less.
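A short sketch contrasting the two cut-through decision points described above: plain cut-through needs only the first 14 bytes (destination MAC, source MAC, EtherType), while fragment free waits for the 64-byte collision window. The function names and byte handling are illustrative only.

```python
CUT_THROUGH_BYTES = 14    # destination MAC (6) + source MAC (6) + EtherType (2)
FRAGMENT_FREE_BYTES = 64  # the collision window: collision fragments are shorter than this

def can_forward(bytes_received: int, fragment_free: bool = False) -> bool:
    """Return True once enough of the frame has arrived to make a forwarding decision."""
    needed = FRAGMENT_FREE_BYTES if fragment_free else CUT_THROUGH_BYTES
    return bytes_received >= needed

def destination_mac(frame_start: bytes) -> str:
    """Extract the destination MAC from the first bytes of an Ethernet frame."""
    return ":".join(f"{b:02x}" for b in frame_start[:6])

# Plain cut-through can decide after 14 bytes; fragment free keeps waiting.
print(can_forward(14))                      # True
print(can_forward(14, fragment_free=True))  # False
print(can_forward(64, fragment_free=True))  # True
```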
Switching Domains (1.2.2)
Two commonly misunderstood terms used with switching are collision domains and broadcast domains. This section explains these two important concepts, which affect LAN performance.
Collision Domains (1.2.2.1)
In hub-based Ethernet segments, network devices compete for the medium, because devices must take turns when transmitting. The network segments that share the same bandwidth between devices are known as collision domains, because when two or more devices within that segment try to communicate at the same time, collisions may occur.
It is possible, however, to use networking devices such as switches, which operate at the data link layer of the OSI model, to divide a network into segments and reduce the number of devices that compete for bandwidth. Each port on a switch is a new segment because the devices plugged into the ports do not compete with each other for bandwidth. The result is that each port represents a new collision domain. More bandwidth is available to the devices on a segment, and collisions in one collision domain do not interfere with the other segments. This is also known as microsegmentation.
As shown in Figure 1-21, each switch port connects to a single PC or server, and each switch port represents a separate collision domain.
Figure 1-21 Collision Domains
Broadcast Domains (1.2.2.2)
Although switches filter most frames based on MAC addresses, they do not filter broadcast frames. For other switches on the LAN to receive broadcast frames, switches must flood these frames out all ports. A collection of interconnected switches forms a single broadcast domain. A network layer device, such as a router, can divide a Layer 2 broadcast domain. Routers are used to segment both collision and broadcast domains.
When a device sends a Layer 2 broadcast, the destination MAC address in the frame is set to all binary ones. A frame with a destination MAC address of all binary ones is received by all devices in the broadcast domain.
The Layer 2 broadcast domain is referred to as the MAC broadcast domain. The MAC broadcast domain consists of all devices on the LAN that receive broadcast frames from a host.
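In hexadecimal notation, all binary ones corresponds to the destination MAC address FF:FF:FF:FF:FF:FF, so a broadcast check reduces to a single comparison. A minimal Python illustration (the function name exists only for this sketch):

```python
BROADCAST_MAC = bytes(6 * [0xFF])  # ff:ff:ff:ff:ff:ff, all 48 bits set to 1

def is_broadcast(dst_mac: bytes) -> bool:
    """A frame addressed to all binary ones is received by every device in the broadcast domain."""
    return dst_mac == BROADCAST_MAC
```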
When a switch receives a broadcast frame, the switch forwards the frame out each of the switch ports, except the ingress port where the broadcast frame was received. Each device connected to the switch receives a copy of the broadcast frame and processes it, as shown in the top broadcast domain in Figure 1-22. Broadcasts are sometimes necessary for initially locating other devices and network services, but they also reduce network efficiency. Network bandwidth is used to propagate the broadcast traffic. Too many broadcasts and a heavy traffic load on a network can result in congestion, which is a slowdown in network performance.
Figure 1-22 Broadcast Domains
When two switches are connected together, the broadcast domain is increased, as seen in the second (bottom) broadcast domain shown in Figure 1-22. In this case, a broadcast frame is forwarded to all connected ports on switch S1. Switch S1 is connected to switch S2. The frame is then also propagated to all devices connected to switch S2.
Alleviating Network Congestion (1.2.2.3)
LAN switches have special characteristics that make them effective at alleviating network congestion. First, they allow the segmentation of a LAN into separate collision domains. Each port of the switch represents a separate collision domain and provides the full bandwidth to the device or devices that are connected to that port. Second, they provide full-duplex communication between devices. A full-duplex connection can carry transmitted and received signals at the same time. Full-duplex connections have dramatically increased LAN network performance and are required for 1 Gb/s Ethernet speeds and higher.
Switches interconnect LAN segments (collision domains), use a table of MAC addresses to determine the segment to which the frame is to be sent, and can lessen or eliminate collisions entirely. Table 1-2 shows some important characteristics of switches that contribute to alleviating network congestion.
Table 1-2 Switch Characteristics That Help with Congestion
| Characteristic | Explanation |
|---|---|
| High port density | Switches have high port densities: 24- and 48-port switches are often just 1 rack unit (1.75 inches) in height and operate at speeds of 100 Mb/s, 1 Gb/s, and 10 Gb/s. Large enterprise switches may support hundreds of ports. |
| Large frame buffers | The ability to store more received frames before having to start dropping them is useful, particularly when there may be congested ports to servers or other parts of the network. |
| Port speed | Depending on the cost of a switch, it may be possible to support a mixture of speeds. Ports of 100 Mb/s and 1 or 10 Gb/s are common. (100 Gb/s is also possible.) |
| Fast internal switching | Having fast internal forwarding capabilities allows high performance. The method used may be a fast internal bus or shared memory, which affects the overall performance of the switch. |
| Low per-port cost | Switches provide high port density at a lower cost. For this reason, LAN switches can accommodate network designs featuring fewer users per segment, thereby increasing the average available bandwidth per user. |