3.2 Architecture Overview
When Internet routers were first introduced, the amount of traffic processed was much less than it is now. The router's brain, its CPU, was robust enough to handle its own housekeeping, the creation and maintenance of the routing tables, and the forwarding of packets. Some routers on the market today still use this processor-based technology.
Juniper Networks has created a device that segregates the tasks and assigns them to different parts of the router, sort of like an assembly line. You can see this process in Figure 3-1. Because Juniper Networks routers are designed to serve the busy core of the network, the number of packets processed per second is in the millions. If the router were required to manage everything from its CPU, throughput would suffer, as would the ability to provide service guarantees.
Figure 3-1 Juniper Networks Routing Architecture
Most routers today are beginning to follow this model of separate processes within the router: routing and forwarding. Either through the use of ASICs on the line cards containing the interfaces or through the use of separate processors within the routing unit containing the CPU, the router manufacturers have all started to turn toward this technology. It simply makes sense. Let's examine the way a Juniper Networks router attacks this issue.
Juniper Networks routers consist of a simple architecture containing two basic components: a routing engine and a Packet Forwarding Engine (PFE). The routing engine handles the more mundane tasks, such as routing protocol calculations, control packet processing, and so on, while the PFE is allowed to move packets out of the router as quickly as possible. After all, the main duty of an Internet router is to handle the packets as little as possible as they move through its realm. To achieve flawless, wirespeed performance, Juniper Networks ensured that certain processes would be physically and logically segregated on their routers. Figure 3-2 shows the segregation of processes between the two main components of a Juniper Networks router. The rest of this chapter explains these components and processes further.
Figure 3-2 Routing Engine and PFE Processes
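As a thumbnail of this division of labor, the following Python sketch models a routing engine that computes routes and pushes a forwarding table down to a forwarding engine, which then forwards packets using only its local copy. The class and method names are invented for illustration and do not reflect JUNOS internals.

class RoutingEngine:
    """Toy control plane: learns routes, builds a forwarding table."""
    def __init__(self):
        self.routing_table = {}                  # prefix -> (next_hop, metric)

    def learn_route(self, prefix, next_hop, metric):
        # Keep only the best (lowest-metric) route for each prefix.
        best = self.routing_table.get(prefix)
        if best is None or metric < best[1]:
            self.routing_table[prefix] = (next_hop, metric)

    def build_forwarding_table(self):
        # The forwarding table carries only what forwarding needs: next hops.
        return {prefix: nh for prefix, (nh, _) in self.routing_table.items()}

class ForwardingEngine:
    """Toy forwarding plane: forwards using only its installed table."""
    def __init__(self):
        self.forwarding_table = {}               # static until replaced

    def install(self, table):
        self.forwarding_table = table

    def forward(self, prefix):
        # Packets for unknown destinations are punted to the routing engine.
        return self.forwarding_table.get(prefix, "send to routing engine")

re_unit = RoutingEngine()
pfe = ForwardingEngine()
re_unit.learn_route("10.0.0.0/8", "ge-0/0/0", metric=10)
re_unit.learn_route("10.0.0.0/8", "so-1/0/0", metric=5)
pfe.install(re_unit.build_forwarding_table())
print(pfe.forward("10.0.0.0/8"))         # so-1/0/0
print(pfe.forward("192.168.0.0/16"))     # send to routing engine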
3.2.1 Routing Engine
The Juniper Networks routing engine is an integral part of the router architecture and provides for all of the central processing and route processing requirements. As you will learn in this section, the routing engine also provides storage for the operating system and provides the CLI through which the operating system is configured.
Although there are differences between the various Juniper Networks router models, which are outlined in Section 3.2.1.3, the design of the routing engine is similar in all of them. The routing engine is primarily responsible for running the routing protocols, keeping the routing tables up-to-date, sending routing updates to the PFE, and performing system management. The routing engine communicates directly with the PFE through a 100Mbps connection.
3.2.1.1 Function of the Routing Engine
Functions provided by the routing engine include the following:
Handling of routing protocol packets
Management interface
Configuration management
Accounting and alarms
Modular software
Scalability
Routing protocol packets that arrive on the network are not handled on the PFE itself, but are passed directly to the routing engine. This effectively reduces the amount of work that the PFE has to do, enabling it to process packets to be forwarded efficiently. An example of a routing protocol packet would be a link-state advertisement (LSA) from an OSPF router. The LSA would be received on an ingress interface and sent directly to the routing engine. The routing engine would then perform the shortest-path-first (SPF) route calculations, update its OSPF database and routing table, and, in turn, send LSAs to its own neighbors. (For more information on the functions of OSPF, please refer to Section 8.4.)
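The SPF calculation itself is Dijkstra's algorithm run over the link-state database. The short Python sketch below is a textbook implementation on a made-up three-router topology, not Juniper's code, but it shows the computation the routing engine performs.

# Dijkstra's shortest-path-first over a link-state graph; the topology and
# costs here are invented for illustration.
import heapq

def spf(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                      # stale heap entry, skip it
        for neighbor, link_cost in graph[node]:
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

topology = {
    "R1": [("R2", 10), ("R3", 5)],
    "R2": [("R1", 10), ("R3", 2)],
    "R3": [("R1", 5), ("R2", 2)],
}
print(spf(topology, "R1"))    # shortest costs from R1: R2 = 7, R3 = 5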
The routing engine also provides several ways to manage the router. First, it provides the CLI, which allows the system operator to interact with the JUNOS software, the PFE, and the interfaces through configuration, modifications, and monitoring. The routing engine also runs SNMP, permitting management of the system from a network management station running software, such as Hewlett-Packard's Network Node Manager (HP-NNM), through a framework, such as Hewlett-Packard's OpenView. Finally, the craft interface, discussed in Section 3.2.1.4, provides more information for management of the router. Accounting functionality and alarms provide further manageability of the router. Alarms, seen via the craft interface, provide information to the system administrator about the condition of the router or its functions. Accounting of packets is done in the routing engine, thus negating any impact on wirespeed routing taking place in the PFE.
For change and configuration management, the routing engine allows for the storage of configuration files, microcode, and system images in one primary and two secondary locations. A unique rollback feature, provided by JUNOS, also allows the system administrator to return to a previous configuration quickly, should the new configuration prove problematic.
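Conceptually, rollback keeps a short history of previously committed configurations that can be restored on demand. The Python sketch below models that behavior in a simplified way; it is not the JUNOS implementation, which stores a fixed number of numbered rollback files.

# Simplified model of commit/rollback: each commit pushes the active
# configuration onto a bounded history, and rollback(n) restores the nth
# previous one. Purely illustrative.
from collections import deque

class ConfigStore:
    def __init__(self, history_size=3):
        self.active = {}
        self.history = deque(maxlen=history_size)

    def commit(self, new_config):
        self.history.appendleft(dict(self.active))   # save current config
        self.active = dict(new_config)

    def rollback(self, n=1):
        if n <= len(self.history):
            self.active = self.history[n - 1]

store = ConfigStore()
store.commit({"hostname": "m20-lab"})
store.commit({"hostname": "m20-core", "ospf": "enabled"})
store.rollback(1)                 # return to the previous configuration
print(store.active)               # {'hostname': 'm20-lab'}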
Finally, Juniper Networks routers use modular software that cleanly separates processes from each other. The problems of one process will not impact other processes that may be running. Additionally, the software is designed to scale well to the needs of tomorrow's Internet routing demands.
3.2.1.2 JUNOS
JUNOS is the operating system currently used on all Juniper Networks routers. JUNOS is not just an operating system providing a CLI for configuration, but also a feature-rich platform providing troubleshooting tools and advanced feature sets. The operating system also incorporates an application programming interface (API) system for external program calls and scripting capabilities. JUNOS, in conjunction with the Internet Processor II, comprises the industry's most advanced BSD-based router operating system. The routing engine is based on an Intel PCI platform running JUNOS. JUNOS runs in flash memory, with an alternate copy stored on the router's hard disk. As you can see in Figure 3-3, the operating-system kernel is layered on the PCI platform and establishes communication between the PCI platform and the system processes. The kernel is also responsible for making sure that the forwarding tables in use by the PFE are in sync with those in the routing engine.
There are five essential functions provided by JUNOS:
The routing protocol process provides all routing and routing control functions within the platform. The modularity of this package allows for the addition and removal of protocols and functions, providing both flexibility and scalability.
The interface process performs configuration of the physical interfaces and encapsulation.
The SNMP and management information base (MIB) II processes allow SNMP-capable systems to communicate with the router platform. They also allow the platform to provide necessary SNMP information to external agents. JUNOS is compliant with SNMP versions 1 and 2.
The management process starts and monitors all other software processes in JUNOS. If a particular process stops, the management process will attempt to restart it (a minimal sketch of this restart pattern follows the list).
The routing kernel process controls everything else. In addition to providing the underlying infrastructure to all JUNOS software processes, the routing kernel process provides the link between the routing engine and the PFE.
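The watch-and-restart role of the management process, mentioned in the list above, can be sketched as a simple supervision loop. The process names and restart policy below are invented for illustration and do not describe the actual JUNOS supervisor.

# Conceptual watch-and-restart loop in the style of a process supervisor;
# process names and the restart policy are invented for this illustration.
class SupervisedProcess:
    def __init__(self, name):
        self.name = name
        self.running = True

    def crash(self):
        self.running = False

class ManagementProcess:
    def __init__(self, processes):
        self.processes = processes

    def poll(self):
        for proc in self.processes:
            if not proc.running:
                print(f"restarting {proc.name}")
                proc.running = True       # attempt a restart

rpd = SupervisedProcess("routing-protocol-process")
dcd = SupervisedProcess("interface-process")
mgmt = ManagementProcess([rpd, dcd])
rpd.crash()
mgmt.poll()                               # restarting routing-protocol-process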
Figure 3-3 Routing Engine Components
One or more routing tables are maintained by the operating system. Routing policy is maintained on JUNOS within the routing engine.
Let's take a closer look at the processes running on the routing engine. Figure 3-4 gives a visual representation of how these processes all work together to carry out the business at hand. You'll notice that nearly all of the processes communicate directly with the kernel. The exceptions are the user process, which must access the kernel through the CLI, and the routing table, which is simply a product of the routing protocol processes. Any communication between the routing engine and PFE originates from the kernel itself.
Figure 3-4 Routing Engine Processes
3.2.1.3 Routing Engine Specifications
Routing engine specifications depend upon the router model. The differences between the Juniper Networks M-Series router models are listed in Table 3-1. Note that all of the routers provide for out-of-band management, RS-232 DB9 ports for serial console and remote management access, and tertiary storage using a removable PC card. The main difference between the routing engines is the amount of flash memory or SDRAM available. Of course, these differences apply only to the routing engines; there are substantial differences in capacity, throughput, and available interfaces between models as well.
Table 3-1 Routing Engines on Different Models

Model | Platform | Available Redundancy | Compact Flash | SDRAM
M5 and M10 | 333MHz Pentium II with integrated 256KB level 2 cache | No | 96MB | 256, 512, or 768MB
M20, M40, and M40e | 333MHz Pentium II with 512KB level 2 cache | M20: yes; M40: no | 80MB | Up to 768MB
M160 | 333MHz Pentium II with integrated 256KB level 2 cache | Yes | 80 or 96MB | 768MB
The M160 Miscellaneous Control Subsystem
Only the M160 router uses the routing engine in conjunction with a miscellaneous control subsystem (MCS). The two are installed adjacently in the rear of the chassis and together form a host module. Both components are required to function. If a routing engine is installed without the MCS, the routing engine will not work, and vice versa. The router will accommodate up to two host modules.
The MCS performs the following functions:
Acts as a middle man between the routing engine and the sensors throughout the system; relays statistical information to the routing engine, and relays control messages and alarms out to the system from the routing engine
Controls power-up and power-down of system components
Decides which of any given redundant components will act as master
Performs reset attempts on flexible PIC concentrators (FPCs), when necessary (the FPC will be discussed later in this chapter in Section 3.2.2)
Acts as the SONET 19.44MHz clock source and monitors all other system clocks
You have probably noticed that some of these functions are performed by the routing engine itself on other router models. While this is true, no other M-Series model provides the port density and forwarding speed of the M160. Some of the functions traditionally built into the routing engine have been moved out into this new component, the MCS, to let the routing engine focus on more specific tasks.
The MCS comprises the following:
A PCI interface to the routing engine
Two BITS interfaces for external clock sources
A 100Mbps Ethernet interface to other system modules
A 19.44MHz stratum 3 SONET clock source
A controller for monitoring the sensors
A debugging port (RS-232)
LEDs
An offline button
3.2.1.4 The Craft Interface
Positioned on the front of the chassis, the craft interface provides an external look into the internal workings of the router. It can be used as a troubleshooting tool, a monitoring tool, or both. Although the craft interface looks different on each model, the workings are very similar. The example figures in this section are based on the M40 model.
The main features of the craft interface are the following:
LED indicators
Alarm indicators
Routing engine ports
LCD display screen (on the M40 and M160 only)
The craft interface on an M40 model, shown in Figure 3-5, displays the status of the FPCs, of the routing engine, and of general alarm conditions. Each FPC has a corresponding button on the craft interface. LEDs above the button indicate whether the FPC's status is OK or whether it has failed to initialize.
Figure 3-5 M40 Craft Interface
Alarm LEDs indicate the level of an alarm if one has occurred. On this alarm panel are two alarm relay contacts. These can be used to connect external alarm devices to the craft interface. If a yellow or red alarm occurs, the external alarm device would also be activated. Alarms can be silenced with the alarm cutoff button, but this does not remove the alarm condition.
Red alarms indicate a condition in which a service interruption could occur, such as a component failure. Yellow alarms are generally indicative of recoverable errors or maintenance alerts.
Routing engine access is provided on the right side of the craft interface through a console port, an auxiliary port, and a management Ethernet port. The status of the routing engine is indicated as either OK or Fail. More information about the LED status indicators is provided in Table 3-2.
Table 3-2 Craft Interface Indicators

LED | Color/Action | Description
OK | Green/Blinking | Initializing
OK | Green/Solid | Running
Fail | Red/Solid | Offline, owing to failure (In the case of the routing engine, this could mean that the system control board did not detect the routing engine.)
ALARMS | |
Red Alarm | Red/Solid | System failure, power supply failure, or system threshold exceeded
Yellow Alarm | Amber/Solid | Maintenance alert or indication of temperature increase
The FPC buttons are used to take an FPC offline, before removing the FPC, for instance. To do this, press and hold the button for three seconds, or until the red Fail LED becomes solid. Then, it is safe to remove the FPC.
The LCD display screen, shown in Figure 3-6, works in either idle mode or alarm mode. When in idle mode, the LCD display shows the current system status. When in alarm mode, the LCD display provides more information about the alarm condition. To interact with the LCD menu, use the buttons and directional arrows to the right of the LCD display screen.
Figure 3-6 Craft Interface LCD Panel
Finally, the craft interface provides three ways to interact with the CLI:
- Console port
- Auxiliary port
- Management Ethernet port
Using an RS-232 serial cable, an external console, such as a dumb terminal and keyboard, can be connected to the console port to display system messages and information constantly or to enter the CLI. A laptop computer or modem may be connected to the auxiliary port for quick, portable access to the CLI.
The management Ethernet port can be used to connect the router to any Ethernet LAN through an autosensing 10/100 RJ-45 port. Most network administrators connect this port to a management LAN for out-of-band management of the routers. Unlike with routers from some other vendors, this management port can be controlled via the CLI, but it will not route traffic and, therefore, cannot be used as a spare port.
3.2.1.5 Redundancy and Maintenance Options
In the busy network core, network availability is everything. Juniper Networks has designed its routers to reduce single points of failure, and routing engines are no exception. Most router models can be configured with redundant routing engines, thereby reducing system downtime in the event of a failure. (There will be an interruption of routing services during the failover, however, as there is whenever a routing engine is inserted or removed.) Table 3-3 provides more information on redundant components in each router model. (Note that some of the components described here will be covered in detail later in this chapter.)
Table 3-3 Redundancy in Components by Model

Model | Redundant components
M5 and M10 | None
M20 | P/S, cooling, routing engine, SSB
M40 and M40e | P/S, cooling
M160 | P/S, cooling, host module, PFE clock generator, SFM
The routing engine is said to be hot-pluggable, which simply means that it may be inserted while the router is powered up. Routing functions will be interrupted whenever a routing engine is removed or inserted, however.
NOTE
If the router does not have two routing engines, the router will not be operational without the single routing engine inserted.
Maintaining the routing engine requires attention to the LED on the craft interface to check for alarms or other indications of operational problems. The system administrator can also find some information from the CLI by using the following command:
system@m20# show chassis routing-engine
Routing Engine status:
  Temperature             28 degrees C / 82 degrees F
  DRAM                    768 Mbytes
  CPU utilization:
    User                  0 percent
    Background            0 percent
    Kernel                0 percent
    Interrupt             0 percent
    Idle                  100 percent
  Start time              2002-03-06 17:23:09 UTC
  Uptime                  20 hours, 44 minutes, 41 seconds
  Load averages:          1 minute   5 minute   15 minute
Notice that you can see items such as router uptime (the amount of time the router has been powered up), temperature of the chassis, and the amount of DRAM installed.
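If you capture this output from a script, the fields are simple to pull apart with ordinary text processing. The following Python sketch parses the temperature and DRAM lines from a saved copy of the output; it is ad hoc screen scraping for illustration, not an official JUNOS automation interface.

# Parse a captured copy of "show chassis routing-engine" output with simple
# string matching; this is ad hoc scraping, not a supported JUNOS API.
sample = """Routing Engine status:
  Temperature    28 degrees C / 82 degrees F
  DRAM           768 Mbytes
  CPU utilization:
    Idle         100 percent
"""

status = {}
for line in sample.splitlines():
    parts = line.strip().split(None, 1)
    if len(parts) == 2 and parts[0] in ("Temperature", "DRAM", "Idle"):
        status[parts[0]] = parts[1]

print(status["Temperature"])   # 28 degrees C / 82 degrees F
print(status["DRAM"])          # 768 Mbytes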
Many parts of the routing engine are field-serviceable (also called field-replaceable), meaning that replacement or spare parts can be used to get the routing engine back into operation quickly without having to ship it to Juniper Networks for repair. Replacement of these parts should be done under the guidance of an engineer from the Juniper Networks Technical Assistance Center (JTAC) and is not recommended for inexperienced service personnel.
3.2.2 Packet Forwarding Engine (PFE)
The PFE is the second basic component of the Juniper Networks router. It is the mass-transit system part of the router, so to speak. Whereas the routing engine is the brain of the router, the PFE tends to be more of a workhorse, carrying out the instructions it has been given. The job of the PFE is to move packets as quickly as possible back out of the router. If it can't do that, for instance when there is no entry in the forwarding table for a given destination, it hurries the packets bound for that unknown destination off to the routing engine and goes on about its business.
This section will give you an overview of the design and function of the PFE. It will also show you how the packets move through the router so that you can fully understand the way the whole system works.
3.2.2.1 Design and Operation
On Juniper Networks routers, the PFE is designed to perform Layer 2 and Layer 3 switching, route lookups, and rapid forwarding of packets. Using ASICs, the strategy of the PFE is to divide and conquer the business of forwarding. To that end, the PFE itself is split into several major components:
Midplane
PICs
FPCs
Control board (switching/forwarding)
The midplane, sometimes referred to as the backplane, is really the back of the cage that holds the line cards. The line cards connect into the midplane when inserted into the chassis from the front. The routing engine plugs into the midplane from the rear of the chassis. The purpose of the midplane is to carry the electrical signals and power to each line card and to the routing engine.
The PICs (Physical Interface Cards) are the actual components that contain the interface ports. Each PIC is plugged into an FPC, such as the one shown in Figure 3-7. Each individual PIC contains an ASIC that handles media-specific functions, such as framing or encapsulation, and has its own LED status indicator on the front. PICs are available for SDH/SONET, ATM, Gigabit Ethernet, Fast Ethernet, and DS3/E3.
Figure 3-7 The FPC
The FPC can contain from one to four PICs in a mix-and-match style. In other words, you could have four different kinds of PICs on a single FPC. This provides a great deal of flexibility that is welcome in most networks. Installed from the front of the chassis, the FPC carries the signals from the PICs to the midplane. Each FPC has its own input-output (I/O) ASIC and buffer memory.
In the M5 and M10, PICs do not connect to an FPC, but to a Forwarding Engine Board (FEB). In the M20, M40, and M160, PICs connect to an FPC. There are obviously other significant architectural differences. The PICs for the M5 and M10 are interchangeable; however, due to architectural differences, these same PICs cannot be used in the M20, M40, and M160.
The FPC performs the important functions of decapsulating the packet, parsing it, and breaking it up into 64-byte memory blocks before passing it to the distributed buffer manager (DBM) ASIC. It is at this point that the packet is first written to memory. The DBM ASIC manages and writes packets to the shared memory across all FPCs. While writing the packets to buffer memory, the DBM ASIC is also extracting information on the destination of the packet, as you will see when we look at packet flow later in this section. Note: In each FPC slot, there must be either an FPC or a blank panel installed to ensure adequate cooling and airflow through the router.
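The segmentation step is easy to picture. The Python sketch below chops an arbitrary packet into 64-byte, J-cell-sized chunks and pads the final chunk; it is purely conceptual and ignores the cell headers and memory bookkeeping the real ASICs add.

# Chop a packet into 64-byte, J-cell-sized chunks, padding the final chunk;
# real J-cells carry additional header and bookkeeping fields not modeled here.
CELL_SIZE = 64

def to_cells(packet: bytes):
    cells = []
    for offset in range(0, len(packet), CELL_SIZE):
        chunk = packet[offset:offset + CELL_SIZE]
        cells.append(chunk.ljust(CELL_SIZE, b"\x00"))   # pad the last cell
    return cells

packet = bytes(range(256)) * 2          # a 512-byte example packet
cells = to_cells(packet)
print(len(cells), len(cells[0]))        # 8 64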
The M160 router is the only exception to this overview of the FPC. The M160 can actually use two different types of FPC: the FPC1 and the FPC2.
The control board is an add-on component in the PFE and will be covered in more detail in Section 3.2.2.2. Each control board performs part of the overall function of the PFE, such as communications with the routing engine through an internal interface and with the FPCs through an internal hub.
PFE Processes
As Figure 3-8 shows, the PFE has an embedded microkernel that serves as the brains of the PFE, interacting with the interface process and the chassis process to monitor and control these functions. It is the interface process that has direct communication with the kernel of the routing engine. This communication includes forwarding exception and control packets to the routing engine, receiving packets to be forwarded, receiving the forwarding table updates, providing information about the health of the PFE, and permitting configuration of the interfaces from the user-CLI process on the routing engine.
Figure 3-8 Packet Forwarding Engine Processes
The PFE contains a stored forwarding table, which is static until a new one is received from the routing engine. No dynamic routing protocol processes run on the PFE. The interface process consults the forwarding table to look up next-hop information. The interface process also has direct communication with the ASICs on the PFE, which will be discussed in detail in the next section. Finally, the chassis processes (environment, health, and so on) communicate directly with the microkernel of the PFE and with the ASICs.
ASICs
Now we will take a look at the location of the ASICs involved in packet processing and see how they relate to one another. Figure 3-9 shows a section-by-section view of the positioning and communication.
Figure 3-9 System ASICs
Starting from the bottom of Figure 3-9, you can see that each of the PICs contains at least one I/O manager ASIC responsible for media-specific tasks, such as encapsulation. The packets pass through these I/O ASICs on their way into and out of the router. The I/O manager ASIC on the PIC is specifically responsible for the following:
Managing the connection to the I/O manager ASIC on the FPC
Managing link-layer framing and creating the bit stream
Performing cyclical redundancy checks (CRCs)
Detecting link-layer errors and generating alarms, when necessary
On the FPC is another I/O manager ASIC. This ASIC takes the packets from the PICs and breaks them into 64-byte memory blocks, also known as J-cells, for storage in shared FPC memory. It is at this point that accounting is performed and class-of-service (CoS) policies, which define the handling of traffic based upon classification of types of service, are implemented. This ASIC is specifically responsible for the following:
Breaking incoming packets (as bit streams) into 64-byte blocks, or J-cells
Sending the J-cells to the first DBM
Decoding encapsulation and protocol-specific information
Counting packets and bytes for each logical circuit
Verifying packet integrity
Applying CoS rules to packets
The first DBM ASICs encountered are responsible for receiving the J-cells and spreading them across the shared memory. In the M40, it is the backplane that contains the DBM ASICs; on the M5, M10, M20 and M160, the DBM ASICs are on the control boards.
In parallel, the first DBM ASIC passes forwarding-related information extracted from the packets to the Internet processor, which then performs the route lookup and sends the information over to a second DBM ASIC. The Internet processor ASIC also collects exception packets and sends them to the routing engine. The second DBM ASIC then takes this information and the 64-byte blocks and forwards them to the I/O manager ASIC of the egress FPC (or multiple egress FPCs, in the case of multicast) for reassembly.
The DBM ASICs are responsible for the following:
Managing the packet memory distributed across all FPCs
Extracting forwarding-related information from packets
Telling the FPC where to forward packets
The Internet processor ASIC is responsible for the following:
Extracting next-hop information from the forwarding table
Passing the next-hop information to the second DBM ASIC
Collecting exception packets to send to the routing engine
The I/O manager ASIC on the egress FPC can perform some value-added services. In addition to incrementing TTL values and re-encapsulating the packet for handling by the PIC, it can also apply CoS rules. To do this, it may queue a pointer to the packet (never the packet itself) in one of four available queues, each having a share of link bandwidth, before applying the rules to the packet. Queuing can be based on destination address, the random early detection (RED) or weighted RED (WRED) algorithm, the value of precedence bits, and so on. Thus, we can say that the I/O manager ASIC on the FPC is responsible for the following:
Receiving the J-cells from the second DBM ASIC
Incrementing TTL values, as needed
Queuing a pointer to the packet, if necessary, before applying CoS rules
Re-encapsulating the J-cells
Sending the encapsulated packets to the PIC I/O manager ASIC
Packet Flow
Now that you have a little background information on the various ASICs, it is helpful to see exactly how a packet moves through the router. Seeing how packets move through the router helps clarify what you have learned about its architecture. First, take a look at Figure 3-10, and then read the explanations below to see how the forwarding decisions are made.
Figure 3-10 Packet Flow: Forwarding Decisions
The router first receives a packet on an ingress, or incoming, PIC. The PIC I/O manager ASIC performs the type of checksum and frame checks that are required by the type of medium it serves. Once this is done, the packet is passed, as a serial bit stream, to the FPC that houses the PIC.
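As a rough illustration of the kind of integrity check performed at the ingress PIC, the following Python sketch verifies a CRC-32 appended to a frame. It uses a generic CRC-32 over the frame body and is not a faithful model of any particular medium's frame check sequence.

# Generic CRC-32 integrity check: the sender appends the CRC of the frame
# body; the receiver recomputes it and compares. Simplified illustration,
# not a faithful model of any specific link layer's FCS handling.
import zlib

def append_crc(body: bytes) -> bytes:
    return body + zlib.crc32(body).to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    body, received = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == received

frame = append_crc(b"example payload")
print(check_crc(frame))                     # True
print(check_crc(b"\x00" + frame[1:]))       # False (corrupted frame body)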
The I/O manager ASIC on the FPC performs the important functions of decapsulating the packet, parsing it, and breaking it up into 64-byte memory blocks, before passing it to the first DBM ASIC. At this point, the packet is first written to memory. The DBM ASIC writes all packets to packet buffer memory, which is distributed across all FPCs on the router. While writing the packets to buffer memory, the DBM ASIC is also extracting information on the destination of the packet.
Once destination information is determined, it is sent to the Internet processor ASIC, which performs the lookup in the forwarding table. Note that the forwarding table is not omnipotent. It can handle unicast packets that do not have options, such as accounting, set. It can also handle multicast packets for which it already has a cached entry. All other packets must go to the routing engine for advanced lookup and resolution. If the PFE can handle the forwarding of the packet, it finds the next hop and egress interface. The packet is then forwarded to the second DBM ASIC, which passes the packet to the I/O manager ASIC on the FPC of the egress interface.
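The lookup itself is a longest-prefix match against the forwarding table. The Python sketch below shows the decision on a tiny, invented table using the standard ipaddress module; the Internet processor ASIC performs this in hardware with very different data structures.

# Longest-prefix-match lookup over a tiny, invented forwarding table using
# the standard ipaddress module; this only illustrates the decision made.
import ipaddress

forwarding_table = {
    ipaddress.ip_network("0.0.0.0/0"):   "so-0/0/0",
    ipaddress.ip_network("10.0.0.0/8"):  "ge-0/1/0",
    ipaddress.ip_network("10.1.0.0/16"): "ge-0/2/0",
}

def lookup(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return forwarding_table[best]

print(lookup("10.1.2.3"))      # ge-0/2/0
print(lookup("10.2.0.1"))      # ge-0/1/0
print(lookup("192.0.2.1"))     # so-0/0/0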
Now the packet may be queued. Actually, as stated earlier, it is a pointer to the packet that is queued; the packet itself remains in the shared FPC memory. All queuing decisions and CoS rules are applied in the absence of the actual packet. When the pointer for the packet reaches the front of the line, the I/O manager ASIC sends a request for the packet to the second DBM ASIC. The DBM ASIC reads the J-cells from shared memory and sends them to the I/O manager ASIC on the FPC, which then serializes the bits and sends them to the media-specific ASIC of the egress interface.
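The essential point, that only a pointer is queued while the J-cells stay in shared memory, can be modeled in a few lines of Python. The buffer layout and names below are invented for illustration.

# Queue indices (pointers) into a shared packet buffer rather than the
# packets themselves; layout and naming are invented for this illustration.
from collections import deque

shared_memory = []                  # J-cells stay here until transmit time
output_queue = deque()              # holds only small integer handles

def enqueue(packet_cells):
    shared_memory.append(packet_cells)
    output_queue.append(len(shared_memory) - 1)   # queue a pointer, not data

def transmit():
    handle = output_queue.popleft()               # pointer reaches the head
    cells = shared_memory[handle]                 # fetch cells from memory
    return b"".join(cells)

enqueue([b"A" * 64, b"B" * 64])
enqueue([b"C" * 64])
print(len(transmit()))              # 128 bytes reassembled from two cells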
The I/O manager ASIC on the egress PIC applies the physical-layer framing, performs the CRC, and sends the bit stream out over the wire.
3.2.2.2 Model Differences in Control Boards
Each model of router has its own component that performs part of the overall function of the PFE. Each board will be described in a little more detail, but the operations performed by them are similar in nature. Each board communicates with the routing engine through a 100Mbps internal interface and with the FPCs through 10Mbps interfaces on an internal hub. The primary functions of the control boards are as follows:
Reset of FPC when abnormal behavior is detected. The board will attempt to reset the FPC up to three times. If unsuccessful, the control process takes the FPC offline and sends a notification to the routing engine. (A sketch of this retry pattern follows the list.)
Transfer of control and exception packets. The control board handles nearly all exception packets (those packets for which there is no known path to the destination) passed to it from the Internet processor ASIC. The board may then pass exception packets to the routing engine. It may also communicate errors to the routing engine via syslog messages.
Route lookups. A copy of the forwarding table is stored in SSRAM. When packets are received to be processed, the Internet processor ASIC performs the lookup on this table, makes a forwarding decision, sends a message to the midplane about the decision, and forwards the packets to the egress interface.
System monitoring. The control board keeps tabs on the condition of the router based on information it receives from sensors. If an abnormal condition is detected, the board immediately notifies the routing engine.
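The reset-then-offline behavior described in the first item above amounts to a bounded-retry loop. The Python sketch below is conceptual; the function names and notification mechanism are invented.

# Bounded-retry pattern: try to reset the FPC up to three times, then take
# it offline and notify the routing engine. All names here are invented.
MAX_RESETS = 3

def recover_fpc(slot, try_reset, notify_routing_engine):
    for attempt in range(1, MAX_RESETS + 1):
        if try_reset(slot):
            return f"FPC {slot} recovered on attempt {attempt}"
    notify_routing_engine(f"FPC {slot} taken offline after {MAX_RESETS} failed resets")
    return f"FPC {slot} offline"

# Example: a reset routine that never succeeds, to exercise the offline path.
print(recover_fpc(2, try_reset=lambda slot: False,
                  notify_routing_engine=print))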
On most of the M-Series routers, the control board is hot-pluggable, meaning that the router need not be powered down to install or uninstall the control boards. A brief service interruption will occur, usually about 500 ms. On the M5 and M10 routers, however, the FEB is not hot-pluggable. The system must be powered down for maintenance and replacement of this board.
M5 and M10 FEB
The M5 and M10 routers are the newest of the M-Series line. Despite their small footprint, these powerful routers also use control boards in the PFE. The FEB installs in the rear of the M5 or M10 chassis, just above the power supply. The FEB on the M5 and M10 is neither hot-removable nor hot-insertable! You must power down the chassis before removing or inserting these boards.
The FEB contains the following:
A processor
The Internet Processor II ASIC
Two DBM ASICs
I/O ASICs with 1MB SRAM (one on the M5, two on the M10)
A 33MHz PCI bus connecting the system ASICs
The FEB also has its own storage: four slots of 2MB RAM to store forwarding tables associated with the ASICs, 64MB DRAM for the microkernel, EEPROM for the storage of the serial number and version of the FEB, and 512MB flash EPROM, which is programmable.
M20 SSB
On the Juniper Networks M20 router, the SSB installs from the front of the chassis into the uppermost slot. The SSB contains the Internet Processor II ASIC and two DBM ASICs. The SSB has its own processor and its own storage: four slots of 2MB RAM to store forwarding tables associated with the ASICs, 64MB DRAM for the microkernel, EEPROM for the storage of the serial number and version of the SSB, and 512MB flash EPROM, which is programmable.
M40 System Control Board
On the Juniper Networks M40 router, the system control board (SCB) installs from the front of the chassis into the center slot. The SCB contains its own processor, a PCI bus, and the Internet processor ASIC, as well as 1 to 4MB SSRAM (for forwarding tables), 64MB DRAM (for the microkernel), EEPROM (which stores the SCB serial number and version), and 512MB flash EPROM (programmable).
M160 Switching and Forwarding Module
On the Juniper Networks M160 router, up to four interconnected switching and forwarding modules (SFMs) can be configured. Each SFM is a two-board system containing the following components:
Internet Processor II ASIC for route lookups and forwarding
Two DBM ASICs, one to send packets to the output buffer and another to communicate notifications to the I/O ASIC on the FPCs
8MB parity-protected SSRAM
A processor subsystem for the handling of exception and control packets
EEPROM for storage of board serial number and version information
LEDs and an offline button for use prior to module removal
As stated earlier in this section, the M160 control board may be removed without a complete service interruption. There will, however, be a pause of about 500 ms while the router redistributes the functions to all other SFMs still inserted in the chassis.
3.2.3 PFE Clock Generator
The Juniper Networks M160 router has an additional unique feature: an added board that acts as a clock source. The PFE clock generator (PCG) is located in the rear of the chassis, beside the routing engine. The PCG supplies a 125MHz clock source to the ASICs and modules that are part of the PFE. The M160 has two PCGs installed for redundancy. These PCGs are field-replaceable and hot-pluggable.
The PCG has three LEDs: one to indicate an OK state, one to indicate a Fail condition, and one that will illuminate if the PCG is the master. In addition, there is an offline button that will permit the user to take the PCG offline before removing it.