Cisco WAAS Architecture, Hardware, and Sizing
- Cisco WAAS Product Architecture
- Hardware Family
- Licensing
- Performance and Scalability Metrics
- Summary
Chapter 1, "Introduction to Cisco Wide Area Application Services (WAAS)," introduced the performance challenges created by the wide-area network (WAN) and how they are addressed by the Cisco WAAS solution. Cisco WAAS is a software component that is resident on a hardware device deployed at each location with users and servers. This hardware device, which can be deployed as a router-integrated network module for the Integrated Services Router (ISR) or as an appliance, is named either Cisco Wide-Area Application Engine (WAE) or Cisco Wide-Area Virtualization Engine (WAVE). The distinction between the two is that a WAVE device, available only as an appliance, can also provide branch office virtualization services in conjunction with WAN optimization and application acceleration. WAE devices provide only WAN optimization and application acceleration and do not provide virtualization.
This chapter provides an introduction to the Cisco WAAS hardware family, along with an in-depth examination of the hardware and software architecture. This chapter also looks at the licensing options for Cisco WAAS, positioning for each of the hardware platforms, and performance and scalability metrics for each of the platforms.
Cisco WAAS Product Architecture
The Cisco WAAS product family consists of a series of appliances and router-integrated network modules that are based on an Intel x86 hardware architecture. The product family scales from 512 MB of memory to 24 GB of memory, utilizing single-processor subsystems up to dual quad-core processor subsystems. Each Cisco WAAS device, regardless of form factor, is configured with some amount of hard disk storage and a compact flash card. The compact flash card is used for boot-time operation and configuration files, whereas the hard disk storage is used for optimization data (including object cache and Data Redundancy Elimination [DRE]), swap space, the software image repository, and guest operating system storage in the case of WAVE devices. Because boot files and configuration reside on compact flash, a WAAS device can boot and remain accessible on the network for troubleshooting and diagnostics even if the hard disk subsystem fails or no disks are available to the device (in such a scenario, optimization and virtualization services are not operational).
The foundational layer of the Cisco WAAS software is the underlying Cisco Linux platform. The Cisco Linux platform is hardened to ensure that rogue services are not installed, and secured so that third-party software cannot be added and unauthorized changes cannot be made. The Cisco Linux platform hosts a command-line interface (CLI) shell similar to that of Cisco IOS Software, which, along with the Central Manager and other interfaces, forms the primary means of configuring, managing, and troubleshooting a device or system. All relevant configuration, management, monitoring, and troubleshooting subsystems are made accessible directly through this CLI rather than by exposing the underlying Linux shell.
The Cisco Linux platform hosts a variety of services for WAAS run-time operation. These include disk encryption, Central Management Subsystem (CMS), interface manager, reporting facilities, network interception and bypass, application traffic policy (ATP) engine, and kernel-integrated virtualization services, as shown in Figure 2-1.
Figure 2-1 Cisco WAAS Hardware and Software Architecture
The following sections examine each of the Cisco WAAS architecture items. Cisco WAAS optimization components, including Data Redundancy Elimination (DRE), Persistent LZ Compression (PLZ), Transport Flow Optimization (TFO), and application accelerators, are discussed in detail in Chapter 1, and thus are not discussed in this chapter.
Disk Encryption
Cisco WAAS devices can be configured to encrypt the data, swap, and spool partitions on the hard disk drives using encryption keys that are stored on and retrieved from the Central Manager. The disk encryption feature uses AES-256 encryption, and keys are held only in WAAS device memory after they have been retrieved from the Central Manager during the device boot process. Should a WAAS device be physically compromised or a disk stolen, removing power from the device destroys the copy of the key in memory (memory is not persistent), and without the key the encrypted data on the disk is unreadable. Keys are stored in the Central Manager database (which can itself be encrypted) and synchronized among all Central Manager devices for high availability. If a WAAS device is not able to retrieve its key from the Central Manager during boot time, it remains in pass-through mode until connectivity is restored or disk encryption is administratively bypassed. Additionally, the key is fetched from the Central Manager over the same Secure Sockets Layer (SSL)-encrypted session that is used for message exchanges between the WAAS devices and the Central Manager devices.
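The following is a minimal sketch of how disk encryption might be enabled from the device CLI, assuming a WAAS 4.1-era release; the exact syntax and the requirement to reload should be verified against the command reference for the version in use.

WAE# configure terminal
! Enable encryption of the data, swap, and spool disk partitions
WAE(config)# disk encrypt enable
WAE(config)# exit
! The setting takes effect after the next reload, when the key is generated
! and stored on the Central Manager
WAE# reload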
Central Management Subsystem
CMS is a process that runs on each WAAS device, including accelerators and Central Managers. This process manages the configuration and monitoring components of a WAAS device and ensures that each WAAS device is synchronized with the Central Manager based on a scheduler known as the Local Central Manager (LCM) cycle. The LCM cycle is responsible for synchronizing the Central Manager CMS process with the remote WAAS device CMS process to exchange configuration data, fetch health and status information, and gather monitoring and reporting data. The CMS process is tied to a management interface on the WAAS device known as the primary interface, which is configured from the WAAS device CLI prior to registration with the Central Manager. Any communication that occurs between WAAS devices for CMS purposes is carried over SSL-encrypted connections for security.
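As an illustration, registering a device with the Central Manager generally involves designating the primary interface, pointing the device at the Central Manager, and enabling CMS. The following sketch assumes GigabitEthernet 1/0 is the management interface and 10.88.80.10 is a hypothetical Central Manager address; verify exact syntax against the command reference for the release in use.

WAE# configure terminal
! Designate the interface used for CMS communication with the Central Manager
WAE(config)# primary-interface GigabitEthernet 1/0
! Identify the Central Manager (hypothetical address)
WAE(config)# central-manager address 10.88.80.10
! Start the CMS process and register the device
WAE(config)# cms enable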
Interface Manager
The Cisco WAAS device interface manager manages the physical and logical interfaces that are available on the WAAS device. Each WAAS device includes two integrated Gigabit Ethernet interfaces (on the router-integrated network modules, one interface is internal and connects to a peer interface in the router through the router backplane, while the other is external and can be cabled to a LAN switch, just as on an appliance). Each WAAS appliance has expansion slots to support one or more additional feature cards, such as the inline bypass adapter, which has two two-port fail-to-wire pairs. The interface manager also provides management over logical interfaces that can be configured over physical interfaces. Logical interfaces include active/standby interfaces, where one physical interface is used as a primary interface and a second interface is used as a backup in the event the primary interface fails. Another logical interface is the PortChannel interface, which can be used to team WAAS device interfaces together for high availability and load balancing. It should be noted that active/standby interfaces are used when WAAS device interfaces connect to separate switches, whereas PortChannel interfaces are used when the WAAS device interfaces connect to the same switch.
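As an example, a PortChannel logical interface might be configured along the following lines; the interface numbering and addressing are hypothetical, and the exact syntax varies by platform and release, so consult the configuration guide before applying it.

WAE# configure terminal
! Create the logical PortChannel interface and assign the shared IP address
WAE(config)# interface PortChannel 1
WAE(config-if)# ip address 10.10.13.5 255.255.255.0
WAE(config-if)# exit
! Add both physical interfaces to the channel group
WAE(config)# interface GigabitEthernet 1/0
WAE(config-if)# channel-group 1
WAE(config-if)# exit
WAE(config)# interface GigabitEthernet 2/0
WAE(config-if)# channel-group 1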
Monitoring Facilities and Alarms
Cisco Linux provides an interface that the Cisco WAAS software uses for monitoring and generating alarms. Cisco WAAS supports the Simple Network Management Protocol (SNMP) versions 1, 2c, and 3, along with a host of Management Information Bases (MIBs) that provide complete coverage of the health of each individual WAAS device. Cisco WAAS also supports the definition of up to four syslog servers, which receive alarm notifications as syslog messages are generated. The WAAS Central Manager also has an alarm dashboard, which is described in Chapter 7, "System and Device Management." The Central Manager makes an application programming interface (API) available for third-party visibility systems, which is also discussed in Chapter 7, Chapter 8, "Configuring WAN Optimization," and Chapter 9, "Configuring Application Acceleration." Transaction logs can be configured to be stored on each of the accelerator devices in the network for persistent retention of connection statistics, which might be useful for troubleshooting, debugging, or analytics purposes. Transaction logs are not covered in this book, but a full reference on their usage can be found in the Cisco WAAS documentation.
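As a hypothetical example, pointing a device at external monitoring systems might look like the following; the community string and server address are placeholders, and the syntax should be checked against the release in use.

WAE# configure terminal
! Allow read-only SNMP polling of the device MIBs (placeholder community string)
WAE(config)# snmp-server community waas-ro
! Forward syslog messages to an external syslog server (placeholder address)
WAE(config)# logging host 10.88.80.20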
Network Interception and Bypass Manager
The network interception and bypass manager is used by the Cisco WAAS device to establish relationships with intercepting devices where necessary and to ensure low-latency bypass of traffic that the WAAS device is not intended to handle. The Web Cache Communication Protocol version 2 (WCCPv2) is a protocol managed by the network interception and bypass manager to allow the WAAS device to successfully join a WCCPv2 service group with one or more adjacent routers, switches, or other WCCPv2-capable devices. WCCPv2 is discussed in more detail in Chapter 4, "Network Integration and Interception." Other network interception options, which are also discussed in Chapter 4, include policy-based routing (PBR), physical inline interception, and the Application Control Engine (ACE). As flows are intercepted by the WAAS device and determined to be candidates for optimization, those flows are handed to the Application Traffic Policy (ATP) engine to identify what level of optimization and acceleration should be applied based on the configured policies and classifier matches. The ATP is discussed in the next section, and Chapter 8 and Chapter 9 discuss the configuration and management of policies.
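As a simple illustration, joining a WCCPv2 service group from the WAAS device side might resemble the following; the router address is hypothetical, and Chapter 4 covers the complete configuration on both the router and the WAAS device.

WAE# configure terminal
! Define the list of WCCPv2-capable routers the device registers with
WAE(config)# wccp router-list 1 10.10.20.1
! Join the TCP promiscuous service group (services 61 and 62) using that router list
WAE(config)# wccp tcp-promiscuous router-list-num 1
! Enable WCCPv2 on the device
WAE(config)# wccp version 2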
Application Traffic Policy Engine
Although the foundational platform component of Cisco WAAS is Cisco Linux, the foundational optimization layer of the Cisco WAAS software (which is as much a component of the Cisco Linux platform as it is the software) is the ATP engine. The ATP is responsible for examining details of each incoming flow (after it is handled by the interception and bypass mechanisms) in an attempt to identify the application or protocol associated with the flow. This association is made by comparing the packet headers from each flow against a set of predefined, administratively configured, or dynamic classifiers, each with its own set of one or more match conditions. Flows that do not match any existing classifier are considered "other" traffic and are handled according to the default policy defined for other traffic.
When a classifier match is found, the ATP examines the policy configuration for that classifier to determine how to optimize the flow. The ATP also notes the application group to which the classifier belongs, so that statistics gathered for the flow are attributed to the appropriate application group for charting (visualization) and reporting. The configured policy dictates which optimization and acceleration components are enacted upon the flow and how the packets within the flow are handled. The list of configurable elements within a policy includes the following:
- Type of policy: Defines whether the policy is a basic policy (optimize, accelerate, and apply a marking), a Wide Area File Services Software (WAFS) transport policy (used for legacy-mode compatibility with WAAS version 4.0 devices), or an end-point mapper (EPM) policy (used to identify universally unique identifiers for classification and policy).
- Application: Defines which application group the statistics should be collected into, including byte counts, compression ratios, and others, which are then accessible via the WAAS device CLI or Central Manager.
- Action: Defines the WAN optimization policy that should be applied to flows that match the classifier match conditions. This includes:
- Passthrough: Take no optimization action on this flow
- TFO Only: Apply only TCP optimization to this flow, but no compression or data deduplication
- TFO with LZ Compression: Apply TCP optimization to this flow, in conjunction with persistent LZ compression
- TFO with Data Redundancy Elimination: Apply TCP optimization to this flow, in conjunction with data deduplication
- Full Optimization: Apply TCP optimization, persistent LZ compression, and data deduplication to this flow
- Accelerate: Accelerate the traffic within this flow using one of the available application accelerators. This provides additional performance improvements above and beyond those provided by the WAN optimization components defined in Action and includes the following (the capabilities are described in detail in Chapter 1):
- MS Port Mapper: Identify application based on its universally unique identifier, which allows WAAS to appropriately classify certain applications that use server-assigned dynamic port numbers
- Common Internet File System (CIFS): Acceleration for Microsoft file-sharing environments
- HTTP: Acceleration for intranet and Internet applications that use the Hypertext Transfer Protocol
- NFS: Acceleration for UNIX file-sharing environments
- MAPI: Acceleration for Microsoft Exchange e-mail, calendaring, and collaboration environments
- Video: Acceleration for Windows Media over RTSP streams
- Position: Specify the priority order of this policy. Policies are evaluated in priority order, and the first classifier and policy match determines the action taken against the flow and where the statistics for that flow are aggregated.
- Differentiated Services Code Point (DSCP) Marking: Apply a DSCP value to the packets in the flow. WAAS can either preserve the existing DSCP markings or apply a specific marking to the packets matching the flow based on the configuration of this setting.
Settings configured in the policy are employed in conjunction with one another. For instance, the CIFS policy is, by default, configured to leverage the CIFS accelerator prior to leveraging the "full optimization" (DRE, PLZ, TFO) capabilities of the underlying WAN optimization layer. This can be coupled with a configuration that applies a specific DSCP marking to the packets within the flow. All of this is defined in a single policy, thereby simplifying overall system policy management. Classifiers within the ATP can be defined based on source or destination IP addresses or ranges, TCP port numbers or ranges, or universally unique identifiers (UUIDs). The ATP is consulted only during the establishment of a new connection, which is identified by the presence of the TCP synchronize (SYN) flag in the first packet of the connection. Because the comparison against the ATP is made using the SYN packet of the connection being established, the ATP does not need to be consulted for traffic flowing in the reverse direction; the context of the flow is established by all WAAS devices in the path between the two endpoints and applied to all future packets associated with that particular flow. In this way, classification by the ATP is performed once, during the three-way handshake (using the SYN and SYN/ACK packets), and applies to both directions of traffic flow.
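To make these elements concrete, the following is a rough sketch of how a custom classifier and policy might be expressed from the device CLI. The classifier name, port number, and application group are hypothetical, and the policy-engine syntax shown is an approximation that should be validated against the WAAS command reference; in practice, policies are more commonly defined through the Central Manager GUI, as discussed in Chapter 8 and Chapter 9.

WAE# configure terminal
WAE(config)# policy-engine application
! Define a classifier matching a hypothetical application on TCP port 8443
WAE(config-pol-eng)# classifier Custom-App
WAE(config-app-cls)# match dst port eq 8443
WAE(config-app-cls)# exit
! Map the classifier to the Web application group and apply full optimization
WAE(config-pol-eng)# map basic name Web classifier Custom-App action optimize full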
Figure 2-2 shows how the ATP engine interacts with a flow and a particular policy. For more information on ATP, including configuration, please see Chapter 8 and Chapter 9.
Figure 2-2 Connection Interaction with Application Traffic Policy
Virtual Blades
Cisco WAAS utilizes Kernel-based Virtual Machine (KVM) technology from Red Hat (via the Qumranet acquisition) to allow the WAVE appliances (and the WAE-674) to host third-party operating systems and applications. As of version 4.1.3, Microsoft Windows Server versions 2003 and 2008 are supported for installation on the WAAS Virtual Blade (VB) architecture, and certain configurations can be bundled and packaged within the WAVE configuration with full support from the Cisco Technical Assistance Center (TAC). This configuration includes Microsoft Windows Server 2008 Core, Active Directory read-only domain controller, DNS server, DHCP server, and print server. The WAAS VB architecture helps customers further consolidate infrastructure by minimizing the number of physical servers required in the branch office for those applications that are not good candidates for centralization into a data center location.
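As a rough illustration, provisioning a virtual blade from the CLI involves allocating memory, disk, and interface resources to the guest and then starting the blade. The resource values and installation image path below are hypothetical, and the exact command set varies by release, so the WAAS virtualization configuration guide should be consulted.

WAE# configure terminal
WAE(config)# virtual-blade 1
! Allocate guest memory (in MB) and disk (in GB)
WAE(config-vb)# memory 1024
WAE(config-vb)# disk 40
! Bridge the virtual blade's first interface to a physical device interface
WAE(config-vb)# interface 1 bridge GigabitEthernet 1/0
! Boot the guest from an installation image previously copied to the device
WAE(config-vb)# boot cd-image disk /local1/vbs/win2008-install.iso
WAE(config-vb)# exit
WAE(config)# exit
! Start the virtual blade from the EXEC prompt
WAE# virtual-blade 1 start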