Enterprise Quality of Service Part II: Enterprise Solution using Solaris Bandwidth Manager 1.6 Software
This article is Part II of a two-part series on Enterprise Networks, detailing what corporations can do to prioritize traffic optimally, ensuring that important applications receive priority over less important ones, from the computing server all the way to the enterprise's egress point. This article investigates the effectiveness of Solaris™ Bandwidth Manager 1.6 (Solaris BM 1.6) software in implementing a Quality of Service (QoS) solution in an enterprise network. It also briefly looks at how policy-based network and systems management takes Solaris BM 1.6 software one step further, allowing the QoS configuration to change dynamically based on feedback measurements. It makes little sense to restrict traffic when there is no congestion, yet constantly reconfiguring QoS by hand is a daunting task; this is where policy controls play a major role.
This article details the following:
QoS deployment scenarios
Available QoS solutions
NOTE
Solaris Bandwidth Manager 1.6 software is supported on Solaris™ Operating Environment version 8 and older.
QoS Deployment Scenarios
There are several QoS solution approaches from a deployment perspective. Further, there are two available options from an implementation perspective, hardware and software. This article focuses on where in the enterprise QoS can be deployed and is limited to a software implementation.
The end-to-end path from a client to the server is composed of various network segments, each with different bandwidths and, more importantly, different loads. QoS deployments are all based on one fundamental principle: restricting the amount of traffic that is injected into a slower link from a faster link. In this context, the notion of slower and faster is not necessarily about bandwidth; it is also about oversubscription of a link. To better understand this concept, step out of the enterprise environment and into the access network, where a local Internet Service Provider (ISP) provides Digital Subscriber Line (DSL) service. To generate profits, many DSL lines are often aggregated into a DSL Access Multiplexer (DSLAM), and the aggregate egress traffic is forwarded onto an Optical Carrier 3 (OC-3) line. Although the DSL line is much slower in terms of bandwidth, the choke point is in fact the OC-3 link at 155 Mbits/sec, because the service provider may aggregate thousands of 144 Kbits/sec lines, hoping that not all of the lines will be in use at the same time.

Back in the enterprise environment, networks are usually over-provisioned. However, recent trends have evolved where centralized web servers often provide services to all employees and partners. These servers are often an area of contention and an ideal place where Solaris BM 1.6 software may be used to control traffic. There are various deployment options; the following list describes some QoS deployment possibilities:
Outsourcing to the ISP to provide QoS network services, giving enterprise customers a web interface to provision their own QoS policies for their portion of the bandwidth.
Deploying a QoS Capable Network Switch. This is usually located at a choke point at a corporate Wide Area Network (WAN) access point.
Deploying a Solaris BM 1.6 software server at a choke point, in front of a centralized network resource such as a consolidated server.
Deploying Solaris BM 1.6 software on the consolidated servers themselves.

One of the main limitations of a purely network-centric approach is that the network is not always the bottleneck. Often the server itself is the source of the bottleneck. For example, web servers or application servers that generate dynamic web pages using JavaServer Pages™ (JSP™) technology, servlets, and Enterprise JavaBeans™ (EJB™) technology can be central processing unit (CPU) bound as a result of a few relatively small Hypertext Transfer Protocol (HTTP) requests. In this case, having QoS policy enforcement points (PEP) that can only control network bandwidth does not improve overall performance. However, if there is feedback from the servers that indicates their load, the QoS device can restrict the incoming requests, aligning them with the server load and letting only the priority requests through, as the sketch below illustrates. This article describes QoS from the network bandwidth perspective and then describes a solution that takes server load into consideration in the QoS equation.
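To make the feedback idea concrete, the following minimal Python sketch shows the shape of such a control loop. It is not part of Solaris BM 1.6 or Sun MC 3.0 software; the helper functions, thresholds, and reduced shares are all hypothetical placeholders.

import random
import time

FULL_SHARE = {"platinum": 50, "gold": 25, "silver": 15, "bronze": 10}   # percent of the pipe
REDUCED_SHARE = {"platinum": 50, "gold": 25, "silver": 8, "bronze": 2}  # hypothetical throttled split

def get_server_idle_pct():
    # Placeholder: in a real deployment this value would come from a
    # monitoring agent (for example, Sun MC 3.0 or vmstat output).
    return random.randint(0, 100)

def set_class_share(name, pct):
    # Placeholder: in a real deployment this would push a new class
    # configuration to the PEP.
    print(f"class {name:8s} -> {pct}% of pipe")

def control_loop(poll_seconds=30, low_idle=10, high_idle=40):
    throttled = False
    while True:
        idle = get_server_idle_pct()
        if idle < low_idle and not throttled:       # server near saturation
            for name, pct in REDUCED_SHARE.items():
                set_class_share(name, pct)
            throttled = True
        elif idle > high_idle and throttled:        # load has cleared
            for name, pct in FULL_SHARE.items():
                set_class_share(name, pct)
            throttled = False
        time.sleep(poll_seconds)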
In order to understand the effectiveness of the Solaris BM 1.6 software, FIGURE 1 illustrates several representative configurations deployed with heavy loads. Measurements were taken on the server and the client side for verification. FIGURE 1 illustrates that with the same offered load the following is true:
No QoS: all clients receive poor service.
QoS on a dedicated server: clients receive good differential services, ensuring that priority clients receive noticeably better service than non-priority.
QoS located on the application server: illustrates the amount of load that QoS adds to a server in order to implement differential services. This shows that there is a cost to implementing QoS, which consumes CPU cycles that could otherwise be used to service client requests.

This section discusses a proposed integrated closed-loop feedback solution that integrates Solaris BM 1.6 software with Sun™ Management Center 3.0 (Sun MC 3.0) software, as illustrated in FIGURE 1. This solution takes the server load into consideration when restricting traffic and providing differentiated services.
FIGURE 1, configuration A, shows the baseline case, where no QoS is deployed.
FIGURE 1, configuration B, shows a dedicated server deployed to control bandwidth allocation. This configuration also illustrates how an integrated systems and network approach can be used, where the policy decision point (PDP) monitors and controls both the servers and the network, taking the appropriate action. In this case, if the servers become overloaded, the PDP can perform several actions to remedy the situation, depending on the policy decision algorithms. The PDP can increase the priority of the process involved and can also reduce the number of requests coming into the server. This provides a closed-loop solution (a sketch of this decision logic follows FIGURE 1). The PDP is located on the same server as the PEP. As previously mentioned, the PDP decides what to do with particular flows, based on console input or other input, and then instructs the PEPs about what level of QoS to give specific traffic.
FIGURE 1, configuration C, shows a deployment where the QoS function is no longer implemented on a dedicated server, but is located on the server labeled Server Load, which would normally represent a web server. This approach describes an architecture where the PDP and policy management tool (PMT) are shifted from a dedicated box to the servers themselves (in this case, the Server Load server). This solution may make sense for enterprise customers who do not want to add new hardware to existing data center deployments and who want to make better use of current resources that are not fully utilized. The PEP in this case is implemented in the network protocol stack.
FIGURE 1 Performance Tests Configurations
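Before looking at the results, the PDP remediation logic from configuration B can be summarized in a few lines of Python. This is only an illustration of the two actions described above; the action names, threshold, and process name are hypothetical, and on a Solaris system the priority change itself would typically be carried out with a tool such as priocntl(1).

def decide_actions(cpu_util_pct, overload_threshold=90):
    """Return the remediation steps the PDP pushes out when the server is overloaded."""
    actions = []
    if cpu_util_pct >= overload_threshold:
        # 1. Raise the scheduling priority of the critical service on the server
        #    (hypothetical action name; on Solaris this could map to priocntl(1)).
        actions.append(("raise_priority", "web_server_process"))
        # 2. Tell the PEP to admit fewer low-priority requests so the server
        #    spends its remaining cycles on the traffic that matters most.
        actions.append(("reduce_class_rate", "bronze"))
        actions.append(("reduce_class_rate", "silver"))
    return actions

print(decide_actions(95))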
Configuration A: Baseline Results with No QoS
FIGURE 2 shows the client average bandwidths for the four classes: bronze, silver, gold, and platinum. Platinum is the best class of service and bronze is the worst; gold and silver are the middle tiers of service. Clearly, all classes received poor results, ranging from 0.6 Mbits/sec down to 0.1 Mbits/sec. Response times for the platinum class range from 38 seconds to 155 seconds, and bronze class response times range from 64 seconds to 115 seconds. FIGURE 2 also shows the average load on the client, the server, and the Solaris BM 1.6 software QoS server. The client and end server CPU utilization is maxed out, yet the overall throughput is extremely low: the network is saturated. This clearly demonstrates that in an oversubscribed network, all traffic degrades. If this were an e-commerce site, QoS would prove to be of extreme value exactly when business peaks.
FIGURE 2 Client Side Measurements of Throughput
Configuration B: QoS Policy on Dedicated Server, TCP Traffic
FIGURE 3 shows the device specific configuration file used to configure the PEP, which was implemented by Solaris BM 1.6 software. Various filters and classes are defined.
FIGURE 3 Solaris Bandwidth Manager 1.6 Software Configuration File
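The listing below is not the Solaris BM 1.6 configuration syntax shown in FIGURE 3; it is only a Python summary of what that configuration encodes, using the class shares from the tests. The logical interface names are hypothetical.

PIPE_MBITS = 100  # total pipe used in the tests

CLASSES = {
    # class       share of pipe    filter (one logical interface per class, hypothetical names)
    "platinum": {"share": 0.50, "interface": "hme0:1"},
    "gold":     {"share": 0.25, "interface": "hme0:2"},
    "silver":   {"share": 0.15, "interface": "hme0:3"},
    "bronze":   {"share": 0.10, "interface": "hme0:4"},
}

for name, spec in CLASSES.items():
    print(f"{name:8s} {spec['share'] * PIPE_MBITS:5.1f} Mbits/sec via {spec['interface']}")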
FIGURE 4 shows the measurements taken on the client side, clearly showing that the four classes of traffic now experience throughput ranging from 40 Mbits/sec down to 5 Mbits/sec, much better than the results with no QoS.
FIGURE 4 Dedicated Server Case
As a cross-check, FIGURE 5 and FIGURE 6 illustrate the measurements taken on the policy server, showing the bandwidth proportions of all classes of traffic. The measurements show that for the transmission control protocol (TCP) traffic, all classes are in fact receiving the proportions of bandwidth specified in the configuration. There is a tremendous improvement in all classes except the lowest, bronze, whose response times worsen to 352 seconds during congestion. The platinum class, on the other hand, consistently receives 1.5 second response times and an average bandwidth of 44 Mbits/sec. The gold class consistently receives response times of 2.7 seconds with an average bandwidth of 24 Mbits/sec, and the silver class consistently receives 4.7 second response times with an average bandwidth of 14.2 Mbits/sec. Because the bronze class is the least important, its traffic is dramatically sacrificed for the others, disproportionately starving the lowest class queue. As illustrated, the bandwidth manager proved effective in allocating TCP traffic.
FIGURE 5 TCP Traffic Flow Statistics of QoS and Policy on Dedicated Server
FIGURE 6 TCP Traffic Statistics of QoS and Policy on Dedicated Server
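As a quick back-of-the-envelope check, the measured averages quoted above can be normalized against the configured shares (the bronze class is omitted because only its response time is reported):

measured = {"platinum": 44.0, "gold": 24.0, "silver": 14.2}   # Mbits/sec, from the FIGURE 5 and FIGURE 6 discussion
configured = {"platinum": 50, "gold": 25, "silver": 15}       # percent of the 100 Mbits/sec pipe

scale = measured["platinum"] / configured["platinum"]          # normalize so platinum maps to 50%
for name in measured:
    print(f"{name:8s} measured {measured[name]:5.1f} Mbits/sec "
          f"~ {measured[name] / scale:4.1f}% relative share (configured {configured[name]}%)")

The relative shares work out to roughly 50%, 27%, and 16%, close to the configured 50/25/15 split.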
The load statistics in FIGURE 7 show that the client and server are under full load, and the policy server is at approximately two-thirds capacity. The server is completely overloaded because server feedback was not used. Including feedback and restricting overall bandwidth across all classes is not expected to dramatically improve response times for all clients; rather, better allocation of overall resources is achieved by keeping the server from reaching its saturation point.
FIGURE 7 mpstat CPU Performance Load Statistics
Two sets of tests were run, one with TCP traffic and one with user datagram protocol (UDP) traffic, using the dedicated server to enforce policies as shown in FIGURE 1, configuration B. The results show the usefulness of the bandwidth manager product: premium customers get a larger share of the overall pipe. TCP traffic is flow-controlled; the client slows its sending rate if the server advertises a small receive window, if packets are dropped, or if the ACK packet is returned only after various time-outs. An ACK packet is a TCP packet that the receiver sends to the sender, acknowledging receipt of a certain sequence of bytes of the stream. UDP traffic, in contrast, allows the client to blindly pump data.
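The difference in send-side behavior can be seen even at the socket level. The fragment below is only an illustration; the peer address and port are placeholders and no error handling is shown.

import socket

PEER = ("192.0.2.10", 5001)   # placeholder address and port
payload = b"x" * 8192

# TCP: sendall() stalls once the receiver's advertised window and the local
# socket buffer are full, so the sender is paced by the receiver and by loss
# recovery (retransmissions after time-outs).
tcp = socket.create_connection(PEER)
tcp.sendall(payload)
tcp.close()

# UDP: sendto() succeeds no matter what the network or receiver can absorb;
# excess datagrams are simply dropped somewhere along the path.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(payload, PEER)
udp.close()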
Configuration B: QoS Policy on Dedicated Server, UDP Traffic
If you use the same architecture shown in FIGURE 1, configuration B, but change the traffic from TCP to UDP, some interesting results are revealed. FIGURE 8 and FIGURE 9 graphically illustrate the measurements captured on the QoS policy server. The results show a dramatic degradation in performance for all classes. The graphical results taken on the bandwidth manager server are consistent with the class settings. As the configuration file in FIGURE 3 previously illustrated, out of a total pipe of 100 Mbits/sec, the platinum class is allocated 50% of the pipe, gold 25%, silver 15%, and bronze 10%. Referring to FIGURE 8, you can see that platinum, in general, experiences better bandwidth and response times than gold. In the same manner, gold is better than silver, and silver is still better than bronze. You can also clearly see that it is much more difficult to implement QoS on UDP traffic than on TCP traffic. The reason is that TCP traffic is flow-controlled: when packets are dropped, the sender reduces the amount of traffic it injects into the network, thus reducing congestion. In comparison, UDP traffic is not well-behaved. If packets are dropped, the sender continues to inject the same amount of traffic, so the congestion on the client side is not improved.
FIGURE 8 QoS and Policy on Dedicated Server UDP Traffic Statistics
FIGURE 9 QoS and Policy on Dedicated Server UDP Traffic Flow Statistics
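The following toy simulation, which is not a model of the Solaris BM 1.6 scheduler, shows why a fixed-rate limiter wastes far less traffic from a loss-responsive (TCP-like) source than from a source that keeps pumping at a constant rate (UDP-like).

LIMIT = 10                      # packets the limiter forwards per tick
tcp_rate, udp_rate = 1, 15      # offered load per tick for each source
tcp_drops = udp_drops = 0

for tick in range(50):
    # UDP-like source: always offers the same load; the excess is simply lost.
    udp_drops += max(0, udp_rate - LIMIT)
    # TCP-like source: backs off multiplicatively on loss, probes additively otherwise.
    if tcp_rate > LIMIT:
        tcp_drops += tcp_rate - LIMIT
        tcp_rate = max(1, tcp_rate // 2)
    else:
        tcp_rate += 1

print(f"TCP-like source dropped {tcp_drops} packets, UDP-like source dropped {udp_drops}")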
FIGURE 10 shows the performance measurements taken on the client side. By looking at the client throughput, you can see that the UDP traffic can be controlled by QoS. It is not controlled as well as the TCP traffic but much better than without using QoS at all.
FIGURE 10 Dedicated Server, QoS UDP Traffic
Configuration C: QoS Policy Software-Only Solution
FIGURE 11 shows the results of deploying the architecture illustrated in FIGURE 1, configuration C, where the PDP function is deployed on the server running the network application. The results show that CPU cycles are required to process every packet: classify it, queue it, and schedule it, all in kernel mode. One issue realized after reviewing the experimental results is that how the interfaces were configured when creating classes made a big difference. CPU performance was much better when only one side of the network was filtered and classified, either on ingress or on egress, but not both.
FIGURE 11 QoS and Policy Deployed on the Application ServerTCP Traffic Statistics
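The per-packet work that drives this CPU cost can be sketched at user level as follows. This is not the Solaris BM 1.6 implementation (which runs in the kernel); the filter rules are illustrative, and the point is simply that every filtered side of the network adds another classification pass per packet.

from collections import deque

FILTERS = [          # (destination port, class) pairs; illustrative rules only
    (443, "platinum"),
    (80,  "gold"),
    (21,  "silver"),
]

queues = {"platinum": deque(), "gold": deque(), "silver": deque(), "bronze": deque()}

def classify(packet):
    """Linear filter match: every packet pays this cost once per filtered side."""
    for port, cls in FILTERS:
        if packet["dport"] == port:
            return cls
    return "bronze"                             # default class

def enqueue(packet):
    queues[classify(packet)].append(packet)     # a scheduler then drains each queue per class share

enqueue({"dport": 80, "len": 1500})
print({cls: len(q) for cls, q in queues.items()})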
Experimental Setup
This section describes the experimental setup. The client and server hosts are dual-CPU Sun Enterprise 250™ servers, and between them sits a four-CPU Ultra™ 80 workstation running Solaris BM 1.6 software. The client side runs an equal number of New Test TCP (NTTCP) TCP performance test program sessions per class. Care is taken to calibrate the load among the classes in order to achieve correct results: an equal number of platinum, gold, silver, and bronze NTTCP requests is generated from client to server.
As FIGURE 12 illustrates, the client and server are attached through a 100 Mbits/sec full-duplex (FDX) Netgear switch. The client side continuously runs NTTCP in a loop, ensuring that each class runs the same number of NTTCP requests, thus calibrating the load equally across all four classes of traffic (platinum, gold, silver, and bronze). Each class is mapped directly to one logical interface, thus simplifying the filter and class configurations on the Solaris BM 1.6 software.
FIGURE 12 Example of an Experimental Setup
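For readers without NTTCP at hand, the sketch below is a minimal Python stand-in that produces the same shape of load: the same number of transfers per class, each class bound to its own client-side logical interface. The addresses, port, and transfer size are placeholders; the actual tests used NTTCP itself.

import socket
import threading
import time

SERVER = ("192.0.2.20", 5001)             # placeholder server address and port
CLASS_SOURCE = {                          # one client-side logical interface per class (hypothetical addresses)
    "platinum": "192.0.2.1",
    "gold":     "192.0.2.2",
    "silver":   "192.0.2.3",
    "bronze":   "192.0.2.4",
}
BYTES_PER_RUN = 8 * 1024 * 1024           # arbitrary transfer size

def run_class(name, src_addr, runs=10):
    for _ in range(runs):                 # same number of runs per class keeps the load calibrated
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind((src_addr, 0))          # pin the flow to this class's logical interface
        sock.connect(SERVER)
        start = time.time()
        sock.sendall(b"x" * BYTES_PER_RUN)
        sock.close()
        elapsed = time.time() - start
        print(f"{name}: {BYTES_PER_RUN * 8 / elapsed / 1e6:.1f} Mbits/sec")

threads = [threading.Thread(target=run_class, args=(name, addr)) for name, addr in CLASS_SOURCE.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()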