Understanding Gigabit Ethernet Performance on Sun Fire Servers
Network-centric computing exerts significant pressure on the network performance of servers. With the increasing popularity of gigabit Ethernet, especially the availability of lower-cost, copper-based gigabit Ethernet adapters, how Sun servers perform in this area is one of the most important questions being addressed by Sun's engineering teams.
This article presents an overview of the TCP/IP networking performance of the Sun™ GigaSwift Ethernet MMF adapter on a Sun Fire™ system. Most previous work on TCP/IP network performance has focused on bulk transfer traffic, which imposes on servers a continuous flow of packets whose sizes equal the maximum transmission unit (MTU) of the underlying carrier. In a client-server computing environment, however, not all client requests and server replies are large; the traffic frequently contains packets that are smaller than the MTU of the carrier. Hence, this article investigates the performance of both bulk transfer and small packet traffic on a Sun Fire 6800 server.
This article discusses the network performance of Sun servers and examines the root causes of their network behavior by describing some of the implementation details of the Solaris™ operating environment (Solaris OE). It also discusses a set of tuning parameters that affect TCP/IP network performance and makes tuning recommendations.
Many customers are not familiar with the gigabit Ethernet capabilities of Sun Fire servers, nor with the amount of system resources required to support gigabit networking on them. In addition, best practices for tuning Sun servers for gigabit networking need to be promoted.
The article presents three levels of detail. The highest level reports throughput numbers. The middle level covers the resources consumed by gigabit Ethernet cards and best practices for tuning some of the network parameters. The lowest level examines the TCP protocol itself and explains why the system behaves the way it does.
The audience for this article is primarily Sun resellers, Solaris administrators, and Sun service engineers. The throughput numbers and resource consumption information will also be useful to chief information officers (CIOs) and corporate infrastructure architects.
This article covers the following topics:
Overview
Categorizing TCP Traffic
Gigabit Ethernet Latency on a Sun Fire 6800 Server
Bulk Transfer Traffic Performance
Small Packet TCP Traffic Performance
Summary
Overview
Sun servers have been used extensively in the net economy and are powering many popular web sites. Requests from HTTP clients, database clients, mail clients, directory-query clients, and other network service clients exert great pressure on Sun servers through the attached network. The responses from the server also go out through network interfaces. This client-server model of computing depends heavily on the networking performance of the client and the server to provide optimal overall performance.
Current popular network interface cards (NICs) include the fast Ethernet (hme) and quad fast Ethernet (qfe) cards. These interfaces are only capable of sending and receiving in the 100 megabit-per-second (Mbps) range on each link, which exerts little pressure on the PCI bus bandwidth of a system. However, the newer and faster gigabit Ethernet (GBE) interface cards are gaining momentum. Adoption of GBE has been simplified because the category 5 copper cables already deployed in many existing local area networks (LANs) can now carry gigabit traffic.
Since the major revolution brought by gigabit Ethernet is in throughput (measured in Mbps), this article first focuses on bulk transfer traffic on Sun Fire servers. However, even though bulk transfer traffic is a major consumer of network bandwidth, a traffic mix with a different distribution of packet sizes is more commonly seen when client-server applications are run. Studies show that packet sizes on the World Wide Web (WWW) are trimodal [8]: packets tend to be about 64 bytes, about 540 bytes, or 1518 bytes, the last being the maximum Ethernet frame size (including the four-byte frame checksum). Hence, restricting a network performance study to bulk transfer traffic alone is insufficient, so this article evaluates the performance of both bulk transfer and small packet traffic.
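To see why small packet traffic stresses a server differently than bulk transfers do, it helps to look at how little application data a small frame actually carries. The following sketch is an illustration only, not code from the study; it assumes a standard 14-byte Ethernet header, a 4-byte frame checksum, and 20-byte IP and TCP headers with no options.

/*
 * Illustrative sketch: TCP payload carried by the three typical
 * frame sizes cited above. A pure acknowledgment carries no data
 * at all and is padded up to the 64-byte minimum frame size.
 */
#include <stdio.h>

#define ETHER_HDR  14   /* Ethernet header */
#define ETHER_FCS   4   /* frame checksum */
#define IP_HDR     20   /* IPv4 header, no options */
#define TCP_HDR    20   /* TCP header, no options */

int
main(void)
{
        int frame[] = { 64, 540, 1518 };
        int i;

        for (i = 0; i < 3; i++) {
                int payload = frame[i] - ETHER_HDR - ETHER_FCS -
                    IP_HDR - TCP_HDR;
                (void) printf("%4d-byte frame carries at most %4d "
                    "bytes of application data\n", frame[i], payload);
        }
        return (0);
}

A full-sized frame delivers 1460 bytes of application data, while a minimum-sized frame delivers at most 6, so a link saturated with small packets is limited by packet rate and per-packet processing cost rather than by raw bandwidth.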
This evaluation was conducted on the Sun Fire 6800 platform using the Sun™ GigaSwift Ethernet MMF adapter. The article discusses how the network throughput delivered by Sun Fire servers varies with the selected socket buffer sizes, how this throughput scales with the number of CPUs, how the packet rate changes when Nagle's algorithm is applied (discussed in "Small Packet Traffic Issues"), how deferred acknowledgment works, and how long it takes to transmit and receive a packet.
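Two of these factors, the socket buffer sizes and Nagle's algorithm, are under direct application control. The minimal sketch below shows where an application would make those choices using the standard setsockopt(3SOCKET) interface; the 64-Kbyte buffer size is an arbitrary illustrative value, not a recommendation from this article.

/*
 * Minimal sketch: an application selecting its own TCP socket buffer
 * sizes and disabling Nagle's algorithm. The 64-Kbyte buffer size is
 * only an illustrative value, not a recommendation.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>        /* TCP_NODELAY */

int
main(void)
{
        int sock, bufsize = 64 * 1024, nodelay = 1;

        if ((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
                perror("socket");
                return (1);
        }

        /* Request larger send and receive socket buffers. */
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF,
            &bufsize, sizeof (bufsize)) < 0)
                perror("SO_SNDBUF");
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
            &bufsize, sizeof (bufsize)) < 0)
                perror("SO_RCVBUF");

        /* Turn off Nagle's algorithm for small-packet workloads. */
        if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
            &nodelay, sizeof (nodelay)) < 0)
                perror("TCP_NODELAY");

        /* ... connect(), read(), and write() as usual ... */
        return (0);
}

Disabling Nagle's algorithm trades fewer, larger packets for lower latency; the small packet discussion later in the article examines how the packet rate changes when the algorithm is applied.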
The rest of the article is organized as follows: "Categorizing TCP Traffic" describes the characteristics and performance issues of bulk transfer and small packet traffic, and "Gigabit Ethernet Latency on a Sun Fire 6800 Server" presents an evaluation of gigabit network latency. "Bulk Transfer Traffic Performance" and "Small Packet TCP Traffic Performance" discuss the performance of bulk transfer and small packet traffic, respectively. "Summary" concludes the article.