- Open Source Advantages
- Open Source Solutions
- Software Costs
- Simplified License Management
- Lower Hardware Costs
- Scalability, Reliability, and Security
- Support
- Deny Vendor Lock-in
- Quality Software and Plentiful Resources
- Summary
Scalability, Reliability, and Security
To this point, we have discussed open source software in general terms, covering desktop and frontend software as well as server operating systems and back office solutions. This section homes in on several Linux advantages that have aided in its meteoric rise to full-fledged OS player in the data center: scalability, reliability, and security. These advantages generally apply to all Linux distributions.
Scalability
Scalability encompasses several technologies that enable a system to accommodate larger workloads while maintaining consistent and acceptable levels of performance. Three specific scalability areas are clustering, symmetric multiprocessing (SMP), and load balancing.
Clustering
As mentioned previously, the Beowulf Project allows multiple individual Linux machines to be harnessed together into a single, high-performance cluster. Several commercial-grade cluster deployments are in production, including Shell Exploration's seismic calculations, the National Oceanic and Atmospheric Administration's (NOAA) weather predictions, and Google. Google is reported to run 15,000 Intel processors on Linux to index more than three billion documents and handle 150 million searches per day. Linux clustering capabilities are outstanding, with practical applications ranging from finite element analysis to financial simulations.
Clustering is enabled through two separate packages, Beowulf and Heartbeat. Beowulf includes message-passing interface (MPI) and parallel virtual machine (PVM) software, along with network channel bonding. Together these provide distributed interprocess communication and a distributed file system for applications that have been written for parallel processing. Simply put, clustering puts many processors to work on a single large task, sharing data and processing power.
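The message-passing model is easier to see in a few lines of code. The sketch below uses the mpi4py binding (an assumption for illustration; the text names no specific library) to divide one large summation across the nodes of a cluster and gather the partial results on a single node.

```python
# A minimal sketch of Beowulf-style message passing using mpi4py.
# Launch with something like: mpirun -np 4 python cluster_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's position in the cluster
size = comm.Get_size()   # total number of cooperating processes

N = 10_000_000
# Each node sums only its own slice of the range: the work is divided,
# not duplicated.
local_sum = sum(range(rank, N, size))

# The message-passing layer combines the partial sums onto node 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum of 0..{N-1} computed by {size} processes: {total}")
```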
Clustering can also be used to ensure high availability for tasks that are not necessarily computation-intensive but must be up all the time. In a high-availability cluster, multiple (at least two) identical systems are in place with a keep-alive monitor, or "heartbeat," that watches the health of the nodes in the cluster. If the heartbeat from the primary system fails, a second system takes over, providing uninterrupted service. Cluster management is not tied to any particular machine; management services are shared among the cluster nodes so that no single point of failure can bring the system down.
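A toy version of the keep-alive idea fits in a dozen lines of Python. This is only a sketch of the mechanism described above, not the actual Heartbeat package; the port number and takeover action are placeholders.

```python
# Standby node: listen for periodic UDP keep-alives from the primary and
# assume the service if they stop arriving.
import socket

HEARTBEAT_PORT = 9999   # hypothetical port for keep-alive packets
TIMEOUT = 5.0           # seconds of silence before declaring failure

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", HEARTBEAT_PORT))
sock.settimeout(TIMEOUT)

def take_over():
    # In a real cluster this would claim a shared IP address and start
    # the protected service; here we only log the transition.
    print(f"Primary silent for {TIMEOUT:.0f}s: standby assuming service")

while True:
    try:
        data, addr = sock.recvfrom(64)   # wait for the next heartbeat
        print("heartbeat from primary", addr)
    except socket.timeout:
        take_over()
        break
```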
What is clustering good for? Any service that demands continuous access is a candidate. Take authentication, for example. An enterprise network might have thousands of users who authenticate each time they access network resources; if the authentication service goes down, everyone is locked out of what they need. High availability ensures that authentication is always possible. E-commerce applications, email, DHCP, FTP, and high-traffic download sites are also candidates for clustering. Linux clustering thus provides both powerful parallel processing and enterprise-class high availability at relatively low cost.
Scalability features built into the Linux 2.6 kernel provide for larger file system sizes, more logical devices, larger main memories, and more scalable SMP support, allowing Linux to compete with most Unix operating systems. Other scalability technologies include support for hyperthreading, which presents two logical processors on a single physical processor so that more threads can be scheduled concurrently, and NFSv4, a secure, scalable, distributed file system designed for the global Internet. With the 2.6 kernel, memory and file sizes can scale to the full limits of 32-bit hardware.
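Hyperthreading is easy to observe from user space on Linux: count the logical processors the kernel schedules on and compare with the physical cores. A hedged sketch, assuming the x86 field names in /proc/cpuinfo:

```python
# Compare logical processors with physical cores on a Linux x86 system.
def cpu_topology():
    logical = 0
    cores = set()
    phys = core = None
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("processor"):
                logical += 1
            elif line.startswith("physical id"):
                phys = line.split(":")[1].strip()
            elif line.startswith("core id"):
                core = line.split(":")[1].strip()
                cores.add((phys, core))   # unique (package, core) pairs
    return logical, len(cores) or logical

logical, physical = cpu_topology()
print(f"{logical} logical processors on {physical} physical cores",
      "(hyperthreading active)" if logical > physical else "")
```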
A distinct advantage of open source is that multiple groups, such as the Linux Scalability Project at the University of Michigan's Center for Information Technology Integration (CITI), focus specifically on scalability. Several of the discoveries and advancements made by this group have been incorporated into the Linux 2.6 kernel. Another example is the Enterprise Linux Group at the IBM T.J. Watson Research Center, which has worked to increase scalability for large-scale symmetric multiprocessor applications. The breadth and depth of intellectual manpower applied to scalability problems is responsible for the accelerated acceptance of Linux as a truly scalable solution.
Symmetric Multiprocessing
Multiprocessor support (two or more identical processors sharing memory under a single operating system image) has long been marketed as a performance-enhancing feature for operating systems on IA-32 hardware, but not until the Linux 2.6 kernel did multiple processors become much of an advantage on Linux. Linux supports both symmetric multiprocessing (SMP) and non-uniform memory access (NUMA) architectures. Novell SUSE Linux has been tested with more than 128 CPUs, and on hardware based on the HP/Intel Itanium 64-bit architecture there is no limit on the number of supported processors.
Even a second processor can enhance performance for uniprocessor applications such as games, because the operating system and background tasks can run alongside the application. The gains become far more visible with software compiles and distributed computing programs, in which the work is specifically designed to be divided among multiple processors.
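The payoff of dividing a computation among processors can be sketched with Python's multiprocessing module (an illustration, not a benchmark): the prime count below is split into one chunk per CPU, and an SMP machine works the chunks in parallel.

```python
# Divide a prime-counting job into one chunk per processor.
from multiprocessing import Pool, cpu_count

def count_primes(bounds):
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    cpus = cpu_count()                       # logical processors visible to the OS
    step = 200_000 // cpus
    chunks = [(i * step, (i + 1) * step) for i in range(cpus)]
    with Pool(cpus) as pool:                 # one worker per processor
        total = sum(pool.map(count_primes, chunks))
    print(f"{total} primes below {cpus * step}, using {cpus} CPUs")
```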
Load Balancing
An early problem for large Internet sites was accommodating sometimes wild fluctuations in traffic. An onslaught of page views or database queries could completely clog a connection or bring an application server to its knees. The open source technology Squid is widely used for balancing traffic between multiple web and application servers.
Squid is an open source proxy web cache that speeds up website access by caching common web requests and DNS lookups. It runs on a number of platforms, including Unix, Mac OS X, and Windows. Caching reduces the distance and the number of hops needed to satisfy an HTTP or FTP request, accelerating web servers and cutting access time and bandwidth consumption. Load balancing is also accomplished with PHP scripts that allocate database requests across multiple database servers: a master database handles updates, and proxy or slave databases handle queries.
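That read/write split reduces to a few lines of routing logic. Here is a sketch of the same idea in Python rather than PHP; the server names are placeholders.

```python
# Route writes to the master and rotate reads across replicas.
import itertools

MASTER = "db-master:5432"
REPLICAS = ["db-replica1:5432", "db-replica2:5432"]
_replica_cycle = itertools.cycle(REPLICAS)   # endless round-robin

def route(query: str) -> str:
    """Return the server that should handle this SQL statement."""
    is_read = query.lstrip().lower().startswith("select")
    return next(_replica_cycle) if is_read else MASTER

print(route("SELECT * FROM orders"))        # -> a replica
print(route("UPDATE orders SET paid = 1"))  # -> the master
```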
Reliability
As web and commerce sites have become more integral to standard business processes, high levels of uptime have become far more critical, and Linux's mean time between required system reboots is outstanding. One Linux shop has an interesting IT management problem: using diskless Linux user workstations with shared backend services also running on Linux, its primary points of failure are CPU fans and power supplies. The IT manager keeps a box of fans and power supplies, and 80% of his administration time (he is only a part-time administrator) is spent replacing worn-out fans and burned-out power supplies. For this company, Linux is extremely reliable.
Many Novell customers have significantly improved reliability by switching from Windows to Linux. A construction company improved uptime from 95% to 99.999% after moving from Windows to SUSE Linux. The Asian Art Museum in San Francisco enjoys the same level of reliability with its IBM/SUSE implementation, which helps it showcase nearly 15,000 treasures and has ended the need to reboot servers, previously required on average twice per month. The modular, process-based Linux architecture allows individual services to be upgraded without taking the system down; customers report Linux servers that have gone through dozens of upgrades and have never been rebooted.
IBM has performed extensive stress tests on Linux kernel components such as the file system, disk I/O, memory management, scheduling, and system calls, as well as on TCP, NFS, and other subsystems. The tests demonstrate that Linux is reliable and stable over long durations and can provide a robust, enterprise-level environment.
It is worth noting that IBM has ported Linux to every system it sells, including the IBM S/390. Customers for these systems demand absolute reliability, and IBM's research and testing have found that Linux delivers: no "blue screen of death," no memory leaks, no monthly reboots, and no annual reinstallation of the operating system to regain stability.
SUSE worked with IBM, HP, and Intel to ensure that the SUSE Linux distribution was reliable, scalable, and secure enough for carrier-grade telecommunications service providers. The SUSE Carrier Grade Linux (CGL) solution is quickly becoming a preferred platform in markets with less stringent reliability requirements as well, such as financial services and retail.
Security
Last, but not least, Linux security is a major advantage over other options, particularly Windows. Viruses such as Love Bug, Code Red, Nimda, Melissa, Blaster, and SoBig have collectively cost companies billions of dollars ($55 billion in 2003 alone, according to Trend Micro). But companies running Windows servers, not those running Linux, have for the most part incurred this cost.
Windows is estimated to contain between 40 and 60 million lines of code, compared to around 5 million for Linux. The Windows code base has evolved from a desktop operating system, with new functionality and patches layered on over the years, into an unwieldy collection of services full of potential security vulnerabilities. A major culprit is unmanaged code: the capability to initiate processes with access across OS functions, without the protection of a sandbox or other protected area. Many Windows modules rely on complex interdependencies that are very difficult to compartmentalize and secure; Outlook is linked to Internet Explorer, so a security hole in one leads to a breach in the other. Technologies such as ActiveX and IIS then expose these weaknesses to outside access.
Linux programs are designed to operate as isolated processes. Email attachments cannot execute automatically the way ActiveX controls and other specially crafted virus files can under Windows. Linux (and Mac OS X) prevents real damage to a system unless the user is logged in with the highest level of permissions, as root or administrator. Under Windows, workstation users are almost always logged on with these high-level privileges, which makes vulnerabilities far easier to exploit.
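The root-versus-user distinction is easy to demonstrate. The sketch below shows the common Unix pattern of a process dropping root privileges before doing anything risky; the "nobody" UID/GID values are assumptions that vary by system.

```python
# Drop root privileges so a later compromise cannot damage the system.
import os

def drop_privileges(uid=65534, gid=65534):   # 65534 = "nobody" on many systems
    if os.geteuid() != 0:
        return                               # already unprivileged; nothing to drop
    os.setgid(gid)                           # drop group first, then user
    os.setuid(uid)

drop_privileges()
# From here on, writes to /etc or other root-owned files fail with
# EACCES instead of succeeding silently.
print("running as uid", os.geteuid())
```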
According to a report by Dr. Nic Peeling and Dr. Julian Satchell, "There are about 60,000 viruses known for Windows, 40 or so for the Macintosh, about 5 for commercial Unix versions, and perhaps 40 for Linux. Most of the Windows viruses are not important, but many hundreds have caused widespread damage. Two or three of the Macintosh viruses were widespread enough to be of importance. None of the Unix or Linux viruses became widespread—most were confined to the laboratory." The vulnerabilities of Windows have also been of higher severity than those of Linux.
From this, you might agree with SecurityFocus columnist Scott Granneman, who writes, "To mess up a Linux box, you need to work at it; to mess up a Windows box, you just need to work on it." Historically, the Linux open community has also been much quicker at detecting security vulnerabilities, creating and testing patches, and providing them for download. Because the Linux code is open to thousands of "eyeballs," both the problem and the fix quickly become apparent to someone; word in the open source community is that no major security defect has ever gone unfixed for more than 36 hours.
Security isn't just about worms and viruses; it also includes the administration framework that controls user access anywhere in the network. Unix-like operating systems such as Linux were designed from the start as multiuser, distributed architectures, with the capability to work on any machine from any location as if it were local. As a result, the mechanisms for protecting running processes, transmitting data, and authenticating users are mature and robust. Using advanced Novell directory technology in conjunction with a Linux network adds a further strong layer of security.
Governments are choosing Linux for security reasons as well. Although most Linux distributions include a rich collection of preselected packages for automatic installation, a Linux user can pick and choose exactly which packages to install. Government organizations can therefore create hardened servers with a specialized set of services and minimal vulnerability by trimming the package list and compiling the system according to their own security policies. For example, a server might be built to provide only FTP services, making it impervious to attacks through email, HTTP, or other common services.
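Verifying such a single-purpose server can be as simple as probing its ports. A hedged sketch, with a placeholder hostname and an illustrative port list:

```python
# Confirm that only the intended service (FTP here) answers on a
# hardened server by probing a few well-known ports.
import socket

HOST = "hardened-server.example.com"   # placeholder hostname
EXPECTED_OPEN = {21}                   # FTP only
PROBE = {21: "ftp", 22: "ssh", 25: "smtp", 80: "http", 443: "https"}

for port, name in sorted(PROBE.items()):
    with socket.socket() as s:
        s.settimeout(1.0)
        is_open = s.connect_ex((HOST, port)) == 0
    status = "open" if is_open else "closed"
    flag = "" if (port in EXPECTED_OPEN) == is_open else "  <-- unexpected!"
    print(f"{name:5} ({port:3}): {status}{flag}")
```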