What is a Microsoft Cluster?
Business today is built on data: the ability to access, manipulate, transfer, and analyze it. Large financial institutions and data processing firms lose billions of dollars a year to data loss caused by system failure. In the data processing world of the past, mainframes ruled the arena. Today, many companies are trading in the mainframe for client/server-based applications and services that are more user-friendly and easier to integrate into their e-commerce implementation plans. Until recently, building a fault-tolerant, scalable, and reliable client/server system was a difficult task that demanded several months of planning, a room full of consultants, and several hundred thousand dollars. But the landscape is changing. Today, building a high-performance, reliable client/server system is easier and more cost-effective than ever.
Cluster History
To cluster a group of computers is to use two or more independent computer systems to create one virtual server that provides seamless access to an application or service. The idea of clustering may seem odd to someone new to the computing world, but clustering computers is not a new idea; clustering theory dates back to as early as 1970. IBM was the first company to implement clustering in its designs, producing fault-tolerant mainframe computing products. At that point, however, clustering was a niche technology that, owing to the reliability of the mainframe, was not in demand.
In the mid-1980s, Digital Equipment Corporation released the VAXcluster, a cluster of minicomputers intended to solve the fault-tolerance dilemma of the systems and services of the day by removing every single point of failure (SPOF) within a system. The VAXcluster was an attempt to duplicate every component that could fail within a system and run those components simultaneously, producing a computing environment in which two computers could provide simultaneous service to users. That was not the jewel of the system, however. Its greatest value lay in its failover ability: if one part of the system stopped functioning because of a failure, the other computer would take over servicing all of the clients.
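That failover behavior remains the heart of clustering today. As a rough illustration only (the node name, port, and timings below are hypothetical, and real cluster software is far more involved), a standby node might detect a failed primary with a simple heartbeat loop:

```python
# A minimal, illustrative sketch of the failover idea: a standby node
# monitors a primary via periodic heartbeats and takes over service when
# the primary stops responding. The endpoint and timings are hypothetical,
# not part of any real clustering product.
import socket
import time

PRIMARY = ("primary.example.local", 7000)  # hypothetical heartbeat endpoint
HEARTBEAT_INTERVAL = 1.0   # seconds between checks
MISSED_LIMIT = 3           # consecutive misses before declaring failure

def primary_is_alive() -> bool:
    """Return True if the primary answers a TCP connect within one second."""
    try:
        with socket.create_connection(PRIMARY, timeout=1.0):
            return True
    except OSError:
        return False

def standby_loop() -> None:
    missed = 0
    while True:
        if primary_is_alive():
            missed = 0
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                print("Primary unreachable; standby taking over service.")
                # A real cluster would now claim the shared disk and the
                # virtual server address, then begin servicing clients.
                break
        time.sleep(HEARTBEAT_INTERVAL)

if __name__ == "__main__":
    standby_loop()
```

The key design point, then as now, is that clients address the virtual server rather than either physical machine, so a takeover is invisible to them.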
The VAXcluster was the first "proof of concept" for the idea of clustering, but it had many issues that needed to be resolved through later iterations. Later revisions of the technology were developed for the Reduced Instruction Set Computing (RISC) platform and relied solely on the UNIX operating system. As the Intel architecture became more popular, the same methodology used to create the VAXcluster was applied to the Intel-based client/server market.
As more and more companies began to rely on Microsoft networking products, the need for clustering and load-balancing technologies grew. Every step that Microsoft and other vendors have taken toward the development of a Windows-based clustering technology has brought excitement to the industry. For years, UNIX-based networking environments have been able to cluster servers in a way that rivals the reliability of the mainframe, while Windows administrators were left with minimal options in the clustering arena.
In 1995, Microsoft announced the development of "Wolfpack," the code name for the Microsoft Cluster Server (MSCS) software package developed for the Windows NT 4.0 Enterprise Server platform. This software was developed in collaboration with Digital Equipment Corporation and Tandem Computers to allow two Windows NT 4.0 Enterprise servers to share a hard disk, providing automatic failover in the event of a failure within one of the servers. This should have been a triumph for Microsoft, as cluster server functionality was coming into high demand. Unfortunately, the software was not as solid and reliable as enterprise customers needed it to be, and third-party products offering more advanced functionality and reliability overshadowed MSCS.
Many patches and upgrades to the MSCS software followed, and it slowly began to take on the appearance of an enterprise-level product. In 2000, Microsoft released Windows 2000, along with a version of its server product called Windows 2000 Advanced Server. This server offered the reliability and stability of an enterprise-class operating system, as well as the tools and applications needed to build high-availability cluster servers.
Along with Windows 2000 Advanced Server, Microsoft released Windows 2000 Datacenter Server, a robust operating system positioned to displace the mainframe's remaining hold on the market. Datacenter Server contains the same tools and applications as Windows 2000 Advanced Server, with added support for more memory, more processors, and greater clustering capabilities. Both Windows 2000 Advanced Server and Windows 2000 Datacenter Server offer clustering and Network Load Balancing (NLB) services that are sure to become the industry standard for clustering Windows 2000-based applications and services.
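To give a flavor of what load balancing means here, the sketch below shows the general idea of hash-based traffic distribution: every host applies the same hash to a client's address and only the matching host services the connection, so no central dispatcher is required. This is a loose illustration of the technique, not Microsoft's actual NLB algorithm, and the host names are hypothetical.

```python
# Illustrative hash-based load distribution across cluster members.
# Each host can compute this independently and accept only the
# connections that map to itself.
import hashlib

HOSTS = ["node1", "node2", "node3"]  # hypothetical cluster members

def owning_host(client_ip: str) -> str:
    """Map a client address to one cluster host via a stable hash."""
    digest = hashlib.md5(client_ip.encode()).digest()
    return HOSTS[digest[0] % len(HOSTS)]

# The same client always lands on the same host, spreading load
# deterministically across the cluster.
for ip in ["10.0.0.5", "10.0.0.6", "10.0.0.7"]:
    print(ip, "->", owning_host(ip))
```

Because each member computes the identical mapping, adding this kind of distribution to a cluster spreads client load without introducing a new single point of failure.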