- Desktop Solutions
- Client/Server Solutions
- The Internet Solution
Client/Server Solutions
The easiest way to think of the client/server software architecture is to imagine a desktop application broken into logical pieces and distributed throughout a network of computers. The rationale behind such a design is not important at the moment; for now, trust that software is built this way for good reasons and that the approach has its particular benefits, some of which you'll learn about in the coming section.
The client/server model was born from two merging demands. First, as the personal computer became more powerful in the late 1980s and early 1990s, corporations began adopting it as a lower-cost solution to low-end business processing. Essentially, the PC took on the same displacing role against minicomputers that the minis had taken against their larger, much more expensive brethren, the mainframes. Companies viewed the PC as a means to make their employees more efficient and flexible than was economically viable with minis or mainframes.
Beyond running shrink-wrapped desktop productivity applications on these relatively cheap PC platforms, corporate information technology (IT) departments, as well as software-consulting companies, began creating desktop applications specifically geared to solving business processes.
Second, as the PC evolved and inundated the market, IT departments and hardware companies came to realize that while the personal computer empowered each person to do more than was previously possible with hosted dumb terminals, the need for centralized processing of data (to use the term loosely) would not vanish. At the same time, technology managers and manufacturers realized that the Intel chips driving the corporate PC revolution, and the hardware surrounding them, had sufficient performance to let the likes of Compaq forge a new category of computer: the PC server.
NOTE
The "PC" in PC server is used only to differentiate these Intel-based computer servers from the preIntel-based servers. "PC servers" are essentially just souped-up PCs. Granted, PC manufacturers in this market have always added hardware optimized to handle the task at hand, but the basic design and certainly the roots of the server lie with the vanilla desktop personal computers.
Designed not for an individual employee but as a shared resource accessible by multiple employees, the PC servers sat in the back rooms of IT departments. Initially, these machines were used for simple centralized tasks such as storing and accessing company files and data (what became known as file servers), acting as print servers, authenticating users on the corporate network, and, in time, hosting a few small commonly accessible applications. Somewhere along the evolutionary path, software developers (including the corporate IT staffs) came up with the idea of taking the host-terminal model of the previous computing era and evolving it.
The idea was simple: alter the hosting model by replacing the dumb terminals with the "really smart terminals" already deployed on every desk (compared to dumb terminals, even the personal computers of the late 1980s were Einsteins), thereby leveraging the processing power that the client side of the host model now possessed.
Using PCs and servers had a cost advantage over mainframes and minicomputers. Also, by utilizing the processing power of both the server and the desktop client PCs, developers could create more robust, user-friendly, and efficient solutions than previously possible. Client/server computing was born.
Benefits of Client/Server Computing
The following list outlines some of the benefits of client/server solutions; the drawbacks follow in the next section.
More for less: Many benefits to client/server (C/S) computing exist over the traditional hosted or standalone desktop application models. As mentioned, companies can utilize lower-cost computers to achieve the same task.
Many companies were introducing PCs because their processing power and available software (cheap relative to custom mainframe software) provided more bang for the buck. This added employee-side (read: client-side) processing power is what developers used to create a new breed of solutions not previously possible at the same price point.
Breaking it all down: Furthermore, application developers can divide solutions into more manageable parts. As with the dumb-terminal-to-mainframe design, the client machine provides the user interface to the solution; however, unlike dumb terminals, PC-based clients have much more processing power. The PC-based terminal therefore offers a much richer user interface and, unlike a dumb terminal, can perform business processing of its own.
Centralized information storage: While processing in the client/server model is distributed, information storage is centralized. The server stores the data and acts as a coordinator for accessing and modifying information. This minimizes information redundancy and aids in keeping data consistent, even when multiple users/clients are working with it. You might wonder why you need a server at all. Think of client/server computing in terms of a manager-employee relationship, with the (sometimes incorrect) assumption that managers have more knowledge and experience in the particular field. Managers (servers) have more information about the company and day-to-day operations. They also tend to have a deeper understanding of the business processes. Lastly, managers know their department's priorities, strategy, goals, and outstanding tasks. They then disseminate information as needed and delegate work to their employees.
The employees (clients), on the other hand, might not have as much knowledge and experience as their managers, but they have a more focused job and have access only to the information that their manager provides or that they can infer.
The significant aspect of the delegation process is that employees manage the details of the task they are assigned and execute it based on their own conclusions. Once finished with their work, they report the results back to their manager for further processing (unless, of course, you are a Dilbert fan). Essentially, this is how client/server computing works. It departs from the host-centric model in which only the server has a processor capable of doing anything and the clients (terminals) simply feed information into the server like drones.
Thus, the general design behind client/server software is that the common, processor-intensive services that can logically be centralized are hosted on the PC server, and those less intensive, uncommon, or user-specific features find their way to the desktop PC. This enables people to produce more robust, manageable, and efficient solutions that gain in performance through a divide-and-conquer architecture.
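To make that division concrete, here is a minimal sketch of the pattern in Python (chosen purely for brevity; the chapter's era would have used tools such as Visual Basic or Delphi). The wire protocol, the inventory data, and names such as INVENTORY and query_inventory are invented for this illustration. The server centralizes the shared data and answers requests; the client only asks questions and presents the answers.

```python
# A minimal, self-contained client/server sketch. The server owns the
# centralized data store; the client handles only the presentation.
import socket
import socketserver
import threading

INVENTORY = {"widgets": 42, "gadgets": 7}  # server-side, centralized store

class InventoryHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Invented wire protocol: client sends "GET <item>\n",
        # server replies with the count followed by a newline.
        verb, _, item = self.rfile.readline().decode().strip().partition(" ")
        count = INVENTORY.get(item, 0) if verb == "GET" else 0
        self.wfile.write(f"{count}\n".encode())

def query_inventory(port: int, item: str) -> int:
    # Client side: fetch the figure from the server; presentation stays local.
    with socket.create_connection(("localhost", port)) as conn:
        conn.sendall(f"GET {item}\n".encode())
        return int(conn.makefile().readline())

if __name__ == "__main__":
    # Run the server on an ephemeral port in a background thread so the
    # client half of the sketch can talk to it from the same script.
    server = socketserver.TCPServer(("localhost", 0), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    print(f"Widgets in stock: {query_inventory(port, 'widgets')}")
    server.shutdown()
    server.server_close()
```

Note where the responsibilities fall: the data and the rules for answering about it live in one place, while the client is free to present the figure however its users prefer.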
From development and maintenance standpoints, the client/server architecture makes things arguably easier. Generally speaking, the client is easy to implement; its tasks are broken into smaller, simpler tasks that, although imbued with logic, are more mechanical in nature. It is for these reasons that the client side is often implemented using rapid application development (RAD) tools, such as Microsoft Visual Basic, Borland Delphi, and Borland C++ Builder.
The server side, in contrast, is responsible for coordinating all the information that its clients request or submit. Furthermore, it must process this information in additional, often more processor-intensive, ways to achieve the desired results. The server components are therefore the most difficult and costly to implement. This is one reason why separating the business logic from the user interface makes the solution easier to develop, deploy, manage, and update. For instance, if your business process changes, you might need to change how you calculate your figures even though the presentation of the results remains unchanged. In that case, you can leave your UI code base in place and modify only your server-side code.
The opposite is also true; if customer feedback necessitates a more intuitive interface, you can update the UI of the software without touching the business logic. This division limits collateral damage: the inadvertent introduction of bugs in either the client or server side when working on the other.
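In miniature, that separation can be as simple as keeping the calculation and the formatting in different pieces of code. A hypothetical sketch, with invented function names and an invented 7 percent tax rate:

```python
def calculate_invoice_total(subtotal: float, tax_rate: float = 0.07) -> float:
    # Server-side business logic: if the tax rule changes, only this changes.
    return round(subtotal * (1 + tax_rate), 2)

def display_total(total: float) -> str:
    # Client-side presentation: restyling this never touches the calculation.
    return f"Amount due: ${total:,.2f}"

print(display_total(calculate_invoice_total(1250.00)))  # Amount due: $1,337.50
```

A new tax rule touches only the first function; a redesigned invoice screen touches only the second.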
Another, perhaps less obvious, benefit of C/S computing is that the server is often securely locked away somewhere, which prevents intentional tampering, unintentional accidents, unauthorized access, and surprise interruptions, such as the server being inadvertently turned off by the cleaning staff.
Drawbacks to Client/Server Computing
Although client/server computing has many benefits, it does have its disadvantages.
Complicated to implement: Software development is about breaking a problem into pieces to make it easier to solve, but to leverage the benefits of distributed processing, the design of client/server solutions often becomes complicated. This might seem to contradict the earlier statement that the pieces are easier to implement; each piece is simpler on its own, but coordinating them is not. Recall that there is a client side and a server side to this equation, and numerous issues, including processing and data synchronization between clients and servers, must be addressed, depending on the solution architecture.
Costly: Distributed computing is inherently more complicated and therefore requires more highly trained and experienced developers and architects. Obviously, this raises production costs.
Longer production cycles: The increased complexity again rears its head, because the more complicated a solution is, the more time it takes to realize. This also increases the cost of the project.