- Designing Campus Networks
- Designing WANs
- Utilizing Remote Connection Design
- Providing Integrated Solutions
- Determining Your Networking Requirements
- Summary
Determining Your Networking Requirements
Designing a network can be a challenging task. Your first step is to understand your networking requirements. The rest of this chapter explains how to determine these requirements. After you have identified these requirements, refer to Chapter 2 for information on selecting network capability and reliability options that meet these requirements.
Networking devices must reflect the goals, characteristics, and policies of the organizations in which they operate. Two primary goals drive networking design and implementation:
- Application availability: Networks carry application information between computers. If the applications are not available to network users, the network is not doing its job.
- Cost of ownership: Information system (IS) budgets today often run in the millions of dollars. As large organizations increasingly rely on electronic data for managing business activities, the associated costs of computing resources will continue to rise.
A well-designed network can help balance these objectives. When properly implemented, the network infrastructure can optimize application availability and allow the cost-effective use of existing network resources.
The Design Problem: Optimizing Availability and Cost
In general, the network design problem consists of three elements:
- Environmental givens: These include the location of hosts, servers, terminals, and other end nodes; the projected traffic for the environment; and the projected costs for delivering different service levels.
- Performance constraints: These consist of network reliability, traffic throughput, and host/client computer speeds (for example, network interface cards and hard drive access speeds).
- Networking variables: These include the network topology, line capacities, and packet-flow assignments.
The goal is to minimize cost based on these elements while delivering service that does not compromise established availability requirements. You face two primary concerns: availability and cost. These issues are essentially at odds. Any increase in availability must generally be reflected as an increase in cost. As a result, you must weigh the relative importance of resource availability and overall cost carefully.
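To make the structure of this trade-off concrete, the following sketch (in Python, using entirely hypothetical capacities, availability figures, and tariffs) enumerates a few candidate line options and selects the cheapest one that still satisfies assumed throughput and availability constraints. It is a toy illustration of the minimize-cost-subject-to-constraints framing, not a design tool.

```python
# Minimal sketch of the design trade-off: choose the cheapest option that
# still meets the availability and throughput constraints. All figures
# below are hypothetical placeholders, not vendor or tariff data.

# Networking variables: candidate line options
candidates = [
    {"name": "T1",          "kbps": 1544,  "availability": 0.995,  "monthly_cost": 400},
    {"name": "T1 + backup", "kbps": 1544,  "availability": 0.9995, "monthly_cost": 650},
    {"name": "T3",          "kbps": 44736, "availability": 0.995,  "monthly_cost": 3000},
    {"name": "T3 + backup", "kbps": 44736, "availability": 0.9995, "monthly_cost": 3900},
]

# Environmental givens and performance constraints (also assumed)
projected_peak_kbps = 1200       # projected traffic for the environment
required_availability = 0.999    # established availability requirement

# Keep only the options that satisfy both constraints, then minimize cost
feasible = [c for c in candidates
            if c["kbps"] >= projected_peak_kbps
            and c["availability"] >= required_availability]

if feasible:
    best = min(feasible, key=lambda c: c["monthly_cost"])
    print(f"Lowest-cost feasible option: {best['name']} "
          f"(${best['monthly_cost']}/month)")
else:
    print("No candidate meets the constraints; revisit the requirements.")
```

Raising the required availability or the projected traffic shrinks the feasible set and drives up the minimum cost, which is exactly the tension described above.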
As Figure 1-5 shows, designing your network is an iterative activity. The discussions that follow outline several areas that you should carefully consider when planning your networking implementation.
Figure 1-5 General Network Design Process
Assessing User Requirements
In general, users primarily want application availability in their networks. The chief components of application availability are response time, throughput, and reliability:
- Response time is the time between entry of a command or keystroke and the host system's execution of the command or delivery of a response. User satisfaction with response time is generally considered to be a monotonic function up to some limit, at which point user satisfaction falls off to nearly zero. Applications in which fast response time is considered critical include interactive online services, such as automated tellers and point-of-sale machines.
- Applications that put high-volume traffic onto the network have more effect on throughput than end-to-end connections do. Throughput-intensive applications generally involve file-transfer activities. However, they usually place few demands on response time and can often be scheduled at times when response-time-sensitive traffic is low (for example, after normal work hours).
- Although reliability is always important, some applications have genuine requirements that exceed typical needs. Organizations that conduct all activities online or over the telephone, such as financial services, securities exchanges, and emergency/police/military operations, require nearly 100% uptime. These situations imply a requirement for a high level of hardware and topological redundancy. Determining the cost of any downtime is essential in establishing the relative importance of reliability to your network. (A sample availability calculation follows this list.)
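As a quick reference point for the reliability discussion, the short sketch below converts a few uptime targets into allowable downtime per year. The specific percentages are common round figures chosen purely for illustration, not requirements drawn from this chapter.

```python
# Translate uptime targets into allowable downtime per year.
# The targets below are round illustrative figures, not requirements.

HOURS_PER_YEAR = 365 * 24

for target in (0.99, 0.999, 0.9999):
    downtime_hours = (1 - target) * HOURS_PER_YEAR
    print(f"{target:.2%} uptime allows about {downtime_hours:.1f} hours "
          f"({downtime_hours * 60:.0f} minutes) of downtime per year")
```

Even the jump from 99.9% to 99.99% uptime cuts allowable downtime from roughly nine hours to under one hour per year, which is why "nearly 100% uptime" translates directly into redundancy requirements.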
You can assess user requirements in a number of ways. The more involved your users are in the process, the more likely that your evaluation will be accurate. In general, you can use the following methods to obtain this information:
- User community profiles: Outline what different user groups require. This is the first step in determining network requirements. Although many users have roughly the same requirements for an electronic mail system, engineering groups using X Window System terminals and Sun workstations in an NFS environment have different needs than PC users sharing print servers in a finance department.
- Interviews, focus groups, and surveys: Build a baseline for implementing a network. Understand that some groups might require access to common servers. Others might want to allow external access to specific internal computing resources. Certain organizations might require IS support systems to be managed in a particular way according to some external standard. The least formal method of obtaining information is to conduct interviews with key user groups. Focus groups can also be used to gather information and generate discussion among different organizations with similar (or dissimilar) interests. Finally, formal surveys can be used to get a statistically valid reading of user sentiment regarding a particular service level or proposed networking architecture.
- Human factors tests: The most expensive, time-consuming, and possibly revealing method is to conduct a test involving representative users in a lab environment. This is most applicable when evaluating response-time requirements. You might set up working systems and have users perform normal remote host activities from the lab network, for example. By evaluating user reactions to variations in host responsiveness, you can create benchmark thresholds for acceptable performance. (A brief example of summarizing such measurements follows this list.)
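If you run a lab test of this kind, the raw measurements still need to be reduced to benchmark figures. The sketch below is a minimal, hypothetical example: it summarizes a list of measured response times into an average and an approximate 95th-percentile value and compares the latter against an assumed 3-second threshold.

```python
# Summarize lab-measured response times into benchmark figures.
# The sample values and the 3-second threshold are invented for illustration.
import statistics

samples = [0.8, 1.1, 0.9, 2.4, 1.0, 1.3, 0.7, 3.1, 1.2, 0.9]  # seconds

mean_rt = statistics.mean(samples)
p95_rt = statistics.quantiles(samples, n=20)[-1]   # approximate 95th percentile

threshold = 3.0   # assumed acceptability threshold, in seconds
print(f"Average response time: {mean_rt:.2f} s")
print(f"Approximate 95th-percentile response time: {p95_rt:.2f} s")
print("Within the benchmark" if p95_rt <= threshold else "Exceeds the benchmark")
```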
Assessing Proprietary and Nonproprietary Solutions
Compatibility, conformance, and interoperability are related to the problem of balancing proprietary functionality and open networking flexibility. As a network designer, you might be forced to choose between implementing a multivendor environment and implementing a specific, proprietary capability. For example, the Interior Gateway Routing Protocol (IGRP) provides many useful capabilities, such as a number of features designed to enhance its stability. These include holddowns, split horizons, and poison reverse updates.
The negative side is that IGRP is a proprietary routing protocol. In contrast, the integrated Intermediate System-to-Intermediate System (IS-IS) protocol is an open networking alternative that also provides a fast converging routing environment; however, implementing an open routing protocol can potentially result in greater multivendor configuration complexity.
The decisions that you make have far-ranging effects on your overall network design. Assume that you decide to implement integrated IS-IS rather than IGRP. In doing this, you gain a measure of interoperability; however, you lose some functionality. For instance, you cannot load balance traffic over unequal parallel paths. Similarly, some modems provide a high level of proprietary diagnostic capabilities but require that all modems throughout a network be of the same vendor type to fully exploit proprietary diagnostics.
Previous networking investments and expectations for future requirements have considerable influence over your choice of implementations. You need to consider installed networking equipment; applications running (or to be run) on the network; traffic patterns; physical location of sites, hosts, and users; rate of growth of the user community; and both physical and logical network layout.
Assessing Costs
The network is a strategic element in your overall information system design. As such, the cost of your network is much more than the sum of your equipment purchase orders. View it as a total-cost-of-ownership issue. You must consider the entire life cycle of your networking environment. A brief list of costs associated with networks follows:
- Equipment hardware and software costs: Consider what is really being bought when you purchase your systems; costs should include initial purchase and installation, maintenance, and projected upgrade costs.
- Performance trade-off costs: Consider the cost of going from a 5-second response time to a half-second response time. Such improvements can cost quite a bit in terms of media selection, network interfaces, networking nodes, modems, and WAN services.
- Installation costs: Installing a site's physical cable plant can be the most expensive element of a large network. The costs include installation labor, site modification, fees associated with local code conformance, and costs incurred to ensure compliance with environmental restrictions (such as asbestos removal). Other important elements in keeping your costs to a minimum include developing a well-planned wiring-closet layout and implementing color-code conventions for cable runs.
- Expansion costs: Calculate the cost of ripping out all thick Ethernet, adding functionality, or moving to a new location. Projecting and accounting for your future requirements saves time and money.
- Support costs: Complicated networks cost more to monitor, configure, and maintain. Your network should be no more complicated than necessary. Costs include training, direct labor (network managers and administrators), sparing, and replacement costs. Additional costs that should be considered are out-of-band management, SNMP management stations, and power.
- Cost of downtime: Evaluate the cost of every minute that a user is unable to access a file server or a centralized database. If that cost is high, downtime must weigh heavily in your design decisions; if it is high enough, fully redundant networks might be your best option. (A sample downtime-cost calculation follows this list.)
- Opportunity costs: Every choice you make has an alternative, whether that alternative is a different hardware platform, topology solution, level of redundancy, or system-integration approach. Opportunity costs are the costs of not choosing one of those alternatives. The opportunity costs of not moving to newer technologies and topologies might include lost competitive advantage, lower productivity, and slower overall performance. Any effort to integrate opportunity costs into your analysis helps you make accurate comparisons at the beginning of your project.
- Sunk costs: Your investment in existing cable plant, routers, concentrators, switches, hosts, and other equipment and software represents your sunk costs. If these costs are high, you might need to modify your networks so that the existing infrastructure can continue to be used. Although comparatively low incremental costs might appear more attractive than a significant redesign, your organization might pay more in the long run by not upgrading systems. Relying too heavily on sunk costs when calculating the cost of network modifications and additions can cost your organization sales and market share.
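To put a number on the downtime item above, a calculation along the following lines is often sufficient; every figure in this sketch is an assumed placeholder that you would replace with your own estimates.

```python
# Rough annual cost of downtime. All inputs are assumed placeholder values.

outage_hours_per_year = 9          # roughly what a 99.9% availability level allows
affected_users = 250               # users who cannot work during an outage
loaded_cost_per_user_hour = 45.0   # salary, overhead, and lost productivity ($)

annual_downtime_cost = outage_hours_per_year * affected_users * loaded_cost_per_user_hour
print(f"Estimated annual cost of downtime: ${annual_downtime_cost:,.0f}")

# Compare this figure against the incremental cost of redundant links and
# hardware to decide whether a fully redundant design pays for itself.
```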
Estimating Traffic: Workload Modeling
Empirical workload modeling consists of implementing a working network and then monitoring traffic for a given number of users, applications, and network topology. Try to characterize activity throughout a normal workday in terms of the type of traffic passed, level of traffic, response time of hosts, time to execute file transfers, and so on. You can also observe utilization on existing network equipment over the test period.
If the tested network's characteristics are similar to a prospective network, you can try extrapolating to the prospective network's number of users, applications, and topology. This is a best-guess approach to traffic estimation given the unavailability of tools to characterize detailed traffic behavior.
In addition to passive monitoring of an existing network, you can measure activity and traffic generated by a known number of users attached to a representative test network and then extrapolate findings to your anticipated population.
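A minimal sketch of that extrapolation, assuming you already have average and busy-period measurements from the test network, might look like the following; the measured figures, peak factor, and user counts are all hypothetical.

```python
# Extrapolate measured per-user load from a test network to a larger population.
# All measured figures below are hypothetical.

test_users = 25
measured_avg_kbps = 300      # average aggregate load observed on the test network
measured_peak_factor = 3.5   # observed ratio of busy-period load to average load

per_user_avg_kbps = measured_avg_kbps / test_users

prospective_users = 400
projected_avg_kbps = per_user_avg_kbps * prospective_users
projected_peak_kbps = projected_avg_kbps * measured_peak_factor

print(f"Projected average load: {projected_avg_kbps:.0f} kbps")
print(f"Projected busy-period load: {projected_peak_kbps:.0f} kbps")

# This is a best-guess linear extrapolation; it ignores protocol overhead
# changes, new applications, and topology differences.
```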
One problem with modeling workloads on networks is that it is difficult to accurately pinpoint traffic load and network device performance as functions of the number of users, type of application, and geographical location. This is especially true without a real network in place. Consider the following factors that influence the dynamics of the network:
- The time-dependent nature of network access: Peak periods can vary; measurements must reflect a range of observations that includes peak demand.
- Differences associated with type of traffic: Routed and bridged traffic place different demands on network devices and protocols; some protocols are sensitive to dropped packets; some application types require more bandwidth.
- The random (nondeterministic) nature of network traffic: Exact arrival times and the specific effects of traffic are unpredictable. (A toy simulation illustrating this point follows the list.)
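The third point is easy to see in a small simulation. The sketch below is a toy model, not a description of any real network: each user independently decides whether to transfer data in a given interval, yet the busiest interval still carries noticeably more traffic than the average, which is why measurements must cover peak periods rather than rely on mean values.

```python
# Toy illustration of the random nature of traffic: even with a constant
# average activity level, some intervals carry much more traffic than others.
# The user count and activity probability are invented for illustration.
import random

random.seed(7)

users = 200
send_probability = 0.3   # chance that a given user is transferring data in an interval
intervals = 480          # one-minute samples over an eight-hour day

def interval_load() -> int:
    """Number of users active in one interval, simulated user by user."""
    return sum(1 for _ in range(users) if random.random() < send_probability)

samples = [interval_load() for _ in range(intervals)]
average = sum(samples) / len(samples)

print(f"Average active users per interval: {average:.1f}")
print(f"Busiest interval: {max(samples)} active users")
```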
Sensitivity Testing
From a practical point of view, sensitivity testing involves breaking stable links and observing what happens. When working with a test network, this is relatively easy. Disturb the network by removing an active interface, and monitor how the change is handled by the network: how traffic is rerouted, the speed of convergence, whether any connectivity is lost, and whether problems arise in handling specific types of traffic. You can also change the level of traffic on a network to determine the effects on the network when traffic levels approach media saturation. This empirical testing is a type of regression testing: A series of specific modifications (tests) is repeated on different versions of network configurations. By monitoring the effects of the design variations, you can characterize the relative resilience of the design.
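On a test network, these checks are typically scripted so that the same series of failures can be repeated against each design variation. The sketch below is a purely illustrative stand-in for that process: it represents an invented topology as a list of links, removes one link at a time, and reports whether every site can still reach every other site, which is the connectivity portion of the sensitivity test described above. On a real test network, the same loop would drive interface shutdowns and reachability probes instead of operating on a data structure.

```python
# Toy sensitivity test: fail each link in turn and check whether the
# topology stays connected. The topology is an invented example.
from collections import deque

links = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("B", "D"), ("D", "E")]
nodes = {n for link in links for n in link}

def is_connected(edges, nodes) -> bool:
    """Breadth-first search: can every node still reach every other node?"""
    adjacency = {n: set() for n in nodes}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in adjacency[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen == nodes

# Regression-style loop: repeat the same failure test for every link.
for removed in links:
    remaining = [link for link in links if link != removed]
    status = "still connected" if is_connected(remaining, nodes) else "connectivity lost"
    print(f"Fail link {removed[0]}-{removed[1]}: {status}")
```

In this invented topology, the meshed links among A, B, C, and D survive any single failure; only the stub link D-E is a single point of failure, which the test loop reports.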
NOTE
Using a computer to model sensitivity tests is beyond the scope of this book. A useful source for more information about computer-based network design and simulation is Andrew S. Tanenbaum's Computer Networks (Prentice Hall, 1996).