Understanding Networking in Bioinformatics
People seldom improve when they have no other model but themselves to copy after.
Oliver Goldsmith
If a data network is compared to a living organism, the hardware provides the skeleton or basic infrastructure upon which the nervous system is built. Similarly, a few hundred meters of cable running through the walls of a laboratory is necessary but insufficient to constitute a network. Rather, the data pulsing through cables or other media in a coordinated fashion define a network. This coordination is provided by the electronics that connect workstations and shared peripherals to the network and that amplify, route, filter, block, and translate data. Every competent bioinformatics researcher should have a basic understanding of the limits, capabilities, and benefits of specific network hardware, if only to be able to converse intelligently with hardware vendors or to direct the management of an information services provider.
According to chaos theory, the ability to adapt and the capacity for spontaneous self-organization are the two main characteristics of complex systems: systems that have many independent variables interacting with each other in many ways and that have the ability to balance order and chaos. In this regard, computer networks qualify as complex systems, always at the edge of failure, but still working. In some sense, it's difficult to define success and failure for these systems, in part because of the so-called law of unintended consequences, which stipulates that these systems can provide results so beneficial, so out of proportion to the intended "success," that they overshadow the significance of the intended goal. Consider that gunpowder was intended as an elixir to prolong life, that the adhesive on 3M Post-it Notes® was intended to be a superglue, that Edison's phonograph was intended to be a telephone message recorder, and that Jacquard's punch card was intended to automate the loom, not to give the computer its instructions or to determine presidential elections. Such is the case with the Internet, one of the greatest enabling technologies in bioinformatics, allowing researchers in laboratories anywhere on the globe to access data maintained by the National Center for Biotechnology Information (NCBI), the National Institutes of Health (NIH), and other government agencies.
The Internet was never intended to serve as the portal to the code of life; rather, it was a natural successor to the Cold War projects of the 1950s and early 1960s. During this time, the military establishment enjoyed the nearly unanimous respect and support of politicians and the public. Universities with the top science and engineering faculties received nearly unlimited funding, and the labors of the nation's top scientists filtered directly into industry. Military demand and government grants funded the development of huge projects that helped establish the U.S. as a mecca for technological developments in computing and communications networks.
The modern Internet was the unintended outcome of two early complex systems: the ARPANET (Advanced Research Projects Agency Network) and the SAGE (Semi-Automatic Ground Environment) system, developed for the military in the 1960s and the early 1950s, respectively. SAGE was the national air defense system, composed of an elaborate, ad hoc network of incompatible command and control computers, early warning radar systems, weather centers, air traffic control centers, ships, planes, and weapons systems. The communications network component of the SAGE system was comprehensive, extending beyond the borders of the U.S. to include ships and aircraft. It was primarily a military system, with a civil defense link as its only tie to the civilian communications system.
Government-sponsored R&D increasingly required reliable communications among industry, academia, and the military. Out of this need, and spurred by the fear that a nuclear attack could disrupt the civilian communications grid, a group of scientists designed a highly redundant communications system, starting with a single node at UCLA in September of 1969. By 1977, the ARPANET stretched across the U.S. and extended from Hawaii to Europe. The ARPANET quickly grew and became more complex, with an increasing number of nodes and redundant cross-links that provided alternate communications paths in the event that any particular node or link failed.
Although the ARPANET's infrastructure was an interdependent network of nodes and interconnections, the data available from the network was indistinguishable from data available from any standalone computer. The infrastructure of the system provided redundant data communications, but no quick and intuitive way for content authors to cross-link data throughout the network for later access: the mechanism that allows today's Internet users to search for information. In 1990, the ARPANET was retired in favor of the National Science Foundation Network (NSFNET), which connected the NSF's supercomputer centers to regional networks and went on to serve as the high-speed backbone of the Internet.
Fortunately, and apparently coincidentally, during the period of military expansion in the 1950s and 1960s, federally funded researchers at academic institutions explored ways to manage the growing store of digital data amid an increasingly complex web of computers and networks. One development was hypertext, a cross-referencing scheme in which a word in one document is linked to a word in the same or a different document.
Around the time the ARPANET was born, a number of academic researchers began experimenting with computer-based systems that used hypertext. For example, in the early 1970s, a team at Carnegie Mellon University developed ZOG, a hypertext-based system that was eventually installed on a U.S. aircraft carrier. ZOG was a reference application that provided the crew with richly cross-linked online documentation, improving the speed and efficiency of locating data relevant to operating shipboard equipment.
In addition to applications for the military, a variety of commercial, hypertext-based document management systems were spun out of academia and commercial laboratories, such as the Owl Guide hypertext program from the University of Kent, England, and the Notecards system from Xerox PARC in California. Both of these systems were essentially stand-alone equivalents of a modern Web browser, but based on proprietary document formats with content limited to what could be stored on a hard drive or local area network (LAN). The potential market for these products was limited because of specialized hardware requirements. For example, the initial version of Owl Guide, which predated Apple's HyperCard hypertext program, was only available for the Apple Macintosh. Similarly, Notecards required a Xerox workstation running under a LISP-based operating system. These and other document management systems allowed researchers to create limited Web-like environments, but without the advantage of the current Web of millions of documents authored by others.
In this circuitous way, out of the quest for national security through an indestructible communications network, the modern Internet was born. Today, the Internet connects bioinformatics researchers in China, Japan, Europe, and worldwide, regardless of political or national affiliation. It provides not only communications, including e-mail, videoconferencing, and remote information access, but also, together with other networks, resource sharing and alternate, reliable sources of bioinformatics data.
As an example of how important networks are in bioinformatics R&D, consider that the typical microarray laboratory involved in creating genetic profiles for custom drug development and other purposes generates huge amounts of data. Not only does an individual microarray experiment generate thousands of data points, usually in the form of 16-bit TIFF (Tagged Image File Format) files, but the experimental design leading up to the experiments, including gene data analysis, involves access to volumes of timely data as well. Furthermore, analysis and visualization of the experimental data require that the data be seamlessly and immediately available to other researchers.
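To get a feel for the volume involved, the following back-of-the-envelope calculation, written in Python, estimates the raw image data produced by a single two-channel scan and by a day of scanning. The array dimensions, channel count, and daily throughput are illustrative assumptions rather than figures for any particular instrument.

    # Rough estimate of the raw image data produced by microarray scanning.
    # The dimensions, channel count, and daily throughput below are assumed
    # for illustration only.
    BYTES_PER_PIXEL = 2        # 16-bit grayscale TIFF
    IMAGE_WIDTH = 4000         # pixels (assumed scan resolution)
    IMAGE_HEIGHT = 13000       # pixels
    CHANNELS = 2               # e.g., two fluorescence channels
    SCANS_PER_DAY = 20         # assumed laboratory throughput

    bytes_per_scan = IMAGE_WIDTH * IMAGE_HEIGHT * BYTES_PER_PIXEL * CHANNELS
    gigabytes_per_day = bytes_per_scan * SCANS_PER_DAY / 1e9

    print(f"One two-channel scan: {bytes_per_scan / 1e6:.0f} MB uncompressed")
    print(f"Daily image volume:   {gigabytes_per_day:.1f} GB")

Even under these modest assumptions, the laboratory produces several gigabytes of image data per day, all of which must move across the network to the analysis workstations and the archive.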
The scientific method involves not only formulating a hypothesis and then generating creative and logical ways of supporting or refuting it, but also ensuring that the hypothesis will withstand the scrutiny of others. Results must be verifiable and reproducible under similar conditions in different laboratories. One of the challenges of working with microarrays is that there is still considerable art involved in creating meaningful results. Results are often difficult to reproduce, even within the same laboratory. Fortunately, computational methods, including statistical methods, can help identify and control for some sources of error.
As shown in Figure 3-1, computers dedicated to experimental design, scanning and image analysis, expression analysis, and gene data manipulation support the typical microarray laboratory. The microarray device is only one small component of the overall research and design process. For example, once the experiment is designed using gene data gleaned from an online database, the microarray containing the clones of interest has to be designed and manufactured. After hybridization with cDNA or RNA from tissue samples, the chips are optically scanned, and the relative intensity of the fluorescent markers on the images is analyzed and stored. The data are subsequently subjected to further image processing and gene expression analysis.
Figure 3-1 Microarray Laboratory Network. The computers in a typical microarray laboratory present a mixture of data formats, operating systems, and processing capabilities. The network in this example, a wired and wireless local area network (LAN), supports the microarray laboratory processes, from experimental design and array fabrication to expression analysis and publishing of results.
In this example, the server provides a gateway, or access point, to the Internet for reaching the national databases used in gene data analysis. Individual computers, running different operating systems, share access to data from the microarray image scanner as soon as it's generated. For example, even though a workstation may be running MacOS, UNIX, Linux, or some version of the Windows operating system, and the microarray image scanner controller operates under a proprietary operating system, the network provides a common communications channel for sharing and capturing data from the experiment as well as making sense of it through computer-based analysis. The network also supports the sharing of resources, such as printers, modems, plotters, and other networked peripherals. In addition, a wireless extension of the network allows the researchers to share the wireless laptop for manipulating the data, such as by transforming spot data from the image analysis workstation into array data that can be manipulated by a variety of complex data-manipulation utilities. In this context, the purpose of the LAN is to provide instantaneous connectivity between the various devices in the laboratory, thereby facilitating the management, storage, and use of the data.
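As a concrete, if simplified, illustration of this kind of data sharing, the following Python sketch shows how an analysis workstation might poll a shared network folder for new scanner output. The mount point, polling interval, and file extensions are assumptions; an actual laboratory would use whatever share the scanner controller exports (for example, over SMB or NFS).

    # Minimal sketch: watch a shared network folder for new scanner output.
    # The mount point and file extensions are hypothetical.
    import os
    import time

    SHARED_DIR = "/mnt/scanner_share"   # hypothetical network mount
    POLL_SECONDS = 30

    def watch_for_scans():
        seen = set()
        while True:
            for name in os.listdir(SHARED_DIR):
                if name.lower().endswith((".tif", ".tiff")) and name not in seen:
                    seen.add(name)
                    print(f"New scan available for analysis: {name}")
                    # hand off to image-processing or expression-analysis code here
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        watch_for_scans()

The point is not the polling loop itself but that every workstation on the LAN, regardless of operating system, sees the same files the moment the scanner writes them.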
Consider the process without the network depicted in Figure 3-1. The gene analysis workstation would have to be connected directly to the Internet, a potentially dangerous proposition without a software or hardware firewall, or safety barrier, to guard against potential hackers. Similarly, the results of any analysis would have to be separately archived to a floppy, Zip® disk, or CD-ROM. In addition, sharing experimental data would require burning a CD-ROM or using other media compatible with the other workstations in the laboratory. Simply attaching a data file to an e-mail message or storing it in a shared or open folder on the server would be out of the question. Data could also be shared through printouts, but because the computers aren't part of a network, each workstation would require its own printer, plotter, modem, flatbed scanner, or other peripherals. For example, unless the expression analysis workstation has its own connection to the Internet, results of the experiment can't be easily communicated to collaborating laboratories or even to the department in an adjoining building. Furthermore, even though many of the public online bioinformatics databases accept submissions on floppy or other media, the practice is usually frowned upon in favor of electronic submission.
Without the wireless component of the LAN, researchers in the lab would not be able to instantly explore the data generated by the scanning and analysis workstation, but would have to wait until the other researchers operating a workstation have time to write the data to a disk or other media. More importantly, every workstation operator would be responsible for backing up and archiving their own data, a time-consuming, high-risk proposition. It's far more likely, for example, that a researcher in the laboratory will fail to manually archive local data on a regular basis than it is for a central, automated backup system to fail.
This brief tour of this prototypical microarray laboratory highlights several applications of networks in bioinformatics. The underlying advantage of the network is the ability to move data from one computer to another as quickly, transparently, and securely as possible. This entails accessing online databases, publishing findings, communicating via e-mail, working with other researchers through integrated networked applications known as groupware, and downloading applications and large data sets from online sources via file transfer protocol (FTP) and other methods.
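As a minimal illustration of the last of these, the following Python sketch retrieves a file from the NCBI FTP server using only the standard library. The host name is real, but the directory and file names are placeholders; consult the server's listings for the actual paths of the data sets you need.

    # Illustrative FTP download using the Python standard library.
    # The directory and file names are placeholders.
    from ftplib import FTP

    HOST = "ftp.ncbi.nlm.nih.gov"
    REMOTE_DIR = "/genomes"          # placeholder directory
    REMOTE_FILE = "README.txt"       # placeholder file name

    with FTP(HOST) as ftp:
        ftp.login()                  # anonymous login
        ftp.cwd(REMOTE_DIR)
        with open(REMOTE_FILE, "wb") as out:
            ftp.retrbinary(f"RETR {REMOTE_FILE}", out.write)

    print(f"Downloaded {REMOTE_FILE} from {HOST}{REMOTE_DIR}")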
Although many of these features can be had by simply plugging in a few network cards and following a handful of instruction manuals, chances are that several key functions won't be available without considerably more knowledge of network technology. For example, selecting and configuring a network requires that someone make educated decisions regarding bandwidth, reliability, security, and cost. Furthermore, the mixed operating system environments typical of bioinformatics laboratories, which tend to have at least one workstation running Linux or UNIX, present challenges not found in generic office networks.
What's more, it may not be obvious from the simple network depicted in Figure 3-1 that bioinformatics networks present unique networking challenges that typically can't be addressed by generic network installations. The first is the sheer amount of data involved. The network isn't handling the short e-mail messages typical of the corporate environment, but massive sequence strings, images, and other data. In addition, unlike networks that support traditional business transaction processing, data are continually flowing from disk arrays, servers, and other sources to computers for processing because the data can't fit into computer RAM. As a result, the network and external data sources are in effect extensions of the computer bus, and the performance of the network limits the overall performance of the system. It doesn't matter whether the computer processor is capable of processing several hundred million operations per second if the network feeding data from the disks to the computer has a throughput of only 45 Mbps.
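A quick calculation makes the point. Assuming, for illustration, a 10 GB working data set, the Python snippet below shows how long that data takes to cross a 45 Mbps link, no matter how fast the processor on the receiving end may be.

    # Back-of-the-envelope illustration of the network as the bottleneck.
    # The data-set size is an assumption chosen to make the arithmetic concrete.
    DATASET_GB = 10            # assumed size of the working data set
    LINK_MBPS = 45             # throughput quoted above

    dataset_bits = DATASET_GB * 8e9
    transfer_seconds = dataset_bits / (LINK_MBPS * 1e6)

    print(f"Moving {DATASET_GB} GB over a {LINK_MBPS} Mbps link takes about "
          f"{transfer_seconds / 60:.0f} minutes, regardless of processor speed.")

Under these assumptions the transfer alone takes roughly half an hour, during which the processor is largely idle.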
This chapter continues the exploration of the Internet, intranets, wireless systems, and other network technologies that apply directly to sharing, manipulating, and archiving sequence data and other bioinformatics information. The following sections explore network architecture: how a network is designed, how the components on the system are connected to the network, and how the components interact with each other. As illustrated in Figure 3-2, this includes examining networks from the perspective of:
Geographical scope
Underlying model or models used to implement the network
Signal transmission technology
Bandwidth or speed
Physical layout or topology
Protocol or standards used to define how signals are handled by the network
Ownership or funding source involved in network development
Hardware, including cables, wires, and other media used to provide the information conduit from one device to the next
Content carried by the network
Figure 3-2 Network Taxonomy. Networks can be characterized along a variety of parameters, from size or geographical scope to the contents carried by the network.
This chapter also explores the practical network implementation issues, especially network security, and considers the future of network technology.
Geographical Scope
The geographical extent of a network is significant because it affects bandwidth, security, response time, and the type of computing possible. For example, it is only because of the high-speed Internet backbone that real-time teleconferencing and model sharing are possible on a worldwide basis.
Although the geographical boundaries are somewhat arbitrary, networks are commonly referred to as personal area networks (PANs), LANs, metropolitan area networks (MANs), or wide area networks (WANs), as depicted in Figure 3-3. Although many networks are interconnected, they can also function alone.
Figure 3-3 Network Geographical Scope. Bioinformatics R&D incorporates network resources on worldwide (WAN), institution-wide (MAN), and laboratory-wide (LAN and PAN) levels.
PANs, which are limited to the immediate proximity of the user, or about a 10-meter radius, are typically constructed using wireless technology. LANs extend to about 100 meters from a central server, or a single floor in a typical research building. MANs take over where LANs leave off, covering entire buildings and extending tens of kilometers. MANs are typically implemented with digital subscriber line (DSL), cable modem, and fixed wireless technologies. WANs extend across continents and around the globe, and are typically composed of a combination of terrestrial and satellite links, coaxial cable, and fiber-optic cable. The public switched telephone network and the Internet are examples of WANs.
Grid computing, in which multiple PCs are interconnected to form a distributed supercomputer, can take advantage of LAN, MAN, and WAN technology, as a function of computer processing speed, network connection bandwidth, and the effectiveness of the software that coordinates activities among computers on the grid. For example, relatively slow-speed DSL and cable modem connections are used by many of the experimental grid systems, such as the Folding@home project at Stanford University. The system uses standard DSL and cable modem networks, which provide between 125 Kbps and 1 Mbps throughput, to connect over 20,000 PCs to form a grid computer. The higher-speed grid systems are necessarily limited to MAN distances using conventional Internet connections or WAN distances with much higher-speed network connections. For example, the Department of Energy's Science Grid project is based on a 622 Mbps fiber network running a suite of software that includes Globus grid software.
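To put these figures in perspective, the short Python calculation below compares how long a hypothetical 100 MB grid work unit takes to traverse a 1 Mbps DSL or cable modem connection versus a 622 Mbps fiber link; the work-unit size is an illustrative assumption, not a figure from either project.

    # Rough comparison of work-unit transfer times over the two link classes
    # mentioned above. The 100 MB work-unit size is an illustrative assumption.
    WORK_UNIT_MB = 100
    LINKS_MBPS = {
        "DSL/cable modem (1 Mbps)": 1,
        "Fiber backbone (622 Mbps)": 622,
    }

    for name, mbps in LINKS_MBPS.items():
        seconds = WORK_UNIT_MB * 8e6 / (mbps * 1e6)
        print(f"{name}: about {seconds:,.0f} seconds per {WORK_UNIT_MB} MB work unit")

The difference, minutes versus seconds per work unit, is one reason the tightly coupled, high-speed grids are confined to MAN and dedicated WAN links, while loosely coupled projects such as Folding@home tolerate consumer-grade connections.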