- Ubiquitous Computing
- Web Services
- The Semantic Web
- Spaces Computing
- Peer-to-Peer Computing
- Collaborative Computing
- Dependable Systems
- Security
- Languages
- Pervasive Computing
- Cluster Concepts
- Distributed Agents
- Distributed Algorithms
- Distributed Databases
- Distributed Filesystems
- Distributed Media
- Distributed Storage
- Grid Computing
- Massively Parallel Systems
- Middleware
- Mobile and Wireless Computing
- Network Protocols
- Operating Systems
- Real-Time and Embedded Systems
- Commentary
- Endnotes
Massively Parallel Systems
In the early 1990s, parallel processing systems started to emerge from the shadow of the vector processing supercomputers that they would complement and arguably replace.[30] Parallel processing began with dozens of ordinary processors connected so that each could simultaneously perform calculations on a different unit of data drawn from some larger problem.
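As a rough illustration of that decomposition, the following sketch splits one large array into blocks and computes each block's partial sum on a separate processor before combining the results. It is a minimal, shared-memory stand-in for the idea (real MPS hardware distributed the units across separate processors and memories), and the class and variable names are invented here for illustration only.

```java
// A minimal sketch of the data-parallel idea: one large problem split into
// units, each unit computed simultaneously on a different processor.
// Here, summing a large array in equal blocks. (Illustrative only.)
import java.util.concurrent.*;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        double[] data = new double[10_000_000];
        java.util.Arrays.fill(data, 1.0);

        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        int block = data.length / workers;

        // Each task sums its own block -- a "different unit of data
        // from some larger problem," as described above.
        java.util.List<Future<Double>> parts = new java.util.ArrayList<>();
        for (int w = 0; w < workers; w++) {
            final int lo = w * block;
            final int hi = (w == workers - 1) ? data.length : lo + block;
            parts.add(pool.submit(() -> {
                double sum = 0.0;
                for (int i = lo; i < hi; i++) sum += data[i];
                return sum;
            }));
        }

        double total = 0.0;
        for (Future<Double> p : parts) total += p.get();  // combine partial results
        pool.shutdown();
        System.out.println("total = " + total);
    }
}
```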
Thinking Machines Corporation was founded in 1983 with the intention of providing compute resources to support the always-nascent artificial intelligence field. Business concerns led TMC to adjust course, and by around 1990 it had become the market leader for massively parallel systems. TMC's earlier Connection Machines were built on a single-instruction stream, multiple-data stream (SIMD) architecture; its Connection Machine 5 (CM-5) was among the first large-scale, massively parallel systems, adopting a multiple-instruction stream, multiple-data stream (MIMD) design.[31] The CM-5 was an all-Sun Microsystems play, featuring a Sun 2000 SMP front-end compile server and a bevy of Sun workstations with SPARC microprocessors running under the Solaris operating environment.
TMC is no more. Though it led the massively parallel systems market for a short time, its business model was evidently lacking. But the concepts and history of massively parallel systems echo into the Network Age, with grid computing, languages, and distributed algorithms all beneficiaries of early massively parallel systems (MPS).
Computing approaches like that of SETI@home are similar in many respects to MPS systems. The SETI (Search for Extra-Terrestrial Intelligence) project began in 1984 with a single mainframe system to analyze the data harvested from radio antennae around the world. The ability of one system, regardless of its power, to adequately analyze data across a wide range of frequencies and a wide swath of pattern probabilities is very limited. But a project like SETI is an easy victim of budget cuts during political cycles when such matters make for visible policy adjustments; no funds were available to buy the processing capability needed to do the job adequately.

Enter SETI@home, conceived in 1995 and launched in 1999. With the growing deployment of home PCs, which consume electricity whether their CPUs are busy or not, the advent of the commercial Internet and the browser gave rise to a resource ripe for harvest. The SETI@home project capitalized on this opportunity. It distributed a free screen saver that was in fact an application performing CPU-intensive calculations on discrete units of data gathered from myriad antennae, thereby harnessing an indeterminately large number of otherwise wasted cycles in a worldwide NDC massively parallel system. Other efforts with similar needs will inevitably follow.
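To make the work-unit model concrete, here is a minimal, hypothetical sketch of the same pattern: a coordinator splits incoming data into discrete units, and volunteer "clients" (simulated here as threads standing in for home PCs) each fetch a unit, perform the CPU-intensive analysis during otherwise idle cycles, and report a result back. The names and the toy "analysis" are assumptions made for illustration; this is not the actual SETI@home software or protocol.

```java
// A minimal sketch (hypothetical names, not the actual SETI@home protocol)
// of the work-unit model: a coordinator splits antenna data into discrete
// units, volunteer clients fetch a unit, burn otherwise idle CPU cycles
// analyzing it, and report a result back.
import java.util.concurrent.*;

public class VolunteerComputingSketch {
    // One discrete unit of data to analyze, e.g. a slice of radio spectrum.
    record WorkUnit(int id, double[] samples) {}
    record Result(int id, double peakSignal) {}

    public static void main(String[] args) throws Exception {
        BlockingQueue<WorkUnit> pending = new LinkedBlockingQueue<>();
        BlockingQueue<Result> completed = new LinkedBlockingQueue<>();

        // Coordinator: split the "antenna data" into independent work units.
        int units = 8;
        for (int i = 0; i < units; i++) {
            double[] samples = new double[1_000_000];
            for (int j = 0; j < samples.length; j++) samples[j] = Math.random();
            pending.add(new WorkUnit(i, samples));
        }

        // Volunteer clients: each thread stands in for one home PC that
        // repeatedly fetches a unit and does the CPU-intensive analysis.
        int volunteers = 4;
        ExecutorService pool = Executors.newFixedThreadPool(volunteers);
        for (int v = 0; v < volunteers; v++) {
            pool.submit(() -> {
                WorkUnit unit;
                while ((unit = pending.poll()) != null) {
                    double peak = 0.0;
                    for (double s : unit.samples()) peak = Math.max(peak, s);
                    completed.add(new Result(unit.id(), peak));
                }
            });
        }

        // Coordinator gathers results as they trickle in from the "network".
        for (int i = 0; i < units; i++) {
            Result r = completed.take();
            System.out.println("unit " + r.id() + " peak = " + r.peakSignal());
        }
        pool.shutdown();
    }
}
```

The essential property is that the units are independent of one another, so the aggregate of many modest, loosely coupled machines behaves like one very large parallel system.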