Studying Design Rationales of Exemplars
How should designers study exemplars in their fields? To study an architecture, one can read the manual. To study an implementation, one can read maintenance documentation. But for the rationales, one has to study the technical papers and books written about the products.
Most technical papers, however, emphasize the whats and give skimpy coverage to the whys. And many designs never get explicated by their original designers at all; the creators are too busy with their next designs.
The exceptions cluster around the early days of any technology and around later revolutions, when approaches vary widely and debates are hot. These papers, like reports of military victories, are always after the fact, and they are usually rationalized; that is, they are far more rational in retrospect than was the actual design process. For most of us, that process was rich with potholes, blind alleys, mistaken turns, and alterations of goals. We learn a lot from the few exceptions to this post hoc smoothing.
Computer processor architecture provides a fruitful example for a study of exemplars. The technology is recent enough that there were many venues and outlets for descriptions. The field began with a wide diversity of design approaches and has converged to a “standard architecture.” Blaauw and I elaborate on this evolution in Chapter 9 of our Computer Architecture [1997]. Revolutions—virtual memory, minicomputers, microcomputers, and RISC architectures—punctuated the historical development. Each occasioned fresh debates and hence stimulated fuller rationales.
First-Generation Computers
The most important computer paper ever written is:
Burks, Goldstine, and von Neumann [1946], “Preliminary discussion of the logical design of an electronic computing instrument.”
It is an incredible piece of work—must reading for every computer scientist. It cogently sets forth the stored-program concept, the three-register arithmetic unit, and many other ideas besides. The coverage is complete; the reasoning, compelling.
Maurice Wilkes says of an earlier draft,
I sat up late into the night reading the report. ... I recognized this at once as the real thing, and from that time on never had any doubt as to the way computer development would go.4
Wilkes further says there that this paper sets forth the ideas generated at the University of Pennsylvania in discussions among Presper Eckert, John Mauchly, and John von Neumann. He regrets that the extremely fruitful ideas are usually credited to von Neumann alone and has been at some pains to correct this misunderstanding.
After “Preliminary discussion” appeared, many groups in many places started building stored-program computers, using vacuum-tube logic. The first successes were at Manchester, with a running but unusably small Baby, and at Cambridge, with the first useful stored-program machine, the EDSAC. These rationales are very well documented: Williams [1948], “Electronic digital computers”; Wilkes [1949], “The EDSAC.”
The most important early supercomputers are the IBM Stretch and the Control Data CDC 6600. Buchholz [1962], Planning a Computer System: Project Stretch, gives mostly rationale papers. However, the most noteworthy paper is Chapter 17, which describes a radically different sort of computer—a data-streaming coprocessor designed for cryptanalytic use—with hardly any description of the application or rationale for machine features.
The CDC 6600 quickly succeeded the Stretch as the world’s fastest computer and came to dominate scientific supercomputing. It is the ancestor of the Cray family of supercomputers. Thornton [1970], The Design of a Computer—The CDC 6600, gives lots of rationale.
Third-Generation Computers
Second-generation computer architectures ran out of gas; that is, they lacked enough address bits to handle the large memories that had become economical and indispensable. An incompatible break in many product lines’ architectures became inevitable, although painful. Fortunately, integrated circuits provided a large improvement in realization cost, and high-level languages enabled recompilation, so that the switch to new architectures could be afforded. New architectures occasioned new rationales.
Blaauw and Brooks [1997], Computer Architecture, while not a rationale book, nevertheless includes rationales for many of the System/360 architectural decisions. Those are the examples we could explicate from personal knowledge. Amdahl [1964] and Blaauw [1964] give abbreviated synopses of the System/360 rationale.
Virtual Memory
The Manchester Atlas introduced the automatic paging of blocks of instructions and data from a slower backing store into a smaller high-speed memory. Developers of time-sharing operating systems at Michigan and MIT soon proposed generalizing this concept into a full-fledged virtual memory, with a vast namespace. GE and IBM built such computers. Again a revolution; again new rationales: Sumner [1962] (Atlas), Dennis [1965], Arden [1966].
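The mechanism behind this concept can be made concrete with a small sketch. What follows is a minimal, hypothetical single-level page table in C; it is not drawn from the Atlas or Multics designs, and the page size, table size, and the translate helper are assumptions chosen only to illustrate how a large virtual namespace is mapped, page by page, onto a small fast memory, with misses serviced from the slower backing store.

    /* Illustrative sketch only: a single-level page table.
     * Page size (4 KiB), table size, and names are hypothetical. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12                    /* assume 4 KiB pages */
    #define PAGE_SIZE (1u << PAGE_BITS)
    #define NUM_PAGES 256                   /* small virtual space for the sketch */

    typedef struct {
        bool     present;                   /* is the page in fast memory now? */
        uint32_t frame;                     /* physical frame number if present */
    } pte_t;

    static pte_t page_table[NUM_PAGES];

    /* Translate a virtual address to a physical one. A miss is where the
     * hardware would trap and fetch the page from the backing store. */
    static bool translate(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t vpn    = vaddr >> PAGE_BITS;       /* virtual page number */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within the page */

        if (vpn >= NUM_PAGES || !page_table[vpn].present)
            return false;                           /* page fault */

        *paddr = (page_table[vpn].frame << PAGE_BITS) | offset;
        return true;
    }

    int main(void)
    {
        page_table[3] = (pte_t){ .present = true, .frame = 7 };

        uint32_t paddr;
        if (translate(3 * PAGE_SIZE + 0x10, &paddr))
            printf("virtual 0x%x -> physical 0x%x\n", 3 * PAGE_SIZE + 0x10, paddr);
        else
            printf("page fault\n");
        return 0;
    }

Real machines of the period layered associative lookup and replacement policies on top of this translation step; the sketch shows only the core idea of mapping a large address space onto a small memory.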
The Minicomputer Revolution
Transistor-diode logic offered a radically cheaper way of realizing computers. One such machine, the DEC PDP-8, changed the world by putting a computer within the budget and control of individual departments, not just whole institutions. This sociological advance was at least as important as the technological performance/cost advance. Minicomputers were made by the thousands, coexisting with, rather than replacing, the so-called mainframes.
The mainframe makers were content with their business models, and—fat, dumb, and happy—they universally missed the minicomputer revolution. Many new computer makers started up. The most successful was Digital Equipment Corporation. Bell [1978] treats the rationales and evolution of DEC’s minicomputers.
The Microcomputer and RISC Revolutions
A similar sociological and technological revolution took place with integrated circuits. Radically lower costs meant that individuals, rather than departments, could have and control their own personal machines. Microcomputers are made by the millions.
This time it was the minicomputer makers, quite successful at what they were doing, who were fat, dumb, and happy. They missed the microcomputer revolution. Hewlett-Packard survived; DEC did not. Some of the mainframe makers, notably IBM, got back into the game and became major suppliers of personal microcomputers.
Again, these revolutions spawned a cascade of rationales: Hoff [1972] (one-chip CPU), Patterson [1981] (RISC I), Radin [1982] (IBM 801).5
Experts in other disciplines can readily develop similar lists, giving the flow of history, the revolutions, and the documented milestone exemplars.