Reuse
Fact 15
Reuse-in-the-small (libraries of subroutines) began nearly 50 years ago and is a well-solved problem.
Discussion
There is a tendency in the computing world to assume that any good idea that comes along must be a new idea. Case in point: reuse.
Truth to tell, the notion of reuse is as old as the software field. In the mid-1950s, a user organization for scientific applications of IBM "mainframes" (that term was not used in those days) was formed. One of its most important functions was serving as a clearinghouse for contributed software subroutines. The organization was called Share, appropriately enough, and the contributed routines became the first library of reusable software. The way to gain fame, back in those early computing days, was to be known as someone who contributed good quality routines to the library. (It was not, however, a way of gaining fortune. Back in those days, software had no monetary value; it was given away free with hardware. Note, here, another good idea that is not new: open-source or freeware software.)
Now those early libraries of software routines contained what we today would call reuse-in-the-small routines. Math functions. Sorts and merges. Limited-scope debuggers. Character string handlers. All those wonderful housekeeping capabilities that most programmers needed (and still need) at one time or another. In fact, my first brush with (extremely limited!) fame came in contributing a debug routine addendum to the Share library.
Reuse was built into the software development process, back in those days. If you were writing a program that needed some kind of common capability, you went first to the Share library to see if it already existed. (Other user groups, like Guide and Common, probably had their own libraries for their own application domains. I was not a business application programmer at that time, so I don't really know whether Guide and Common functioned like Share.) I remember writing a program that needed a random number generator and going to the Share library to find one I could use. (There were plenty of them, from your basic random number generator to those that generated random numbers to some predictable pattern, like fitting a normal curve.)
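What did such library routines look like? As a purely illustrative sketch (in modern Python rather than the assembly language of the Share era, and not an actual Share routine), here is the kind of self-contained, normally distributed random number generator a programmer might have pulled from a library instead of writing from scratch:

import math
import random

def normal_random(mean=0.0, std_dev=1.0):
    """Return one random number drawn from a normal (Gaussian) distribution.

    Uses the Box-Muller transform: two uniform random numbers in (0, 1]
    are combined to produce one normally distributed value, which is then
    scaled to the requested mean and standard deviation.
    """
    u1 = 1.0 - random.random()   # in (0, 1], avoids log(0)
    u2 = random.random()
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    return mean + std_dev * z

# A caller reuses the routine rather than reinventing it.
samples = [normal_random(mean=100.0, std_dev=15.0) for _ in range(5)]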
Reuse in those days was catch-as-catch-can, with no quality control on what was placed in the library. However, having your name attached to a Share library routine was a big deal, and you worked very hard to make sure your contribution was error-free before you submitted it. I don't remember any quality problems with reused Share routines.
Why this trip down memory lane? Because it is important, in trying to understand the reuse phenomenon and its status today, to realize that this is a very old and very successful idea. Following the success of reuse-in-the-small, and in spite of efforts to expand that concept into larger components, the state of reuse remained fairly constant over the years. Why that is will be discussed in Fact 16.
Controversy
The primary controversy here is that too many people in the computing field think that reuse is a brand-new idea. As a result, there is enormous (and often hyped) enthusiasm for this concept, an enthusiasm that would be more realistic if people understood its history and its failure to grow over the years.
Sources
This memory of early days' reuse is very vivid for me. In fact, the best account of this phenomenon is in my own personal/professional reflection (Glass 1998) (he immodestly said). The Share organization (it still functions today) would be another place to find documentation of its early days (it actually produced what we would today call a tools and parts catalog, wherein potential users could find out what modules were available to them, organized by the problem those modules solved).
Reference
Glass, Robert L. 1998. "Software Reflections: A Pioneer's View of the History of the Field." In In the Beginning: Personal Recollections of Software Pioneers. Los Alamitos, CA: IEEE Computer Society Press.
Fact 16
Reuse-in-the-large (components) remains a mostly unsolved problem, even though everyone agrees it is important and desirable.
Discussion
It is one thing to build useful small software components. It is quite another to build useful large ones. In Fact 15, we saw that the reuse-in-the-small problem was solved more than 40 years ago. But the reuse-in-the-large problem has remained unsolved over those same intervening years.
Why is that? Because there are a lot of different opinions on this subject, I address this "why" question in the Controversy section that follows.
But the key word in understanding this problem is the word useful. It is not very difficult to build generalized, reusable routines. Oh, it is more difficult (some say three times more difficult) than it is to build comparable special-purpose routines; there's a fact, Fact 18, that covers this. But that is not a prohibitive barrier. The problem is, once those reusable modules are built, they have to do something that truly matches a great variety of needs in a great variety of programs.
And there's the rub. We see in the discussion of the controversy surrounding this topic that (according to one collection of viewpoints, at least) a diverse collection of problems to be solved results in a diverse set of component needs: too diverse, at least at this time, to make reuse-in-the-large viable.
Controversy
There is considerable controversy surrounding the topic of reuse-in-the-large. First, advocates see reuse-in-the-large as the future of the field, a future in which programs are screwed together from existing components (they call it component-based software engineering). Others, typically practitioners who understand the field better (there's no bias in that comment!), pooh-pooh the idea. They say that it is nearly impossible to generalize enough functions to finesse the need for special-purpose components fitted to the problem at hand.
The resolution of this particular controversy falls into a topic that might be called software diversity. If there are enough common problems across projects and even application domains, then component-based approaches will eventually prevail. If, as many suspect, the diversity of applications and domains means that no two problems are very similar to one another, then only those common housekeeping functions and tasks are likely to be generalized, and they constitute only a small percentage of a typical program's code.
There is one source of data to shed light on this matter. NASA-Goddard, which over the years has studied software phenomena at its Software Engineering Laboratory (SEL) and which services the very limited application domain of flight dynamics software, has found that up to 70 percent of its programs can be built from reused modules. Even the SEL, however, sees that fact as a function of having a tightly constrained application domain and does not anticipate achieving that level of success across more diverse tasks.
Second, there is a controversy in the field as to why reuse-in-the-large has never caught on. Many, especially academics, believe it is because practitioners are stubborn, applying the "not-invented-here" (NIH) syndrome to allow them to ignore the work of others. Most people who believe in NIH tend to view management as the problem, and the eventual solution. From that point of view, the problems of reuse-in-the-large are about will, not skill. It is management's task, these people say, to establish policies and procedures that foster reuse to create the necessary will.
In fact, few claim that there is a problem of skill in reuse. Although it is generally acknowledged that it is considerably more difficult to build a generalized, reusable version of a capability than its ad hoc alternative, it is also generally acknowledged that there is no problem in finding people able to do that job.
My own view, which contradicts both the NIH view and the will-not-skill view, is that the problem is close to being intractable. That is, because of the diversity problem mentioned earlier, it is the exception rather than the rule to find a component that would be truly generalizable across a multiplicity of applications, let alone domains. My reason for holding that view is that over the years one of the tasks I set for myself was to evolve reuse-in-the-small into reuse-in-the-large. I sought and tried to build reuse-in-the-large components that would have all the widespread usefulness of those reuse-in-the-small routines from the Share library. And I came to understand, as few today seem to understand, how difficult a task that really is. For example, knowing that one of the bread-and-butter tools in the Information Systems application domain was the generalized report generator, I tried to produce the analogous capability for the scientific/engineering domain. Despite months of struggle, I could never find enough commonality in the scientific/engineering report generation needs to define the requirements for such a component, let alone build one.
In my view, then, the failure of reuse-in-the-large is likely to continue. It is not an NIH problem. It is not a will problem. It is not even a skill problem. It is simply a problem too hard to be solved, one rooted in software diversity.
No one wants me to be correct, of course. Certainly, I don't. Screwed-together components would be a wonderful way to build software. So would automatic generation of code from a requirements specification. And neither of those, in my view, is ever likely to happen in any meaningful way.
Sources
There are plenty of sources of material on reuse-in-the-large, but almost all of them present the viewpoint that it is a solvable problem.
As mentioned earlier, one subset of this Pollyanna viewpoint consists of those who see it as a management problem and present approaches that management can use to create the necessary will. Two recent sources of this viewpoint are
IEEE Standard 1517. 1999. "Standard for Information Technology - Software Life Cycle Processes - Reuse Processes." A standard, produced by the engineering society IEEE, by means of which the construction of reusable componentry can be fostered.
McClure, Carma. 2001. Software Reuse: A Standards-Based Guide. Los Alamitos, CA: IEEE Computer Society Press. A how-to book for applying the IEEE standard.
Over the years, a few authors have been particularly realistic in their view of reuse. Any of the writings of Ted Biggerstaff, Will Tracz, and Don Reifer on this subject are worth reading.
Reifer, Donald J. 1997. Practical Software Reuse. New York: John Wiley and Sons.
Tracz, Will. 1995. Confessions of a Used Program Salesman: Institutionalizing Reuse. Reading, MA: Addison-Wesley.
Fact 17
Reuse-in-the-large works best in families of related systems and thus is domain-dependent. This narrows the potential applicability of reuse-in-the-large.
Discussion
OK, so reuse-in-the-large is a difficult, if not intractable, problem. Is there any way in which we can increase the odds of making it work?
The answer is "yes." It may be nearly impossible to find components of consequence that can be reused across application domains, but within a domain, the picture improves dramatically. The SEL experience in building software for the flight dynamics domain is a particularly encouraging example.
Software people speak of "families" of applications and "product lines" and "family-specific architectures." Those are the people who are realistic enough to believe that reuse-in-the-large, if it is ever to succeed, must be done in a collection of programs that attacks the same kinds of problems. Payroll programs, perhaps even human resource programs. Data reduction programs for radar data. Inventory control programs. Trajectory programs for space missions. Notice the number of adjectives that it takes to specify a meaningful domain, one for which reuse-in-the-large might work.
Reuse-in-the-large, when applied to a narrowly defined application domain, has a good chance of being successful. Cross-project and cross-domain reuse, on the other hand, does not (McBreen 2002).
Controversy
The controversy surrounding this particular fact is among people who don't want to give up on the notion of fully generalized reuse-in-the-large. Some of those people are vendors selling reuse-in-the-large support products. Others are academics who understand very little about application domains and want to believe that domain-specific approaches aren't necessary. There is a philosophical connection between these latter people and the one-size-fits-all tools and methodologists. They would like to believe that the construction of software is the same no matter what domain is being addressed. And they are wrong.
Sources
The genre of books on software product families and product architectures is growing rapidly. This is, in other words, a fact that many are just beginning to grasp, and a bandwagon of supporters of the fact is now being established. A couple of very recent books that address this topic in a domain-focused way are
Bosch, Jan. 2000. Design and Use of Software Architectures: Adopting and Evolving a Product-Line Approach. Boston: Addison-Wesley.
Jazayeri, Mehdi, Alexander Ran, and Frank van der Linden. 2000. Software Architecture for Product Families: Principles and Practice. Boston: Addison-Wesley.
Reference
McBreen, Pete. 2002. Software Craftsmanship. Boston: Addison-Wesley. Says "cross-project reuse is very hard to achieve."
Fact 18
There are two "rules of three" in reuse: (a) It is three times as difficult to build reusable components as single-use components, and (b) a reusable component should be tried out in three different applications before it will be sufficiently general to accept into a reuse library.
Discussion
There is nothing magic about the number three in reuse circles. In the two rules of three, those threes are rules of thumb, nothing more. But they are nice, memorable, realistic rules of thumb.
The first is about the effort needed to build reusable components. As we have seen, to construct reusable components is a complex task. Often, someone building a reusable component is thinking of a particular problem to be solved and trying to determine whether there is some more general problem analogous to this specific one. A reusable component, of course, must solve this more general problem in such a way that it solves the specific one as well.
Not only must the component itself be generalized, but the testing approach for the component must address the generalized problem. Thus the complexity of building a reusable component arises in the requirements ("what is the generalized problem?"), design ("how can I solve this generalized problem?"), coding, and testing portions of the life cycle. In other words, from start to finish.
It is no wonder that knowledgeable reuse experts say it takes three times as long. It is also worth pointing out that, although most people are capable of thinking about problems in a generalized way, it still requires a different mindset from simply solving the problem at hand. Many advocate the use of particularly skilled, expert generalizers.
The second rule of thumb is about being sure that your reusable component really is generalized. It is not enough to show that it solves your problem at hand. It must solve some related problems, problems that may not have been so clearly in mind when the component was being developed. Once again, the number three (try your component out in three different settings) is arbitrary. My guess is that it represents a minimum constraint. That is, I would recommend trying out your generalized component in at least three different applications before concluding that it truly is generalized.
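As a purely hypothetical illustration of that second rule (the names and the key-function approach are mine, not drawn from any reuse library), here is a small generalized component exercised in three different settings before one would consider it sufficiently general:

def binary_search(items, target, key=lambda x: x):
    """Return the index of target in a sorted sequence, or -1 if absent.

    The key function is what generalizes the routine: it works on plain
    values, strings, or records, so long as items are sorted by that key.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        mid_key = key(items[mid])
        if mid_key == target:
            return mid
        if mid_key < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Setting 1: plain numbers.
assert binary_search([2, 3, 5, 7, 11], 7) == 3
# Setting 2: strings.
assert binary_search(["ada", "cobol", "fortran"], "cobol") == 1
# Setting 3: records sorted by an embedded field.
parts = [("bolt", 120), ("nut", 340), ("washer", 75)]
assert binary_search(parts, "nut", key=lambda p: p[0]) == 1

Only after the routine survives settings as different as these would it be a plausible candidate for a reuse library.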
Controversy
This fact represents a couple of rules of thumb, rules that few have reason to doubt. Everyone would acknowledge that reusable components are harder to develop and require more verification than their single-task brethren. The two threes might be argued by some, but there is hardly anyone who is likely to defend them to the death, since they are rules of thumb and nothing more.
Sources
This fact has come to be known over the years as "Biggerstaff's Rules of Three." There is a very early paper by Ted Biggerstaff, published in the 1960s or 1970s, that first mentions reuse rules of three. Unfortunately, the passage of time has eroded my ability to recall the specific reference, and my many attempts to use the Internet to overcome that memory loss have not helped. However, in the References section, I mention studies of Biggerstaff's role.
I have a particular reason for remembering the rules of thumb and Biggerstaff, however. At the time Biggerstaff's material was published, I was working on a generalized report generator program for business applications (I mentioned it earlier in passing). I had been given three report generators (for very specific tasks) to program, and, since I had never written a report generator program before, I gave more than the usual amount of thought to the problem.
The development of the first of the three generators went quite slowly, as I thought about all the problems that, to me, were unique. Summing up columns of figures. Summing sums. Summing sums of sums. There were some interesting problems, very different from the scientific domain that I was accustomed to, to be solved here.
The second program didn't go much faster. The reason was that I was beginning to realize how much these three programs were going to have in common, and it had occurred to me that a generalized solution might even work.
The third program went quite smoothly. The generalized approaches that had evolved in addressing the second problem (while remembering the first) worked nicely. Not only was the result of the third programming effort the third required report generator, but it also resulted in a general-purpose report generator. (I called it JARGON. The origin of the acronym is embarrassing and slightly complicated, but forgive me while I explain it. The company for which I worked at the time was Aerojet. The homegrown operating system we used there was called Nimble. And JARGON stood for Jeneralized (ouch!) Aerojet Report Generator on Nimble.)
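To make the "summing sums of sums" problem concrete, here is a minimal, hypothetical sketch of the kind of nested-subtotal logic a generalized report generator has to provide; none of the names are drawn from JARGON itself:

from collections import defaultdict

def report_totals(rows, group_keys, value_key):
    """Compute nested subtotals for a report.

    rows       -- list of dicts, one per detail line
    group_keys -- column names to group by, outermost first
    value_key  -- numeric column to sum
    Returns a dict mapping each grouping prefix (a tuple) to its subtotal,
    with the grand total under the empty tuple: sums, sums of sums, and so on.
    """
    totals = defaultdict(float)
    for row in rows:
        value = row[value_key]
        prefix = ()
        totals[prefix] += value               # grand total
        for key in group_keys:
            prefix = prefix + (row[key],)
            totals[prefix] += value           # subtotal at each nesting level
    return dict(totals)

# Three detail lines: spending by division and department.
rows = [
    {"division": "East", "dept": "Sales", "amount": 100.0},
    {"division": "East", "dept": "Support", "amount": 50.0},
    {"division": "West", "dept": "Sales", "amount": 75.0},
]
totals = report_totals(rows, ["division", "dept"], "amount")
assert totals[("East",)] == 150.0        # a sum of department figures
assert totals[()] == 225.0               # a sum of sums (the grand total)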
Now, I had already formed the opinion that thinking through all three specific projects had been necessary to evolve the generalized solution. In fact, I had formed the opinion that the only reasonable way to create a generalized problem solution was to create three solutions to specific versions of that problem. And along came Biggerstaff's paper. You can see why I have remembered it all these years.
Unfortunately, I can't verify the first rule of three, the one about it taking three times as long. But I am absolutely certain that, in creating JARGON, it took me considerably longer than producing one very specific report generator. I find the number three quite credible in this context, also.
References
Biggerstaff, Ted, and Alan J. Perlis, eds. 1989. Software Reusability. New York: ACM Press.
Tracz, Will. 1995. Confessions of a Used Program Salesman: Institutionalizing Reuse. Reading, MA: Addison-Wesley.
Fact 19
Modification of reused code is particularly error-prone. If more than 20 to 25 percent of a component is to be revised, it is more efficient and effective to rewrite it from scratch.
Discussion
So reuse-in-the-large is very difficult (if not impossible), except for families of applications, primarily because of the diversity of the problems solved by software. So why not just change the notion of reuse-in-the-large a little bit? Instead of reusing components as is, why not modify them to fit the problem at hand? Then, with appropriate modifications, we could get those components to work anywhere, even in totally unrelated families of applications.
As it turns out, that idea is a dead end also. Because of the complexity involved in building and maintaining significant software systems (we will return to this concept in future facts), modifying existing software can be quite difficult. Typically, a software system is built to a certain design envelope (the framework that enables but at the same time bounds the chosen solution) and with a design philosophy (different people will often choose very different approaches to building the same software solution). Unless the person trying to modify a piece of software understands that envelope and accepts that philosophy, it will be very difficult to complete a modification successfully.
Furthermore, often a design envelope fits the problem at hand very nicely but may completely constrain solving any problem not accommodated within the envelope, such as the one required to make a component reusable across domains. (Note that this is a problem inherent in the Extreme Programming approach, which opts for early and simple design solutions, making subsequent modification to fit an enhancement to the original solution potentially very difficult.)
There is another problem underlying the difficulties of modifying existing software. Those who have studied the tasks of software maintenance find that there is one task whose difficulties overwhelm all the other tasks of modifying software. That task is "comprehending the existing solution." It is a well-known phenomenon in software that even the programmer who originally built the solution may find it difficult to modify some months later.
To solve those problems, software people have invented the notion of maintenance documentation: documentation that describes how a program works and why it works that way. Often such documentation starts with the original software design document and builds on that. But here we run into another software phenomenon. Although everyone accepts the need for maintenance documentation, its creation is usually the first piece of baggage thrown overboard when a software project gets in cost or schedule trouble. As a result, the number of software systems with adequate maintenance documentation is nearly nil.
To make matters worse, during maintenance itself, as the software is modified (and modification is the dominant activity of the software field, as we see in Fact 42), whatever maintenance documentation exists is probably not modified to match. The result is that there may or may not be any maintenance documentation, but if there is, it is quite likely out-of-date and therefore unreliable. Given all of that, most software maintenance is done from reading the code.
And there we are back to square one. It is difficult to modify software. Things that might help are seldom employed or are employed improperly. And the reason for the lack of such support is often our old enemies, schedule and cost pressure. There is a Catch-22 here, and until we find another way of managing software projects, this collection of dilemmas is unlikely to change.
There is a corollary to this particular fact about revising software components:
It is almost always a mistake to modify packaged, vendor-produced software systems.
It is a mistake because such modification is quite difficult; that's what we have just finished discussing. But it is a mistake for another reason. With vendor-supplied software, there are typically rereleases of the product, wherein the vendor solves old problems, adds new functionality, or both. Usually, it is desirable for customers to employ such new releases (in fact, vendors often stop maintaining old releases after some period of time, at which point users may have no choice but to upgrade to a new release).
The problem with in-house package modifications is that they must be redone with every such new release. And if the vendor changes the solution approach sufficiently, the old modification may have to be redesigned totally to fit into the new version. Thus modifying packaged software is a never-ending proposition, one that continues to cost each time a new version is used. In addition to the unpleasant financial costs of doing that, there is probably no task that software people hate more than making the same old modification to a piece of software over and over again. Morale costs join dollar costs as the primary reason for accepting this corollary as fact.
There is nothing new about this corollary. I can remember, back in the 1960s, considering how to solve a particular problem and rejecting modification of vendor software on the grounds that it would be, long-term, the most disastrous solution approach. Unfortunately, as with many of the other frequently forgotten facts discussed in this book, we seem to have to keep learning that lesson over and over again.
In some research I did on the maintenance of Enterprise Resource Planning (ERP) systems (SAP, for example), several users said that they had modified the ERP software in-house, only to back out of those changes when they realized to their horror what they had signed up for.
Note that this same problem has interesting ramifications for the open-source software movement. It is easy to access open-source code to modify it, but the wisdom of doing so is clearly questionable, unless the once-modified version of the open-source code is to become a new fork in the system's development, never to merge with the standard version again. I have never heard open-source advocates discuss this particular problem. (One solution, of course, would be for the key players for the open-source code in question to accept those in-house modifications as part of the standard version. But there is never any guarantee that they will choose to do that.)
Controversy
To accept these facts, it is necessary to accept another fact: that software products are difficult to build and maintain. Software practitioners generally accept this notion. There is, unfortunately, a belief (typically among those who have never built production-quality software) that constructing and maintaining software solutions is easy. Often this belief emerges from those who have never seen the software solution to a problem of any magnitude, either because they have dealt only with toy problems (this is a problem for many academics and their students) or because their only exposure to software has been through some sort of computer literacy course wherein the most complicated piece of software examined was one that displayed "Hello, World" on a screen.
Because of the rampant naiveté inherent in that belief, there are many who simply will not accept the fact that modifying existing software is difficult. Those people, therefore, will continue to hold the belief that solution modification is the right approach to overcoming the diversity problems of reuse-in-the-large (and, I suppose, for tailoring vendor packages). There is probably nothing to be done for people who adhere to that belief, except to ignore them whenever possible.
Sources
The primary fact here was discovered in research studies of software errors and software cost estimation. The SEL of NASA-Goddard (an organization that we discuss frequently in this book) conducted studies of precisely the problem of whether modifying old code was more cost-effective than starting a new version from scratch (McGarry et al. 1984; Thomas 1997). Their findings were impressive and quite clear. If a software system is to be modified at or above the 20 to 25 percent level, then it is cheaper and easier to start over and build a new product. That percentage is low; surprisingly low, in fact.
You may recall that the SEL specializes in software for a very specific application domain: flight dynamics. You may also recall that the SEL has been extremely successful in using reuse-in-the-large to solve problems in their very specialized domain. One might choose to question its findings on the grounds that they might differ for other domains; but, on the other hand, my tendency is to accept them because (a) the SEL appears to be more than objective in its explorations of this (and other) subjects, (b) the SEL was quite motivated to make reuse-in-the-large work in whatever way it could be made to work, and (c) my own experience is that modifying software built by someone else is extremely difficult to get right. (Not to mention that famous quotation from Fred Brooks [1995], "software work is the most complex that humanity has ever undertaken.")
References
Brooks, Frederick P., Jr. 1995. The Mythical Man-Month. Anniversary ed. Reading, MA: Addison Wesley.
McGarry, F., G. Page, D. Card, et al. 1984. "An Approach to Software Cost Estimation." NASA Software Engineering Laboratory, SEL-83-001 (Feb.). This study found the figure to be 20 percent.
Thomas, William, Alex Delis, and Victor R. Basili. 1997. "An Analysis of Errors in a Reuse-Oriented Development Environment." Journal of Systems and Software 38, no. 3. This study reports the 25 percent figure.
Fact 20
Design pattern reuse is one solution to the problems inherent in code reuse.
Discussion
Up until now, this discussion of reuse has been pretty discouraging. Reuse-in-the-small is a well-solved problem and has been for over 45 years. Reuse-in-the-large is a nearly intractable problem, one we may never solve except within application families of similar problems. And modifying reusable components is often difficult and not a very good idea. So what's a programmer to do to avoid starting from scratch on each new problem that comes down the pike?
One thing that software practitioners have always done is to solve today's problem by remembering yesterday's solution. They used to carry code listings from one job to the next, until the time came that software had value (in the 1970s), and then various corporate contractual provisions and some laws made it illegal to do so.
One way or another, of course, programmers still do. They may carry their prior solutions in their heads or they may actually carry them on disk or paper, but the need to reuse yesterday's solution in today's program is too compelling to quit doing it entirely. As a legal consultant, I have, on occasion, been called on to deal with the consequences of such occurrences.
Those transported solutions are often not reused verbatim from the old code. More often, those previous solutions are kept because of the design concepts that are embodied in the code. At a conference a couple of decades ago, Visser (1987) reported what most practitioners already know: "Designers rarely start from scratch."
What we are saying here is that there is another level at which to talk about reuse. We can talk about reusing code, as we have just finished doing. And we can talk about reusing design. Design reuse exploded dramatically in the 1990s. It was an idea as old as software itself; and yet, when it was packaged in the new form of "design patterns," suddenly it had new applicability, and new respect. Design patterns, nicely defined and discussed in the first book on the subject (Gamma et al. 1995), gained immediate credibility in both the practitioner and academic communities.
What is a design pattern? It is a description of a problem that occurs over and over again, accompanied by a design solution to that problem. A pattern has four essential elements: a name, a description of when the solution should be applied, the solution itself, and the consequences of using that solution.
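To tie those four elements to something concrete, here is a brief sketch of one of the classic patterns, Observer, with the four elements noted in comments; the code itself is illustrative rather than taken from any published catalog entry:

# Pattern name:    Observer
# When to apply:   one object changes state and an open-ended set of other
#                  objects must be notified, without coupling them tightly.
# Solution:        the subject keeps a list of observers and calls each one
#                  whenever its state changes.
# Consequences:    subjects and observers can vary independently, but the
#                  order and cost of notifications can be hard to reason about.

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        """Register a callable to be invoked on each state change."""
        self._observers.append(observer)

    def notify(self, event):
        """Tell every registered observer about the event."""
        for observer in self._observers:
            observer(event)

# Usage: any callable can observe, without the subject knowing its details.
ticker = Subject()
ticker.attach(lambda event: print("logger saw:", event))
ticker.attach(lambda event: print("display saw:", event))
ticker.notify("price=42")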
Why were patterns so quickly accepted by the field? Practitioners recognized that what was happening here was something they had always done, but now it was cloaked in new structure and new respectability. Academics recognized that patterns were in some ways a more interesting concept than code reuse, in that they involved design, something much more abstract and conceptual than code.
In spite of the excitement about patterns, it is not obvious that they have had a major impact in the form of changed practice. There are probably two reasons for that.
Practitioners, as I noted earlier, had already been doing this kind of thing.
Initially, at least, most published patterns were so-called housekeeping (rudimentary, nondomain-specific) patterns. The need to find domain-specific patterns is gradually being recognized and satisfied.
This particular fact has its own interesting corollary:
Design patterns emerge from practice, not from theory.
Gamma and his colleagues (1995) acknowledge the role of practice, saying things like "None of the design patterns in this book describes new or unproven designs . . . [they] have been applied more than once in different systems" and "expert designers . . . reuse solutions that have worked for them in the past." This is a particularly interesting case of practice leading theory. Practice provided the notion of, and tales of the success of, something that came to be called patterns. Discovering this, theory built a framework around this new notion of patterns and facilitated the documentation of those patterns in a new and even more useful way.
Controversy
The notion of design patterns is widely accepted. There is an enthusiastic community of academics who study and publish ever-widening circles of patterns. Practitioners value their work in that it provides organization and structure, as well as new patterns with which they may not be familiar.
It is difficult, however, to measure the impact of this new work on practice. There are no studies of which I am aware as to how much of a typical application program is based on formalized patterns. And some say that the overuse of patterns (trying to wedge them into programs where they don't fit) can lead to "unintelligible . . . code, . . . decorators on top of facades generated by factories."
Still, since no one doubts the value of the work, it is safe to say that design patterns represent one of the most unequivocally satisfying, least forgotten, truths of the software field.
Sources
In recent years, a plethora of books on patterns has emerged. There are almost no bad books in this collection; anything you read on patterns is likely to be useful. Most patterns books, in fact, are actually a catalog of patterns collected on some common theme. The most important book on patterns, the pioneer and now-classic book, is that by Gamma et al. That book, with its four authors, has become known as the "Gang of Four" book. It is listed in the References section that follows.
References
Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides. 1995. Design Patterns. Reading, MA: Addison-Wesley.
Visser, Willemien. 1987. "Strategies in Programming Programmable Controllers: A Field Study of a Professional Programmer." Proceedings of the Empirical Studies of Programmers: Second Workshop. Ablex Publishing Corp.