Exploring the Role of Executable UML in Model-Driven Architecture
- Raising the Level of Abstraction
- Executable UML
- Making UML Executable
- Model Compilers
- Model-Driven Architecture
Introduction
Organizations want systems. They don't want processes, meetings, models, documents, or even code. They want systems that work, delivered as quickly and as cheaply as possible, and that are as easy to change as possible. Organizations don't want long software development lead times and high costs; they just want to reduce the hassles of systems development to the absolute minimum.
But systems development is a complicated business. It demands distillation of overlapping and contradictory requirements; invention of good abstractions from those requirements; fabrication of an efficient, cost-effective implementation; and clever solutions to isolated coding and abstraction problems. And we need to manage all this work to a successful conclusion, all at the lowest possible cost in time and money.
None of this is new. Over thirty years ago, the U.S. Department of Defense warned of a "software crisis" and predicted that, to meet the burgeoning need for software by the end of the century, everyone in the country would have to become a programmer. In many ways this prediction has come true, as anyone who has checked on the progress of a flight or made a stock trade using the Internet can tell you. Nowadays, we all write our own programs by filling in forms, working at the level of abstraction of the application, not the software.
1.1 Raising the Level of Abstraction
The history of software development is a history of raising the level of abstraction. Our industry used to build systems by soldering wires together to form hard-wired programs. Machine code allowed us to store programs by manipulating switches to enter each instruction. Data was stored on drums whose rotation time had to be taken into account so that the head would be able to read the next instruction at exactly the right time. Later, assemblers took on the tedious task of generating sequences of ones and zeroes from a set of mnemonics designed for each hardware platform.
Later, programming languages, such as FORTRAN, were born and "formula translation" became a reality. Standards for COBOL and C enabled portability between hardware platforms, and the profession developed techniques for structuring programs so that they were easier to write, understand, and maintain. We now have languages such as Smalltalk, C++, Eiffel, and Java, each with the notion of object-orientation, an approach for structuring data and behavior together into classes and objects.
As we moved from one language to another, generally we increased the level of abstraction at which the developer operates, requiring the developer to learn a new higher-level language that may then be mapped into lower-level ones, from C++ to C to assembly code to machine code and the hardware. At first, each higher layer of abstraction was introduced only as a concept. The first assembly languages were no doubt invented without the benefit of an (automated) assembler to turn the mnemonics into bits, and developers were grouping functions together with the data they encapsulated long before there was any automatic enforcement of the concept. Similarly, the concepts of structured programming were taught before there were structured programming languages in widespread industrial use (pace Pascal).
Layers of Abstraction and the Market
The manner in which each higher layer of abstraction reached the market follows a pattern. The typical response to the introduction of the next layer of abstraction goes something like this: "Formula translation is a neat trick, but even if you can demonstrate it with an example, it couldn't possibly work on a problem as complex and intricate as mine."
As the tools became more useful and their value became more obvious, a whole new set of hurdles presented itself as technical folk tried to acquire the wherewithal to purchase the tools. Now managers wanted to know what would happen if they came to rely on these new tools. How many vendors are there? Are other people doing this? Why should we take the risk of being first? What happens if the compiler builder goes out of business? Are we becoming too dependent on a single vendor? Are there standards? Is there interchange?
Initially, it must be said, compilers generated inefficient code. The development environment, as one would expect, comprised a few, barely production-level tools. These were generally difficult to use, in part because the producers of the tools focused first on bringing the technology to market to hook early adopters, and later on prettier user interfaces to broaden that market. The tools did not necessarily integrate with one another. When programs went wrong, no supporting tools were available: No symbolic debuggers, no performance profiling tools, no help, really, other than looking at the generated code, which surely defeated the whole purpose.
Executable UML and the tooling necessary to compile and debug an executable UML model are only now passing from this stage, so expect some resistance today and much better tools tomorrow.
But over time the new layers of abstraction became formalized, and tools such as assemblers, preprocessors, and compilers were constructed to support the concepts. This had the effect of hiding the details of the lower layers so that only a few experts (compiler writers, for example) need concern themselves with how those layers work. In turn, this raised concerns about the loss of control induced by, for example, eliminating the GOTO statement or writing in a high-level language at a distance from the "real machine." Indeed, sometimes the next level of abstraction has been too big a reach for the profession as a whole, of interest only to academics and purists, and the concepts did not capture enough mindshare to survive. (ALGOL 68 springs to mind. So does Eiffel, but it has too many living supporters to be a safe choice of example.)
Object Method History
Object methods have a complex history because they derive from two very different sources.
One source is the programming world, whence object-oriented programming came. Generalizing shamelessly, object-oriented programmers with an interest in methods were frustrated with the extremely process-oriented perspective of the "structured methods" of the time. These methods, Structured Analysis and Structured Design, took functions as their primary view of the system and treated data as a subsidiary, slightly annoying, poor relation. Even the "real-time" methods at most added state machines to the mix to control processing, and didn't encapsulate at all. There was a separate, less prominent "Information Modeling" movement that viewed data as everything and processing as a nuisance to be tolerated in the form of CRUD++. Either way, both of these camps completely missed the object-oriented boat. To add insult to injury, one motivation for objects, the notion that an object models the real world and then seamlessly becomes the software object, was prominently violated by the emphasis on transforming from one (analysis) notation, the data flow diagram, to another (design) notation, the structure chart.
Be that as it may, the search was on for a higher level of abstraction than the programming language, even though some claimed that common third-generation programming languages such as Smalltalk had already raised the level of abstraction far enough.
The other source was more centered in analysis. These approaches focused on modeling the concepts in the problem, but in an object-oriented way. Classes could be viewed as combinations of data, state, and behavior at a purely conceptual level. In addition to the model, reorganization of "analysis" classes into "design" classes and reallocation of functionality were expected. There was no need to model the specific features used from a programming language because the programmer was expected to fill in those details. Perhaps the purest proponents of this point of view were Shlaer and Mellor. They modeled classes with attributes clearly visible on the class icon, seemingly violating encapsulation, with the full expectation that an object-oriented programming scheme would select an appropriate private data structure with the necessary operations.
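As a rough illustration of that expectation (the class, attribute, and operation names here are hypothetical, not drawn from Shlaer and Mellor's examples), an analysis-level class such as Dog with openly visible attributes name and licenseNumber might be realized in Java as a class whose data is private and reachable only through operations:

    // Hypothetical implementation of an analysis-level class "Dog"
    // whose attributes (name, licenseNumber) appear openly on the
    // class icon. The programmer chooses a private data structure and
    // exposes only the operations the model requires.
    public class Dog {
        private String name;           // attribute visible in the model, hidden here
        private int licenseNumber;     // likewise

        public Dog(String name, int licenseNumber) {
            this.name = name;
            this.licenseNumber = licenseNumber;
        }

        // Operations stand in for direct attribute access.
        public String getName() {
            return name;
        }

        public void renewLicense(int newNumber) {
            this.licenseNumber = newNumber;
        }
    }

The point is not the accessors themselves but the division of labor: the analysis model states what data a Dog carries, while the implementation is free to choose any private representation that supports the required operations.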
These two sources met in the middle to yield a plethora of methods, each with its own notation (at least 30 published), each trying to some extent to meet the needs of both camps. Thus began the Method Wars, though Notation Wars might be more accurate.
UML is the product of the Method Wars. It uses notations and ideas from many of the methods extant in the early nineties, sometimes at different levels of abstraction and detail.
As the profession has raised the level of abstraction at which developers work, we have developed tools to map from one layer to the next automatically. Developers now write in a high-level language that can be mapped automatically to a lower-level language, instead of writing in that lower-level language and mapping it to assembly code themselves, just as our predecessors wrote in assembly language and had it translated automatically into machine language.
Clearly, this forms a pattern: We formalize our knowledge of an application in as high a level language as we can. Over time, we learn how to use this language and apply a set of conventions for its use. These conventions become formalized and a higher-level language is born that is mapped automatically into the lower-level language. In turn, this next-higher-level language is perceived as low level, and we develop a set of conventions for its use. These newer conventions are then formalized and mapped into the next level down, and so on.