Paradoxes of Software Architecture
Software architecture is fraught with paradox: opposing forces pull against one another in counterintuitive and conflicting ways. As a result, we often fail to realize the architectural goals we seek. Instead, we find ourselves dealing with the very issues we tried to avoid.
Other paradoxes in software development clearly exist, though we're beginning to understand new ways to overcome them. Waterfall methodologies encouraged us to capture detailed requirements early in the development lifecycle. The irony is striking: teams that failed to gather accurate requirements on one project would increase the amount of time spent eliciting requirements on the next, only to find that didn't work either. This is foolhardy! We cannot solve a problem using the technique that is the problem's root cause; it's impossible to elicit a stable set of requirements up front. Agile methodologies have since steered us in the right direction.
Unfortunately, we continue to experience three paradoxes of architecture:
• Paradox #1: Flexibility breeds complexity. We aim to design flexible software; yet, in doing so, we see an undesirable increase in complexity.
• Paradox #2: Reuse complicates use. We strive to develop reusable software, only to impair the software's ease of use.
• Paradox #3: Evolution impedes survival. We design a software system that can evolve, but in doing so hasten its death.
To overcome these paradoxes, we draw upon the wisdom of software giants, and we examine how "architecture all the way down" helps.
Paradox #1: Flexibility Breeds Complexity
We aim to design flexible software; yet, in doing so, we see an undesirable increase in complexity.
In fact, this first paradox leads us toward the other two. That is, we try to design flexible software for two reasons:
• So that we can reuse software entities
• So that a system can evolve as necessary
As Ralph Johnson explains, "[M]aking everything easy to change makes the entire system very complex." [1]
In this sense, we might say that the bane of software development is complexity, which certainly comes as no surprise. But perhaps the real bane of software development is that we aim to design software with too much flexibility—that flexibility is what leads us to the complexity we despise. Ironically, it's also flexibility that increases reuse and enables evolution, although the resulting complexity hinders use and decreases survival of these same systems, as shown in Figure 1.
Figure 1 Paradoxes of architecture.
This is not to say that we should avoid designing flexible software systems. But the key is to identify where the flexibility is warranted, in order to avoid unnecessary complexity. To understand how to do this, we turn to Grady Booch: [2]
Architecture is design, but not all design is architecture. Rather, architecture focuses on significant decisions, where significant is measured by the impact of changing that decision. For this reason, architectural decisions tend to concentrate upon identifying and controlling the seams in a system, which are described in terms of interfaces and mechanisms and so typically mark the units of change in the system.
Where are these seams of which Booch speaks? We'll see in a moment, but first let's explore the second paradox.
Paradox #2: Reuse Complicates Use
We strive to develop reusable software, only to impair the software's ease of use.
Reuse is held up as software development's panacea, yet the ability to compose systems from reusable elements has long been our Achilles' heel. We want reuse badly, but our failures have been spectacular. Almost every major technology trend of the past 20 years (and probably before) has touted reuse as its saving grace.
What happened? In the early 1990s, object orientation promised to save software development. It hasn't. In the late 1990s, component-based development promised to save software development. It didn't, and the movement died. In the early 2000s, service-oriented architecture (SOA) promised to save software development. It didn't, although SOA development teams are still trying. Why is reuse so difficult?
First, we turn to the wisdom of Robert Martin and his Reuse-Release Equivalence Principle [3] to understand why objects have failed in delivering on the promise of reuse:
The granule of reuse is the granule of release.
Think about how you release software. What is it that you create? Typically you create some physical entity, such as a JAR file or a service that you deploy. These are your granules of release; consequently, they're also your granules of reuse. Objects are too fine-grained to serve as a granule of release, and they make poor candidates for a granule of reuse.
Of course, objects do play a role in reuse. Through abstraction, objects allow us to design flexible software that's open for extension and can be tailored based on need. In other words, we reuse a piece of software by designing extension points that allow other developers to configure that piece of software to a new context. However, as we've noted previously, this flexibility breeds complexity. Clemens Szyperski gives us the use/reuse paradox: [4]
Maximizing reuse complicates use.
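A minimal sketch, using hypothetical names, shows both sides of this paradox: the extension point that makes a class reusable across contexts is precisely what every new consumer must understand, implement, and wire up.

    // Hypothetical extension point: the generator knows nothing about output
    // formats; each consumer supplies a formatter suited to its own context.
    interface ReportFormatter {
        String format(String title, String body);
    }

    class ReportGenerator {
        private final ReportFormatter formatter;

        // The context dependency must be supplied by whoever reuses this class.
        ReportGenerator(ReportFormatter formatter) {
            this.formatter = formatter;
        }

        String generate(String title, String body) {
            return formatter.format(title, body);
        }
    }

    // Reusing the generator in a new context means writing and wiring up yet
    // another formatter; the flexibility that enables reuse is also a
    // configuration burden for every new consumer.
    class PlainTextFormatter implements ReportFormatter {
        public String format(String title, String body) {
            return title + "\n" + body;
        }
    }

A generator that simply hard-coded its output format would be easier to drop in, but it would be reusable only where that format happens to fit.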
To discover how this complication occurs, we simply need to examine two attributes of a software entity that affect its reusability: granularity and weight.
Granularity
Granularity is the extent to which a system is broken down into parts. Coarse-grained entities offer richer behavior than fine-grained entities do. Because they do more, they also tend to be larger than fine-grained entities. To maximize reuse, we try to compose coarse-grained entities from fine-grained entities. Of course, this approach results in a lot of dependencies between the fine-grained entities, making those entities more difficult to use. In general, we can say the following:
Coarse-grained entities are easier to use, but fine-grained entities are more reusable.
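A brief, hypothetical sketch makes the trade-off concrete. The coarse-grained InvoiceService below is easy to use (one call does everything), but it carries dependencies on several fine-grained collaborators; those fine-grained pieces are individually more reusable, yet using any one of them on its own means assembling the others it works with.

    // Fine-grained entities: individually reusable, but rarely useful alone.
    class TaxCalculator {
        double taxFor(double amount) { return amount * 0.08; }   // illustrative rate
    }

    class DiscountPolicy {
        double discountFor(double amount) { return amount > 100 ? 10.0 : 0.0; }
    }

    class InvoiceFormatter {
        String format(double total) { return String.format("Total due: %.2f", total); }
    }

    // Coarse-grained entity: convenient to call, but it drags all of the
    // fine-grained parts, and their assumptions, along with it.
    class InvoiceService {
        private final TaxCalculator tax = new TaxCalculator();
        private final DiscountPolicy discounts = new DiscountPolicy();
        private final InvoiceFormatter formatter = new InvoiceFormatter();

        String invoice(double amount) {
            double total = amount + tax.taxFor(amount) - discounts.discountFor(amount);
            return formatter.format(total);
        }
    }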
Weight
Weight is the extent to which a software entity depends on its environment. A heavyweight entity depends on its operating environment; a lightweight entity avoids such dependencies. When creating an entity that runs in multiple environments, we're forced to move the environment-specific dependencies (that is, the context dependencies) from code to configuration. This change makes the entity more reusable, but it's also more difficult to use because the entity must be configured for each environment.
Designing and configuring a lightweight entity is more difficult than simply dropping in an entity that's programmed to operate in that specific environment. In general, we can say the following:
Lightweight modules are more reusable, but heavyweight modules are easier to use.
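As a sketch under the same caveat (the class names and the JDBC URL are hypothetical), the heavyweight version below drops straight into the one environment it assumes, while the lightweight version runs anywhere, but only after each environment configures it.

    // Heavyweight: the environment dependency is hard-coded. The class works
    // out of the box, but only in the environment it was written for.
    class HeavyweightOrderStore {
        private final String url = "jdbc:postgresql://prod-db/orders";  // assumed environment

        void save(String order) {
            System.out.println("Saving " + order + " to " + url);
        }
    }

    // Lightweight: the context dependency moves from code to configuration.
    // The class is reusable across environments, but every environment must
    // now supply that configuration before the class can be used.
    class LightweightOrderStore {
        private final String url;

        LightweightOrderStore(String configuredUrl) {   // supplied per environment
            this.url = configuredUrl;
        }

        void save(String order) {
            System.out.println("Saving " + order + " to " + url);
        }
    }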
It's clear that the conflict between reuse and use, as explored through granularity and weight, will challenge even the most experienced developers.
Paradox #3: Evolution Impedes Survival
We design a software system that can evolve, but in doing so we hasten its death.
Software tends to rot over time. When you establish your initial vision for the software's design and architecture, you imagine a system that's easy to modify, extend, and maintain—that is, a software system that can evolve as change occurs.
Unfortunately, as time passes, changes trickle in that exercise your architecture in unexpected ways. The flexibility we sought now hinders our ability to understand the impact of each change. Each change begins to resemble nothing more than another hack, until finally the system becomes a tangled web of code through which few developers care to venture. Most of us have experienced this phenomenon. Ultimately, we violate our original architectural goals, and the interdependencies between different areas of the system increase. Sadly, design rot is self-inflicted, and technical debt describes the effect of a rotting design.
Technical debt is a metaphor developed by Ward Cunningham, who uses the term to describe the design tradeoffs we make in order to meet schedules and customer expectations. [5] Martin Fowler helps us to understand technical debt by comparing it to financial debt: [6]
Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.
In some situations, leveraging suboptimal designs, and thereby incurring technical debt, is warranted to meet short-term demands. For instance, your schedule may not allow a longer-term design to be used. However, if we ignore technical debt, it continues to build over time, and incurring too much debt significantly impairs our ability to change the software system effectively. Meir M. Lehman's Law of Increasing Complexity summarizes this phenomenon well: [7]
As [a] system evolves its complexity increases unless work is done to maintain or reduce it.
In Search of the Vaunted Silver Bullet
The paradoxes paint a gloomy picture. Is there any hope of designing software that's flexible enough to endure the test of time, while simultaneously offering us the opportunity of reuse? As an optimist, I'd like to believe so, and "architecture all the way down" is part of the solution.
The issue is not that the object-oriented paradigm has failed us, or that SOA has failed us. The issue is that neither of these two paradigms is enough on its own, and even together they're not enough. Objects are too fine-grained to solve many of the challenges, and services are too coarse. As Booch suggests, we need to emphasize the seams of the system. But where are the seams in a system of objects? There are far too many object interactions to fully understand, even in a moderately sized system. Though services expose an interface that represents these seams, they're too coarse-grained, leaving the underlying implementation of an individual service to suffer a paradoxical fate.
Something is missing.
The answer lies, at least partially, in the native deployment constructs of various platforms. On the Java platform, this is the JAR file. On .NET, it's the assembly. These deployment constructs offer the opportunity to modularize our software systems, as shown in Figure 2:
• Modularization gives us an alternative unit of granularity; focusing on the seams between modules tells us which areas of the system need the greatest flexibility.
• Modularity gives us an alternative granule of reuse.
• Modularity allows us to encapsulate implementation details at a finer-grained level, increasing the adaptability of the system by isolating change.
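To make the module-level seam concrete, here is a hedged sketch with hypothetical package and type names. The module publishes a small API package while keeping its implementation internal; under a module system such as OSGi, only the api package would be exported (for example, via the bundle's Export-Package manifest header), so the implementation can change without that change rippling across module boundaries.

    // File: com/example/billing/api/BillingService.java
    // The module's published seam: the only package other modules should depend on.
    package com.example.billing.api;

    public interface BillingService {
        void bill(String customerId, double amount);
    }

    // File: com/example/billing/internal/DefaultBillingService.java
    // Implementation detail, encapsulated inside the module and free to change.
    package com.example.billing.internal;

    import com.example.billing.api.BillingService;

    public class DefaultBillingService implements BillingService {
        public void bill(String customerId, double amount) {
            // Charge the customer; callers in other modules see only BillingService.
        }
    }

The JAR that packages these two packages becomes the granule of release, and therefore the granule of reuse, that Martin's principle calls for.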
Wrapping Up
Software complexity is a terrible enemy that inhibits our ability to develop adaptable software that's easy to understand. To deal with complexity, we try designing flexible software. Sadly, too often the flexibility we hope will tame complexity and yield higher degrees of reuse and maintainability has a paradoxical effect: the complexity hinders use and decreases the ability of our software to survive long-term. Overcoming these paradoxes isn't easy, but "architecture all the way down" can help. By focusing on different units of granularity, including objects, modules, and services, we gain important information about the areas of the system that require greater flexibility than others, while encapsulating other design details.
Did this article whet your appetite? Java Application Architecture: Modularity Patterns with Examples Using OSGi explores these ideas in greater detail and introduces 18 patterns to help you build better software.
References
[1] Ralph Johnson, in an email message to Martin Fowler, quoted in Fowler's article "Who Needs an Architect?" IEEE Software, 2003.
[2] Grady Booch, "The Handbook of Software Architecture."
[3] Robert C. Martin, "Design Principles and Design Patterns," January 2000.
[4] Clemens Szyperski, Component Software: Beyond Object-Oriented Programming, Second Edition. Addison-Wesley, 2002.
[5] Ward Cunningham, "Ward Explains Debt Metaphor" (video), January 22, 2011.
[6] Martin Fowler, "Technical Debt," February 26, 2009.
[7] Meir M. Lehman, "On Understanding Laws, Evolution, and Conservation in the Large-Program Life Cycle," Journal of Systems and Software, Vol. 1, 1980, pp. 213.