- 1 Why Evaluate an Architecture?
- 2 When Can an Architecture Be Evaluated?
- 3 Who's Involved?
- 4 What Result Does an Architecture Evaluation Produce?
- 5 For What Qualities Can We Evaluate an Architecture?
- 6 Why Are Quality Attributes Too Vague for Analysis?
- 7 What Are the Outputs of an Architecture Evaluation?
- 8 What Are the Benefits and Costs of Performing an Architecture Evaluation?
- 9 For Further Reading
- 10 Discussion Questions
2.8 What Are the Benefits and Costs of Performing an Architecture Evaluation?
The main, and most obvious, benefit of architecture evaluation is that it uncovers problems that, if left undiscovered, would be orders of magnitude more expensive to correct later. In short, architecture evaluation produces better architectures. Even if the evaluation uncovers no problems that warrant attention, it increases everyone's confidence in the architecture.
But there are other benefits as well. Some of them are hard to measure, but they all contribute to a successful project and a more mature organization. You may not experience all of them on every evaluation, but the following are benefits we have often observed.
Puts Stakeholders in the Same Room
An architecture evaluation is often the first time that many of the stakeholders have ever met each other; sometimes it's the first time the architect has met them. A group dynamic emerges in which stakeholders see each other as all wanting the same thing: a successful system. Whereas before, their goals may have been in conflict with each other (and in fact, still may be), now they are able to explain their goals and motivations so that they begin to understand each other. In this atmosphere, compromises can be brokered or innovative solutions proposed in the face of greater understanding. It is almost always the case that stakeholders trade phone numbers and e-mail addresses and open channels of communication that last beyond the evaluation itself.
Forces an Articulation of Specific Quality Goals
The role of the stakeholders is to articulate the quality goals that the architecture should meet in order to be deemed successful. These goals are often not captured in any requirements document, or at least not captured in an unambiguous fashion beyond vague platitudes about reliability and modifiability. Scenarios provide explicit quality benchmarks.
Results in the Prioritization of Conflicting Goals
Conflicts that might arise among the goals expressed by the different stakeholders will be aired. Each method includes a step in which the goals are prioritized by the group. If the architect cannot satisfy all of the conflicting goals, he or she will receive clear and explicit guidance about which ones are considered most important. (Of course, project management can step in and veto or adjust the group-derived priorities, perhaps because they perceive some stakeholders and their goals as "more equal" than others, but not unless the conflicting goals are aired.)
Forces a Clear Explication of the Architecture
The architect is compelled to make a group of people not privy to the architecture's creation understand it, in detail, in an unambiguous way. Among other things, this will serve as a dress rehearsal for explaining it to the other designers, component developers, and testers. The project benefits by forcing this explication early.
Improves the Quality of Architectural Documentation
Often, an evaluation will call for documentation that has not yet been prepared. For example, an inquiry along performance lines will reveal the need for documentation that shows how the architecture handles the interaction of run-time tasks or processes. If the evaluation requires it, then it's an odds-on bet that somebody on the project team (in this case, the performance engineer) will need it also. Again, the project benefits because it enters development better prepared.
Uncovers Opportunities for Cross-Project Reuse
Stakeholders and the evaluation team come from outside the development project, but often work on or are familiar with other projects within the same parent organization. As such, both are in a good position either to spot components that can be reused on other projects or to know of components (or other assets) that already exist and perhaps could be imported into the current project.
Results in Improved Architecture Practices
Organizations that practice architecture evaluation as a standard part of their development process report an improvement in the quality of the architectures that are evaluated. As development organizations learn to anticipate the kinds of questions that will be asked, the kinds of issues that will be raised, and the kinds of documentation that will be required for evaluations, they naturally preposition themselves to maximize their performance on the evaluations. Architecture evaluations result in better architectures not only after the fact but before the fact as well. Over time, an organization develops a culture that promotes good architectural design.
Now, not all of these benefits may resonate with you. If your organization is small, maybe all of the stakeholders know each other and talk regularly. Perhaps your organization is very mature when it comes to working out the requirements for a system, and by the time the finishing touches are put on the architecture, the requirements are no longer an issue because everyone is completely clear about what they are. If so, congratulations. But many of the organizations in which we have carried out architecture evaluations are not so sophisticated, and there have always been requirements issues that were raised (and resolved) when the architecture was put on the table.
There are also benefits to future projects in the same organization. A critical part of the ATAM consists of probing the architecture using a set of quality-specific analysis questions, and neither the method nor the list of questions is a secret. The architect is perfectly free to arm himself or herself before the evaluation by making sure that the architecture is up to snuff with respect to the relevant questions. This is rather like scoring well on a test whose questions you've already seen, but in this case it isn't cheating: it's professionalism.
The costs of architecture evaluation are all personnel costs, plus the opportunity costs of having those people participate in the evaluation instead of doing something else. They are easy enough to calculate. An example using the cost of an ATAM-based evaluation is shown in Table 2.1. The leftmost column names the phases of the ATAM (described in subsequent chapters), and the remaining columns split the cost among the participant groups. Similar tables can easily be constructed for other methods.
Table 2.1 shows figures for what we would consider a medium-size evaluation effort. While 70 person-days sounds like a substantial sum, in actuality it may not be so daunting. First, the calendar time added to the project is minimal: neither the preparation nor the follow-up should impact the schedule, since these activities can be carried out behind the scenes, as it were. The middle phases consume actual project days, usually three or so. Second, the project normally does not have to pay for all 70 staff days. Many of the stakeholders work for other cost centers, if not other organizations, than the development group. Stakeholders by definition have a vested interest in the system, and they are often more than willing to contribute their time to help produce a quality product.
Table 2.1 Approximate Cost of a Medium-Size ATAM-Based Evaluation

| ATAM Phase | Evaluation Team (assume 5 members) | Project Decision Makers (assume architect, project manager, customer) | Other Stakeholders (assume 8) |
| --- | --- | --- | --- |
| Phase 0: Preparation | 1 person-day by team leader | 1 person-day | 0 |
| Phase 1: Initial evaluation (1 day) | 5 person-days | 3 person-days | 0 |
| Phase 2: Complete evaluation (3 days) | 15 person-days | 9 person-days + 2 person-days to prepare | 16 person-days (most stakeholders present only for 2 days) |
| Phase 3: Follow-up | 15 person-days | 3 person-days to read and respond to report | 0 |
| TOTAL | 36 person-days | 18 person-days | 16 person-days |
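For readers who want to check the arithmetic, here is a minimal Python sketch (our illustration, not part of the ATAM) that recomputes the column totals of Table 2.1 from its per-phase entries:

```python
# Cross-check of the person-day arithmetic in Table 2.1 (ours, not part
# of the ATAM). Each entry is person-days per participant group; the
# Phase 2 decision-maker figure folds in the 2 person-days of preparation.
phases = {
    "Phase 0: Preparation":         {"team": 1,  "decision makers": 1,  "other stakeholders": 0},
    "Phase 1: Initial evaluation":  {"team": 5,  "decision makers": 3,  "other stakeholders": 0},
    "Phase 2: Complete evaluation": {"team": 15, "decision makers": 11, "other stakeholders": 16},
    "Phase 3: Follow-up":           {"team": 15, "decision makers": 3,  "other stakeholders": 0},
}

groups = ("team", "decision makers", "other stakeholders")
totals = {g: sum(p[g] for p in phases.values()) for g in groups}

print(totals)                # {'team': 36, 'decision makers': 18, 'other stakeholders': 16}
print(sum(totals.values()))  # 70 person-days in all
```

Note that the Phase 2 figure for the decision makers (11) combines the 9 person-days of attendance with the 2 person-days of preparation listed in the table.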
It is certainly easy to imagine larger and smaller efforts than the one characterized by Table 2.1. As we will see, all of the methods are flexible, structured to iteratively spiral down into as much detail as the evaluators and the evaluation client feel is warranted. Cursory evaluations can be done in a day; excruciatingly detailed evaluations could take weeks. The numbers in Table 2.1, however, represent what we would call a nominal application of the ATAM. For smaller projects, Table 2.2 shows how those numbers can roughly be halved.
Table 2.2 Approximate Cost of a Small ATAM-Based Evaluation

| ATAM Phase | Evaluation Team (assume 2 members) | Project Decision Makers (assume architect, project manager) | Other Stakeholders (assume 3) |
| --- | --- | --- | --- |
| Phase 0: Preparation | 1 person-day by team leader | 1 person-day | 0 |
| Phase 1: Initial evaluation (1 day) | 2 person-days | 2 person-days | 0 |
| Phase 2: Complete evaluation (2 days) | 4 person-days | 4 person-days + 2 person-days to prepare | 6 person-days |
| Phase 3: Follow-up | 8 person-days | 2 person-days to read and respond to report | 0 |
| TOTAL | 15 person-days | 11 person-days | 6 person-days |

If your group evaluates many systems in the same domain or with the same architectural goals, there is another way to reduce the cost of evaluation. Collect and record the scenarios used in each evaluation. Over time, you will find that the scenario sets begin to resemble each other. After you have performed several of these almost-alike evaluations, you can produce a "canonical" set of scenarios based on past experience. At that point the scenarios have, in essence, graduated to become a checklist, and you can dispense with the bulk of the scenario-generation part of the exercise, saving about a day. Since scenario generation is the primary duty of the stakeholders, the bulk of their time can be done away with as well, lowering the cost still further. (You may still want a few key stakeholders, including the customer, to validate the applicability of your checklist to the new system.) The team size can also be reduced, since no one is needed to record scenarios. And the architect's preparation time should be minimal, since the checklist will be publicly available even before he or she begins the architecture task.
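To make the idea concrete, here is a hypothetical sketch of promoting recurring scenarios to a checklist; the scenario wording and the recurrence threshold are invented for illustration and are not part of the ATAM:

```python
# Hypothetical sketch: scenarios that recur across past evaluations
# graduate into a reusable checklist. Scenario text and the threshold
# (n >= 2) are illustrative assumptions, not part of the ATAM.
from collections import Counter

past_evaluations = [
    {"Recover from primary server failure within 30 seconds",
     "Add a new billing rule without modifying existing components"},
    {"Recover from primary server failure within 30 seconds",
     "Double the number of concurrent users with no source changes"},
    {"Recover from primary server failure within 30 seconds",
     "Add a new billing rule without modifying existing components"},
]

counts = Counter(s for evaluation in past_evaluations for s in evaluation)

# Scenarios seen in at least two past evaluations form the checklist.
checklist = sorted(s for s, n in counts.items() if n >= 2)
for item in checklist:
    print(item)
```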
Table 2.3 shows the cost of a medium-size checklist-based evaluation using the ATAM, which comes in at about 4/7 of the cost of the scenario-based evaluation of Table 2.1.

Table 2.3 Approximate Cost of a Medium-Size Checklist-Based ATAM Evaluation

| ATAM Phase | Evaluation Team (assume 4 members) | Project Decision Makers (assume architect, project manager, customer) | Other Stakeholders (assume the customer validates the checklist) |
| --- | --- | --- | --- |
| Phase 0: Preparation | 1 person-day by team leader | 1 person-day | 0 |
| Phase 1: Initial evaluation (1 day) | 4 person-days | 3 person-days | 0 |
| Phase 2: Complete evaluation (2 days) | 8 person-days | 6 person-days | 2 person-days |
| Phase 3: Follow-up | 12 person-days | 3 person-days to read and respond to report | 0 |
| TOTAL | 25 person-days | 13 person-days | 2 person-days |
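As a quick sanity check on these comparisons, the following sketch (our own arithmetic, using only the TOTAL rows of Tables 2.1 through 2.3) confirms the 4/7 figure and the rough halving for small projects:

```python
# Comparison of the TOTAL rows of Tables 2.1-2.3 (our illustration).
# Each total sums team, decision-maker, and other-stakeholder person-days.
from fractions import Fraction

medium    = 36 + 18 + 16   # Table 2.1: 70 person-days (nominal ATAM)
small     = 15 + 11 + 6    # Table 2.2: 32 person-days
checklist = 25 + 13 + 2    # Table 2.3: 40 person-days

print(Fraction(checklist, medium))  # 4/7, the ratio cited above
print(round(small / medium, 2))     # 0.46, i.e., roughly half the nominal cost
```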
The next chapter will introduce the first of the three architecture evaluation methods in this book: the Architecture Tradeoff Analysis Method.