- Scratch Each Other's Backs
- Reviews and Team Culture
- Peer Review Sophistication Scale
- Planning for Reviews
- Guiding Principles for Reviews
Reviews and Team Culture
While individual participants can always benefit from a peer review, a broad review program can succeed only in a culture that values quality. "Quality" has many dimensions, including freedom from defects, satisfaction of customer needs, timeliness of delivery, and the possession of desirable product functionality and attributes. Members of a software engineering culture regard reviews as constructive activities that help both individuals and teams succeed. They understand that reviews are not intended to identify inferior performers or to find scapegoats for quality problems.
Reviews can result in two undesirable attitudes on the part of the work product's author. Some people become lax in their work because they're relying on someone else to find their mistakes, just as some programmers expect testers to catch their errors. The author is ultimately responsible for the product; a review is just an aid to help the author create a high-quality deliverable. Sometimes when I'm reading a draft of an article or book chapter I've written, I hear a little voice telling me that a section is incorrect or awkwardly phrased. I used to tell myself, "I'll give it to the reviewers and see what they think." Big mistake: the reviewers invariably disliked that clumsy section. Now whenever I hear that little voice, I fix the problem before I waste my reviewers' time.
The other extreme to avoid is the temptation to perfect the product before you allow another pair of eyes to see it. This is an ego-protecting strategy: you won't feel embarrassed about your mistakes if no one else sees them. I once managed a developer who refused to let anyone review her code until it was complete and as good as she could make it: fully implemented, tested, formatted, and documented. She regarded a review as a seal of approval rather than as the in-process quality-improvement activity it really is.
Such reluctance has several unfortunate consequences. If your work isn't reviewed until you think it's complete, you are psychologically resistant to suggestions for changes. If the program runs, how bad can it be? You are likely to rationalize away possible bugs because you believe you've finished and you're eager to move on to the next task. Relying on your own deskchecking and unit testing ignores the greater efficiency of a peer review for finding many defects.
At the same time, the desire to show our colleagues only our best side can become a positive factor. Reviews motivate us to practice superior craftsmanship because we know our coworkers will closely examine our work. In this indirect way, peer reviews lead to higher quality. One of my fellow consultants knows a quality engineer who began to present his team with summaries of defects found during reviews, without identifying specific work products or authors. The team soon saw a decrease in the number of bugs discovered during reviews. Based on what he knew about the team, my colleague concluded that authors created better products after they learned how reviews were being used on the project and knew what kinds of defects to look for. Reviews weren't a form of punishment but stimulated a desire to complete a body of work properly.
The Influence of Culture
In a healthy software engineering culture, a set of shared beliefs, individual behaviors, and technical practices define an environment in which all team members are committed to building quality products through the effective application of sensible processes (Wiegers 1996a). Such a culture demands a commitment by managers at all levels to provide a quality-driven environment. Recognizing that team success depends on helping each other do the best possible job, members of a healthy culture prefer to have peers, not customers, find software defects. Having a coworker locate a defect is regarded as a "good catch," not as a personal failing.
Peer reviews have their greatest impact in a healthy software culture, and a successful review program contributes strongly to creating such a culture. Prerequisites for establishing and sustaining an effective review program include:
- Defining and communicating your business goals for each project so reviewers can refer to a shared project vision
- Determining your customers' expectations for product quality so you can set attainable quality goals
- Understanding how peer reviews and other quality practices can help the team achieve its quality goals
- Educating stakeholders within the development organization (and, where appropriate, in the customer community) about what peer reviews are, why they add value, who should participate, and how to perform them
- Providing the necessary staff time to define and manage the organization's review process, train the participants, conduct the reviews, and collect and evaluate review data
The dynamics between the work product's author and its reviewers are critical. The author must trust and respect the reviewers enough to be receptive to their comments. Similarly, the reviewers must show respect for the author's talent and hard work. Reviewers should thoughtfully select the words they use to raise an issue, focusing on what they observed about the product. Saying, "I didn't see where these variables were initialized" is likely to elicit a constructive response, whereas "You didn't initialize these variables" might get the author's hackles up. The small shift in wording from the accusatory "you" to the less confrontational "I" lets the reviewer deliver even critical feedback effectively. Reviewers and authors must continue to work together outside the reviews, so they all need to maintain a level of professionalism and mutual respect to avoid strained relationships.
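To make the contrast concrete, here is a hypothetical snippet of the kind a reviewer might flag; the function and variable names are invented for illustration, not taken from any real review.

```python
def average_defect_age(defects):
    for defect in defects:
        total += defect.age   # 'total' is used before it is ever assigned
        count += 1            # so is 'count'; both raise UnboundLocalError at runtime
    return total / count

# Accusatory phrasing:    "You didn't initialize these variables."
# Observational phrasing: "I didn't see where 'total' and 'count' were
#                          initialized before the loop."
```

Both remarks identify the same defect, but only the second keeps the discussion focused on the product rather than the person.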
An author who walks out of a review meeting feeling embarrassed, personally attacked, or professionally insulted will not voluntarily submit work for peer review again. Nor do you want reviews to create authors who look forward to retaliating against their tormentors. The bad guys in a review are the bugs, not the author or the reviewers, but it takes several positive experiences to internalize this reality. The leaders of the review initiative should strive to create a culture of constructive criticism in which team members seek to learn from their peers and to do a better job the next time. To accelerate this culture change, managers should encourage and reward those who initially participate in reviews, regardless of the review outcomes.
Reviews and Managers
The attitude and behavior that managers exhibit toward reviews affect how well the reviews will work in an organization. Although managers want to deliver quality products, they also feel pressure to release products quickly. They don't always understand what peer reviews or inspections are or the contribution they make to shipping quality products on time. I once encountered resistance to inspections from a quality manager who came from a manufacturing background. He regarded inspections as a carryover from the old manufacturing quality practice of manually examining finished products for defects. After he understood how software inspections contribute to quality through early removal of defects, his resistance disappeared.
Managers need to learn about peer reviews and their impact on the organization so they can build the reviews into project plans, allocate resources for them, and communicate their commitment to reviews to the team. If reviews aren't planned, they won't happen. Managers also must be sensitive to the interpersonal aspects of peer reviews. Watch out for known culture killers, such as managers singling out certain developers for the humiliating "punishment" of having their work reviewed.
Without visible and sustained commitment to peer reviews from management, only those practitioners who believe reviews are important will perform them. Management commitment to any engineering practice is more than providing verbal support or giving team members permission to use the practice. Figure 2-1 lists eleven signs of management commitment to peer reviews.
Figure 2-1. Eleven signs of management commitment to peer reviews
To persuade managers about the value of reviews, couch your argument in terms of what outcomes are important to the manager's view of success. Published data convinces some people, but others want to see tangible benefits from a pilot or trial application in their own organization. Still other managers will reject both logical and data-based arguments for reviews and simply say no. In this case, keep in mind one of my basic software engineering cultural principles, "Never let your boss or your customer talk you into doing a bad job," and engage your colleagues in reviews anyway (perhaps quietly, to avoid unduly provoking your managers).
A dangerous situation arises when a manager wishes to use data collected from peer reviews to assess the performance of the authors (Lee 1997). Software metrics must never be used to reward or penalize individuals. The purpose of collecting data from reviews is to better understand your development and quality processes, to improve processes that aren't working well, and to track the impact of process changes. Using defect data from inspections to evaluate individuals is a classic culture killer. It can lead to measurement dysfunction, in which measurement motivates people to behave in a way that produces results inconsistent with the desired goals (Austin 1996).
I recently heard from a quality manager at a company that had operated a successful inspection program for two years. The development manager had just announced his intention to use inspection data as input to the performance evaluations of the work product authors. Finding more than five bugs during an inspection would count against the author. Naturally, this made the development team members very nervous. It conveyed the erroneous impression that the purpose of inspections is to punish people for making mistakes or to find someone to blame for troubled projects. This misapplication of inspection data could lead to numerous dysfunctional outcomes, including the following:
- To avoid being punished for their results, developers might not submit their work for inspection. They might refuse to inspect a peer's work to avoid contributing to someone else's punishment.
- Inspectors might not point out defects during the inspection, instead telling the author about them offline so they aren't tallied against the author. Alternatively, developers might hold "pre-reviews" to filter out bugs unofficially before going through a punitive inspection. This undermines the open focus on quality that should characterize inspection. It also skews any metrics you're legitimately tracking from multiple inspections.
- Inspection teams might debate whether something really is a defect, because defects count against the author, and issues or simple questions do not. This could lead to glossing over actual defects.
- The team's inspection culture might develop an implicit goal of finding few defects rather than revealing as many as possible. This reduces the value of the inspections without reducing their cost, thereby lowering the team's return on investment from inspections.
- Authors might hold many inspections of small pieces of work to reduce the chance of finding more than five bugs in any one inspection. This leads to inefficient and time-wasting inspections. It's a kind of gamesmanship, doing the minimum to claim you have had your work inspected but not properly exploiting the technique.
These potential problems underscore the risks posed to an inspection program by using inspection data to evaluate individuals. Such evaluation criminalizes the mistakes we all make and pits team members against each other. It motivates participants to manipulate the process to avoid being hurt by it. If I were a developer in this situation, I would encourage management to have the organization's peer review coordinator (see Chapter 10) summarize defects collected from multiple inspections so the defect counts aren't linked to specific authors. If management insisted on using defect counts for performance appraisal, I would refuse to participate in inspections. Managers may legitimately expect developers to submit their work for review and to review deliverables that others create. However, a good manager doesn't need defect counts to know who the top contributors are and who is struggling.
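A minimal sketch of how a peer review coordinator might aggregate such data is shown below. It assumes each inspection produces a simple tally of defects by type, with no author or work-product field recorded anywhere; the record format and field names are illustrative assumptions, not a prescribed schema.

```python
from collections import Counter

def summarize_inspections(inspections):
    """Aggregate defect counts by type across many inspections,
    deliberately omitting any link to work products or authors."""
    totals = Counter()
    for record in inspections:
        totals.update(record["defects_by_type"])
    return dict(totals)

# Hypothetical records from two inspections; note there is no author field.
inspections = [
    {"defects_by_type": {"logic": 3, "interface": 1}},
    {"defects_by_type": {"logic": 1, "data": 2}},
]
print(summarize_inspections(inspections))
# {'logic': 4, 'interface': 1, 'data': 2}
```

Reporting only these rolled-up counts lets management track process trends while making it impossible to tally defects against any individual.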
When inspection metrics were introduced into one organization, a manager happily exclaimed, "This data will help me measure the performance of my engineers!" After the inspection champion explained the philosophy of software measurement to him, the manager agreed not to see the data from individual inspections. He publicly described the inspection process as a tool to help engineers produce better products. He told the engineers he would not view the individual inspection measures because he was interested in the big picture, the overall efficiency of the software engineering process. This manager's thoughtful comments helped defuse resistance to inspection measurement in his organization.
Why People Don't Do Reviews
If peer reviews are so great, why isn't everybody already doing them? Factors that contribute to the underuse of reviews include lack of knowledge about reviews, cultural issues, and simple resistance to change, often masquerading as excuses. If reviews aren't a part of your organization's standard practices, understand why so you know what must change to make them succeed.
Many people don't understand what peer reviews are, why they are valuable, the differences between informal reviews and inspections, or when and how to perform reviews. Education can solve this problem. Some developers and project managers don't think their projects are large enough or critical enough to need reviews. However, any body of work can benefit from an outside perspective.
The misperception that testing is always superior to manual examination also leads some practitioners to shun reviews. Testing has long been recognized as a critical activity in developing software. Entire departments are dedicated to testing, with testing effort scheduled into projects and resources allocated for testing. Organizations that have not yet internalized the benefits of peer reviews lack an analogous cultural imperative and a supporting infrastructure for performing them.
A fundamental cultural inhibitor to peer reviews is that developers don't recognize how many errors they make, so they don't see the need for methods to catch or reduce their errors. Many organizations don't collect, summarize, and present to all team members even such basic quality data as the number of errors found in testing or by customers. Authors who submit their work for scrutiny might feel that their privacy is being invaded, that they're being forced to air the internals of their work for all to see. This is threatening to some people, which is why the culture must emphasize the value of reviews as a collaborative, nonjudgmental tool for improved quality and productivity.
Previous unpleasant review experiences are a powerful cultural deterrent. The fear of management retribution or public ridicule if defects are discovered can make authors reluctant to let others examine their work. In poorly conducted reviews, authors can feel as though they, not their work, are being criticized, especially if personality conflicts already exist between specific individuals. Another cultural barrier is the attitude that the author is the most qualified person to examine his part of the system ("Who are you to look for errors in my work?"). Similarly, a common reaction from new developers who are invited to review the work of an experienced and respected colleague is, "Who am I to look for errors in his work?"
The traditional mechanisms for adopting improved practices are observing what experienced role models do and having supervisors watch and coach new employees. In many software groups, though, each developer's methods remain private, and developers don't have to change the way they work unless they wish to (Humphrey 2001). Paradoxically, many developers are reluctant to try a new method unless it has been proven to work, yet they don't believe the new approach works until they have successfully done it themselves. They don't want to take anyone else's word for it.
And then there are the excuses. Resistance often appears as NAH (not applicable here) syndrome (Jalote 2000). People who don't want to do reviews will expend considerable energy trying to explain why reviews don't fit their culture, needs, or time constraints. One excuse is the arrogant attitude that some people's work does not need reviewing. Some team members can't be bothered to look at a colleague's work. "I'm too busy fixing my own bugs to waste time finding someone else's." "Aren't we all supposed to be doing our own work correctly?" "It's not my problem if Jack has bugs in his code." Other developers imagine that their software prowess has moved them beyond needing peer reviews. "Inspections have been around for 25 years; they're obsolete. Our high-tech group uses only leading-edge technologies."
Protesting that the inspection process is too rigid for a go-with-the-flow development approach signals resistance to a practice that is perceived to add bureaucratic overhead. An ad hoc development process can also suggest that long-term quality isn't a high priority for the organization. Such a culture might have difficulty adopting formal peer reviews, although informal reviews might be palatable.
Overcoming Resistance to Reviews
To establish a successful review program, you must address existing barriers in the categories of knowledge, culture, and resistance to change. Lack of knowledge is easy to correct if people are willing to learn. My colleague Corinne found that the most vehement protesters in her organization were already doing informal reviews. They just didn't realize that a peer deskcheck is one type of peer review (see Chapter 3). Corinne discussed the benefits of formalizing some of these informal reviews and trying some inspections. A one-day class that includes a practice inspection gives team members a common understanding about peer reviews. Managers who also attend the class send powerful signals about their commitment to reviews. Management attendance says to the team, "This is important enough for me to spend time on it, so it should be important to you, too" and "I want to understand reviews so I can help make this effort succeed."
Dealing with cultural issues requires you to understand your team's culture and how best to steer the team members toward improved software engineering practices (Bouldin 1989; Caputo 1998; Weinberg 1997; Wiegers 1996a). What values do they hold in common? Do they share an understanding of, and a commitment to, quality? What previous change initiatives have succeeded and why? Which have struggled and why? Who are the opinion leaders in the group, and what are their attitudes toward reviews?
Larry Constantine described four cultural paradigms found in software organizations: closed, open, synchronous, and random (Constantine 1993). A closed culture has a traditional hierarchy of authority. You can introduce peer reviews in a closed culture through a management-driven process improvement program, perhaps based on one of the Software Engineering Institute's capability maturity models. A management decree that projects will conduct reviews might succeed in a closed culture, but not in other types of organizations.
Innovation, collaboration, and consensus decision-making characterize an open culture. Members of an open culture want to debate the merits of peer reviews and participate in deciding when and how to implement them. Respected leaders who have had positive results with reviews in the past can influence the group's willingness to adopt them. Such cultures might prefer review meetings that include discussions of proposed solutions rather than inspections, which emphasize finding, not fixing, defects during meetings.
Members of a synchronous group are well aligned and comfortable with the status quo. Because they recognize the value of coordinating their efforts, they are probably already performing at least informal reviews. A comfort level with informal reviews eases implementation of an inspection program.
Entrepreneurial, fast-growing, and leading-edge companies often develop a random culture populated by autonomous individuals who like to go their own ways. In random organizations, individuals who have performed peer reviews in the past might continue to hold them. The other team members might not have the patience for reviews, although they could change their minds if quality problems from chaotic projects burn them badly enough.
However you describe your culture, people will want to know what benefits a new process will provide to them personally; the instinctive question is, "What's in it for me?" A better question to ask about a proposed process change is, "What's in it for us?" Sometimes when you're asked to change the way you work, your immediate personal reward is small, although the team as a whole might benefit in a big way. I might not get three hours of benefit from spending three hours reviewing someone else's code. However, the other developer might avoid ten hours of debugging effort later in the project, and we might ship the product sooner than we would have otherwise.
Table 2-1 identifies some benefits various project team members might reap from reviewing major life-cycle deliverables. Of course, the customers also come out ahead. They receive a timely product that is more robust and reliable, better meets their needs, and increases their productivity. Higher customer satisfaction leads to business rewards all around.
Table 2-1. Benefits from Peer Reviews for Project Roles (roles addressed: Developer, Development Manager, Maintainer, Project Manager, Quality Manager, Requirements Analyst, and Test Engineer)
Arrogant developers who believe reviews are beneath them might enjoy getting praise and respect from coworkers as they display their superior work during reviews. If influential resisters come to appreciate the value of peer reviews, they might persuade other team members to try them, too. A quality manager once encountered a developer named Judy who was opposed to "time-sapping" inspections. After participating under protest, Judy quickly saw the power of the technique and became the group's leading convert. Because Judy had some influence with her peers, she helped turn developer resistance toward inspections into acceptance. Judy's project team ultimately asked the quality manager to help them hold even more inspections. Engaging developers in an effective inspection program helped motivate them to try some other software quality practices, too.
In another case, a newly hired system architect who had experienced the benefits of inspections in his previous organization was able to overcome resistance from members of his new team. The data this group collected from their inspections backed up the architect's conviction that they were well worth doing.