- How Feature Factory Companies Work
- Coordinative Flow: A Logical Way of Working with Unexpected Results
- Nine Out of Ten Ideas Will Fail
- Collaborative Flow: A Simple Way of Working with Outstanding Results
- Key Takeaways
Collaborative Flow: A Simple Way of Working with Outstanding Results
Nobody deserves to waste time. I know how much it hurts to see your work lead nowhere. After many years on the road, I learned that failures are inevitable. Instead of adding steps to prevent failures, it makes more sense to identify and quickly drop flawed ideas.
Everything I share in this book is simple because I’m not good at dealing with complexity. Day in and day out, I strive to remove complexity by adding simplicity. That’s how I uncovered ways of working that increase the chances of creating value sooner.
The more you simplify, the sooner you can move from a feature factory to an empowered team.
Unlike the coordinative flow, the collaborative flow focuses on iterations instead of phases. Each iteration enhances learning, empowering teams to choose between investing further or dropping the idea.
Figure 1.6 illustrates how teams start with several ideas and collaborate to identify what drives value while dropping what doesn’t.
Figure 1.6 Dropping bad ideas fast enough and focusing on promising ones with a collaborative flow
Let’s understand the different iterations of a collaborative flow and how that leads to empowerment.
Evaluate
The beginning of a collaborative flow is the same as for a coordinative flow. You’ve got plenty of ideas, and everyone wants everything done by yesterday. The trick isn’t to identify the most promising ideas upfront, but rather to evaluate all of them and drop the misfitting ones. Dropping ideas gives you freedom because you’ve got fewer expectations to manage.
To drop ideas, confront them with your strategy. Here are the questions you should ask:
How does it get us closer to our product vision?
How does it relate to our product strategy?
How does it contribute to our current objectives?
How does it deliver on our value proposition?
Drop the idea if you lack answers to any of these questions. You may be missing some of the attributes I mentioned (vision, strategy, objectives, value proposition) and therefore struggle to assess your ideas against them. If so, you’ve got to do your homework first (read Chapter 5 to set your product strategy).
Don’t invest time in ideas unrelated to your strategy. By that, I mean don’t increase the size of your backlog, but instead make your trash bin bigger.
You’ll be tempted to park your idea somewhere and eventually return to it. Don’t do that, because it will distract you. Whatever is relevant to your customers and business will return to you.
Idea evaluation should take a couple of hours, not more than that. Collaboration is essential, so get key business stakeholders to do this exercise with the product team. It’s not about defending ideas, but rather about checking how they fit your strategy.
Strive to identify fitting ideas and drop unrelated ones pragmatically.
Learn
The learning iteration starts with the ideas that fit your strategy, but that doesn’t mean jumping straight to implementation. You should drop ideas that customers don’t desire, that the business cannot support, that you lack the technology to develop, or that are unethical to pursue. Keep it simple, and ask the following questions about each remaining idea:
How much do customers want it? (Desirability)
How does the business benefit? (Viability)
How well can we deliver it? (Feasibility)
How right is doing it? (Ethics)
Figure 1.7 represents an adapted version of the Innovation Trinity popularized by IDEO,4 also known as a product Venn diagram. The overlapping areas highlight the sweet spot of promising ideas based on the preceding questions.
The sooner you drop bad ideas, the sooner you can focus on the promising ones. Answering critical questions will help you with that. I like how Marty Cagan (2018) approaches this issue. He uses the following questions to address critical risks:
Will the customers buy this or choose to use it? (Value risk)
Can the user figure out how to use it? (Usability risk)
Can we build it? (Feasibility risk)
Does this solution work for our business? (Business viability risk)
You may not find answers to each of these questions, but you will have assumptions. Invest some time testing the critical assumptions (those that are business critical and backed by weak evidence), and decide which ideas are worth pursuing.
Figure 1.7 An adaptation of the Innovation Trinity showing the sweet spot of promising ideas
This iteration should take a few days, but not more than that. Use qualitative information to determine how to progress. You can do interviews with customers and business stakeholders, low-fidelity prototype testing, and quick technical experiments as needed.
The goal is to find evidence confirming ideas are desirable, feasible, and viable (refer to Chapter 7 for more information on how to do this). If one of these characteristics is missing, drop the idea and move on.
A common trap is focusing only on viability and feasibility. Just because you can do something doesn’t mean you should. Investing in an idea without evidence of desirability is no more than a bet, and the result is often a feature nobody uses. Proceed only with the ideas backed by evidence strong enough to justify the investment you’re about to make.
It’s fundamental to collaborate with business stakeholders and customers. Otherwise, you will end up with undesired results.
You’ll drop 30% to 50% of the remaining ideas.5 If your numbers fall outside this range, you’re either being too strict or too loose; I’d recommend reviewing your evidence and decisions.
Experiment
After learning about the key aspects of your ideas, it’s time to run more robust experiments. You want to test which solutions can deliver the expected results. Exploring a few alternatives and sticking with the most promising ones is essential.
It’s all too common to pick one solution and go all in with it. I discourage you from following this path, because it quickly leads to escalation of commitment. As humans, the more invested we are in something, the more willing we are to keep investing in it.
You need to choose which experiment makes the most sense for your situation. Your ultimate objective is to have solid information that justifies investing further. For that, you will need a combination of qualitative and quantitative data.
It’s not within the scope of this book to cover the diverse product experiments you can apply. I recommend reading Testing Business Ideas by David Bland to learn more about this topic.
Here’s my secret: I like using tech debt (software quality compromises requiring future rework) as a tool to accelerate learning. The goal is to hack together a quick-and-dirty solution, get it live for a small portion of your audience, and gather evidence on how it helps them get the job done. I must warn you, though: software engineers are often opposed to increasing tech debt. They tend to be strict about creating this kind of debt because many product managers do not support them in paying it off on time.
Prudent use of tech debt is a tool6 (more on that in Chapter 8). It’s like getting a mortgage. You go to the bank, take out a loan, and acquire your dream house. You collect the desired value fast, but then you owe the bank and must pay the debt off. If you keep accumulating debt, you may go bankrupt. The same is true of tech debt. It reduces the time to collect value, but once an idea proves worthwhile, you must pay the debt off before taking out a new loan.
You will drop another 30% to 50% of the remaining ideas during the experiment iteration. The reasons will vary, but customers often show unexpected behaviors, which leads you to reassess how worthwhile an idea is. Not all ideas justify the investment, despite appearing desirable, feasible, and viable at first glance.
The critical part of experimentation is to drop solutions that don’t work. Feature factory teams can’t do that because delivering the output is their goal. But empowered teams can because achieving the outcome is their success metric.
Whenever you want to drop an idea, involve your business stakeholders to reach this conclusion collaboratively. Evidence will support your decision, and involving business stakeholders will get their buy-in (read the discussion of “opinions over evidence” in Chapter 2).
Launch
Ideas that survive the experiment iteration are the promising ones. In the previous iteration, you built to learn. Now, you build to scale. It’s fundamental to pay the tech debt off before you make the solution available to your whole audience or jump to your next opportunity.
Unlike with a coordinative flow, applying a collaborative flow will help you drop bad ideas faster. This method benefits from the power of collaboration over coordination.
Be aware that not all companies are comfortable with a highly collaborative flow. For example, in Germany, many companies cling to yearly plans with feature roadmaps and prescriptive timelines. Sadly, those plans create an exhausting need for coordination to keep everything progressing. Yet you can foster change gradually. When facing such situations, take a step-by-step journey to help the company uncover better ways of creating value (read Chapter 4 to get insights on how to act).
I have one thought about teams that are able to create value faster than others, and it’s a rather obvious one: You have to do what most teams don’t do to achieve the results most teams don’t get. Move away from coordinative flows, and do your best to focus on collaborative flows instead.
The more coordination you have to endure, the less time you have to focus on uncovering what creates value. Don’t force your teams to behave as feature factories.
The more effectively your teams can collaborate, the faster they can adapt, learn, drop bad ideas, and focus on promising ones. That’s the power of enabling empowered teams.