In the last post, we talked about what went wrong in our estimation process. What could we have done differently?
First, the team could have started by looking at how much effort it took to complete previous, comparable work. The more similar a previous project is to the one being estimated, the better, but even work done by different teams using different approaches would have been better than relying on individual judgement alone. The best candidate for a comparison analysis would have been our proof-of-concept effort. We could easily have leveraged its results by categorizing and scaling common features, artifacts, task lists, defects, administration, and other aspects to reflect what we anticipated for the follow-on project.
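To make the idea concrete, here is a minimal sketch of what counting and scaling proof-of-concept results could look like. The categories, counts, effort figures, and scale factors are hypothetical, not numbers from our actual projects.

```python
# Hypothetical sketch: estimate a follow-on project by counting what the
# proof-of-concept required and scaling each category. All numbers are
# illustrative, not actual project data.

# What the proof-of-concept actually took, by category.
poc_counts = {"features": 12, "integration tasks": 8, "defects fixed": 30}
poc_effort_days = {"features": 36, "integration tasks": 16, "defects fixed": 15}

# How much larger we expect the follow-on project to be, per category.
scale_factors = {"features": 2.5, "integration tasks": 3.0, "defects fixed": 2.0}

def scaled_estimate(counts, effort_days, factors):
    """Scale the measured effort per item by the expected growth in each category."""
    total = 0.0
    for category, count in counts.items():
        effort_per_item = effort_days[category] / count
        expected_items = count * factors[category]
        total += expected_items * effort_per_item
    return total

print(f"Scaled estimate: {scaled_estimate(poc_counts, poc_effort_days, scale_factors):.0f} days")
```

The value lies less in the arithmetic than in forcing the team to say, category by category, how the follow-on work compares to something that was actually measured.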
Second, we could have used more than one approach and compared the results. Closely grouped results may improve our confidence and, over time, lead us to drop redundant approaches. Widely varying results would help us identify factors accounted for by one approach (e.g., feature abundance) and not by another (e.g., scaled tasks).
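Continuing the hypothetical numbers above, the comparison can be as simple as checking the spread between the approaches; the estimates and the 20% agreement threshold below are illustrative assumptions.

```python
# Hypothetical sketch: compare results from independent estimation approaches.
# The figures and the 20% agreement threshold are assumptions.
estimates_days = {
    "scaled proof-of-concept artifacts": 168,
    "feature count x historical effort per feature": 150,
    "individual judgement": 90,
}

low, high = min(estimates_days.values()), max(estimates_days.values())
spread = (high - low) / low

if spread <= 0.20:
    print(f"Approaches agree within 20% ({low}-{high} days); confidence improves.")
else:
    print(f"Approaches diverge by {spread:.0%} ({low}-{high} days); "
          "look for factors one approach captures and another misses.")
```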
We first encountered these ideas in Steve McConnell’s excellent book, Software Estimation: Demystifying the Black Art. He refers to this as Count, Compute, Judge and explores how to apply it in great depth.
Be Reasonable
Next, the team could have agreed on probability assumptions and used an estimation system that permitted them to record and combine estimates with probabilities. Agreed-upon probability assumptions ensure everyone understands the parameters of reasonable estimates, while an estimation system ensures they can reliably create estimates within those parameters.
Good probability assumptions start with a minimal, common terminology, because terms related to probability—“best and worst case”, “likely”, and “reasonable”—can mean very different things to different people. For example, we define a “reasonable” estimate as between a “best case” and “worst case”, with specific probabilities for each.
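For illustration only (our actual probability assumptions are not reproduced here), suppose each task is given a “best case” and “worst case” that bound its plausible effort, and the team agrees that a “reasonable” project estimate spans the 10th to 90th percentile of the combined result. A short Monte Carlo sketch shows how task-level ranges expressed this way could be recorded and combined; the task names, day ranges, and distribution choice are all assumptions.

```python
# Minimal sketch, assuming each task's effort falls between an agreed "best
# case" and "worst case". Task names and day ranges are hypothetical.
import random

# (best case, worst case) in days for each task.
tasks = {
    "data model": (3, 8),
    "import pipeline": (5, 15),
    "reporting UI": (8, 20),
}

def simulate_totals(tasks, runs=10_000):
    """Draw each task from a triangular distribution over its range and
    return the sorted project totals across all simulation runs."""
    totals = []
    for _ in range(runs):
        total = 0.0
        for best, worst in tasks.values():
            # Assume the most likely value sits midway between the bounds.
            total += random.triangular(best, worst, (best + worst) / 2)
        totals.append(total)
    return sorted(totals)

totals = simulate_totals(tasks)
p10 = totals[int(0.10 * len(totals))]  # optimistic end of the "reasonable" range
p90 = totals[int(0.90 * len(totals))]  # pessimistic end of the "reasonable" range
print(f"Reasonable project range: {p10:.0f} to {p90:.0f} days")
```

Any distribution could stand in for the triangular one here; the point is that once the probability terms are agreed upon, combining estimates becomes mechanical rather than a matter of opinion.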
The absence of a system for recording estimates with probabilities hinders both the quality of a team’s estimates and the ability of decision-makers to use those estimates. Good decision-makers understand how to work with probabilities, including when and how to apply cost, schedule, and performance controls, and what they can expect to accomplish with them.
Without these, however, our team and decision-makers couldn’t understand the risks they were taking or the choices available to them.
Put Experience in its Place
Finally, the team could have agreed on how prior experience would inform estimation, thereby aligning everyone’s expectations and helping to manage the influence of their biases along the way. Agreeing on the role of experience starts estimation off on the right foot, and managing biases helps keep it on track.
Some of the best descriptions of important biases, how to recognize them, and especially how to limit their influence on decision-making can be found in Daniel Kahneman’s excellent book, Thinking, Fast and Slow. Other, more casual reads on the role of cognitive biases in decision-making include Annie Duke’s How to Decide and Thinking in Bets.
Because we didn’t take these steps, our team’s biases led first to underestimation (“this will be easy, since we’ve already done it once”) and then to overestimation (“this will be hard, since the last attempt was hard”). This oversight also prevented us from leveraging the team’s experience to resolve familiar problems ahead of time.
A Fighting Chance
We can’t eliminate uncertainty and bias; they will always play a role. When we consistently test the assumptions behind our estimates, however, we can manage these influences. This permits us to estimate well enough to ensure projects are scoped correctly and, in turn, provide teams and decision-makers with a valuable head start.
In the next post, I’ll describe how we estimate now.