
Predicting vs. Forecasting (Part 2)


Yesterday, Hurricane Matthew swept past my home in Boca Raton, Florida.  For the U.S., it caused some property damage, and a few people died because 911 personnel couldn’t reach those who had life-endangering emergencies.  In Haiti, the storm wreaked havoc on that poor nation, and hundreds have died.  ;'(

Weather forecasters make forecasts.  They make predictions, too, but we don’t call it weather predicting; we call it weather forecasting.  What’s the difference?

A prediction is a single outcome of what a future uncertainty looks like.  It ignores the many other possible outcomes, some of which are probable and some of which are improbable.

Forecasting, however, recognizes that there are many possible future outcomes for a given uncertainty.  Some of those outcomes are improbable; some are more probable.

For hurricanes, weather forecasters use the familiar “cone of uncertainty,” which looks like a funnel.  The narrow end of the funnel is the expected path nearest to where the eye of the hurricane currently is.  The wide end of the cone is three to five days away.  Anyone who is familiar with agile estimation is likely familiar with the cone of uncertainty because it works the same way.  Agile teams can pretty accurately predict what their velocity will be in the next sprint, but it’s hard to estimate what they’ll get done three months from now.
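That widening cone is easy to see in a quick simulation.  The sketch below is only illustrative: it assumes a hypothetical team whose sprint velocity varies around 30 story points (the numbers and the triangular distribution are my assumptions, not data from any real team), and it shows how the range of plausible cumulative outcomes grows as the forecast horizon stretches from one sprint to six.

```python
import random

# Hypothetical team: sprint velocity varies around 30 story points.
# Triangular arguments are (low, high, most likely) -- assumed numbers.
def sprint_velocity():
    return random.triangular(20, 40, 30)

# For each planning horizon, simulate the cumulative points completed
# many times and report the central 90% range of outcomes.
for horizon in (1, 3, 6):
    totals = sorted(sum(sprint_velocity() for _ in range(horizon))
                    for _ in range(10_000))
    low = totals[int(0.05 * len(totals))]
    high = totals[int(0.95 * len(totals))]
    print(f"After {horizon} sprint(s): 90% of outcomes fall between "
          f"{low:.0f} and {high:.0f} points (spread: {high - low:.0f})")
```

The spread keeps growing as the horizon lengthens.  That growth is the cone: the near-term forecast is narrow, and the three-month forecast is wide.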

Project managers ought to become skilled at creating project forecasts instead of project predictions.  We may still need to create predictions for schedule and budget for our project sponsors who authorize and fund projects, but the better way to align expectations among all key stakeholders and improve executive decision-making is to make forecasts — not predictions.

If we have to offer predictions — a single budget number for a project, or a single date on which a project will be complete — we ought to at least offer that budget number or calendar date with a confidence level:  “With 90% certainty, the project will cost $800,000 or less, and, with 90% confidence, we will finish the project by March 31.”

When we share predictions with calculated confidence levels, we implicitly acknowledge that the prediction may not come to pass, and we communicate how likely that is.  If a project sponsor demands greater assurance that the project will be done, we can offer other, more confident predictions (which naturally cost more money and take more time).  If a project sponsor wants to shrink the budget and/or schedule, we can do that too, and then share the risk that the budget and schedule will fail using easily understood probabilities.
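One way to produce confidence-qualified numbers like these is a simple Monte Carlo simulation over the project’s cost estimates.  The sketch below is a minimal illustration, not a real estimating model: the work packages, their three-point estimates, and the choice of a triangular distribution are all assumptions.

```python
import random

# Hypothetical three-point cost estimates per work package:
# (optimistic, most likely, pessimistic) -- illustrative numbers only.
work_packages = [
    (50_000, 80_000, 150_000),
    (120_000, 180_000, 300_000),
    (200_000, 250_000, 400_000),
    (90_000, 130_000, 220_000),
]

def one_total_cost():
    """Draw one possible total project cost from triangular distributions."""
    return sum(random.triangular(low, high, mode)
               for (low, mode, high) in work_packages)

trials = sorted(one_total_cost() for _ in range(10_000))

def percentile(values, p):
    """Value below which p percent of the simulated outcomes fall."""
    return values[int(len(values) * p / 100) - 1]

print(f"P50 (coin-flip) budget:  ${percentile(trials, 50):,.0f}")
print(f"P90 (90% confident):     ${percentile(trials, 90):,.0f}")
print(f"P99 (very confident):    ${percentile(trials, 99):,.0f}")
```

The higher-confidence budgets come out larger, which is exactly the trade-off described above: a sponsor can buy more certainty, but it costs more money (and, for schedules, more time).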

Predicting vs. Forecasting (Part 1)

At my 2016 PMI Global Congress presentation next week, I’ll be hinting at the differences between project predicting and project forecasting.

What is a prediction?  And what is a forecast?

A prediction is a projection about the future (that’s true of a forecast, too).  A prediction offers a single outcome for the future, which is unlike a forecast.  I can predict that the Miami Dolphins, the South Florida hometown favorite football team, will fail to secure a playoff spot by the end of the NFL season.  Again.  That is a single outcome (of course, there are only two outcomes possible — either they will, or they won’t).

The Miami Dolphins play their next game on Sunday, at home, against the Cleveland Browns.  Both teams are 0-2.  One website predicts that the Dolphins will win, 28 to 12.6.  I don’t bet, but predicting any team to score a fractional point just sounds wrong to me.

But in this case, is it possible that both teams will score something other than their predicted point totals?  Of course.  There are many other possible outcomes, some of which are plausible and probable (like the Dolphins scoring only 21 points, or maybe 31 points), and some of which are plausible but improbable (like scoring 0 points, or 60 points).

We estimate our projects as if we’re predicting football scores for the upcoming weekend’s game.  We create single-value estimates of the future.  We offer a single predicted outcome even though there are many other possible outcomes.

Worse, no one really knows, exactly, what the predicted outcome represents.  If I say the project will finish in 30 weeks, is that a most likely outcome?  An optimistic outcome?  A pessimistic outcome?  An average outcome?  What does 30 weeks represent?

And whatever it represents, do all the stakeholders (my project sponsor in particular) know what 30 weeks represents?

If I estimate a most likely outcome of 30 weeks, what’s the likelihood that it will be 31 weeks?  Or 35 weeks?  Or 40 weeks?

And what happens if I know that the 30-week estimate represents a most likely outcome (which has about a 50% likelihood of success), but my project sponsor thinks it’s a highly confident estimate representing 95% confidence?  The answer is, I’m misaligned with my sponsor, and the project is at risk of not delivering on time.
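To make that misalignment concrete, here is a minimal sketch with assumed numbers (a hypothetical 25/30/36-week three-point estimate and a triangular distribution, not data from a real project).  It estimates the chance of actually finishing by the “most likely” 30 weeks, and the duration we could commit to with 95% confidence.

```python
import random

# Assumed three-point duration estimate in weeks (optimistic, most likely,
# pessimistic) -- hypothetical numbers for illustration only.
OPTIMISTIC, MOST_LIKELY, PESSIMISTIC = 25, 30, 36

# Simulate many possible project durations from a triangular distribution.
durations = sorted(random.triangular(OPTIMISTIC, PESSIMISTIC, MOST_LIKELY)
                   for _ in range(10_000))

# How likely are we to finish within the "most likely" 30-week estimate?
p_finish_by_30 = sum(d <= MOST_LIKELY for d in durations) / len(durations)

# What duration could we commit to with 95% confidence?
p95_duration = durations[int(len(durations) * 0.95) - 1]

print(f"Chance of finishing by week 30: {p_finish_by_30:.0%}")
print(f"95%-confidence finish:          about week {p95_duration:.0f}")
```

Under these assumptions, meeting the 30-week “most likely” estimate is roughly a coin flip, while a 95%-confidence commitment lands several weeks later.  A sponsor who hears “30 weeks” and assumes 95% confidence is planning around a number the forecast says is unlikely to hold.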

Project predictions offered without a stated confidence level are dangerous.  And yet, virtually every project schedule is created from predicted work efforts or activity durations.  And virtually every project is budgeted with a single projected cost for the entire project, even though the actual cost has many possible outcomes.

As PMs, we have to know how to create project predictions effectively when we must, but we can also offer a better way of projecting future project outcomes:  we can forecast them.