Category Archives: Forecasting

Agile Forecasting with SPERT Beta Edition V2

Today I added a new Agile Forecast tab to the forthcoming Version 2 of Statistical PERT Beta Edition. It is very similar to the Agile Forecast tab in SPERT Normal Edition Version 3, which I first introduced last year in Version 2 of SPERT Normal Edition.

What makes the Agile Forecast tab different in the SPERT Beta Edition is that it, obviously, uses the beta distribution to model the uncertainty around what each Scrum sprint (or agile iteration) will accomplish. This makes it possible to model that uncertainty using shape parameters that very nearly create a uniform distribution (in SPERT Beta Edition, this is called a “Guesstimate” of how likely the most likely outcome really is).
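The near-uniform “Guesstimate” idea can be sketched in a few lines of Python. This is not SPERT’s worksheet math; the shape-parameter values below are illustrative assumptions, chosen only to show how parameters near 1.0 flatten the beta curve toward a uniform distribution:

```python
from math import gamma

def beta_pdf(x, a, b):
    """Density of the beta distribution with shape parameters a, b on (0, 1)."""
    const = gamma(a + b) / (gamma(a) * gamma(b))
    return const * x ** (a - 1) * (1 - x) ** (b - 1)

# A "Guesstimate": shape parameters just above 1 produce almost no peak,
# so the density near the middle barely exceeds the density near the edge.
print(beta_pdf(0.5, 1.05, 1.05) / beta_pdf(0.1, 1.05, 1.05))  # ratio near 1: nearly flat

# High confidence in the most likely outcome: a pronounced central peak.
print(beta_pdf(0.5, 8, 8) / beta_pdf(0.1, 8, 8))  # ratio far above 1: strongly peaked
```

The same three-point estimate can therefore imply very different risk pictures depending on how much you trust the most likely value.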

I compared some results between SPERT Normal and SPERT Beta Editions, and found that both editions calculate the same release date, just in different ways. Notably, I used “Medium-Low Confidence” in SPERT Normal, but had to use “Very Low Confidence” in SPERT Beta to obtain a close match of the standard deviation. By default, SPERT Normal uses only 7 subjective phrases for how likely the most likely outcome is, but SPERT Beta uses 10. The two editions treat the subjective phrases differently, even though they share the same word phrases.

Try downloading the latest in-development SPERT Beta Edition Version 2 (currently, version 2.0b) from GitHub and check out the Agile Forecast tab!

Version 3 Released!

On March 10, 2019, I officially released Version 3 of Statistical PERT Normal Edition to the public. This is a terrific upgrade to Statistical PERT because Version 3 adds new and enhanced data visualization features that help you see the uncertainty you’re modeling.

New in Version 3:

  • Row-level Sparklines that show the shape of the implied bell-curve, based upon your 3-point estimate and subjective judgment about the most likely outcome
  • New, tri-colored combo charts that show the bell-curve for the aggregate of all the uncertainties you’ve modeled (in the 1-point entry, 3-point entry, and Mixed entry worksheets)
  • Three side-by-side scenario comparisons on the Agile Forecast tab, making it easier to do sensitivity analysis when choosing different input variables

Row-level Sparklines are useful because they visually show whether the bell-curve is flat (lots of uncertainty), narrow (not much uncertainty), or familiarly bell-shaped (moderate uncertainty). They also indicate whether your 3-point estimate implies skewing to the left or to the right. If the peak of the Sparkline bell-curve is centered, there is no skewing. If the peak is shifted to the left, your uncertainty is skewed to the right, and vice versa. By showing the uncertainty in a Sparkline, you can immediately recognize what kind of uncertainty you’ve modeled, without looking at your input data and mentally deciphering it.
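What a row-level Sparkline conveys can be sketched with the classic PERT approximations (not necessarily SPERT’s exact worksheet formulas); the 3-point numbers below are made up for illustration:

```python
def pert_mean(optimistic, most_likely, pessimistic):
    """Classic PERT expected value: the mode gets 4x weight."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_sd(optimistic, pessimistic):
    """Classic PERT standard deviation: one-sixth of the total range."""
    return (pessimistic - optimistic) / 6

# Right-skewed estimate: the most likely value (12) sits near the optimistic
# end (10), so the long tail of risk extends toward the pessimistic end (30).
o, m, p = 10, 12, 30
print(pert_mean(o, m, p), pert_sd(o, p))  # the mean is pulled above the mode

# Symmetric estimate: mode centered, so mean equals mode and there is no skew.
print(pert_mean(10, 20, 30))
```

In Sparkline terms, the first estimate draws a peak shifted left of center; the second draws a centered peak.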

The tri-colored combo charts are a HUGE enhancement to Statistical PERT. You won’t realize how useful they are until you start to play with them. Their greatest value is that they visually show the kind of uncertainty you’re estimating, which is very helpful when presenting to or negotiating with stakeholders. Imagine showing your project sponsor (who is under-funding your project) a tri-colored combo chart with more than 50% of the area under the curve colored red. You say to your sponsor, “The red area under the curve shows the likelihood your project will fail to meet its goal, given the project constraints we have to work with.” And then you explain to your sponsor how you arrived at this conclusion.

The tri-colored combo chart is a combination of a line chart and a bar chart. The colored area under the curve is actually 100 bars that touch each other. The line that touches the peak of each bar smooths the appearance of the normal curve. The whole combo chart is configurable: change the bar sizes, colors, and much more.
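The 100-bar construction can be sketched in Python using the standard library. The cost model below is a hypothetical stand-in for a worksheet’s inputs, and the 0.1%/99.9% chart bounds are my assumption, not necessarily what the workbook uses:

```python
from statistics import NormalDist

# Hypothetical modeled uncertainty (e.g., a project-cost estimate).
cost = NormalDist(mu=100_000, sigma=15_000)

# Chart the curve between the 0.1st and 99.9th percentiles (an assumed range),
# sliced into 100 equal-width intervals -- one bar per interval.
lo, hi = cost.inv_cdf(0.001), cost.inv_cdf(0.999)
width = (hi - lo) / 100
edges = [lo + i * width for i in range(101)]

# Each bar's height is the probability mass inside its slice of the x-axis.
bars = [cost.cdf(edges[i + 1]) - cost.cdf(edges[i]) for i in range(100)]

print(sum(bars))  # the 100 bars together cover ~99.8% of the area under the curve
```

Coloring is then just a matter of assigning each bar to red, yellow, or green based on which side of a chosen threshold its slice falls.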

To create the tri-colored bar chart, you’ll notice there are a lot of hidden columns to the right of the main worksheet display. Feel free to unhide these hidden columns and explore how the tri-colored bar chart’s data is created.

This summer, I’ll be working to enhance Statistical PERT Beta Edition to give it the same data visualization upgrade as what the Normal Edition now has. I hope to release Version 2 of the Beta Edition by the end of summer 2019.

I’m working with Pluralsight to release a brand-new course to replace my 2015 course that first introduced Statistical PERT to the world. Statistical PERT has changed a LOT over the last four years, so it’s a great time to replace my 2015 course, Easily Estimate Projects Using Statistics and Excel, with my new course, Easily Estimate Projects and Products. This new course will show product owners working in an agile environment how to use Statistical PERT to forecast how many iterations (sprints) it will take to complete a given amount of work off the product backlog. Statistical PERT is a great, easy way for product owners to create feature forecasts.

Look for other, smaller enhancements to Statistical PERT Normal Edition in late 2019 and 2020!

Palisade Risk Conference 2016


I’ll be heading to New Orleans for the first time in my life in a few weeks.  The Palisade Risk Conference 2016 is being held on November 1 & 2, 2016.  I have used Palisade’s DecisionTools Suite since my first year in graduate school (in the MSPM program at George Washington University) back in 2010.  I love the power and utility of @Risk, the Excel add-in program for doing Monte Carlo simulations.  @Risk has helped me create Statistical PERT because I use @Risk simulations to compare against and validate my Statistical PERT approach and results.  Without @Risk, there would be no Statistical PERT.

I liken Statistical PERT to a screwdriver:  simple, easy-to-use, solves a lot of problems, inexpensive (Statistical PERT is free, actually).  By contrast, @Risk is complex, not-as-easy-to-use (but still easy for rudimentary modeling), solves a whole lot more problems, but is expensive.  Whereas everyone has a screwdriver in their junk drawer or garage, not everyone has, say, a miter saw or some other, expensive power tool.

If you’re planning to attend the Risk Conference in New Orleans, hit me up!

Predicting vs. Forecasting (Part 2)


Yesterday, Hurricane Matthew swept past my home in Boca Raton, Florida.  In the U.S., it caused some property damage, and a few people died because 911 personnel couldn’t reach those who had life-endangering emergencies.  In Haiti, the storm wreaked havoc on that poor nation, and hundreds have died.  ;'(

Weather forecasters make forecasts.  They make predictions, too, but we don’t call it weather predicting, we call it weather forecasting.  What’s the difference?

A prediction is a single outcome of what a future uncertainty looks like.  It ignores the many other possible outcomes, some of which are probable, some of which are improbable.

Forecasting, however, recognizes that there are many possible, future outcomes for a given uncertainty.  Some of those outcomes are improbable, some are more probable.

For hurricanes, weather forecasters use the familiar “cone of uncertainty,” which looks like a funnel.  The narrow end of the funnel is the expected path of the hurricane’s eye nearest to where the eye currently is.  The wide end of the cone is three to five days away.  Anyone who is familiar with agile estimation will recognize the cone of uncertainty because it works the same way.  Agile teams can pretty accurately predict what their velocity will be in the next sprint, but it’s hard to estimate what they’ll get done three months from now.

Project managers ought to become skilled at creating project forecasts instead of project predictions.  We may still need to create predictions for schedule and budget for our project sponsors who authorize and fund projects, but the better way to align expectations among all key stakeholders and improve executive decision-making is to make forecasts — not predictions.

If we have to offer predictions — a single budget number for a project, or a single date on which a project will be complete — we ought to at least offer that budget number or calendar date with a confidence level:  “With 90% certainty, the project will cost $800,000 or less, and, with 90% confidence, we will finish the project by March 31.”
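A confidence-qualified prediction like this is just a percentile of a modeled distribution. In this sketch the mean and standard deviation are invented values, chosen so that the 90th percentile lands near the $800,000 figure in the example above:

```python
from statistics import NormalDist

# Hypothetical cost model: mean and standard deviation are assumed inputs,
# not numbers from any real project.
cost = NormalDist(mu=650_000, sigma=117_000)

# The 90th percentile is the "with 90% certainty, cost will be X or less" number.
p90 = cost.inv_cdf(0.90)
print(round(p90))  # lands near $800,000 for these assumed inputs
```

Offering the sponsor the 95th percentile instead of the 90th is then a one-line change, which is exactly what makes forecast-style conversations easy.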

When we share predictions with calculated confidence levels, we implicitly acknowledge that the prediction may not come to pass, and how likely that risk is.  If a project sponsor demands greater assurance that the project will be done, we can offer other, more confident predictions (which naturally cost more money and take more time).  If a project sponsor wants to shrink the budget and/or schedule, we can do that, too, and then share the risk that the budget and schedule will fail, using easily understood probabilities.

Predicting vs. Forecasting (Part 1)

At my 2016 PMI Global Congress presentation next week, I’ll be hinting at the differences between project predicting and project forecasting.

What is a prediction?  And what is a forecast?

A prediction is a projection about the future (that’s true of a forecast, too).  A prediction offers a single outcome for the future, which is unlike a forecast.  I can predict that the Miami Dolphins, the South Florida hometown favorite football team, will fail to secure a playoff spot by the end of the NFL season.  Again.  That is a single outcome (of course, there are only two outcomes possible — either they will, or they won’t).

The Miami Dolphins play their next game on Sunday, at home, against the Cleveland Browns; both teams are 0-2.  One website predicts that the Dolphins will lose, 12.6 to 28.  I don’t bet, but predicting any team to score a fractional point just sounds wrong to me.

But in this case, is it possible that both teams will score something other than their predicted point totals?  Of course.  There are many other possible outcomes, some of which are plausible and probable (like the Dolphins scoring only 21 points, or maybe 31), and some of which are plausible but improbable (like scoring 0 points, or 60).

We estimate our projects like we’re predicting football scores for the upcoming weekend game.  We create single-value estimates of the future.  We offer a single, predicted outcome even though there are many other, possible outcomes.

Worse, no one really knows, exactly, what the predicted outcome represents.  If I say the project will finish in 30 weeks, is that a most likely outcome?  An optimistic outcome?  A pessimistic outcome?  An average outcome?  What does 30 weeks represent?

And whatever it represents, do all the stakeholders (my project sponsor in particular) know what 30 weeks represents?

If I estimate a most likely outcome of 30 weeks, what’s the likelihood that it will be 31 weeks?  Or 35 weeks?  Or 40 weeks?

And what happens if I know that the 30-week estimate represents a most likely outcome (which has about a 50% likelihood of success), but my project sponsor thinks it’s a highly confident estimate representing 95% confidence?  The answer is, I’m misaligned with my sponsor, and the project is at risk of not delivering on time.
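Questions like these become answerable the moment the 30-week figure is attached to a distribution instead of standing alone. In this sketch the duration is modeled as a normal distribution centered on 30 weeks; the 4-week standard deviation is purely an assumption:

```python
from statistics import NormalDist

# Hypothetical duration model: most likely outcome ~30 weeks,
# 4-week standard deviation is an assumed input.
duration = NormalDist(mu=30, sigma=4)

# A symmetric "most likely" estimate is roughly a coin flip:
# about a 50% chance of finishing by 30 weeks.
print(duration.cdf(30))

# Probability of running past 31, 35, and 40 weeks, respectively.
for weeks in (31, 35, 40):
    print(weeks, 1 - duration.cdf(weeks))
```

With numbers like these in hand, the gap between “most likely” and “95% confident” stops being invisible to the sponsor.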

Project predictions without a stated confidence level are dangerous.  And yet virtually every project schedule is created from single-value predictions of work effort or activity duration.  And virtually every project is budgeted with a single projected cost for the entire project, even though the actual project cost has many possible outcomes.

As PMs, we have to know how to create project predictions when we must, but we can also offer a better way of projecting future project outcomes:  we can forecast them.