Monthly Archives: September 2016

Predicting vs. Forecasting (Part 1)

At my 2016 PMI Global Congress presentation next week, I’ll be hinting at the differences between project predicting and project forecasting.

What is a prediction?  And what is a forecast?

A prediction is a projection about the future (that’s true of a forecast, too).  Unlike a forecast, though, a prediction offers a single outcome.  I can predict that the Miami Dolphins, the South Florida hometown favorite football team, will fail to secure a playoff spot by the end of the NFL season.  Again.  That is a single outcome (of course, only two outcomes are possible: either they will, or they won’t).

The Miami Dolphins play their next game on Sunday, at home, against the Cleveland Browns; both teams are 0-2.  One website predicts that the Dolphins will win, 12.6 to 28.  I don’t bet, but predicting that any team will score a fractional point just sounds wrong to me.

But in this case, is it possible that both teams will score something other than their predicted point total?  Of course.  There are many other possible outcomes, some plausible and probable (like the Dolphins scoring only 21 points, or maybe 31), and some plausible but improbable (like scoring 0 points, or 60).

We estimate our projects like we’re predicting football scores for the upcoming weekend game.  We create single-value estimates of the future.  We offer a single, predicted outcome even though there are many other possible outcomes.

Worse, no one really knows, exactly, what the predicted outcome represents.  If I say the project will finish in 30 weeks, is that a most likely outcome?  An optimistic outcome?  A pessimistic outcome?  An average outcome?  What does 30 weeks represent?

And whatever it represents, do all the stakeholders (my project sponsor in particular) know what 30 weeks represents?

If I estimate a most likely outcome of 30 weeks, what’s the likelihood that it will be 31 weeks?  Or 35 weeks?  Or 40 weeks?

And what happens if I know that the 30 weeks estimate represents a most likely outcome (which has about a 50% likelihood of success), but my project sponsor thinks it’s a highly confident estimate representing 95% confidence?  The answer is: I’m misaligned with my sponsor, and the project is at risk of not delivering on time.

Project predictions that don’t state their reliability are dangerous.  And yet, virtually every project schedule is created with predicted work efforts or activity durations.  And virtually every project is budgeted with a single-value, projected cost for the entire project, even though the actual project cost has many possible outcomes.

As PMs, we have to know how to create project predictions effectively when that’s what’s required, but we can also offer a better way of projecting future project outcomes:  we can forecast them.

2016 PMI Global Congress – North America is next week!

This Saturday, I’m flying out to San Diego to present Statistical PERT to Congress attendees on Monday, September 26, 2016 from 3:15PM – 4:45PM.  If you’re planning on attending this year’s Congress, please attend my session to learn more about Statistical PERT.

I attended last year’s PMI Global Congress in Orlando (my first) and was greatly impressed by all the information sharing and networking available to project managers over three days.  It was so valuable that I plan to keep attending the North America Congress on this side of retirement.

In 2017, I intend to look for other venues to present Statistical PERT to people in North America.  If you are a conference organizer looking for speakers, please reach out to me.

SPERT™ Mobile released + Version 1.3 of SPERT™ for Task Estimation

This weekend, I released a new mobile version of Statistical PERT:  SPERT™ Mobile.  This special release makes Statistical PERT easy to use on iPhones and Android smartphones that have Microsoft Excel Mobile installed.  If you have a smartphone but don’t have Excel Mobile, it’s a free install; download it from either iTunes or Google Play.

Also this weekend, I updated Statistical PERT for task duration to Version 1.3.  In this new release, I’ve added a Welcome worksheet with some brief, introductory information, including a link to a new Quick Start guide for use with any Statistical PERT spreadsheet.  In Version 1.3, I’ve also added a pie chart visual to show the confidence interval for a collection of tasks, plus the upper and lower bounds of the task total.

What is Soothsaying?

If you want to try soothsaying your project’s risk register, it might help you to learn a little about what that means.

Below is the published article on soothsaying your risk register (in case you’re not a PMI member and therefore can’t read it on PMI’s sister website).

From – August 2016 theme on Risk

If you could look at your project’s risk register, then peer into a crystal ball and foretell which risks will occur sometime during your project’s execution phase, would that be helpful? And if you could tell your project sponsor, with 90% confidence, how much risk contingency you need to protect your project—would you be interested? You can—by becoming a risk soothsayer.

This doesn’t have anything to do with divination. Let me show you how to soothsay your risk register using a special Microsoft Excel® template to do risk analysis.

Before I do that, though, let’s consider how risk analysis is typically done. Usually, risks are simply put into an Excel template like this one, and, if they’re given at all, risk probabilities and impacts are assigned deterministically. A risk has a “low” or “medium” probability of occurrence, but there’s no definition of what that means (so there can be no alignment around that definition). And risk impacts are “medium,” “high” or “very high,” but what do those mean? Too often, deterministic risk analysis neither brings about stakeholder alignment nor improves project decision making.

Wouldn’t it be better to know that, under existing conditions of uncertainty, you expect up to four of the 15 risks in your risk register to fire (that is, occur), and so you’ll need 44 additional time-units of schedule contingency to protect your project with 90% confidence? But if the sponsor wants less contingency, you can protect your project with 76% confidence if they provide just 33 additional time-units of schedule contingency.

An Example Risk Register
Let’s look at where these numbers and probabilities come from. From there, you can begin soothsaying your own risk register. Let’s start with an example risk register that has 15 identified risks. Rather than just saying each risk has a “medium,” “low” or “very low” probability of occurrence, let’s name the specific probability we want to associate with each subjective term. How about starting with Figure 1 to align all stakeholders’ definitions of risk probability:

Figure 1: Sample definitions of risk probability

Obviously, we can add or remove subjective terms and equate them with different probabilities until the project team and sponsor agree on what they mean. This brings about stakeholder alignment.

Now let’s move from deterministic estimates about risk impact to more probabilistic ones. An easy way to do that is using the Program Evaluation and Review Technique (PERT), which requires a three-point estimate for project uncertainties (minimum, most likely, maximum). Usually, these judgments are made using expert opinion just like with single-value (deterministic) impact estimates, but now everyone gets a sense of how much impact variation exists for each risk. Figure 2 shows an example of a single entry in the risk register for schedule impact using a three-point estimate:

Figure 2: A PERT expected value for risk impact

Notice that the entry calculates the PERT expected value (10) of the risk impact using the PERT formula:

(Minimum + 4(Most Likely) + Maximum) / 6
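That weighted average is easy to verify outside of Excel. Here is a minimal Python sketch of the PERT formula; the three-point estimate in the example is a hypothetical one, not the actual values from Figure 2:

```python
def pert_expected_value(minimum, most_likely, maximum):
    """PERT expected value: a weighted average that gives the most
    likely estimate four times the weight of the two extremes."""
    return (minimum + 4 * most_likely + maximum) / 6

# Hypothetical three-point estimate for one risk's schedule impact:
print(pert_expected_value(5, 10, 15))  # → 10.0
```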

Since there are 15 risks in my example risk register, I’ll make three-point estimates for all of them and sum the PERT expected values for all 15 risks; the sum total is 166 time-units, and the average impact is 11.1 time-units per risk. Figure 3 shows just the last three risks in the risk register, before the PERT sum total and average:

Figure 3: The PERT sum and average for 15 risks

But it’s highly unlikely that all 15 risks will fire on our project, so we don’t need a schedule reserve of 166. If we knew exactly how many risks were going to fire, and which ones they were, then we’d know exactly how much schedule contingency to have on our project. This is where Excel’s binomial functions come in; we’ll use Excel’s built-in statistical functions to soothsay the risk register.

The Binomial Functions in Excel
The binomial probability distribution answers the question, “Up to how many risks will fire on this project?” (I use the verb “fire” to mean that the risk occurs during the execution phase, and now we have to deal with its uncertain, adverse impacts). In Excel, there are two binomial functions: BINOM.DIST (binomial distribution) and BINOM.INV (binomial inverse). Let’s use BINOM.INV to soothsay the risk register.

BINOM.INV requires three arguments:

  • trials are the number of risks in the risk register
  • probability of success is the probability of occurrence for each trial (risk)
  • alpha is how confident we want to be in our soothsaying (usually 75% or more)

Example: Our sample risk register has 15 risks, so the trials argument is 15. We can use Excel’s AVERAGE function to get the average probability of occurrence across all 15 risks, where each risk has an individual occurrence probability of between 5% and 25%. The average risk occurrence is 13.7%, so the probability of success is 13.7%. (Don’t get confused: “Success” in this context means something bad happens to our project—a risk occurs, or fires, during the execution phase). Finally, let’s be 90% confident in the schedule reserve, so the alpha argument is 90%. In Excel, the BINOM.INV function looks like this:

BINOM.INV(15, 0.137, 0.90) = 4

With 90% confidence, we now soothsay that up to four project risks will fire during the project’s execution phase. It might be fewer than four, but I’m ready to accept a 10% chance that five or more risks will fire—an improbable occurrence that, to me (and hopefully to you), is tolerable.
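If you’d like to check the Excel result independently, BINOM.INV can be reproduced with a short function using only Python’s standard library. This is a minimal sketch of the same inverse-binomial logic; Excel’s own implementation may differ in edge cases:

```python
from math import comb

def binom_inv(trials, probability, alpha):
    """Smallest k such that P(X <= k) >= alpha, where
    X ~ Binomial(trials, probability). Mirrors Excel's BINOM.INV."""
    cumulative = 0.0
    for k in range(trials + 1):
        cumulative += comb(trials, k) * probability**k * (1 - probability)**(trials - k)
        if cumulative >= alpha:
            return k
    return trials

print(binom_inv(15, 0.137, 0.90))  # → 4: up to four risks fire, with 90% confidence
print(binom_inv(15, 0.137, 0.76))  # → 3: the 76%-confidence alternative mentioned earlier
```

The second call matches the lower-confidence option from the introduction: three fired risks at roughly 11 time-units each gives the 33-time-unit reserve at 76% confidence.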

So we need to protect our project against up to four risks that we soothsaid will fire during the project’s execution phase. The question now is: Which four? The four risks with the least impact, or the greatest? Or a mix of low-impact and high-impact risks?

Soothsaying the Schedule Reserve
Instead of soothsaying exactly which risks will fire, let’s start by finding the average expected value for all risks; that involves just adding up the PERT expected value for all 15 risks (166), then dividing by the number of risks in the register (15):

166 / 15 = 11 (rounded)

On average, the schedule impact for each risk in our risk register is 11 time-units.

We expect, with 90% confidence, that up to four risks will fire during the project execution phase, and each fired risk has an average schedule impact of 11 time-units. So let’s add 44 time-units to our project schedule to address the schedule risk found in our risk register:

4 fired risks * 11 time-units per risk = 44 time-units for the schedule reserve

Now that we’ve soothsaid schedule risk, we can use the exact same method to soothsay budget risk, too!

Monte Carlo Simulation Comparison
How well does risk soothsaying work? To find out, I used Palisade’s @RISK 7® to create a Monte Carlo simulation model for the example risk register, and compared its results to our soothsaid schedule reserve of 44. I created the simulation model using the same 15 risks, their respective likelihoods of occurrence and three-point estimates. Then I used @RISK’s RiskPERT function, which is a special beta probability distribution, to model the uncertain nature of each risk’s schedule impact. After running the simulation model for 50,000 trials, I compared its results with the schedule reserve that came from risk soothsaying.
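Readers without @RISK can sketch the same kind of simulation with Python’s standard library. Everything below is illustrative: the per-risk probabilities and three-point estimates are placeholders rather than the article’s actual register, and the beta-PERT sampler is a standard approximation of the RiskPERT distribution:

```python
import random

def sample_pert(minimum, most_likely, maximum):
    """Draw one value from a beta-PERT distribution
    (the shape RiskPERT uses for uncertain impacts)."""
    span = maximum - minimum
    alpha = 1 + 4 * (most_likely - minimum) / span
    beta = 1 + 4 * (maximum - most_likely) / span
    return minimum + random.betavariate(alpha, beta) * span

# Hypothetical register: (probability of firing, min, most likely, max impact)
risks = [(0.137, 5, 10, 15)] * 15

def simulate_total_impact(risks):
    """One trial: each risk fires (or not), and fired risks add impact."""
    return sum(sample_pert(mn, ml, mx)
               for p, mn, ml, mx in risks
               if random.random() < p)

trials = sorted(simulate_total_impact(risks) for _ in range(20_000))
reserve_90 = trials[int(0.90 * len(trials))]  # 90th-percentile schedule reserve
```

With inputs resembling the article’s register, `reserve_90` lands in the neighborhood of the soothsaid reserve, though any single run will vary.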

Figure 4 shows what the Monte Carlo simulation results look like at the 90% confidence level:

Figure 4: Monte Carlo simulation of the risk register

The graph shows that the most likely scenario (the mode) is that none of the 15 risks will fire during the project execution phase. But to have 90% confidence that there is enough buffer in the project schedule to handle risks that might fire, the model shows that we need a schedule reserve of 44 time-units—exactly what we soothsaid! Only in 10% of the simulated trials would a reserve of 44 time-units fail to protect the project. (Note that both simulation modeling and risk soothsaying are equally dependent upon the quality of work done during risk assessment, and that results from risk simulation may vary from risk soothsaying.)

Using Microsoft Excel’s built-in statistical functions, it’s easy to be a risk soothsayer. You can soothsay anytime you add, modify or remove risks from your risk register. Risk soothsaying allows project managers to thoughtfully protect the project schedule and budget against known risks, increasing the likelihood of overall project success.

Now try soothsaying using your project’s risk register! (Just be sure to tell your project team and sponsor that what you’re really doing is risk analysis…)

Statistical PERT for Fantasy Football!

This summer, I was in Chicago visiting family and talking with my nephew about fantasy football.  The whole Chicagoland family is really into fantasy football; I’m not.  I watch NFL games on TiVo, skipping commercials and skipping past any game that looks non-competitive.

As I talked with my nephew about how he picks players from his team roster to put into the upcoming game, I realized there was an application for Statistical PERT even with fantasy football!

Essentially, my nephew explained that for each game position (say, team quarterback), he has two or more players on his roster to choose from when deciding whom to put into a game.  He faces the same choice with the other player-positions, too.

The question is, which player on his fantasy team will perform best, given each player’s health, and the real-game opponents each player faces?

This uncertainty can be modeled using a regret table, a tool from statistical decision theory.  A regret table helps you make a reasonable, probabilistic choice:  the one that gives you the least regret among all the choices you have.

With this special adaptation of Statistical PERT, I’ve added a regret table to the example workbook.  The regret table is conditionally formatted so it highlights which player will give you the best point performance at a variety of probabilistic points.  The player who will give you the least regret across many possible outcomes is the player you would put into the game.  That player will give you the least regret, statistically speaking.

So, you enter each player for a given position, model what you think each player is capable of for the upcoming weekend games, and let the SPERT worksheet show you which player will give you the least regret across many uncertain possibilities.
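The least-regret logic can be sketched in a few lines. The player names, probabilistic points, and projected scores below are all made up for illustration, and the SPERT workbook’s actual layout and math may differ:

```python
# Hypothetical projected fantasy points for each player at several
# probabilistic points (pessimistic -> optimistic outcomes).
projections = {
    "Player A": [8, 14, 20, 26],
    "Player B": [5, 15, 24, 30],
    "Player C": [10, 12, 16, 22],
}

def least_regret_pick(projections):
    """At each probabilistic point, regret = best score there minus a
    player's score. Pick the player with the smallest total regret."""
    points = len(next(iter(projections.values())))
    best_at = [max(scores[i] for scores in projections.values())
               for i in range(points)]
    regret = {player: sum(best_at[i] - scores[i] for i in range(points))
              for player, scores in projections.items()}
    return min(regret, key=regret.get)

print(least_regret_pick(projections))  # → Player B
```

Here Player B trails badly in the pessimistic outcome but dominates the rest, so Player B accumulates the least total regret across the range of possibilities.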

Notably, in testing this special Fantasy Football version, I found that the subjective opinion about the mode doesn’t change anything; regardless of how you subjectively feel about the most likely outcome, the spreadsheet will continue to choose the same player (or players, if there’s a tie) as the one who will give you the least regret.

We have a new tagline for anyone using this special version to help you pick your fantasy football players for next week’s game:  “Lose With Less Regret.”   🙂

Here’s the download!