
Palisade Risk Conference 2016


I’ll be heading to New Orleans for the first time in my life in a few weeks.  The Palisade Risk Conference 2016 is being held on November 1 & 2, 2016.  I have used Palisade’s DecisionTools Suite since my first year in graduate school (in the MSPM program at George Washington University) back in 2010.  I love the power and utility of @Risk, the Excel add-in program for doing Monte Carlo simulations.  @Risk has helped me create Statistical PERT because I use @Risk simulations to compare against and validate my Statistical PERT approach and results.  Without @Risk, there would be no Statistical PERT.

I liken Statistical PERT to a screwdriver:  simple, easy-to-use, solves a lot of problems, inexpensive (Statistical PERT is free, actually).  By contrast, @Risk is complex, not-as-easy-to-use (but still easy for rudimentary modeling), solves a whole lot more problems, but is expensive.  Whereas everyone has a screwdriver in their junk drawer or garage, not everyone has, say, a miter saw or some other, expensive power tool.

If you’re planning on attending the Risk Conference in New Orleans, hit me up!

What is Soothsaying?

If you want to try soothsaying your project’s risk register, it might help you to learn a little bit about what that means.

Below is the article on soothsaying your risk register that was published on ProjectManagement.com (in case you’re not a PMI member and therefore can’t read the article on PMI’s sister website).

From ProjectManagement.com – August 2016 theme on Risk

If you could look at your project’s risk register, then peer into a crystal ball and foretell which risks will occur sometime during your project’s execution phase, would that be helpful? And if you could tell your project sponsor, with 90% confidence, how much risk contingency you need to protect your project—would you be interested? You can—by becoming a risk soothsayer.

This doesn’t have anything to do with divination. Let me show you how to soothsay your risk register using a special Microsoft Excel® template to do risk analysis.

Before I do that, though, let’s consider how risk analysis is typically done. Usually, risks are simply entered into an Excel template like this one, and risk probabilities and impacts, if they’re given at all, are single deterministic values. A risk has a “low” or “medium” probability of occurrence, but there’s no definition of what that means (so there can be no alignment around that definition). And risk impacts are “medium,” “high” or “very high,” but what does that mean? Too often, deterministic risk analysis neither brings about stakeholder alignment nor improves project decision making.

Wouldn’t it be better to know that, under existing conditions of uncertainty, you expect up to four of the 15 risks in your risk register to fire (that is, occur), and so you’ll need 44 additional time-units of schedule contingency to protect your project with 90% confidence? But if the sponsor wants less contingency, you can protect your project with 76% confidence if they provide just 33 additional time-units of schedule contingency.

An Example Risk Register
Let’s look at where these numbers and probabilities come from. From there, you can begin soothsaying your own risk register. Let’s start with an example risk register that has 15 identified risks. Rather than just saying each risk has a “medium”, “low” or “very low” probability of occurrence, let’s name the specific probability we want to associate with each subjective term. How about starting with Figure 1 to align all stakeholders’ definitions of Risk Probability:


Figure 1: Sample definitions of risk probability

Obviously, we can add or remove subjective terms and equate them with different probabilities until the project team and sponsor agree on what they mean. This brings about stakeholder alignment.
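
If it helps, that agreement can also be captured in code as a simple lookup. Here is a minimal Python sketch; the terms and percentages below are illustrative stand-ins, not the exact values from Figure 1:

# Illustrative mapping of subjective terms to agreed-upon probabilities.
# Replace these with whatever your project team and sponsor settle on.
risk_probability = {
    "very low": 0.05,
    "low": 0.10,
    "medium": 0.15,
    "high": 0.20,
    "very high": 0.25,
}

print(risk_probability["medium"])   # 0.15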

Now let’s move from deterministic estimates about risk impact to more probabilistic ones. An easy way to do that is using the Program Evaluation and Review Technique (PERT), which requires a three-point estimate for project uncertainties (minimum, most likely, maximum). Usually, these judgments are made using expert opinion just like with single-value (deterministic) impact estimates, but now everyone gets a sense of how much impact variation exists for each risk. Figure 2 shows an example of a single entry in the risk register for schedule impact using a three-point estimate:


Figure 2: A PERT expected value for risk impact

Notice that the entry calculates the PERT expected value (10) of the risk impact using the PERT formula:

(Minimum + 4(Most Likely) + Maximum) / 6
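
If you prefer to check that arithmetic in code rather than in a spreadsheet cell, here is a small Python sketch. The three-point values are made up (they are not the actual numbers behind Figure 2), but they reproduce its expected value of 10:

def pert_expected_value(minimum, most_likely, maximum):
    # Classic PERT expected value: (min + 4 * most likely + max) / 6
    return (minimum + 4 * most_likely + maximum) / 6

print(pert_expected_value(5, 10, 15))   # 10.0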

Since there are 15 risks in my example risk register, I’ll make three-point estimates for all of them and sum the PERT expected values for all 15 risks; the sum total is 166 time-units, and the average impact is 11.1 time-units per risk. Figure 3 shows just the last three risks in the risk register before the PERT sum total and average:


Figure 3: The PERT sum and average for 15 risks

But it’s highly unlikely all 15 risks are going to fire on our project, so we don’t need a schedule reserve of 166. If we knew exactly how many risks were going to fire, and which ones they were, then we’d know exactly how much schedule contingency to have on our project. This is where Excel’s binomial functions come in: we’ll use Excel’s built-in statistical functions to soothsay the risk register.

The Binomial Functions in Excel
The binomial probability distribution answers the question, “Up to how many risks will fire on this project?” (I use the verb “fire” to mean that the risk occurs during the execution phase, and now we have to deal with its uncertain, adverse impacts). In Excel, there are two binomial functions: BINOM.DIST (binomial distribution) and BINOM.INV (binomial inverse). Let’s use BINOM.INV to soothsay the risk register.

BINOM.INV requires three arguments:

  • trials are the number of risks in the risk register
  • probability of success is the probability of occurrence for each trial (risk)
  • alpha is how confident we want to be in our soothsaying (usually 75% or more)

Example: Our sample risk register has 15 risks, so the trials argument is 15. We can use Excel’s AVERAGE function to get the average probability of occurrence across all 15 risks, where each risk has an individual occurrence probability of between 5% and 25%. The average risk occurrence is 13.7%, so the probability of success is 13.7%. (Don’t get confused: “Success” in this context means something bad happens to our project—a risk occurs, or fires, during the execution phase). Finally, let’s be 90% confident in the schedule reserve, so the alpha argument is 90%. In Excel, the BINOM.INV function looks like this:

BINOM.INV(15, 0.137, 0.90) = 4
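
If you work outside of Excel, the same answer comes from any statistics library’s inverse binomial (percent-point) function. For example, this Python snippet using scipy should also return 4:

from scipy.stats import binom

# Smallest k such that P(X <= k) >= 0.90 -- the same definition BINOM.INV uses.
print(binom.ppf(0.90, 15, 0.137))   # 4.0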

With 90% confidence, we now soothsay that up to four project risks will fire during the project’s execution phase. It might be fewer than four, but I’m ready to accept a 10% chance that five or more risks will fire—an improbable occurrence that, to me (and hopefully to you), is tolerable.

So we need to protect our project against up to four risks that we soothsaid will fire during the project’s execution phase. The question now is: Which four? The four risks with the least impact, or the greatest? Or a mix of low-impact and high-impact risks?

Soothsaying the Schedule Reserve
Instead of soothsaying exactly which risks will fire, let’s start by finding the average expected value for all risks; that involves just adding up the PERT expected value for all 15 risks (166), then dividing by the number of risks in the register (15):

166 / 15 = 11 (rounded)

On average, the schedule impact for each risk in our risk register is 11 time-units.

We expect, with 90% confidence, that up to four risks will fire during the project execution phase, and each fired risk has an average schedule impact of 11 time-units. So let’s add 44 time-units to our project schedule to address the schedule risk found in our risk register:

4 fired risks * 11 time-units per risk = 44 time-units for the schedule reserve
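
Put together, the whole recipe fits in a few lines of Python. The function below is just one way to package it (the name and arguments are mine, and scipy’s binom.ppf stands in for Excel’s BINOM.INV):

from scipy.stats import binom

def schedule_reserve(probabilities, pert_expected_values, confidence=0.90):
    # How many risks do we expect to fire at the chosen confidence level?
    n = len(probabilities)
    avg_probability = sum(probabilities) / n
    fired = binom.ppf(confidence, n, avg_probability)   # Excel: BINOM.INV(n, p, alpha)
    # Multiply by the average PERT impact per risk to get the reserve.
    avg_impact = sum(pert_expected_values) / n
    return fired * avg_impact

For 15 risks averaging a 13.7% chance of firing and roughly 11 time-units of impact each, this works out to 4 * 11 = 44 time-units of schedule reserve.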

Now that we’ve soothsaid schedule risk, we can use the exact same method to soothsay budget risk, too!

Monte Carlo Simulation Comparison
How well does risk soothsaying work? To find out, I used Palisade’s @Risk 7® to create a Monte Carlo simulation model for the example risk register, using the same 15 risks, their respective likelihoods of occurrence and their three-point estimates. I used @Risk’s RiskPERT function, which is a special beta probability distribution, to model the uncertain nature of each risk’s schedule impact. After running the simulation model for 50,000 trials, I compared its results with the soothsaid schedule reserve of 44.

Figure 4 shows what the Monte Carlo simulation results look like at the 90% confidence level:


Figure 4 : Monte Carlo simulation of the risk register

The graph shows that the most likely scenario (the mode) is that none of the 15 risks will fire during the project execution phase. But to have 90% confidence that there is enough buffer in the project schedule to handle risks that might fire, the model shows that we need a schedule reserve of 44 time-units, exactly what we soothsaid! Only in 10% of the simulated trials would a reserve of 44 time-units fail to protect the project. (Note that both simulation modeling and risk soothsaying are equally dependent upon the quality of work done during risk assessment, and that results from risk simulation may vary from risk soothsaying).
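
For readers who don’t have @Risk, a rough approximation of that simulation can be built with Python and numpy. This sketch is not Palisade’s implementation, and the three register rows shown are placeholders you would replace with all 15 risks’ probabilities and three-point estimates:

import numpy as np

rng = np.random.default_rng()

# Placeholder risk register rows: (probability, minimum, most likely, maximum).
# A real model would list every risk from the register here.
risks = [
    (0.10, 5, 10, 20),
    (0.25, 8, 12, 25),
    (0.05, 3, 6, 15),
]

def pert_sample(a, m, b, size):
    # Beta-PERT samples (lambda = 4), analogous to a RiskPERT distribution.
    alpha = 1 + 4 * (m - a) / (b - a)
    beta = 1 + 4 * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size)

trials = 50_000
total_impact = np.zeros(trials)
for p, a, m, b in risks:
    fired = rng.random(trials) < p   # does this risk fire in each simulated trial?
    total_impact += fired * pert_sample(a, m, b, trials)

# The schedule reserve needed for 90% confidence is the 90th percentile
# of the simulated total impact.
print(np.percentile(total_impact, 90))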

Conclusion
Using Microsoft Excel’s built-in statistical functions, it’s easy to be a risk soothsayer. You can soothsay anytime you add, modify or remove risks from your risk register. Risk soothsaying allows project managers to thoughtfully protect the project schedule and budget against known risks, increasing the likelihood of overall project success.

Now try soothsaying using your project’s risk register! (Just be sure to tell your project team and sponsor that what you’re really doing is risk analysis…)