What’s your definition of a “Nearly Certain” outcome?

In Statistical PERT, a “nearly certain” outcome is one that has a standard deviation of about 7% of the range between the pessimistic and optimistic outcomes.  What’s so special about 7%?  Why not 5%?  Why not 1%?  Or 10%?

When I developed Statistical PERT last year, I arbitrarily considered a bell-shaped uncertainty that could have 100 possible outcomes (trials).  One hundred is just an easy number to work with, one that you can get your arms around and wrestle with.

If I asked you to distribute 100 hypothetical outcomes for a bell-shaped uncertainty so I could get a sense of your uncertainty surrounding the mean and mode, you would rightfully balk.  Make up 100 hypothetical values for an uncertainty?  Just to distribute them between a minimum point-estimate and a maximum point-estimate?  Just so you can show me what kind of a bell the bell-shaped uncertainty is?

But what if instead I made the task easier by saying, “Show me how 100 hypothetical trials might play out for this uncertainty, but with one caveat:  you can only choose three possible values for each trial:  minimum, most likely, and maximum.”  That would be considerably easier to do, because now you would just be left with figuring out how many times the minimum outcome would occur, how many times the most likely outcome would occur, and how many times the maximum outcome would occur.  You’re just distributing the 3-point estimate you already created.

And then, what if I said, “Show me what a ‘nearly certain’ likelihood looks like for the most likely outcome.”  You only have 100 hypothetical trials to work with, and you can only use one of three possible values to distribute across those 100 trials.

Very likely, you would distribute the 3-point estimate this way:  the minimum outcome would occur one time, the maximum outcome would occur one time, and the most likely outcome would occur 98 times.  If you didn’t assign at least 1 of the 100 trials to each of the minimum and maximum outcomes, you wouldn’t be working with a bell-shaped uncertainty.

Now, open up Excel.  Enter a minimum value for a bell-shaped uncertainty in cell A1.  Put the maximum value in cell A2.  And for the next 98 cells in column A, put in a most likely value for the made-up, bell-shaped uncertainty of your choosing.

What’s the standard deviation for this distribution?  Use =STDEV.P(A1:A100) to find out.  Depending on what you entered for cell values in column A, the standard deviation could be just about anything.
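If you’d rather not open Excel, here is a minimal Python sketch of the same exercise.  It isn’t part of the SPERT template; the 20/50/80 three-point estimate is just a made-up example, and statistics.pstdev plays the role of STDEV.P (the population standard deviation).

```python
import statistics

# A made-up 3-point estimate for a bell-shaped uncertainty
minimum, most_likely, maximum = 20, 50, 80

# The same 100-cell column you built in Excel:
# 1 minimum, 98 most likely, 1 maximum
trials = [minimum] + [most_likely] * 98 + [maximum]

# Population standard deviation, equivalent to Excel's STDEV.P(A1:A100)
print(statistics.pstdev(trials))  # about 4.24 for these values
```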

Finally, divide the standard deviation by the range between the minimum and maximum point-estimates.  What do you get?

A number pretty close to 7%.

If we start with the premise that a most likely outcome is “nearly certain” to occur, and that a nearly certain outcome can be modeled with a 1-98-1 split of 100 hypothetical trials, the resulting standard deviation divided by the range between the minimum and maximum point-estimates will be around 7%.
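To see that the result doesn’t depend much on the particular numbers you chose, here is a short sketch that repeats the 1-98-1 calculation for a few arbitrary, made-up 3-point estimates and divides each standard deviation by its range.

```python
import statistics

def sd_over_range(minimum, most_likely, maximum):
    # 1-98-1 split of 100 hypothetical trials
    trials = [minimum] + [most_likely] * 98 + [maximum]
    return statistics.pstdev(trials) / (maximum - minimum)

# A few arbitrary, made-up 3-point estimates
for estimate in [(20, 50, 80), (0, 40, 100), (10, 60, 130)]:
    print(estimate, f"{sd_over_range(*estimate):.1%}")
```

Each ratio lands near 7% (about 7.1% for the symmetric estimate, and only slightly higher when the most likely outcome sits off-center).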

Don’t like the default SPERT definition of a “nearly certain” outcome?

Then change it.

Statistical PERT is a technique with a lot of flexibility built into it.  Find a standard deviation for 200 or 500 or 1000 trials, if you want.  And use more than just three possible values for each trial, too, if you wish.  The SPERT template is just a template, just a starting place.  You can adapt it to better fit your own needs.  Changing the meaning of a most likely outcome that is “nearly certain” to occur is well within the Statistical PERT framework.
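As one illustration of that flexibility, the sketch below (again using an arbitrary 20/50/80 estimate, not anything from the SPERT template) recomputes the standard-deviation-to-range ratio when “nearly certain” is defined as a 1 / (n−2) / 1 split of 200, 500, or 1000 hypothetical trials instead of 100.

```python
import statistics

def nearly_certain_ratio(minimum, most_likely, maximum, trials=100):
    # One trial at the minimum, one at the maximum,
    # and the rest at the most likely outcome
    values = [minimum] + [most_likely] * (trials - 2) + [maximum]
    return statistics.pstdev(values) / (maximum - minimum)

for n in (100, 200, 500, 1000):
    print(n, f"{nearly_certain_ratio(20, 50, 80, trials=n):.1%}")
```

Stretching the same one-at-each-extreme split over more trials makes “nearly certain” mean a tighter distribution, so the ratio shrinks from about 7% toward 2%; you could just as easily change the split itself, or allow more than three values per trial.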
