
Statistical PERT Comparison with @Risk

Since the very beginning of Statistical PERT, I have used Palisade’s excellent risk analysis Excel add-in, @Risk, to compare Statistical PERT’s probabilistic results with those obtained by @Risk, mostly using the RiskPERT function that comes with it.  RiskPERT is a special form of the beta distribution that models the kind of probability curve traditionally associated with PERT.  I very often gauge Statistical PERT’s accuracy against the results of a Monte Carlo simulation run with @Risk.
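To make that kind of comparison concrete, here is a minimal sketch in Python (my own illustration, not Palisade’s RiskPERT code or Statistical PERT’s internal math; the three-point values are hypothetical) showing how a PERT distribution can be expressed as a scaled beta distribution and its percentiles read off, which is essentially what I compare SPERT’s outputs against:

```python
# Minimal sketch: the PERT distribution expressed as a scaled beta distribution.
# Illustration only -- not Palisade's RiskPERT implementation -- and the
# minimum / most-likely / maximum values below are hypothetical.
from scipy import stats

minimum, most_likely, maximum = 10.0, 15.0, 30.0   # hypothetical 3-point estimate

# Classic PERT-to-beta shape parameters (lambda = 4 weighting on the mode)
span = maximum - minimum
alpha = 1 + 4 * (most_likely - minimum) / span
beta_shape = 1 + 4 * (maximum - most_likely) / span
pert = stats.beta(alpha, beta_shape, loc=minimum, scale=span)

# Percentiles to compare against a SPERT estimate at the same confidence levels
for p in (0.05, 0.25, 0.50, 0.75, 0.95):
    print(f"P{int(p * 100):>2}: {pert.ppf(p):6.2f}")
```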

As I ready the release of Statistical PERT – Normal Edition Version 2 and Statistical PERT – Beta Edition Version 0.6, I wanted to compare the results of these new versions with what I would get using the RiskPERT function.  To make this comparison, I have to set Statistical PERT to use “Medium Confidence” in the Most Likely outcome to obtain the closest match to a RiskPERT distribution curve.

I tested a sample of six different three-point estimates (only one of which implied a symmetrical bell curve; one implied a left-skewed curve, and four implied right-skewed curves), and I was very pleased with the results.

The Normal Edition, of course, extends the implied probability curve to infinity both to the left and to the right; hence, the comparison results with RiskPERT tend to be very different at the extreme ends (1%, 5%, 95% and 99%).  But the Normal Edition performed admirably even though it fits asymmetrical three-point estimates with a symmetrical normal curve.
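To see why the extreme percentiles diverge, here is another rough sketch (again my own illustration, using the same hypothetical three-point estimate and the classic PERT mean and standard deviation formulas, not SPERT’s internal calculations) comparing the tails of a fitted normal curve with those of the bounded PERT beta:

```python
# Rough illustration of the tail divergence: a normal curve built from the
# classic PERT mean and standard deviation has unbounded tails, while the
# PERT beta is pinned to its minimum and maximum.  Values are hypothetical.
from scipy import stats

minimum, most_likely, maximum = 10.0, 15.0, 30.0   # same hypothetical estimate

mean = (minimum + 4 * most_likely + maximum) / 6   # classic PERT mean
sigma = (maximum - minimum) / 6                    # classic PERT standard deviation
normal = stats.norm(loc=mean, scale=sigma)

span = maximum - minimum
pert = stats.beta(1 + 4 * (most_likely - minimum) / span,
                  1 + 4 * (maximum - most_likely) / span,
                  loc=minimum, scale=span)

for p in (0.01, 0.05, 0.95, 0.99):
    print(f"P{int(p * 100):>2}: normal = {normal.ppf(p):6.2f}   pert = {pert.ppf(p):6.2f}")
```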

The Beta Edition, as expected, did very well tracking to RiskPERT even at the extreme ends, and it tended to track more closely to RiskPERT overall than the Normal Edition could.  This, in fact, is the key reason I wanted to develop a Beta Edition of Statistical PERT: to improve the fit of an implied probability curve to a three-point estimate that implies skewed uncertainty.

Overall, the difference between an estimate from @Risk and a SPERT estimate was usually around 2% or less, and very rarely more than 3.5%.  SPERT-Beta’s average difference from RiskPERT was 1.6% across all the three-point estimates I tested, and it was never greater than 2.2% except when dealing with a perfectly symmetrical bell curve.  SPERT-Beta’s kurtosis for a normal curve is slightly lower than the kurtosis used by RiskPERT, which explains why SPERT-Normal did a better job tracking to RiskPERT for a normal bell curve than SPERT-Beta did.

Download an Excel spreadsheet showing a full comparison of @Risk’s RiskPERT function, Statistical PERT – Normal Edition, and Statistical PERT – Beta Edition.  And stay tuned:  these new releases of Statistical PERT arrive on January 2, 2017!


Palisade Risk Conference 2016


I’ll be heading to New Orleans for the first time in my life in a few weeks.  The Palisade Risk Conference 2016 is being held on November 1 & 2, 2016.  I have used Palisade’s DecisionTools Suite since my first year in graduate school (in the MSPM program at George Washington University) back in 2010.  I love the power and utility of @Risk, the Excel add-in program for doing Monte Carlo simulations.  @Risk has helped me create Statistical PERT because I use @Risk simulations to compare against and validate my Statistical PERT approach and results.  Without @Risk, there would be no Statistical PERT.

I liken Statistical PERT to a screwdriver:  simple, easy to use, solves a lot of problems, inexpensive (Statistical PERT is free, actually).  By contrast, @Risk is complex, not as easy to use (though still easy for rudimentary modeling), solves a whole lot more problems, but is expensive.  Whereas everyone has a screwdriver in their junk drawer or garage, not everyone has, say, a miter saw or some other expensive power tool.

If you’re planning on attending the Risk Conference in New Orleans, hit me up!