Statistical PERT is most accurate for uncertainties that are perfectly bell-shaped. But in reality, perfectly shaped, perfectly symmetrical probability curves are rare. Usually, if there is data upon which to draw a probability curve, the curve will have an irregular shape with jagged lines that trend upward, then downward. When estimating uncertainties without data, like estimating a project task, the probability curve usually skews to the right.

Also, Statistical PERT uses the **normal** distribution, whereas PERT is based on a special form of the **beta** distribution. How compatible are these two distribution curves? How much inaccuracy do we accept when using Statistical PERT with an uncertainty that is best modeled by a beta distribution?

First off, because Statistical PERT uses the normal distribution, both the left and right tails of the curve extend to infinity. That means that, in spite of stipulating a minimum and maximum point-estimate for an uncertainty, Statistical PERT will spread the area under the curve beyond those two anchor points. From a practical standpoint, that doesn’t matter much, because the probabilities associated with points below the minimum point-estimate and beyond the maximum point-estimate shrink rapidly toward zero. It’s the area between the minimum and maximum point-estimates that is at issue here.

Let’s suppose we model a skewed bell-shaped curve using this three-point estimate: **5000, 12000, 26000**. The range between the minimum and most likely outcome (7000) is half as much as the range between the most likely outcome and the maximum (14000). How close can a SPERT estimate come to matching what’s possible using a different technique that relies on the beta distribution?
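To make that concrete, here is a minimal Python sketch of the arithmetic involved. It uses the classic PERT mean, (minimum + 4 × most likely + maximum) / 6, and, purely for illustration, a standard deviation of (maximum − minimum) / 6; Statistical PERT's actual "confidence" ratio multipliers come from its published worksheets and are not reproduced here.

```python
import math

def pert_mean(minimum, likely, maximum):
    """Classic PERT expected value: (min + 4*ML + max) / 6."""
    return (minimum + 4 * likely + maximum) / 6

def normal_cdf(x, mean, sd):
    """Cumulative probability of a normal distribution, via math.erf."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

mean = pert_mean(5000, 12000, 26000)   # (5000 + 48000 + 26000) / 6 ~= 13166.67
sd = (26000 - 5000) / 6                # illustrative ratio only, not SPERT's table

# Cumulative probability at the most likely outcome under the normal model.
p_ml = normal_cdf(12000, mean, sd)

# The tails extend past the anchor points, but only a sliver of area lies there.
p_below_min = normal_cdf(5000, mean, sd)

print(round(mean, 2), round(p_ml, 4), round(p_below_min, 4))
```

With these numbers the area below the 5000 minimum is roughly one percent, which illustrates why the infinite tails rarely matter in practice.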

To answer that question, I ran an analysis in Excel and compared SPERT probabilistic estimates between 12000 and almost 28000. (Why 28000? Because SPERT calculations spread the area under the curve to nearly 28000 when I used “Medium confidence” in the most likely outcome. I used “Medium confidence” because it most closely aligns with Palisade’s @Risk simulation add-in program and its RiskPERT function, which models a PERT distribution in Excel.)

What I learned is that **Statistical PERT will come within 2.5% of RiskPERT estimates**, and for estimates that have a cumulative probability of 90% or greater, the difference is closer to just 1%.
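The Excel and @Risk analysis itself isn't reproduced here, but the underlying comparison can be cross-checked in plain Python. The sketch below builds the PERT-beta distribution for the 5000/12000/26000 estimate using the standard PERT shape parameters, then compares its CDF against a normal CDF matched to the beta's own mean and standard deviation (which is close to, but not exactly, SPERT's "Medium confidence" setup). The `beta_cdf` helper and its step count are my own illustration, not SPERT's worksheet math.

```python
import math

# Standard PERT parameterization (lambda = 4) maps a three-point estimate
# onto a beta distribution with these shape parameters.
MIN, ML, MAX = 5000, 12000, 26000
alpha = 1 + 4 * (ML - MIN) / (MAX - MIN)   # 7/3 for this estimate
beta = 1 + 4 * (MAX - ML) / (MAX - MIN)    # 11/3

def beta_cdf(u, a, b, steps=2000):
    """Regularized incomplete beta via composite Simpson's rule.

    Assumes a, b > 1, which holds whenever the most likely value lies
    strictly between the minimum and maximum.
    """
    if u <= 0:
        return 0.0
    if u >= 1:
        return 1.0
    f = lambda t: t ** (a - 1) * (1 - t) ** (b - 1)
    h = u / steps
    total = 0.0
    for i in range(steps):
        lo, hi = i * h, (i + 1) * h
        total += (h / 6) * (f(lo) + 4 * f((lo + hi) / 2) + f(hi))
    return total / (math.gamma(a) * math.gamma(b) / math.gamma(a + b))

def normal_cdf(x, mean, sd):
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# Match the normal curve to the beta's own mean and standard deviation.
mean = MIN + (MAX - MIN) * alpha / (alpha + beta)
var = (MAX - MIN) ** 2 * alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
sd = math.sqrt(var)

for x in range(6000, 26001, 4000):
    u = (x - MIN) / (MAX - MIN)
    print(x, round(beta_cdf(u, alpha, beta), 4), round(normal_cdf(x, mean, sd), 4))
```

Running this shows the two cumulative curves staying within a few percentage points of each other across the whole range, with the gap largest near the most likely outcome and smallest out in the right tail, consistent with the pattern described above.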

**Is that accurate enough?** That depends upon your definition of accuracy. For many estimation scenarios, creating SPERT estimates that are within 1-2% of an estimate obtained using the beta distribution is very likely “good enough.”

To put this kind of accuracy in perspective, think about this in terms of temperature. I like drinking hot tea at 180 degrees Fahrenheit (82 degrees Celsius). If SPERT estimated the temperature of water at 180 degrees, but the actual temperature was 176 degrees (80 degrees Celsius), would it matter? Would the SPERT estimate be “good enough” for planning purposes?

For me, the SPERT estimate would be accurate enough. My hot tea would be hot. I probably wouldn’t notice a difference of 4 degrees Fahrenheit (2 degrees Celsius). If I thought such a difference between the SPERT estimate and the ideal estimate would be important, then I would need to use a different estimation technique than Statistical PERT for my estimation needs.

Have a look at the spreadsheet, below. It shows the differences between SPERT estimates for different values of *X* compared with Monte Carlo simulation results using Palisade’s @Risk program and the RiskPERT function, where the range between the minimum point-estimate and the most likely outcome is half as much as the range between the most likely outcome and the maximum point-estimate.
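For readers without @Risk, the simulation side of that comparison can be sketched with Python's standard library alone. A PERT draw is just a draw from the corresponding beta distribution, and a beta variate can be generated as a ratio of two gamma variates. This mirrors what a RiskPERT simulation produces, but the function below is my own sketch, not Palisade's API.

```python
import random

def pert_sample(rng, minimum, likely, maximum):
    """One draw from a PERT distribution via its beta equivalent.

    Beta(a, b) is generated as Ga / (Ga + Gb), where Ga and Gb are
    gamma variates with shapes a and b (standard PERT, lambda = 4).
    """
    a = 1 + 4 * (likely - minimum) / (maximum - minimum)
    b = 1 + 4 * (maximum - likely) / (maximum - minimum)
    ga = rng.gammavariate(a, 1.0)
    gb = rng.gammavariate(b, 1.0)
    return minimum + (maximum - minimum) * ga / (ga + gb)

rng = random.Random(42)
samples = [pert_sample(rng, 5000, 12000, 26000) for _ in range(200_000)]

# Simulated cumulative probability at the most likely outcome.
p_12000 = sum(s <= 12000 for s in samples) / len(samples)
print(round(p_12000, 3))
```

Evaluating the sampled cumulative probabilities at each value of *X* against the SPERT normal-curve estimates reproduces the same small differences the spreadsheet reports.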

Comparison between SPERT and Beta distribution (2x)