There are bigger problems to wrestle with than the fact that Statistical PERT relies upon the normal curve instead of some other, better-fitting probability distribution. Let’s talk about a few of them.

The first one that comes to mind is **the error in choosing the values for the minimum, most likely, and maximum outcomes** for a bell-shaped uncertainty.

Anytime someone makes a three-point estimate, you have to find out: where did they obtain their three points? Were they obtained from historical records? Are they industry benchmarks? Did they come from looking at other, equivalent projects (or teams, or whatever)? Was parametric estimation in use? Or — more likely — were the three points chosen from the estimator’s own knowledge and expertise in what was being estimated?

In my day-to-day life as an IT project manager, project schedules are almost exclusively built using the project manager’s own knowledge and expertise, along with the expertise of others, such as the development team, other managers, and perhaps someone from the Project Management Office.

Whenever I choose a three-point estimate, I’m at risk of poorly choosing my point estimates. Suppose I look at a programming effort and gauge it to take 24-40-60 hours, but I’ve greatly underestimated by wrongly presuming I’d get a seasoned developer to code that module — instead, I’m getting a junior developer who is new on the job. In that case, SPERT’s 1% or 2% error is truly immaterial if I should have estimated 40-80-120 hours to begin with. Statistical estimation models like SPERT are completely useless with bad input data.
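To see just how badly input error dwarfs model error, here’s a minimal Python sketch comparing the two estimates above. It uses the classic PERT formulas — mean = (min + 4 × most likely + max) / 6 and standard deviation = (max − min) / 6 — as a stand-in illustration, not SPERT’s own spreadsheet technique:

```python
def pert_mean(low, most_likely, high):
    """Classic PERT weighted mean of a three-point estimate."""
    return (low + 4 * most_likely + high) / 6

def pert_std_dev(low, high):
    """Classic PERT standard-deviation approximation."""
    return (high - low) / 6

# The optimistic estimate, wrongly assuming a seasoned developer...
bad_mean = pert_mean(24, 40, 60)      # about 40.7 hours

# ...versus what the estimate should have been for a junior developer.
good_mean = pert_mean(40, 80, 120)    # 80.0 hours

# The error from bad inputs, as a percentage of the original mean.
input_error_pct = (good_mean - bad_mean) / bad_mean * 100

print(f"optimistic mean: {bad_mean:.1f} h")
print(f"corrected mean:  {good_mean:.1f} h")
print(f"input error:     {input_error_pct:.0f}%")
```

The input error here is roughly 97% of the original estimate — two orders of magnitude larger than the 1-2% error introduced by fitting a normal curve.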

Estimates are only as good as the estimator behind them. Whether we’re working with deterministic, single-point estimates, three-point estimates, or estimate ranges, the #1 concern ought to be finding out how the estimates were chosen. And if a three-point estimate was used (in PERT, SPERT, a Monte Carlo simulation, or something else), the key question is, “How did you choose the basis for your estimate?”