
How Probable Are Your “Most Likely” Estimates?

Are you using “most likely” estimates in your project schedule?  If so, how likely are your “most likely” estimates?

This week, ProjectManagement.com published an article I wrote that shares the story of a PMO director who was frustrated with the late delivery of his projects.  Although his teams created “most likely” estimates that, they asserted, could be met with 90% certainty, the reality was that, most of the time, their “most likely” outcomes were too optimistic.

Learn how one PMO director used Statistical PERT Beta Edition to model schedule uncertainty, and see the difference between a “most likely” outcome and a “highly probable” outcome.

https://www.projectmanagement.com/articles/370549/Getting-Beta-with-Statistical-PERT

Annual Planning – Part 3

Suppose you had to create planning estimates for ten tasks.  For simplicity’s sake, let’s suppose all ten tasks are normally distributed, have a standard deviation of 15, that the mean, median, and mode are all 50, and that the range spans between 0 and 100.  Set each task’s planning estimate one standard deviation to the right of the mean, or, simply, at 65.  Per the cumulative normal distribution, each planning estimate then has roughly 84% reliability, meaning about an 84% chance that the task’s actual result will fall at or below the estimate.  So, the project total would be 10 x 65 = 650 (hours, weeks, whatever).
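
As a quick check on that reliability figure, here’s a minimal Python sketch of the same cumulative-normal lookup (Python’s scipy stands in for Excel’s NORM.DIST here):

```python
from scipy.stats import norm

# Probability that a task's actual result lands at or below the
# planning estimate of 65, given a normal(mean=50, sd=15) task.
reliability = norm.cdf(65, loc=50, scale=15)
print(f"Reliability of a 65 estimate: {reliability:.1%}")  # ~84.1%
```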

If the first task finishes at 50 against a planning estimate of 65, then your schedule (or budget, depending on what’s being estimated) has saved 15.  That 15 can be used to offset a different, later task that finishes beyond its planning estimate.  But we expect only one or two tasks to finish later than their planning estimates; most should finish on or before their planning estimates, since our planning estimates have roughly 84% reliability.

Let’s simulate the ten tasks.  I’ll use Palisade’s @Risk Excel add-in to obtain normally-distributed actuals for the ten tasks:

[Table: simulated actuals for the ten tasks]

Of the ten simulated tasks, only one, task #4, exceeded its planning estimate.  The project’s planning estimate for all ten tasks was 650; the ten actual results summed to just 505.  So there were ample savings from the other tasks (task #2’s actual result exactly equaled its planning estimate) to “cover” the late, over-budget task #4.
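
If you don’t have @Risk handy, a few lines of Python run the same kind of experiment.  This sketch isn’t the @Risk model above, so its simulated actuals will differ from the table:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# One simulated project: ten normally distributed task actuals.
# (The 0-100 range clipping is ignored here; it barely matters
# at these parameters.)
actuals = rng.normal(loc=50, scale=15, size=10)
print("Task actuals:", np.round(actuals, 1))
print("Tasks over their 65 estimate:", int(np.sum(actuals > 65)))
print("Project total:", round(float(actuals.sum()), 1), "vs. plan of 650")
```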

We could simulate this hundreds of times and there would be very few instances where the sum of the actual results for the ten tasks exceeds the project estimate of 650.  You might be tempted to think that about 16% of projects should fail their schedule and/or budget if each task in the project was only 84% reliable, but that isn’t correct.  In fact, this theoretical project has a 99.9% probability that the actual results for all ten tasks will sum to 650 or less.  If you’re not familiar with statistics, this is a surprising result.  When tasks have planning estimates that are more than 50% reliable, the project generates a buffer of sorts that can be used to offset the unlikely occurrence that a task finishes later than its planning estimate.
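
The 99.9% figure follows from basic normal theory: the sum of ten independent normal(50, 15) tasks is itself normally distributed, with a mean of 500 and a standard deviation of 15 × √10, about 47.4 (not 10 × 15, because independent variances, not standard deviations, add).  A quick check:

```python
import math
from scipy.stats import norm

mean_sum = 10 * 50           # mean of the ten-task total
sd_sum = 15 * math.sqrt(10)  # ~47.4
p = norm.cdf(650, loc=mean_sum, scale=sd_sum)
print(f"P(ten-task total <= 650) = {p:.2%}")  # ~99.9%
```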

What I’m describing is a theoretical project, though, not a real one.  In tomorrow’s post, I’ll touch on Parkinson’s Law, which explains why even this theoretical project still carries plenty of risk of exceeding its planned schedule and budget.

Annual Planning – Part 2

The ten-project portfolio I’ve been examining (the SPERT example workbook on the download page) has a maximum cost of $6.1M.  That’s the sum of all the maximum values.  If our 3-point estimates are reasonably accurate, then we’d need just over $6M in the annual plan to guarantee fully funding all ten projects, even in the worst case.

But doing that would be a bad idea, because the likelihood of actually needing the full $6M is extremely small.  We’d unnecessarily tie up organizational resources in the project portfolio budget.

What we need is a portfolio budget that has a high likelihood of fulfilling the budgetary needs of all ten projects.  How high of a likelihood?  That’s a judgment call.  Asking for a portfolio budget that has a 95% likelihood of success seems like a good idea.  Or maybe just 90%.  You probably wouldn’t want to go much lower than 90% unless you’re willing to accept that the budget won’t be sufficient, and the resulting impacts of busting the budget aren’t too frightening.

Let’s suppose we want a portfolio budget that has a 95% likelihood of success.  That means that the budget will meet or exceed the actual needs of all ten projects 95% of the time.  (Of course, that statement is a little nonsensical because there is only “one time” we are executing these ten projects — in the next annual cycle.  So I’d be better off saying that we want a portfolio that has a 95% likelihood of meeting or exceeding the actual project needs for all ten projects).

Okay, if I want a portfolio budget that will fully fund all ten projects with 95% confidence, does that mean I need to budget each project with 95% confidence?  On the surface, that might make sense.  If every project has a budget that is 95% certain to meet the needs of the project, then wouldn’t the whole portfolio have a 95% chance of being sufficient?

The short answer is no.  Statistics don’t work that way.  Since the mean and mode of a bell-shaped probability distribution each have about a 50% chance of meeting or exceeding the actual outcome, budgeting at something greater than 50% confidence creates a buffer, a sort of reserve, funded by every instance where a project doesn’t use all of its allocated budget (which should happen with greater than 50-50 probability).

In a portfolio of ten projects, if all of them are budgeted at, say, a 75% confidence planning estimate, then we expect that one-fourth of the time, projects in the portfolio will bust their budgets (some by a little, some by a lot).  However, if projects truly followed bell-shaped outcomes, then nearly half the projects in the portfolio would cost less than either the most likely project cost (the mode) or the expected value of the project (the mean).  Those cost-saving projects should offset the roughly 25% of projects that finish above their 75% confidence planning estimates.  Make sense?
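
Here’s a minimal simulation of that offsetting effect, assuming ten identically sized, normally distributed projects (a deliberate simplification: real portfolios have projects of different sizes with right-skewed costs, so their numbers will differ):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=7)
mu, sigma = 100.0, 20.0       # hypothetical project cost parameters
n_projects, trials = 10, 100_000

# Each project is budgeted at its own 75th percentile.
budget_per_project = norm.ppf(0.75, loc=mu, scale=sigma)
portfolio_budget = n_projects * budget_per_project

# Simulate actual costs and count how often the portfolio budget holds.
actuals = rng.normal(mu, sigma, size=(trials, n_projects))
held = np.mean(actuals.sum(axis=1) <= portfolio_budget)
print(f"Portfolio budget sufficient in {held:.1%} of trials")  # ~98%
```

Under these idealized assumptions the portfolio budget holds in roughly 98% of trials; the point is that 75% confidence per project compounds into much higher confidence for the portfolio as a whole.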

Unfortunately, projects rarely finish sooner than their planned finish dates, and they seldom cost less than their planned budgets.  (A topic for another day is what to do about that, but the short answer is in my Pluralsight course where I discuss the Unified Scheduling Method).

But here’s the happy result of using project planning estimates that are greater than the expected value (the mean):  you only have to create planning estimates with about 75% confidence to create a ten-project portfolio budget that has about 95% confidence.  Put another way, it’s highly unlikely that so many projects will exceed their 75% planning estimates that there’s nothing left in the portfolio budget.  We expect maybe two or three projects to exceed their planning estimates in a ten-project portfolio (25% of 10 is 2.5, right?), but it’s highly improbable that five of ten projects will fail their 75% planning estimates.
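
The “five of ten” intuition can be checked directly with the binomial distribution:

```python
from scipy.stats import binom

# Each project independently busts its 75% planning estimate with
# probability 0.25.
n, p = 10, 0.25
print("Expected number of busts:", n * p)            # 2.5
print("P(5 or more bust):", 1 - binom.cdf(4, n, p))  # ~0.078
```

That works out to roughly 8%, and even in those cases the under-budget projects still offset part of the overage, so the portfolio budget itself fails less often still.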

Let’s continue to look at this in the next post.

Annual Planning – Part 1

In the Statistical PERT example workbook for Annual Planning, I estimated the costs of ten projects in a portfolio of projects.  The smallest project has a most likely cost of just $35,000, while the biggest project has a most likely cost of $1.5M.  Pretty big difference!  And each project is estimated using a PERT-styled, three-point estimate.  This gives a range of possible project costs, and for most of the projects, the probability curve is skewed to the right, meaning that there is a greater likelihood that project costs will be higher than the most likely project cost rather than lower.

The most likely project cost for the entire project portfolio is $4.8M; that’s just the sum of the most likely project costs in Column C.  The expected value of the portfolio is higher, though.  We use the PERT formula to estimate the mean of each project’s probability distribution, and the portfolio result is about $4.9M.  We expect the expected value of the portfolio to be higher than the most likely cost because of the right-side skewing of the ten projects in the portfolio.
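
For reference, the PERT formula is the weighted average (minimum + 4 × most likely + maximum) ÷ 6.  Here it is as a Python sketch applied to a hypothetical right-skewed project (the workbook’s actual ten estimates aren’t reproduced here):

```python
def pert_mean(minimum: float, most_likely: float, maximum: float) -> float:
    """Classic PERT estimate of a distribution's mean."""
    return (minimum + 4 * most_likely + maximum) / 6

# Right-side skew pulls the mean above the most likely cost (the mode).
print(pert_mean(30_000, 35_000, 60_000))  # 38333.33... > 35,000
```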

Now, if we just created a budget based upon either the most likely portfolio cost of $4.8M or the expected cost of $4.9M, we very likely would not have enough funds to pay for all the project-related work in the portfolio.  Why not?  Because, although the mode is at the top of the curve, and the median is somewhere near the top, those points on the probability curve equate to around just 50% of all possible outcomes for the uncertainty.  In other words, budgeting using just the mode or the mean will lead to about a 50-50 chance of success.  Do you want to be right half the time?  I don’t!

Using Monte Carlo simulation of the same 10 projects (and 100,000 trials in the simulation), look at all the area under the curve to the right of the $4.83M mean:

[Histogram: Monte Carlo simulation of the ten-project portfolio (100,000 trials), with the area under the curve to the right of the $4.83M mean highlighted]

Budgeting money to equal just the mode or just the mean is a recipe for disaster.

What we need is a budget that is very unlikely to be exceeded by the actual costs of the projects, but that isn’t more than what we really need to get the projects done, either.  We don’t want to tie up organizational reserves unnecessarily by putting too much money in the project portfolio budget, but we don’t want too little in the budget, either.

This is where confidence intervals come into play, along with high-probability planning estimates.  At the end of the day, the portfolio budget ought to have a high probability of meeting the needs of all ten projects.  That figure, whatever it is, should have a high likelihood of leading to budget success.

New Statistical PERT for Annual Planning template!

Yesterday, I published a new Statistical PERT template for Annual Planning.  This is the fifth SPERT template in the Statistical PERT family.  What makes this template different from the others?

With each Statistical PERT template, the same SPERT technique is used:  enter a 3-point estimate, render a subjective opinion about how likely the most likely outcome really is, and choose a SPERT estimate that best fits your circumstances and willingness to accept risk.

But the goal for revenue is to hope for a maximum result while guarding against a minimum result.  That’s different from an expenses goal of hoping for a minimum result while guarding against a maximum result.  The two goals are, in a sense, opposites of each other — but both goals can utilize SPERT.  They just do that a little differently, which is why there is a SPERT template for revenue forecasting, and another SPERT template for expense forecasting.

With the new annual planning template, the goal is to forecast how much a portfolio of projects may cost.  To do that, an estimator first has to determine how many projects are part of the portfolio, and then estimate how costly each project in the portfolio might be.  Once that work is done, the estimator can use the new Statistical PERT template for annual planning to find a confidence interval for the entire portfolio.

Why do that?

By finding the probabilistic cost of the entire project portfolio, managers can make budgetary plans to secure enough funds that every project in the portfolio can be completed without running out of money, and without over-budgeting, either.

My next few blog posts will discuss this idea in greater detail.  For now, download Statistical PERT for Annual Planning and examine how a portfolio of projects can be estimated with ease and confidence.

Use Statistical PERT on your iPhone or iPad

Did you know that Microsoft Excel for iPhone and iPad users is a free download on iTunes?  It is!

The current version of Microsoft Excel for iPhone/iPad users is 1.13.1.  And while the mobile version is understandably not as robust and full-featured as the regular desktop version, the mobile version is still amazingly good.  Best of all, the mobile version of Excel includes the built-in statistical functions on which Statistical PERT is based (NORM.DIST and NORM.INV).

Try downloading an example workbook from the Statistical PERT website using your iPhone or iPad.  (Be sure to download and install Excel on your iPhone/iPad first.)  Then, open the SPERT download file in your mobile version of Excel, and start estimating!

Here’s what Statistical PERT looks like on my iPhone 5S (with a few columns hidden so I can make Planning Estimates and see their probabilities):

[Screenshot: Statistical PERT in Excel on an iPhone 5S]

SPERT v RiskPERT (Part 3)

Let’s examine the difference between a Statistical PERT estimate using the normal distribution and a PERT estimate (which is a special form of the beta distribution) using the RiskPERT function in Palisade’s @Risk Excel add-in.  Only this time, let’s look at an even more skewed probability curve.  This time, the range between the most likely outcome (12000) and the maximum point-estimate (40000) will be four times greater than the range between the minimum point-estimate (5000) and the most likely outcome.
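
Here’s a rough Python recreation of this comparison (a sketch, not the spreadsheet below).  It uses the standard PERT-beta parameterization for the RiskPERT side and, for the SPERT side, a normal curve with the PERT mean and the common (maximum - minimum) / 6 standard deviation, so the exact “Medium confidence” SPERT numbers may differ somewhat:

```python
from scipy.stats import beta, norm

a, m, b = 5000, 12000, 40000       # min, most likely, max

# Standard PERT-beta shape parameters.
alpha = 1 + 4 * (m - a) / (b - a)  # 1.8
bta = 1 + 4 * (b - m) / (b - a)    # 4.2

pert_mu = (a + 4 * m + b) / 6      # 15,500
sd_approx = (b - a) / 6            # rule-of-thumb PERT sigma

for x in range(12000, 40001, 4000):
    p_beta = beta.cdf(x, alpha, bta, loc=a, scale=b - a)
    p_norm = norm.cdf(x, loc=pert_mu, scale=sd_approx)
    print(f"x={x:>6}  beta={p_beta:.3f}  normal={p_norm:.3f}  "
          f"diff={abs(p_beta - p_norm):.3f}")
```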

If you crack open the spreadsheet, below, you’ll see that, at worst, SPERT estimates and RiskPERT estimates differ by up to 5%.  The gap is widest around the top of the curve (naturally), and it shrinks as you approach the right-side tail.  At the 80th percentile, the difference is around 3%, and at the 90th percentile, the difference is less than 2%.

I don’t advise using Statistical PERT with the normal distribution (a new edition of SPERT using the beta distribution is forthcoming) where the range on one side of the curve is more than four times greater than the range on the other side.  To me, when you approach a difference of 5% or more, this technique may give misleading estimates, depending on where along the probability curve you’re looking.  When I release the first stable version of Statistical PERT – Beta Edition this winter, the new edition, using the beta distribution, will do a better job modeling skewed uncertainties.

Another point I want to make:  there is nothing sacred about the RiskPERT results.  They provide one probability shape which may or may not be a good fit for the uncertainty you’re trying to model.  Don’t think that RiskPERT estimates are “right” and any deviation from them is “wrong.”  RiskPERT is one way to model risk.  Statistical PERT, using the normal distribution, is another way.  Whether one is better than the other is arguable, and it mostly depends upon hard-to-identify properties of the uncertain outcome.

All that said, if your uncertainty has bell-shaped properties, and the skewing is no worse than 4x to the left or to the right, then Statistical PERT will give good results, provided you can accept the estimation errors that come from using a normal probability curve in place of a different probability distribution.

Comparison between SPERT and Beta distribution (4x)

SPERT v RiskPERT (Part 2)

In the first post of this series, I compared Statistical PERT with @Risk’s RiskPERT function, to see how close SPERT estimates, which use the normal distribution, come to PERT estimates using the beta distribution.  When I compared a skewed uncertainty where the range between the minimum point-estimate and the most likely outcome was half as much as the range between the most likely outcome and the maximum point-estimate, the SPERT estimates came within 2.5% of RiskPERT estimates.

For this post, I made a similar comparison, but this time I made the bell-shaped uncertainty even more skewed.  This time, the range between the most likely outcome and the maximum point-estimate was three times greater than the range between the minimum point-estimate and the most likely outcome.  The three-point estimate I used was:  5000-12000-33000.

When I ran the same analysis for this comparison scenario, I learned that SPERT estimates came within 4% of RiskPERT estimates.  The difference was most pronounced between the 50th and 75th percentile.  When the estimates were around the 80th percentile and above, the differences shrank to around 2.5% between SPERT and RiskPERT, and when the estimates were above the 90th percentile, the differences shrank to around 1.5%.

Have a look at this spreadsheet!

Comparison between SPERT and Beta distribution (3x)

SPERT v. RiskPERT

Statistical PERT will be impeccably accurate for uncertainties that are perfectly bell-shaped.  But in reality, perfectly shaped, perfectly symmetrical probability curves are rare.  Usually, if there is data upon which to draw a probability curve, the curve will have an irregular shape with jagged lines that first trend upwards, then downwards.  When estimating uncertainties without data — like estimating a project task — there is usually a tendency for the probability curve to be skewed to the right.

Also, Statistical PERT uses the normal distribution, whereas PERT is based on a special form of the beta distribution.  How compatible are these two distribution curves?  How much inaccuracy do we accept using Statistical PERT with an uncertainty that is best modeled by using a beta distribution?

First off, because Statistical PERT uses the normal distribution, both the left- and right-side tails of the curve extend to infinity.  That means that, in spite of stipulating a minimum and maximum point-estimate for an uncertainty, Statistical PERT will spread the area under the curve beyond those two anchor points.  From a practical standpoint, that doesn’t matter much, because the probabilities associated with points before the minimum point-estimate and beyond the maximum point-estimate are very small, then very, very small, then infinitesimally small.  It’s the area between the minimum and maximum point-estimates that’s at issue here.

Let’s suppose we model a skewed bell-shaped curve using this three-point estimate:  5000, 12000, 26000.  The range between the minimum and most likely outcome (7000) is half as much as the range between the most likely outcome and the maximum (14000).  How close can a SPERT estimate come to matching what’s possible using a different technique that relies on the beta distribution?

To answer that question, I ran an analysis in Excel and compared SPERT probabilistic estimates between 12000 and almost 28000.  (Why 28000?  Because SPERT calculations spread the area under the curve to nearly 28000 when I used “Medium confidence” in the most likely outcome; I used “Medium confidence” because that will most closely align with Palisade’s @Risk simulation add-in program using the RiskPERT function, which models a PERT distribution in Excel).

What I learned is that Statistical PERT will come within 2.5% of RiskPERT estimates, and for estimates that have a cumulative probability of 90% or greater, the difference is closer to just 1%.

Is that accurate enough?  That depends upon your definition of accuracy.  For many estimation scenarios, creating SPERT estimates that are within 1-2% of an estimate obtained using the beta distribution is very likely “good enough.”

To put this kind of accuracy in perspective, think about this in terms of temperature.  I like drinking hot tea at 180 degrees Fahrenheit (82 degrees Celsius).  If SPERT estimated the temperature of water at 180 degrees, but the actual temperature was 176 degrees (80 degrees Celsius), would it matter?  Would the SPERT estimate be “good enough” for planning purposes?

For me, the SPERT estimate would be accurate enough.  My hot tea would be hot.  I probably wouldn’t notice a difference of 4 degrees Fahrenheit (2 degrees Celsius).  If I thought such a difference between the SPERT estimate and the ideal estimate would be important, then I would need to use a different estimation technique than Statistical PERT for my estimation needs.

Have a look at the spreadsheet, below.  It shows the differences between SPERT estimates for different values of X compared with Monte Carlo simulation results using Palisade’s @Risk program and the RiskPERT function, where the range between the minimum point-estimate and the most likely outcome is half as much as the range between the most likely outcome and the maximum point-estimate.

Comparison between SPERT and Beta distribution (2x)


SPERT-Beta Development Release ‘A’ available

Well, this past week, instead of blogging, I spent a huge chunk of my time working on the next edition of Statistical PERT:  Statistical PERT – Beta, which will use MS Excel’s built-in beta distribution functions to easily create probabilistic estimates with greater accuracy and for a greater number of uncertainties, even those that are too skewed for the normal-distribution edition of Statistical PERT.
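
To give a flavor of the idea, here’s a hedged Python sketch of a beta-PERT probabilistic estimate; Excel’s built-in BETA.DIST and BETA.INV functions play the equivalent roles in the SPERT-Beta workbook, though the workbook’s exact parameterization may differ:

```python
from scipy.stats import beta

a, m, b = 5000, 12000, 40000  # a severely skewed three-point estimate

# Standard PERT-beta shape parameters.
alpha = 1 + 4 * (m - a) / (b - a)
bta = 1 + 4 * (b - m) / (b - a)

# An 85%-confidence planning estimate from the beta-PERT curve
# (the analogue of Excel's BETA.INV).
estimate = beta.ppf(0.85, alpha, bta, loc=a, scale=b - a)
print(f"{estimate:,.0f}")
```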

I’m really excited about this next edition of Statistical PERT.  This next edition will be a companion to, not a replacement for, the original edition of Statistical PERT using the normal distribution.  Here’s how I characterize the two editions:

Statistical PERT – Normal

  • Very easy to use
  • Very easy to modify
  • Very flexible
  • Easy to understand and explain
  • Can easily use the SPERT concepts without a pre-built SPERT template
  • Works with any bell-shaped uncertainty
  • Works with slightly-to-moderately skewed uncertainties
  • Accuracy is between 95% and 99.9%

Statistical PERT – Beta

  • Very easy to use
  • Not as easy to modify
  • Very flexible
  • Not as easy to understand or explain
  • Cannot easily use the SPERT concepts without a pre-built SPERT template
  • Works with any bell-shaped uncertainty
  • Works with even very severely skewed uncertainties
  • Works with non-bell-shaped uncertainties
  • Achieves very high accuracy (not sure exactly how accurate yet, but it is likely that accuracy will be > 99% for the uncertainties it is designed to estimate)

Want to learn more about the next edition of Statistical PERT?  Visit the new Statistical PERT Beta Blog to learn more and download a development release of the SPERT-Beta Excel workbook.  SPERT-Beta will only be available for downloading on the Statistical PERT Beta Blog until the stable Version 1 is released (this coming winter, 2016).