Monthly Archives: October 2015

Parkinson’s Law

Before returning to annual planning, let’s address Parkinson’s Law.

I wasn’t formally familiar with Parkinson’s Law until I began my graduate studies at George Washington University.  (By the way, don’t confuse Parkinson’s Law with Parkinson’s disease; they are completely different!)  There, in one of my classes, I learned that there was an actual name for a phenomenon I already knew intuitively and through experience:  whatever the planning estimate is for a given task, there is very little chance during real-life execution of the project that the task will finish sooner than expected; instead, it will most likely finish on time or late.

It doesn’t matter much how generously, how conservatively, how guardedly one estimates.  If I estimate a task with 80% reliability, then my estimate is on the right side of the normal curve, somewhere on the downward slope.  The mean sits at or very near the top of the normally distributed curve, so, theoretically, about half the time the task should finish sooner than the mean / most likely value.
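The idea can be sketched with a quick calculation.  The task numbers here are hypothetical; Python’s `statistics.NormalDist` plays the role of Excel’s NORM.INV and NORM.DIST functions:

```python
from statistics import NormalDist

# A hypothetical task: mean of 40 hours, standard deviation of 8 hours.
task = NormalDist(mu=40, sigma=8)

# An 80%-reliable planning estimate (Excel: NORM.INV(0.8, 40, 8)) lands
# on the downward slope to the right of the mean:
estimate_80 = task.inv_cdf(0.80)   # ~46.7 hours

# And, in theory, the task beats the mean about half the time
# (Excel: NORM.DIST(40, 40, 8, TRUE)):
p_at_or_under_mean = task.cdf(40)  # 0.5
print(estimate_80, p_at_or_under_mean)
```

In theory, that 80% estimate should be beaten routinely; Parkinson’s Law is why it usually isn’t.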

But that never really happens!

What happens instead is that people get lazy, people procrastinate, people charge hours spent on other projects’ tasks rather than properly coding time to the task that was planned with 80% reliability.  For all these reasons and more, the task finishes on time (in the nick of time) or it finishes late.

That’s the phenomenon encapsulated by Parkinson’s Law.  Simply stated, Parkinson’s Law recognizes that work expands to fill the time allotted for it.  If I have an eight-hour task assigned to me, it will take me eight hours to complete.  If that task estimate is shaved to only seven or six hours, it will take me seven or six hours to complete.  If I’m given ten or twelve hours, it will take me ten or twelve hours (on time, of course, in each instance!).  What’s different?  The intensity of my effort, the methods and techniques and tools I use, the innovation I bring, the amount of distraction I allow.  Those things alter the way I work, so I generally finish the task in whatever amount of time I’m allowed.

As it pertains to annual planning, if all the planning estimates are made with 75% reliability, the whole portfolio ought to have a very high probability of success — much higher than 75%.  But projects are just collections of tasks, and Parkinson’s Law can affect them both.

Next week, we’ll continue our discussion on annual planning.

Annual Planning – Part 3

Suppose you had to create planning estimates for ten tasks.  For simplicity’s sake, let’s suppose all ten tasks are normally distributed, with a mean, median and mode of 50, a standard deviation of 15, and outcomes that effectively span between 0 and 100.  Set each task’s planning estimate one standard deviation to the right of the mean, or, simply, 65.  (One standard deviation above the mean corresponds to roughly the 84th percentile of the normal curve, so each estimate is about 84% reliable.)  The project total would then be 10 x 65 = 650 (hours, weeks, whatever).

If the first task finishes in 50, and the planning estimate was 65, then your schedule (or budget, depending on what’s being estimated) has banked 15.  That 15 can be used to offset a different, later task that finishes beyond its planning estimate.  But we expect only one or two of the ten tasks to finish later than their planning estimates; most should finish on or before them, since each planning estimate has roughly 84% reliability.

Let’s simulate the ten tasks.  I’ll use Palisade’s @Risk Excel add-in to obtain normally-distributed actuals for the ten tasks:

[Image: ten_tasks — simulated actuals for the ten tasks]

Of the ten simulated tasks, only one, task #4, exceeded its planning estimate.  The project’s planning estimate for all ten tasks was 650; the ten actuals totaled only 505.  So there were ample savings from the other tasks (besides #4, and besides #2, whose actual result equaled its planning estimate) to “cover” the late, over-budget task #4.

We could simulate this hundreds of times, and there would be very few instances where the project estimate of 650 would be exceeded by the sum of the ten actual results.  You might be tempted to think that roughly 16% of projects should fail their schedule and/or budget if each task is only about 84% reliable, but that isn’t correct.  The sum of ten independent tasks is itself normally distributed, with a mean of 500 and a standard deviation of only 15 × √10 ≈ 47; the estimate of 650 sits more than three standard deviations above that mean.  So this theoretical project has a 99.9% probability that the actual results for all ten tasks will total the planning estimate of 650 or less.  If you’re not familiar with statistics, this is a surprising result.  When tasks have planning estimates that are more than 50% reliable, the project generates a buffer of sorts that can offset the unlikely occurrence of a task finishing later than its planning estimate.
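As a sketch (assuming ten independent, normally distributed tasks as described above), the 99.9% figure can be checked both analytically and by simulation:

```python
import random
from statistics import NormalDist

# Ten hypothetical tasks, each normal with mean 50 and standard deviation 15.
# Planning estimate per task: 65 (one standard deviation above the mean).
n_tasks, mu, sigma, estimate = 10, 50, 15, 65

# Analytically, the sum of ten independent tasks is itself normal,
# with mean 10*50 = 500 and standard deviation 15*sqrt(10) ~= 47.4.
portfolio = NormalDist(mu=n_tasks * mu, sigma=sigma * n_tasks ** 0.5)
p_within_estimate = portfolio.cdf(n_tasks * estimate)  # ~0.999

# Monte Carlo check: simulate the ten actuals many times.
random.seed(1)
trials = 100_000
hits = sum(
    sum(random.gauss(mu, sigma) for _ in range(n_tasks)) <= n_tasks * estimate
    for _ in range(trials)
)
print(p_within_estimate, hits / trials)
```

Both approaches agree: the per-task buffers pool together, and the project total almost never exceeds 650.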

What I’m describing is a theoretical project — not a real one, though.  In tomorrow’s post, I’ll touch on Parkinson’s Law, which explains why the theoretical project still has plenty of risk of exceeding its planned schedule and budget.

Annual Planning – Part 2

The ten-project portfolio I’ve been examining (the SPERT example workbook on the download page) has a maximum cost of $6.1M.  That’s the sum of all the maximum values.  If our 3-point estimates are reasonably accurate, then we’d need just over $6M in the annual plan to fully fund all ten projects.

But doing that would be a bad idea, because the likelihood of needing $6M is extremely small.  We’d unnecessarily tie up organizational resources in a project portfolio budget.

What we need is a portfolio budget that has a high likelihood of fulfilling the budgetary needs of all ten projects.  How high of a likelihood?  That’s a judgment call.  Asking for a portfolio budget that has a 95% likelihood of success seems like a good idea.  Or maybe just 90%.  You probably wouldn’t want to go much lower than 90% unless you’re willing to accept that the budget won’t be sufficient, and the resulting impacts of busting the budget aren’t too frightening.

Let’s suppose we want a portfolio budget that has a 95% likelihood of success.  That means that the budget will meet or exceed the actual needs of all ten projects 95% of the time.  (Of course, that statement is a little nonsensical because there is only “one time” we are executing these ten projects — in the next annual cycle.  So I’d be better off saying that we want a portfolio that has a 95% likelihood of meeting or exceeding the actual project needs for all ten projects).

Okay, if I want a portfolio budget that will fully fund all ten projects with 95% confidence, does that mean I need to budget each project with 95% confidence?  On the surface, that might make sense.  If every project has a budget that is 95% certain to meet the needs of the project, then wouldn’t the whole portfolio have a 95% chance of being sufficient?

The short answer is, no.  Statistics don’t work that way.  Since the mode and mean of a bell-shaped probability distribution each have about a 50% chance of meeting or exceeding the actual outcome, budgeting at something greater than 50% creates a buffer, a sort of reserve, funded by every instance where a project doesn’t use all the budget allocated to it (which should happen with greater than 50-50 probability).

In a portfolio of ten projects all budgeted at, say, a 75%-confidence planning estimate, we expect that one-fourth of the projects will bust their budgets (some by a very little, some by a lot).  However, if projects truly followed bell-shaped outcomes, then nearly half the projects in the portfolio would cost less than either the most likely project cost (the mode) or the expected value of the project (the mean).  Those cost-saving projects should offset the roughly 25% of projects that finish above their 75%-confidence planning estimates.  Make sense?

Unfortunately, projects rarely finish sooner than their planned finish dates, and they seldom cost less than their planned budgets.  (A topic for another day is what to do about that, but the short answer is in my Pluralsight course where I discuss the Unified Scheduling Method).

But here’s the happy result of using project planning estimates that are greater than the expected value (the mean):  you only have to create planning estimates with about 75% confidence to create a ten-project portfolio with about 95% confidence.  Put another way, it’s highly unlikely that so many projects will exceed their 75% planning estimates that there’s nothing left in the portfolio budget.  We expect maybe two or three projects in a ten-project portfolio to exceed their planning estimates (25% of 10 is 2.5, right?), but it’s highly improbable that five of ten will fail their 75% planning estimates.
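A rough check of this claim, assuming ten equal-sized, independent, normally distributed projects (a simplification; the workbook’s projects differ in size):

```python
from statistics import NormalDist

z = NormalDist()        # standard normal curve
z75 = z.inv_cdf(0.75)   # ~0.674: the 75th percentile in z-score terms

# Budget each of n equal projects at mean + z75*sigma.  The portfolio total
# is normal with mean n*mu and standard deviation sigma*sqrt(n), so the
# total budget sits z75*sqrt(n) standard deviations above the portfolio mean.
n = 10
portfolio_confidence = z.cdf(z75 * n ** 0.5)
print(portfolio_confidence)  # ~0.98 for ten equal projects
```

For ten equal, independent projects the pooling effect actually overshoots 95%; unequal project sizes (like the $35K-to-$1.5M spread in the example workbook) and right-skewed costs concentrate the risk and pull the figure back toward the ~95% cited here.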

Let’s continue to look at this in the next post.

Annual Planning – Part 1

In the Statistical PERT example workbook for Annual Planning, I estimated the costs of ten projects in a portfolio of projects.  The smallest project has a most likely cost of just $35,000, while the biggest project has a most likely cost of $1.5M.  Pretty big difference!  And each project is estimated using a PERT-styled, three-point estimate.  This gives a range of possible project costs, and for most of the projects, the probability curve is skewed to the right, meaning that project costs are more likely to come in above the most likely cost than below it.

The most likely project cost for the entire project portfolio is $4.8M; that’s just the sum of the most likely project costs in Column C.  The expected value of the portfolio is higher, though.  We use the PERT formula to estimate the mean of each probability distribution, and that result is about $4.9M.  We expect the expected value of the portfolio to be higher than the most likely cost because of the right-side skewing of the ten projects in the portfolio.
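The PERT formula referenced here is the classic weighted average of the three-point estimate.  The sample numbers below are illustrative, not taken from the workbook:

```python
def pert_mean(minimum, most_likely, maximum):
    """Classic PERT estimate of the mean: the most likely value counts four times."""
    return (minimum + 4 * most_likely + maximum) / 6

# A hypothetical right-skewed project estimate:
estimate = pert_mean(35_000, 50_000, 95_000)
print(estimate)  # 55000.0 -- above the mode of 50,000 because of the right skew
```

For a right-skewed estimate like this one, the PERT mean always lands above the mode, which is exactly why the portfolio’s expected value ($4.9M) exceeds its most likely cost ($4.8M).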

Now, if we just created a budget based upon either the most likely portfolio cost of $4.8M or the expected cost of $4.9M, we very likely would not have enough funds to pay for all the project-related work in the portfolio.  Why not?  Because, although the mode is at the top of the curve, and the median is somewhere near the top, those points on the probability curve equate to around just 50% of all possible outcomes for the uncertainty.  In other words, budgeting using just the mode or the mean will lead to about a 50-50 chance of success.  Do you want to be right half the time?  I don’t!

Using a Monte Carlo simulation of the same 10 projects (100,000 trials), look at all the area under the curve to the right of the $4.83M mean:

[Image: Annual_Planning_10_projects — simulated cost distribution for the ten-project portfolio]

Budgeting money to equal just the mode or just the mean is a recipe for disaster.

What we need is a budget that is very unlikely to be exceeded by the actual costs of the projects, but no more than what we really need to get the projects done.  We don’t want to tie up organizational reserves unnecessarily by putting too much money in the project portfolio budget, but we don’t want too little in the budget, either.

This is where confidence intervals come into play, along with high-probability planning estimates.  At the end of the day, the portfolio budget, whatever figure we settle on, ought to have a high probability of meeting the needs of all ten projects and leading to budget success.

New Statistical PERT for Annual Planning template!

Yesterday, I published a new Statistical PERT template for Annual Planning.  This is the fifth SPERT template in the Statistical PERT family.  What makes this template different from the others?

With each Statistical PERT template, the same SPERT technique is used:  enter a 3-point estimate, render a subjective opinion about how likely the most likely outcome really is, and choose a SPERT estimate that best fits your circumstances and willingness to accept risk.

But the goal for revenue is to hope for a maximum result while guarding against a minimum result.  That’s different from an expenses goal of hoping for a minimum result while guarding against a maximum result.  The two goals are, in a sense, opposites of each other — but both goals can utilize SPERT.  They just do that a little differently, which is why there is a SPERT template for revenue forecasting, and another SPERT template for expense forecasting.

With the new annual planning template, the goal is to forecast how much a portfolio of projects may cost.  To do that, an estimator first has to determine how many projects are in the portfolio, and then estimate how costly each project might be.  Once that work is done, the estimator can use the new Statistical PERT template for annual planning to find a confidence interval for the entire portfolio.

Why do that?

By finding the probabilistic cost of the entire project portfolio, managers can make budgetary plans to secure enough funds for every project in the portfolio, so that all projects can be completed without running out of money — and without over-budgeting, either.

My next few blog posts will discuss this idea in greater detail.  For now, download Statistical PERT for Annual Planning and examine how a portfolio of projects can be estimated with ease and confidence.

Use Statistical PERT on your iPhone or iPad

Did you know that Microsoft Excel for iPhone and iPad users is a free download on iTunes?  It is!

The current version of Microsoft Excel for iPhone/iPad users is 1.13.1.  And while the mobile version is understandably not as robust and full-featured as the regular desktop version, the mobile version is still amazingly good.  Best of all, the mobile version of Excel includes the built-in statistical functions on which Statistical PERT is based (NORM.DIST and NORM.INV).

Try downloading an example workbook from the Statistical PERT website using your iPhone or iPad.  (Be sure to download and install Excel on your iPhone/iPad first.)  Then open the SPERT download file in your mobile version of Excel, and start estimating!

Here’s what Statistical PERT looks like on my iPhone 5S (with a few columns hidden so I can make Planning Estimates and see their probabilities):

[Image: SPERT_Mobile_image — Statistical PERT running in Excel on an iPhone 5S]

SPERT v RiskPERT (Part 3)

Let’s examine the difference between a Statistical PERT estimate using the normal distribution and a PERT estimate (a special form of the beta distribution) using the RiskPERT function in Palisade’s @Risk Excel add-in.  Only this time, let’s look at an even more skewed probability curve:  the range between the most likely outcome (12000) and the maximum point-estimate (40000) will be four times greater than the range between the minimum point-estimate (5000) and the most likely outcome.

If you crack open the spreadsheet, below, you’ll see that, at worst, the difference between SPERT estimates and RiskPERT estimates is up to 5%.  The gap is widest around the top of the curve (naturally), and shrinks as you approach the right-side tail.  At the 80th percentile, the difference is around 3%, and at the 90th percentile, the difference is less than 2%.
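A sketch of how the two shapes can be compared, assuming the standard PERT-beta parameterization (the form RiskPERT-style functions use) and a normal approximation whose standard deviation is one-sixth of the range; the SPERT workbook actually lets you choose the standard deviation subjectively, so this is only an approximation:

```python
import random
from statistics import NormalDist, fmean, quantiles

# Three-point estimate from the post: skewed 4x to the right.
lo, ml, hi = 5_000, 12_000, 40_000

# Standard PERT-beta shape parameters (assumed parameterization):
a = 1 + 4 * (ml - lo) / (hi - lo)   # 1.8
b = 1 + 4 * (hi - ml) / (hi - lo)   # 4.2

random.seed(7)
samples = [lo + random.betavariate(a, b) * (hi - lo) for _ in range(100_000)]

pert_mu = (lo + 4 * ml + hi) / 6    # 15,500: the PERT mean
mc_mean = fmean(samples)            # should land near the PERT mean of 15,500
beta_p90 = quantiles(samples, n=10)[-1]   # 90th percentile of the beta-PERT shape
normal_p90 = NormalDist(pert_mu, (hi - lo) / 6).inv_cdf(0.90)
print(round(mc_mean), round(beta_p90), round(normal_p90))
```

With these assumptions, the normal approximation’s 90th-percentile estimate lands within a few percent of the beta-PERT one, consistent with the gap shrinking as you move toward the right tail.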

I don’t advise using Statistical PERT with the normal distribution (a new edition of SPERT using the beta distribution is forthcoming) where the range on one side of the curve is more than four times greater than on the other side.  To me, when you approach a difference of 5% or more, this technique may give estimates that could be misleading, depending on where along the probability curve you’re looking.  When I release the first stable version of Statistical PERT – Beta Edition this winter, the new edition, using the beta distribution, will do a better job modeling skewed uncertainties.

Another point I want to make is that there is nothing sacred about the RiskPERT results.  They provide one probability shape, which may or may not be a good fit for the uncertainty you’re trying to model.  Don’t think that RiskPERT estimates are “right” and any deviation from them is “wrong.”  RiskPERT is one way to model risk.  Statistical PERT, using the normal distribution, is another way.  Whether one way is better than the other is an arguable point, and it mostly depends upon hard-to-identify properties associated with an uncertain outcome.

All that said, if your uncertainty has bell-shaped properties, and the skewing is no worse than 4x greater to either the left or the right, then Statistical PERT will give good results, provided you can accept the estimation error from using a normal probability curve in place of a different probability distribution.

Comparison between SPERT and Beta distribution (4x)