The “L” in LEAPfrog

The “L” in LEAPfrog stands for “Lean.”

“Lean” is a popular business buzzword. Everything is either “agile” or “lean” nowadays. And while I’m a little hesitant about using Lean the buzzword, I’m not at all hesitant about Lean the concept: eliminate waste.

So in LEAPfrog, the approach is to eliminate waste by using critical chain project management concepts, that is, to introduce buffers into the project schedule. What we want to do is eliminate or reduce the waste that arises from Parkinson’s Law and Student Syndrome (among other issues).

To do that, LEAPfrog starts by estimating high-level activities using a probabilistic approach. Since I created Statistical PERT (SPERT), I’m of course going to use SPERT to model the uncertain nature of each project activity.

The uncertainty I’m accounting for is both the normal variation that occurs in any activity, like driving to work each day, and the non-normal variation that occurs when risks arise and adversely affect a project. Every activity has multiple risks that could cause the activity to take longer (sometimes much longer) than what would normally be observed. It’s like a car accident on the interstate when you’re driving to work, and the interstate is bumper-to-bumper as a result.

Now, we could use a normal risk management approach, but that takes time and expertise, neither of which may be available. If they are, fine, use good risk management practices to identify, analyze, and plan for when risks affect your project.

But failing that, at least we can account for the wider variation in activity performance that can occur because of unforeseen events that cause an activity to take much longer than expected. In a high-level activity to procure vendor services, there might be a snag during contract negotiations that causes the process to go from a normal 4 weeks to a non-normal 8 or even 12 weeks. Include this variation when you estimate this high-level activity.

So in LEAPfrog, every activity is assessed as an independent, random variable with many possibilities and probabilities of occurrence. This is easily done using SPERT Beta Edition and the new SPERT Scheduler feature. Model each activity with a 3-point estimate (minimum, most likely, maximum) and a subjective judgment about how likely the most likely outcome really is. The SPERT spreadsheet then offers probabilistic estimates at any probability level you choose. You’ll probably choose estimates that are between 50% and 95% probable, depending on your context, project, and activity.
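To make that concrete, here’s a rough Python sketch of the idea, not the SPERT spreadsheet itself: a PERT-weighted mean from the 3-point estimate, plus a standard deviation scaled by the (maximum minus minimum) range and a subjective-judgment multiplier. The multiplier values below are illustrative assumptions, not SPERT’s actual ratio scale modifiers.

```python
from scipy.stats import norm

def spert_style_estimate(minimum, most_likely, maximum, judgment, probability):
    """Rough, SPERT-style probabilistic estimate (illustrative only)."""
    # PERT-weighted mean of the 3-point estimate
    mean = (minimum + 4 * most_likely + maximum) / 6
    # The stronger your belief in the most likely outcome, the smaller the
    # implied standard deviation. These multipliers are assumptions for the
    # sketch, not SPERT's actual ratio scale modifiers.
    multipliers = {"high": 0.10, "medium": 0.17, "low": 0.25}
    std_dev = (maximum - minimum) * multipliers[judgment]
    # The duration that won't be exceeded with the chosen probability
    return norm.ppf(probability, loc=mean, scale=std_dev)

# A 4-week (20 workday) procurement activity that could stretch to 12 weeks
print(spert_style_estimate(15, 20, 60, "medium", 0.50))  # ~50% reliable estimate
print(spert_style_estimate(15, 20, 60, "medium", 0.90))  # ~90% reliable estimate
```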

But rather than unthinkingly choosing a high-probability estimate, choose a lower-reliability estimate and then build a schedule contingency (a schedule buffer) to account for those times when your low-reliability estimate is exceeded. Later, I’ll explain how to create that schedule buffer.

Why choose a low-reliability estimate, you ask?

To fight against Parkinson’s Law and Student Syndrome. Instead of giving a too-generous amount of time to complete an activity, shrink the estimate so people working on that activity don’t sense that they have any buffer to waste. They have to start working on the activity immediately! And since work expands to fill the time allotted to it, allot less time for the activity and you’ll save time on the project, because the activity will be finished sooner.

But since you’ll anticipate some activities will exceed their low-reliability estimates, you’ll also create a right-sized schedule contingency to guard against schedule failure.
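Just to make the idea concrete, here’s a small sketch of one common, critical-chain-style way to size such a buffer: schedule every critical path activity at its 50% estimate, then add a single project buffer derived from the gap between each activity’s high-confidence and 50% estimates. The numbers and the square-root-of-sum-of-squares sizing are illustrative assumptions, not the LEAPfrog recipe itself (that comes later).

```python
import math

# Per-activity durations in workdays: (50%-reliable estimate, 90%-reliable estimate).
# The numbers are invented for illustration.
activities = [(20, 32), (45, 60), (30, 44), (25, 33)]

lean_schedule = sum(p50 for p50, p90 in activities)

# One common critical-chain-style sizing: square root of the sum of squared
# safety gaps between the 90% and 50% estimates.
buffer = math.sqrt(sum((p90 - p50) ** 2 for p50, p90 in activities))

print(f"Lean schedule: {lean_schedule} workdays, plus a {buffer:.0f}-workday project buffer")
```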

The “E” in LEAPfrog is Emergent. We’ll look at that in my next blog post.

Project Planning Using LEAPfrog

I’m planning a large, Enterprise Resource Planning (ERP) upgrade project at work. This is a big project with many integrations to other systems. Since it’s a “buy” solution and not a “build” solution, we’re using a traditional, plan-driven (“waterfall”) approach to planning.

But, not exactly “traditional.”

There are many well-known pitfalls to using a plan-driven approach; Parkinson’s Law and Student Syndrome are among them.

So I’ve created a new project planning approach: LEAPfrog.

That’s the Lean-Emergent Approach to Planning for realistic, optimized guesswork.

My approach to planning is Lean. I’m using a high-level, probabilistic schedule to identify the probabilistic duration of each critical path activity. And then, instead of building a schedule where each activity has a very high reliability, I’m building a project schedule where activities are only 50% reliable, and I’ll use buffering so that if an activity takes longer than planned, we’ve already anticipated it with a schedule buffer.

By using this approach to planning, there is less chance that Parkinson’s Law and Student Syndrome will adversely affect the project.

My approach is Emergent. I’m first creating a high-level project schedule without developing an up-front, detailed, task-level, resource-loaded project schedule for the entire project. I want to allow both the planning and the real work to emerge, but that means the project team I’m on needs to do a good job of assessing the high-level, critical path activities and creating a right-sized buffer that covers not just aleatory risks but epistemic risks, too.

My idea is, let’s create a high-level, probabilistic project schedule using just the critical path activities of the project. Then, let’s create a detailed, task-level schedule that matches the minimum duration estimates of each high-level activity.

When the task-level schedule exceeds the minimum duration estimate of its parent activity, that’s okay; the high-level, activity-level schedule was built using 50% reliable estimates for each activity, so there is a built-in, activity-level buffer. Creating a task-level schedule that matches the parent activity’s minimum duration estimate fights against both Parkinson’s Law and Student Syndrome.

And when the task-level schedule has tasks that, in sum, exceed the 50% reliable estimates of their parent activity, that’s still okay because I’ve built a whole-project schedule contingency to anticipate that event, too.
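Here’s a tiny, illustrative sketch of that layered arithmetic for a single activity; the names and numbers are invented, not from my actual project.

```python
# Illustrative numbers only: one high-level activity and its detailed tasks.
activity = {"name": "Procure vendor services", "min": 15, "p50": 22}
task_estimates = [3, 4, 5, 6]   # detailed, task-level estimates in workdays

task_total = sum(task_estimates)  # 18 workdays

# The task-level plan aims at the activity's minimum duration (15 days).
# Exceeding it is fine: the activity was scheduled at its 50% estimate
# (22 days), so there are 22 - 18 = 4 days of activity-level slack before
# the whole-project contingency is ever touched.
print(f"Tasks total {task_total}d vs. minimum {activity['min']}d "
      f"and 50% estimate {activity['p50']}d")
```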

In the coming posts, I’ll explain more fully how LEAPfrog works, and I’ll share what successes—and failures—I encounter along the way.

Don’t Be Afraid to Use the Normal Distribution

At work, I’m planning a big upgrade to my employer’s enterprise resource planning (ERP) system. It’s been a decade since the last upgrade, and our vendor support is running out next year.

It’s early in the project lifecycle. We just selected a partner to help us with the upgrade. We’re now going through the procurement phase (detailed SOW and contract).

Before we engage with this vendor to do discovery work, I’m doing project planning with my project team. We identified 11 major activities that this project will plan and execute. Our project sponsor wants to know, how long will this project take?

To answer that question, our project team modeled the 11 activities using SPERT Normal Edition, SPERT Beta Edition, and Monte Carlo simulation (Palisade’s @Risk program).

We used this global heuristic for the 11 activities: the minimum duration is 25% less than the most likely duration, and the maximum duration is 50% greater than the most likely duration. If we wanted, we could alter the heuristic results for each activity. And because we’re using SPERT, we can apply our subjective judgment, too, to express how likely the most likely outcome really is.
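In code form, the heuristic is just two multipliers applied to each most likely duration. This is a quick Python sketch; the most-likely numbers below are invented for illustration, not our project’s actual figures.

```python
# The global heuristic: minimum = 75% of most likely, maximum = 150% of most likely.
# The most-likely durations (workdays) below are invented for illustration.
most_likely = [20, 15, 30, 10, 25, 20, 15, 35, 20, 25, 30]   # 11 activities

three_point = [(round(0.75 * ml), ml, round(1.5 * ml)) for ml in most_likely]
for minimum, likely, maximum in three_point[:3]:
    print(f"min={minimum}, most likely={likely}, max={maximum}")
```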

This is going to be a year-long project.

What’s interesting to me is that, of course, the beta distribution is a better fit to the way we’ve modeled the duration uncertainty of each activity. And yet, the SPERT Normal Edition calculates nearly the same result as the SPERT Beta Edition (or a Monte Carlo simulation, too, for that matter).

SPERT Beta Edition:

  • 50% probable duration is 245 days
  • 80% probable duration is 276 days
  • 90% probable duration is 292 days
  • 95% probable duration is 304 days

SPERT Normal Edition:

  • 50% probable duration is 247 days
  • 80% probable duration is 278 days
  • 90% probable duration is 294 days
  • 95% probable duration is 307 days

The normal distribution can handle mild-to-moderately skewed duration uncertainties like these. You don’t necessarily have to use a best-fitting probability distribution to model your project’s uncertainties.

Sometimes, using the normal distribution is “good enough” to make a good decision. And the normal distribution is among the easiest to use in Excel (NORM.DIST, NORM.INV functions).
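If you want to reproduce numbers like these yourself, here’s a small Python sketch using the normal distribution; it’s the same calculation NORM.INV performs in Excel. The mean and standard deviation below are back-calculated by me from the Normal Edition figures above, not taken directly from our project model.

```python
from scipy.stats import norm

# Mean and standard deviation back-calculated from the Normal Edition figures
# above (roughly 247 and 37 workdays); in Excel this is NORM.INV(p, mean, sd).
mean, std_dev = 247, 36.7

for p in (0.50, 0.80, 0.90, 0.95):
    print(f"{p:.0%} probable duration: {norm.ppf(p, mean, std_dev):.0f} days")
```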

Version 3 of SPERT Beta Edition

Yesterday I began working on Version 3 of Statistical PERT (SPERT) Beta Edition. Since all of my spring and summer travel plans have been scuttled, I thought I’d get started working on Version 3 sooner rather than later.

Since I began the Beta Edition three years ago, it has always been in catch-up mode to the original, Normal Edition (which uses the normal distribution to model uncertainties). And that’s no different with Version 3 of SPERT Beta Edition, with one notable exception.

Like Version 4 of SPERT Normal Edition, I’m going to have a new worksheet that models uncertainties using Monte Carlo simulation (but with the beta distribution instead of the normal distribution). SPERT users can experiment with simulating an uncertainty they’ve modeled using 10,000 trials just by pressing F9.
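If you’re curious what such a simulation does under the hood, here’s a minimal Python sketch of the same idea: draw 10,000 trials from a beta distribution scaled to a 3-point estimate. The shape parameters and the 3-point numbers are illustrative assumptions, not the values the SPERT worksheet uses.

```python
import numpy as np

rng = np.random.default_rng()

# One uncertainty modeled as a beta distribution scaled to a 3-point estimate.
# The shape parameters are illustrative guesses, not SPERT's internal values.
minimum, maximum = 15, 60
alpha, beta = 2.0, 4.5   # right-skewed: most of the probability sits near the low end

trials = minimum + (maximum - minimum) * rng.beta(alpha, beta, size=10_000)

print(f"Mean duration: {trials.mean():.1f} days")
print(f"80th percentile: {np.percentile(trials, 80):.1f} days")
```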

Unlike Version 4 of SPERT Normal Edition, Version 3 of SPERT Beta Edition won’t have an agile burn-up chart. That chart is well-suited to the normal distribution, so I don’t plan to create a similar burn-up chart in the Beta Edition.

Version 3 of SPERT Beta Edition will get a new feature that’s not in Version 4 of SPERT Normal Edition (yet; this feature will be part of next year’s Version 5 of SPERT Normal Edition). That feature is SPERT Scheduler.

SPERT Scheduler is a feature added on to the existing Mixed Entry worksheet. The Scheduler will allow users to model a plan-driven project’s critical path activities using calendar work days. The specific Excel function is the WORKDAY function, which has been available since Excel 2007.

Using SPERT Scheduler, a modeler can see how long the project will take under different choices for the probabilistic duration estimates of each activity.

The key limitation of this feature is that it can only model a project’s critical path, and it can’t address merge bias, which a full Monte Carlo simulation would do. But for many people who are simply looking for high-level project estimates based upon sequential activities, this will be a great solution.
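To show what workday-based chaining looks like, here’s a short Python sketch using NumPy’s busday_offset, which behaves much like Excel’s WORKDAY function. The activities, durations, and start date are invented, and this isn’t SPERT Scheduler’s own implementation.

```python
import numpy as np

# A chain of critical path activities with chosen probabilistic durations
# (in workdays). Names, durations, and the start date are invented.
activities = [("Discovery", 20), ("Configuration", 45), ("Testing", 30), ("Cutover", 10)]

finish = np.datetime64("2020-06-01")   # assumed project start date
for name, workdays in activities:
    # Roll forward over weekends, much like Excel's WORKDAY function
    finish = np.busday_offset(finish, workdays, roll="forward")
    print(f"{name}: finishes {finish}")
```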

If you’re interested, you can try out Version 3 by visiting my GitHub repository and selecting the Version 3 branch to download the latest iteration of SPERT Beta Edition Version 3. Just be aware that I haven’t fully tested the spreadsheets in this branch; they’re still under development!

The April 2020 Webinar on Monte Carlo Simulation

Last week, I held another free webinar on probabilistic estimation. This time, it was on Monte Carlo simulation.

A lot of project managers and businesspeople have heard about Monte Carlo simulation, but not nearly as many people know exactly what it is (and what it isn’t), or how to use it to inform decision-making.

I created this spreadsheet and used it during this month’s webinar. In it, I explain how we can calculate the probability of rolling a “7” using a pair of dice just by simulating the problem.

Now, we know that the probability of getting a “7” is 16.7% (6 possible ways of rolling a “7” divided by 36 total possible outcomes from rolling a pair of dice). But what if we didn’t know how to solve the problem using a math formula? What if creating a math formula was too complicated? What if it were impossible? That’s when Monte Carlo simulation comes into play.

By simulating complex problems thousands of times, we can learn the probabilities of all kinds of possible outcomes. Then, we can use that information to help make decisions about what to do today.
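If you’d like to see how small such a simulation can be, here’s a minimal Python version of the dice problem; it’s roughly what the webinar spreadsheet does with Excel formulas.

```python
import numpy as np

rng = np.random.default_rng()
trials = 10_000

# Roll two dice 10,000 times and count how often they sum to 7.
rolls = rng.integers(1, 7, size=(trials, 2)).sum(axis=1)
print(f"Simulated P(7): {(rolls == 7).mean():.1%}   (exact answer: 6/36 = 16.7%)")
```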

I included some instructions inside the spreadsheet, and I also included a worksheet that shows how to simulate a single project task, too.

Questions? Feel free to contact me to learn more about Monte Carlo simulation, and check out my future free, monthly webinars. I’ll be doing this particular webinar again probably later in 2020.

SPERT and the Excel Histogram chart

Since releasing Version 4 of the Statistical PERT® Normal Edition this past January, I’ve had a few people contact me saying that they couldn’t use the new Monte Carlo simulation tab that I’ve included in that release.

After doing some investigation, I now think I know what the problem is. The Monte Carlo simulation worksheet in V4 uses Excel’s “Histogram chart” to visually depict the 10,000 trials of the simulation. However, Microsoft only introduced the histogram chart in Excel 2016. Excel 2010 and Excel 2013 don’t have this chart, so users of these older versions of Excel see an error message where the histogram is supposed to be.

If you’re using either Excel 2010 or Excel 2013, you can create the histogram chart manually: Google “histogram chart Excel 2013” and you’ll see results that explain a manual process for creating one. Here are some of the result links I found:

Of course, you can always save yourself the trouble of manually creating a histogram chart by upgrading to the latest release of Excel.

BTW, Microsoft’s lifecycle support for Excel 2010 ends October 2020.

Version 2.1 of SPERT® Beta Edition Released

Yesterday I released a minor update to Version 2 of the Statistical PERT® Beta Edition. This minor release improves some formatting on the Vlookup tab so it uses the same yellow-shading-plus-blue-text format for cells eligible for end-user inputs.

I also added another feature that was included in Version 4 of Statistical PERT® Normal Edition: “Click for help” buttons. On all interactive worksheets, there is a prominent, orange “Click for help” button to make it easy for anyone who wants free, email support to get it. The button links to my site’s “Contact Me” page where they can fill in their name, email account, and question/issue.

This summer I’ll be working on Version 3 of SPERT® Beta Edition. Version 3 will include the same feature set that the SPERT® Normal Edition has with its Version 4 release. That is, it will include the ability to use a specified standard deviation, a Monte Carlo simulation worksheet, and up to three scenarios on the Agile Forecast tab.

If you want to be notified whenever I release new versions of Statistical PERT, I’ve got a new way to do that: add yourself to my new email newsletter. Once a month, I’ll send a short newsletter offering a tip or trick for how to use Statistical PERT, plus I’ll send out notifications whenever I release new versions of these spreadsheets.

SPERT Normal Edition Version 4 Released!

Today I released a new version of Statistical PERT® Normal Edition! Version 4 of SPERT Normal Edition adds these new/enhanced features:

  • New Monte Carlo simulation tab for simulating 10,000 trials of a single, random variable
  • New Agile Burnup chart for creating an agile release forecast
  • More subjective judgment options (10) to choose from when determining how likely the most likely outcome really is
  • Revised ratio scale modifiers for use with the SPERT Standard Deviation formula
  • Enhanced agile forecast tab
  • Added a way to use your own, explicit standard deviation, which is especially useful if you have historical data
  • Minor formatting enhancements
  • A one-click way to ask for help using my free support service

Version 4 is currently available on the Statistical PERT download page, or you can download Version 4 from GitHub, too.

Agile Forecasting with SPERT Beta Edition V2

Today I added a new Agile Forecast tab to the forthcoming Version 2 of Statistical PERT Beta Edition. This Agile Forecast tab is very similar to the Agile Forecast tab in the SPERT Normal Edition Version 3. I introduced the Agile Forecast tab last year in Version 2 of SPERT Normal Edition.

What makes the Agile Forecast tab different in the SPERT Beta Edition is that it, obviously, uses the beta distribution to model the uncertainty around what each Scrum sprint (or agile iteration) will accomplish. This opens up the possibility of modeling this uncertainty with shape parameters that very nearly create a uniform distribution (in SPERT Beta Edition, this is called a “Guesstimate” about how likely the most likely outcome really is).
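Here’s a quick Python illustration of that near-uniform case; the shape parameters and range are my own illustrative guesses, not the exact values SPERT Beta Edition uses for a “Guesstimate.”

```python
import numpy as np

rng = np.random.default_rng()

# With shape parameters near (1, 1), the scaled beta distribution is almost
# uniform: every outcome between the minimum and maximum is roughly equally
# likely, which is the spirit of a "Guesstimate" judgment.
minimum, maximum = 10, 50
samples = minimum + (maximum - minimum) * rng.beta(1.05, 1.05, size=10_000)

print(np.percentile(samples, [10, 50, 90]))   # percentiles spread almost evenly
```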

I compared some results between the SPERT Normal and SPERT Beta Editions and found that both editions calculate the same release date, just in different ways. Notably, I used “Medium-Low Confidence” in SPERT Normal, but had to use “Very Low Confidence” in SPERT Beta to obtain a close match of the standard deviation. By default, SPERT Normal uses only 7 subjective phrases for how likely the most likely outcome is, but SPERT Beta uses 10. The two editions treat the subjective phrases differently, even where they share the same wording.

Try downloading the latest in-development SPERT Beta Edition Version 2 (currently, version 2.0b) from GitHub and check out the Agile Forecast tab!

Version 2 of SPERT Beta Edition

I’ve begun working on the next version of Statistical PERT Beta Edition. Version 2 will have the same data visualization features that SPERT Normal Edition received earlier this year. Namely:

  • Line Sparklines to show the implied, bell-shaped curve for each row containing a SPERT estimate
  • New, tri-colored combo charts to show the bell-shaped curve for summarized rows (on the 1-point, 3-point, and Mixed entry tabs)
  • New, tri-colored combo chart on the Charts tab, which uses the beta distribution to show skewed distributions
  • New, tri-colored pie chart on the Charts tab, to show an easy distribution of areas below, above and between the boundary limits
  • And perhaps a few other ideas, too…

If you’d like to download and try out Version 2, you can find it on my new GitHub repository. Be sure to download from the “version_2” branch to see the Version 2 SPERT Beta Edition. And keep in mind that until I officially release Version 2, this should be regarded as an in-development version, not a final, stable version.

If you have any comments or suggestions on how to further improve Statistical PERT, comment below or contact me.