The Two Metrics That Predict Program Failure Better Than RAG Status

Your portfolio dashboard shows green. Your risk team is flagging amber. And your gut says something’s wrong. Here’s what you should actually be measuring.


The Dashboard Problem We All Know

If you’re running an enterprise PMO in financial services, you’ve lived this nightmare:

Your flagship digital transformation is reporting green for the third month running. The vendor’s progress reports show 127 tasks completed, only 23 remaining. Milestone 4 is “on track” for delivery in six weeks.

Then your risk team presents to the executive committee. Same program. Amber rating. Control gaps. Compliance concerns. Delivery questions.

After the meeting, the CEO pulls you aside: “Which one is true?”

And here’s the thing: they’re both true. The PMO is measuring task completion. Risk is measuring governance quality. Neither is measuring what actually predicts whether this program will deliver value or die trying.

After 25 years fixing troubled transformations across three continents, I can tell you that RAG status is a lagging indicator. By the time it turns red, you’re in remediation mode, not prevention mode.

But there are two leading indicators that predict program failure months before your dashboard catches up. And most PMOs don’t track either one.


Metric #1: The Assumptions Decay Rate

What it measures: How quickly your foundational assumptions are being invalidated by reality.

Every business case starts with assumptions:

  • “Integration will take 12 weeks”
  • “Current system has accurate customer data”
  • “Branch staff will adopt digital tools within 3 months”
  • “Vendor’s pre-built components will work for superannuation flows”

Here’s what I’ve learned the hard way: Programs don’t fail because of poor execution. They fail because they’re executing against a plan built on assumptions that stopped being true months ago.

How to Measure It

In your project documentation, you’ll find 20-40 critical assumptions that underpin the delivery plan and budget. Track these three things weekly:

  1. Assumption Status: still valid, invalidated, or uncertain
  2. Impact Magnitude: if invalidated, how hard it hits scope, schedule or budget (low, medium, high)
  3. Response Time: how long it takes to acknowledge the invalidation and update the plan

Your Assumptions Decay Rate = (Number of medium/high impact invalidated assumptions) ÷ (Weeks since baseline)
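
If you want to take the arithmetic out of it, a minimal sketch like the one below could sit behind your tracker. The field names and example data are illustrative only, not from any real program:

    from dataclasses import dataclass

    @dataclass
    class Assumption:
        text: str
        status: str   # "valid", "invalidated", or "uncertain"
        impact: str   # "low", "medium", or "high"

    def assumptions_decay_rate(assumptions, weeks_since_baseline):
        # Medium/high impact invalidated assumptions per week since baseline
        invalidated = [
            a for a in assumptions
            if a.status == "invalidated" and a.impact in ("medium", "high")
        ]
        return len(invalidated) / weeks_since_baseline

    # Illustrative data only
    tracker = [
        Assumption("Integration will take 12 weeks", "invalidated", "high"),
        Assumption("Current system has accurate customer data", "uncertain", "medium"),
        Assumption("Branch staff adopt digital tools within 3 months", "valid", "medium"),
    ]

    print(f"Decay rate: {assumptions_decay_rate(tracker, weeks_since_baseline=8):.2f} per week")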

What the Numbers Tell You

  • <0.5 per week: Healthy program or excellent discovery phase
  • 0.5-1.0 per week: Normal complexity, manageable if caught early
  • 1.0-2.0 per week: Red flag – baseline was built on optimistic assumptions
  • >2.0 per week: Program is executing a fantasy, not a plan

Real Example

I reviewed a core banking replacement last year. Eight months in, the PMO dashboard showed green. When we audited their original assumptions:

  • 17 of 32 critical assumptions had been invalidated
  • Average time to acknowledge invalidation: 6.2 weeks
  • Average time to update plan: 12.8 weeks
  • Total assumptions decay rate: 2.1 per week

Translation: Reality was dismantling their plan faster than they could replan.

The program was reporting green because they were measuring adherence to the original plan, not validity of the plan itself.

Six weeks later it crashed. Board demanded independent review. $4.5M in remediation costs.

Why This Matters to You

If you’re stretched thin across six concurrent programs, drowning in conflicting status reports, this metric gives you something your current dashboard can’t: early warning that a program is drifting from viable to fictional.

It doesn’t matter if tasks are complete if you’re completing the wrong tasks based on invalidated assumptions.


Metric #2: Decision Escalation Velocity

What it measures: How quickly your program can make and implement real decisions when things deviate from plan.

This one’s subtle but devastating.

Most governance frameworks are designed around the illusion of control. You’ve got steering committees, change control boards, project boards, risk committees. Lots of meeting cadence. Lots of RACI charts.

But here’s the question that matters: When something unexpected happens, how many days until someone with actual authority makes an actual decision and the program adjusts course?

How to Measure It

Track every decision that requires escalation beyond the delivery team. For each one, record:

  1. Detection date: When did the team realize a decision was needed?
  2. Escalation date: When was it formally raised to governance?
  3. Decision date: When was the actual decision made?
  4. Implementation date: When did delivery actions change?

Your Decision Escalation Velocity = Average days from detection to implementation
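
As a rough sketch, the same register could calculate this automatically. The record shape below is an assumption on my part; use whatever columns your issues log already has:

    from datetime import date

    def decision_velocity(decisions):
        # Average days from detection to implementation, counting only decisions
        # that actually changed delivery actions
        days = [
            (d["implementation"] - d["detection"]).days
            for d in decisions
            if d.get("implementation")
        ]
        return sum(days) / len(days) if days else None

    # Illustrative records only; the dates are made up
    register = [
        {"detection": date(2024, 3, 4), "escalation": date(2024, 3, 11),
         "decision": date(2024, 3, 25), "implementation": date(2024, 4, 2)},
        {"detection": date(2024, 4, 1), "escalation": date(2024, 4, 5),
         "decision": date(2024, 4, 19), "implementation": date(2024, 4, 26)},
    ]

    print(f"Decision escalation velocity: {decision_velocity(register):.0f} days")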

What the Numbers Tell You

  • <7 days: High-performing governance, empowered delivery team
  • 7-14 days: Standard corporate governance, manageable delays
  • 14-30 days: Governance is slowing delivery, risk of drift
  • >30 days: Governance is actively killing the program

The Hidden Killer

Here’s what makes slow decision velocity so dangerous: it doesn’t show up in RAG status.

Your program reports green because:

  • Tasks are being completed (just not the right tasks)
  • Milestones are being hit (just not with quality outcomes)
  • People are busy (just working around the unresolved decisions)

But underneath, you’re accumulating what I call decision debt – a growing pile of unresolved issues that eventually collapse the program all at once.

Real Example

I was brought into a superannuation fund portfolio last year. Six programs, $45M annual spend. They had textbook governance: monthly steering committees, weekly project boards, defined escalation paths.

When we tracked decision velocity:

  • Average time from issue detection to steering committee: 23 days
  • Average time from steering discussion to actual decision: 11 days
  • Average time from decision to implementation: 8 days
  • Total decision escalation velocity: 42 days

Meaning: When something unexpected happened (and it happened constantly), it took six weeks to course-correct.

During those six weeks, the program kept executing the original plan, completing tasks, burning budget, reporting green, while reality diverged further.

Two programs were zombie projects. Everyone knew they should stop, but no one would make the call. Decision velocity of infinity.

Why PMOs Miss This

Traditional PMO frameworks track:

  • Number of decisions made
  • Number of risks escalated
  • Number of change requests processed

But they don’t track the thing that matters: how quickly decisions turn into different actions.

Fast decision velocity = resilient program that adapts to reality.

Slow decision velocity = brittle program marching toward a predictable cliff.


How to Implement These Metrics

I know what you’re thinking: “This sounds great, but I’m already drowning. How do I add two more metrics to track?”

The answer is you don’t add them to your current reporting. You replace meaningless metrics with meaningful ones.

For Assumptions Decay Rate:

  1. Pull your business case and project charter
  2. Extract the 20-40 assumptions (they’re usually buried in there)
  3. Put them in a simple tracker with status/impact/date columns
  4. Review weekly with delivery leads (15 minutes)
  5. Report monthly to steering committee (one slide)
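
If that tracker lives in a spreadsheet you export to CSV, a few lines of Python can produce the one-slide numbers. The column names here are one possible layout, not a prescribed template:

    import csv
    from collections import Counter

    def monthly_summary(tracker_csv, weeks_since_baseline):
        # Expects columns: assumption, status, impact
        with open(tracker_csv, newline="") as f:
            rows = list(csv.DictReader(f))
        invalidated = [
            r for r in rows
            if r["status"] == "invalidated" and r["impact"] in ("medium", "high")
        ]
        return {
            "total_assumptions": len(rows),
            "by_status": dict(Counter(r["status"] for r in rows)),
            "decay_rate_per_week": round(len(invalidated) / weeks_since_baseline, 2),
        }

    # Hypothetical file name; point it at your own export
    # print(monthly_summary("assumptions_tracker.csv", weeks_since_baseline=8))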

For Decision Escalation Velocity:

  1. Take your current issues/risks register
  2. Add four date columns (detection/escalation/decision/implementation)
  3. Calculate average velocity monthly
  4. When it crosses 14 days, have a governance conversation
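
To make step 4 mechanical rather than a judgment call, a sketch like this could group closed decisions by the month they were detected and flag any month that crosses the 14-day line. Again, the record shape is an assumption:

    from collections import defaultdict
    from datetime import date

    def monthly_velocity(decisions, threshold_days=14):
        # Average detection-to-implementation days per detection month,
        # flagging months that warrant a governance conversation
        by_month = defaultdict(list)
        for d in decisions:
            if d.get("implementation"):
                month = d["detection"].strftime("%Y-%m")
                by_month[month].append((d["implementation"] - d["detection"]).days)
        return {
            month: {"avg_days": sum(days) / len(days),
                    "flag": sum(days) / len(days) > threshold_days}
            for month, days in by_month.items()
        }

    # Example usage with made-up dates
    sample = [{"detection": date(2024, 5, 2), "implementation": date(2024, 5, 23)}]
    print(monthly_velocity(sample))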

This isn’t more process. It’s replacing theatre with diagnosis.


What This Really Tells You

These two metrics answer the questions your executives actually care about:

Assumptions Decay Rate answers: “Is this program still viable, or are we executing a fantasy?”

Decision Escalation Velocity answers: “Can this program actually adapt, or are we pretending to have control?”

Together, they give you something traditional PMO metrics can’t: advance warning that a ‘green’ program is about to turn red.

And if you’re stretched across six programs with a team of four (like most enterprise PMOs I work with), these metrics help you answer the hardest question: Which programs do I focus my scarce talent on?

Focus on the programs with high assumptions decay and slow decision velocity. Those are the ones about to crash, regardless of what your RAG dashboard says.


The Independence Question

Here’s the uncomfortable truth: your delivery teams can’t objectively track assumption invalidation. They’re too invested in the original plan. Your vendors certainly can’t; their contracts are priced against those assumptions.

And decision velocity measurement only works if someone can say “this isn’t working” without political consequences.

This is where independent transformation assurance earns its keep. Not as another layer of governance, but as someone who can look at assumption validity and decision velocity without career risk attached to the answer.

When I walk into a troubled program, I’m not asking “are you following the plan?” I’m asking “is the plan still connected to reality?” and “can you actually make decisions, or just have meetings?”

Those questions sound simple. But they’re career-threatening to ask from inside the organization.


What to Do Next

If you’re running portfolio governance for programs over $10M, try this experiment:

  1. Pick your highest-risk program (the one your gut says is in trouble despite green status)
  2. Pull the original business case
  3. List the 10 most critical assumptions
  4. Ask the delivery lead: “Are these still true?”
  5. Track the last 5 escalated decisions and calculate average velocity

I’ll bet you find:

  • At least 3-4 invalidated assumptions no one’s acknowledged
  • Decision velocity over 20 days
  • A growing gap between dashboard status and program reality

That gap is the space where programs fail.

The executives who see failure coming, and survive with their credibility intact, are the ones who measured the right things.

RAG status tells you where you’ve been.

Assumptions decay and decision velocity tell you where you’re going.


If this resonates and you’d like to pressure-test your portfolio health, I’m happy to share the assumption tracker template I use. No obligation, just practical tools that help PMO leaders sleep better.

Connect with me: dave@rainmanadvisory.com.au

Dave Lockley spent 8 years running portfolio assurance at Australia’s second-largest super fund (Australian Retirement Trust), overseeing $200-300M in annual transformation spend. He now helps mid-sized financial services firms build programs that actually deliver.
