The Three Questions Your Board Should Ask Before Approving Any Major Program

You know the pattern. Business case looks solid. Benefits are compelling. Vendor is credible. Board approves. Six months later: “How did we not see this coming?”

Here’s what actually happens in most board approvals for major transformation programs:

Executive sponsor presents. Big benefits. Clear timeline. Strategic imperative. Board asks about benefits realization, governance framework, regulatory compliance.

Approved.

Then six months in, you’re in a steering committee meeting staring at a dashboard that doesn’t match reality. The program team says amber. The PMO says red. The vendor says “green, just some minor scope adjustments.”

And someone asks the question no one wants to answer: “How did our board approve this?”

Here’s the uncomfortable truth: Your board didn’t approve a bad program. They approved a good-looking business case built on untested assumptions.

The failure wasn’t in the decision. It was in the questions asked before making it.

Most boards ask about governance, benefits, and timelines. Those questions produce reassuring answers. But they don’t predict outcomes.

Because they don’t test whether the program can actually be delivered.

There are three questions that do. They’re uncomfortable. Almost no one asks them. And they’re the difference between programs that deliver and programs that devour capital.


The Pattern Most Boards Miss

Walk into any board meeting where a major program is up for approval. You’ll hear some version of these questions:

“What are the expected benefits and when do they materialize?” (Translation: Tell us the optimistic scenario)

“What’s the governance structure?” (Translation: Assure us there will be meetings and reporting)

“Have we done proper due diligence on the vendor?” (Translation: Confirm they’ve done this before somewhere else)

“What are the key risks and how are we mitigating them?” (Translation: Show us a risk register where everything is “managed”)

These questions feel rigorous. They produce detailed answers. They create the appearance of thorough oversight.

But they all share the same flaw: they accept the program’s fundamental premise without testing it.

They’re asking “how will you deliver this?” before confirming “can this actually be delivered?”


Question #1: “If We Had to Stop This Program in 12 Months, What Would Make That Decision Obvious?”

No one wants to ask this question. It sounds defeatist.

But it’s the most important question a board can ask.

Here’s why: Every major program gets approved with an implicit assumption that stopping isn’t an option. It’s strategic. It’s committed. The regulator expects it. Competitors have it. “Too important to fail.”

That assumption removes your ability to make rational decisions once you’re in motion.

So when things start going wrong at month 6, every steering committee meeting becomes an exercise in hoping things improve. No one will say “we should stop” because stopping feels like failure.

Instead, you keep investing. “We’ve already spent $12M, we can’t stop now.” Then it’s $18M. Then $27M. Then you’re explaining to the board how a $35M program became a $58M program.

This happens because you never defined what “stop” looks like before you started.

What This Question Actually Does

When you ask “what would make stopping obvious?” at the approval stage, two things happen:

First, you identify the genuine program-killers.

Not the comfortable risks everyone’s happy to discuss (“key resource leaves” or “vendor misses a milestone”). The existential ones:

  • “We discover core data quality is 40% accurate instead of 80%, and remediation would triple the timeline”
  • “Regulatory framework changes and invalidates the business case”
  • “Vendor demonstrates they fundamentally misunderstood our compliance requirements”
  • “We’re 12 months in, 40% over budget, with less than 30% scope delivered”

Second, you create decision triggers instead of hope-based governance.

Instead of: “We’ll monitor closely and escalate if needed”

You get: “If we hit month 12 with more than 4 of our 10 critical assumptions invalidated, or we’re 30%+ over budget with under 40% scope delivered, we stop and reassess rather than doubling down.”
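
If you wrote that trigger down as an explicit rule, it might look something like this minimal sketch. The thresholds come from the example above; the function and parameter names are illustrative assumptions, not a prescribed standard:

```python
# Illustrative pre-agreed stop trigger. Thresholds match the example rule
# above; names and numbers are hypothetical, not a standard.

def should_stop_and_reassess(month: int,
                             invalidated_assumptions: int,
                             budget_overrun_pct: float,
                             scope_delivered_pct: float) -> bool:
    """True when the pre-agreed stop criteria are breached."""
    # More than 4 of the 10 critical assumptions invalidated by month 12
    assumptions_breached = month >= 12 and invalidated_assumptions > 4
    # 30%+ over budget with under 40% of scope delivered
    delivery_breached = budget_overrun_pct >= 30 and scope_delivered_pct < 40
    return assumptions_breached or delivery_breached

# Month 12, 5 of 10 assumptions invalidated, 35% over budget, 30% delivered:
print(should_stop_and_reassess(12, 5, 35, 30))  # True -> stop and reassess
```

The point isn’t the code. It’s that the rule exists in writing before approval, so the month-12 conversation is about evidence, not emotion.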

What This Looks Like in Practice

Regional bank. $35M digital banking program. Board approved with standard questions about benefits and governance.

Four months in, pattern starts emerging: vendor is consistently underestimating integration complexity. What they quoted as 2 weeks is taking 5 weeks. What they said was “standard” requires custom development.

But there’s no decision trigger. So every steering committee discusses whether this is “normal program dynamics” or “a fundamental issue.”

By month 9, it’s clear: original 14-month timeline is actually 26 months. $35M budget is now $58M.

CEO faces the question: “Do we proceed at $58M over 26 months, or cut our losses at $10M sunk cost?”

Without pre-agreed stop criteria, this becomes an emotional debate about “failure” and “sunk costs” and “strategic importance.”

What should have happened: At approval, board asks “what would make stopping obvious?” Discovers the answer is: “If vendor demonstrates they don’t understand our integration complexity, which we’ll know by month 4 when we see their first integration delivery.”

Month 4 comes. Evidence is clear. Board has a rational conversation based on pre-agreed criteria. Stops at $10M. Pursues alternative approach.

The difference between $10M and $58M wasn’t execution quality. It was having honest decision criteria before approving the program.


Question #2: “Who Owns Delivery Risk: Us or the Vendor?”

This sounds like a procurement question. It’s not.

It’s the question that determines whether your governance actually governs or just generates reports.

Here’s the pattern:

Most transformation programs run on Time & Materials contracts. Vendor bills for hours. You own the risk of scope, complexity, and integration.

Which means when something takes longer than expected (and it always does), the vendor says “that’s more complex than anticipated, here’s the variation” and you write another cheque.

Your PMO tracks progress. Your steering committee reviews status. Your board receives updates.

But none of that is risk ownership. That’s risk observation.

The vendor has no financial incentive to be efficient. They’re billing hourly. Complexity is revenue.

Your internal team has no authority to stop or redirect. They’re coordinating, not controlling.

So who actually owns the risk that this program succeeds or fails?

In most cases: no one. Risk is diffused across governance structures, contractual relationships, and stakeholder groups.

Which means it’s not managed. It’s just reported.

The Question That Cuts Through

“Show me the contract structure. If the vendor underestimates complexity by 40%, who carries the cost: them or us?”

If the answer is “us,” you don’t have a delivery partner. You have a time-billing resource.

Which is fine for genuine discovery where you don’t know what you don’t know.

But it’s toxic for delivery, where you need someone with commercial skin in the game.

What Good Risk Ownership Looks Like

Smart approach: Split the program into two phases.

Phase 1: Discovery (Time & Materials, 8-12 weeks)
Vendor’s job: Understand actual complexity, validate assumptions, and produce a fixed-price proposal for delivery based on what they’ve learned.

Phase 2: Delivery (Fixed Price or Outcome-Based)
Vendor’s job: Deliver defined scope for defined price. Complexity is their problem, not yours.

This structure puts delivery risk where it belongs—with the people who control delivery.

The Pattern When Risk Ownership Is Unclear

Superannuation fund. $42M core platform replacement. Time & Materials contract. “We’ll govern it tightly.”

Eight months in: $6M over budget.

Every month: vendor reports progress, steering committee reviews status, budget climbs.

Finance director asks the uncomfortable question: “Who’s commercially motivated to bring this in on budget?”

Answer: No one. Vendor is billing $400K per month. Efficiency means less revenue.

They convert the remaining scope to fixed-price delivery. The vendor agrees because the alternative is a competitive tender and losing the work entirely.

Program finishes at $49M instead of the projected $67M it was heading toward.

The $18M difference wasn’t delivery quality. It was risk ownership.

When your board asks “who owns delivery risk,” they’re really asking: “Have we structured this so someone is motivated to deliver efficiently, or have we just hired expensive observers?”


Question #3: “What Have We Assumed That We Haven’t Validated?”

Every business case is built on assumptions. That’s unavoidable.

But most business cases treat assumptions as facts:

  • “Integration will take 12 weeks”
  • “Current data is 80% accurate”
  • “Branch staff will adopt within 3 months”
  • “Vendor’s pre-built components work for Australian compliance”

These aren’t facts. They’re assumptions. And when boards approve programs, they’re approving these assumptions without stress-testing them.

Then 6 months in:

  • Integration takes 26 weeks, not 12
  • Data is 40% accurate, not 80%
  • Staff are resisting, not adopting
  • Vendor’s components don’t handle Australian tax law

Your 18-month program is now 32 months. Your $35M budget is $61M.

Board asks: “Why didn’t we know this at approval?”

Because no one asked which assumptions were critical and which were validated.

The Question That Forces Honesty

“Walk me through the top 10 assumptions underpinning this plan. For each one, tell me:

  1. How confident are we? (Validated / Likely / Hopeful / Unknown)
  2. If it’s wrong, what’s the impact? (Minor / Moderate / Severe / Fatal)
  3. When will we test it? (Before approval / First 8 weeks / Never)

Then show me the assumptions marked ‘Hopeful’ or ‘Unknown’ with ‘Severe’ or ‘Fatal’ impact. Those are the program killers. What’s the plan to validate them before we commit $50M?”

This forces intellectual honesty.

Either you validate critical assumptions before approval (smart), or you acknowledge you’re approving a program with known unknowns and provision accordingly (honest), or you proceed anyway and pretend uncertainty doesn’t exist (common, expensive).
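
As a minimal sketch of that triage, assuming a simple register format: the entries below are lifted from the sample assumptions earlier in this section, and the field names are illustrative, not a real register.

```python
# Hypothetical assumptions register illustrating the triage above.
# Confidence: Validated / Likely / Hopeful / Unknown
# Impact:     Minor / Moderate / Severe / Fatal

register = [
    {"assumption": "Integration will take 12 weeks",
     "confidence": "Hopeful", "impact": "Severe", "test_by": "First 8 weeks"},
    {"assumption": "Current data is 80% accurate",
     "confidence": "Unknown", "impact": "Fatal", "test_by": "Before approval"},
    {"assumption": "Branch staff adopt within 3 months",
     "confidence": "Likely", "impact": "Moderate", "test_by": "Never"},
    {"assumption": "Vendor components handle Australian compliance",
     "confidence": "Unknown", "impact": "Fatal", "test_by": "Before approval"},
]

# The program killers: weak confidence combined with severe or fatal impact.
killers = [a for a in register
           if a["confidence"] in ("Hopeful", "Unknown")
           and a["impact"] in ("Severe", "Fatal")]

for a in killers:
    print(f"KILLER: {a['assumption']} "
          f"({a['confidence']}/{a['impact']}; test: {a['test_by']})")
```

Ten minutes with a table like this tells a board more than fifty pages of business case narrative: it shows exactly where the plan rests on hope.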

What This Looks Like When No One Asks

Credit union. Digital transformation. $22M budget, 16 months delivery. Business case looks solid. Board approves.

Four months later, independent review extracts 38 assumptions from the business case and delivery plan.

Result:

  • 11 assumptions marked “Unknown” with “Severe” or “Fatal” impact
  • 6 of those concerned the vendor’s ability to handle their specific product complexity
  • None tested before board approval

In other words: $22M approved based on 11 untested assumptions, any one of which could double timeline or cost.

They pause. Spend $85K on 8-week validation phase. Discover 7 of the 11 assumptions are wrong.

Re-plan with validated assumptions. New budget: $31M over 22 months.

If they’d proceeded with original plan based on hope? Would have hit reality at month 6 and burned $47M over 34 months.

The $85K validation investment saved $16M.


Why Boards Don’t Ask These Questions

These three questions are uncomfortable.

They challenge the narrative that programs are “strategic imperatives” that must proceed.

They introduce doubt. They slow momentum. They make sponsors defensive.

They require board members to push back on executive recommendations, which feels confrontational.

But that discomfort is the point.

If a program can’t survive honest questions about stop triggers, risk ownership, and untested assumptions, it shouldn’t be approved in its current form.

The board’s job isn’t to support management’s preferred initiatives. It’s to ensure capital is deployed wisely.

And deploying $50M based on untested assumptions, diffused risk ownership, and no decision triggers isn’t wisdom. It’s hope dressed up as strategy.


What Actually Changes

When boards start asking these three questions, here’s what happens:

Programs get better before approval. Sponsors validate assumptions, clarify risk ownership, think through decision triggers because they know they’ll be asked.

Boards make informed decisions. They understand what they’re approving and what could go wrong, not just what should go right.

Steering committees become effective. They’re tracking decision triggers and assumption validity, not just reporting RAG status.

Programs fail faster and cheaper. Because stop criteria are pre-agreed, not negotiated under crisis.

Organizations get better at transformation. Because intellectual honesty becomes normal, not exceptional.


The Independence Problem

Here’s the final uncomfortable bit:

Executive sponsors can’t answer these three questions objectively.

Not because they’re dishonest. Because they’re invested in approval. Their credibility is attached to the program. They genuinely believe it will work.

That’s not bad faith. It’s human nature.

Which is why these questions need to be asked by someone with no skin in the game.

Not the program team; they report to the sponsor. Not your vendor; they want the work. Not your PMO team; they’re checking compliance boxes, not assessing delivery viability.

Someone who can look at the business case, the assumptions, the contract structure, and the decision framework and say “this is ready” or “this needs work” without career consequences.

That’s what independent assessment does. Not adding governance. Testing the delivery thesis before you commit the capital.


What to Do Next

Next time a major program comes to your board for approval, try these three questions:

  1. What would make stopping this program obvious in 12 months?
  2. Who owns delivery risk: us or the vendor?
  3. What have we assumed that we haven’t validated?

If the answers are “we haven’t thought about stopping,” “it’s T&M so we do,” and “we don’t have an assumptions register,” you’re being asked to approve hope, not a plan.

And hope is the most expensive strategy in transformation.
