The 10 Dimensions of a Transformation Diagnostic

When we run a diagnostic on a transformation program, we assess 10 dimensions. Not three. Ten.

Schedule, budget, and scope matter. But they’re only part of the picture. A transformation touches strategy, governance, people, culture, quality, risk, and value. If you’re only checking the delivery metrics, you’re only seeing a fraction of what determines whether the thing succeeds.

Each of these 10 dimensions represents a condition that needs to be in place for a transformation to deliver what it promised. Some will be familiar. Others might not be. That’s the point.

1. Strategic Coherence

This tests whether the transformation has a clear, stable definition of the problem it’s solving.

That means the business case, the scope, the benefits, and the program design all align. The objectives are unambiguous. The boundaries are explicit. What’s in scope serves the core problem. What’s out of scope is out for a reason.

When strategic coherence is strong, people across the program can make good decisions locally because they understand what the transformation is trying to achieve. That alignment doesn’t happen by accident. It has to be built and maintained.

2. Decision Authority

This tests whether governance works as a decision-making system.

Are the right decisions being made by the right people, at the right level? Are they being made quickly enough to keep pace with delivery? When a decision is made, does it translate into updated plans and accountabilities?

Good governance isn’t about how many forums you have or how often they meet. It’s about whether material decisions happen in time to influence outcomes.

3. Baseline Realism

This tests whether the approved baseline for cost, schedule, scope, and benefits is realistic enough to govern against.

A baseline is a control point. If it’s built on optimistic assumptions, incomplete planning, or insufficient contingency, then every measurement taken against it will be unreliable. You can’t tell whether you’re off track if the track itself was never credible.

A good baseline is honest about what it will take. It reflects real dependencies, realistic resourcing, and enough flexibility to absorb the disruptions that always come.

4. Delivery Capability

This tests whether the organisation has the skills, experience, and capacity to actually deliver.

That includes the program team and the business. It’s not enough to have roles filled on an org chart. The question is whether the people in those roles have enough bandwidth to do the work, whether critical capabilities are covered, and whether the program can absorb the loss of key individuals without stalling.

Delivery capability is about real capacity, not nominal headcount.

5. Truth Velocity

This tests how quickly and accurately emerging risks and concerns move from the people who see them to the people who can act on them.

Every transformation has risks. That’s normal. The question is whether those risks are surfaced early, reported honestly, and escalated to governance in time to make a difference.

When people feel safe raising concerns, and when reporting values accuracy over reassurance, the organisation can intervene early. That’s the goal. Not to have a perfect risk register. To have a system where reality reaches decision-makers while there’s still time to respond.

6. Value and Financial Discipline

This tests whether the transformation has a clear value logic, supported by disciplined financial control.

A transformation is an investment. The diagnostic looks at whether the sources of value are explicit, whether benefits can be traced to specific interventions, whether someone owns each benefit, and whether the spend profile still makes sense given what’s been learned.

Financial control is important. But financial control without value discipline protects cost without protecting return. Both need to work together.

7. Political Resilience

This tests whether the program understands and manages the political environment it operates in.

Transformations depend on continued support from sponsors, business leaders, boards, and sometimes regulators or external stakeholders. That support isn’t guaranteed. It can shift when priorities change, leadership turns over, or external conditions move.

A resilient program knows where its support is strong, where it’s fragile, and what it would do if key backing changed. It actively manages alignment rather than assuming it.

8. Adoption Accountability

This tests whether leadership has taken ownership of the hardest part of any transformation: getting people to actually work differently.

New systems and processes only create value when people adopt them consistently. That requires more than training and communications. It requires line leaders accepting accountability for the change in their areas, reinforcing new ways of working, and treating adoption as a business outcome rather than a project activity.

This is where the gap between technical delivery and real-world value is widest. Closing that gap is a leadership responsibility, not a change team exercise.

9. Execution Quality

This tests whether delivery outputs are being produced to the standard required.

That means clear completion criteria, effective quality controls, and credible evidence of acceptance. It looks at whether “done” actually means done, whether defects and rework are under control, and whether reported progress reflects genuinely complete work.

Quality issues accumulate quietly. Weak definitions of done, deferred testing, and unresolved defects create a growing gap between reported progress and actual readiness. The earlier that gap is visible, the easier it is to manage.

10. Integration Risk

This tests whether the transformation is being integrated coherently across all its moving parts.

Dependencies, interfaces, sequencing, transition to operations, ongoing support. Each of these needs to be actively managed, not assumed.

A program can have strong individual workstreams that still produce a result the organisation can’t operate, support, or sustain. Integration is the discipline that connects delivery into a workable whole. It isn’t a phase at the end. It runs through the life of the program.

How it comes together

Each dimension is scored 1 to 5, from critically weak to robust. The scoring is qualitative and evidence-led. It supports judgement rather than creating false precision.

The value isn’t in any single score. It’s in the pattern across all 10. That pattern shows where the program is strong, where the risks are, and where attention will have the most impact.
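To make the scoring mechanics concrete, here is a minimal sketch of how a diagnostic scorecard like this could be represented. The dimension names come from the article; the scores, the below-threshold cut-off, and the function name are illustrative assumptions, not part of the actual diagnostic method.

```python
# A hypothetical scorecard for the 10 dimensions described above.
# Scores run 1 (critically weak) to 5 (robust), as in the article.

DIMENSIONS = [
    "Strategic Coherence",
    "Decision Authority",
    "Baseline Realism",
    "Delivery Capability",
    "Truth Velocity",
    "Value and Financial Discipline",
    "Political Resilience",
    "Adoption Accountability",
    "Execution Quality",
    "Integration Risk",
]

def weakest(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return dimensions scoring below the threshold, weakest first.

    The threshold of 3 is an illustrative assumption: the article only
    says the value lies in the pattern, not in any single score.
    """
    for name, score in scores.items():
        if name not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {name}")
        if not 1 <= score <= 5:
            raise ValueError(f"score out of range for {name}: {score}")
    return sorted(
        (name for name, score in scores.items() if score < threshold),
        key=lambda name: scores[name],
    )

# Example: a program that is broadly sound but weak on two dimensions.
example = {name: 4 for name in DIMENSIONS}
example["Truth Velocity"] = 2
example["Baseline Realism"] = 1

print(weakest(example))  # ['Baseline Realism', 'Truth Velocity']
```

Reading the output as a pattern, not a verdict, matches the article’s point: the two weakest dimensions are where attention will have the most impact.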

The diagnostic runs as a fixed-fee engagement over 10 days. At the end, you’ll have a clear picture of program health, an honest assessment of what needs attention, and practical options for what to do next.

Then you decide.
