Why this work exists
 Most organisations invest in AI because they want better outcomes. Better decisions. Greater efficiency. More value.
Very few intend for AI systems to solve problems in one place while quietly creating new ones elsewhere. And yet, that’s what happens far more often than leaders expect.
Value is created locally, but accountability blurs. Trust erodes. Decisions become harder to explain - and harder to stand behind.
These consequences rarely appear on dashboards. They surface later, under scrutiny, when choices made in good faith become difficult to defend.
This gap - between AI capability and leadership responsibility - is where my work lives.
Who I am
I have spent over 26 years working at the intersection of data, AI, leadership, and organisational change.
Across more than 35 organisations, my work has helped generate over $650M in defensible value - not by chasing technology, but by helping leaders make better decisions about how technology is used, governed, and owned.
Earlier in my career, I held senior executive and board-level roles inside large organisations, where I saw first-hand how often AI and data initiatives succeed technically while failing organisationally.
Not because the models didn’t work - but because the trade-offs weren’t made explicit.
That experience reshaped how I think about AI value.
What I believe
AI value is not determined by technical sophistication.
It is shaped by the decisions leaders make - and the ones they leave implicit.
Accountability
Accountability diffuses when governance and ownership are not explicit. When outcomes cannot be clearly owned, trust erodes - even if performance improves.
Risk
Risk does not come only from failure.
It often emerges from success deployed without sufficient foresight or boundary-setting.
Trust
Trust is not a communications problem.
It is a consequence of whether systems behave in ways leaders can explain and defend.
Value
Value that cannot be justified under scrutiny is fragile. Enduring value requires decisions that balance efficiency with responsibility.
Human Impact
AI systems always encode assumptions about people. If those assumptions are not examined, harm is often discovered too late.
What I do
I work with boards, policymakers, and senior leadership teams who are navigating the shift from AI experimentation to AI responsibility.
My role is not to sell tools or accelerate adoption.
It is to help leaders:
- surface the trade-offs that AI introduces
- clarify ownership before consequences arrive
- align value creation with human and societal expectations
- make decisions they can explain, defend, and sustain
Sometimes that work happens through advisory engagements.
Sometimes through coaching, speaking, workshops, or facilitated dialogue.
Sometimes through writing and public reflection.
The formats vary.
The lens does not.
Why this matters now
AI is no longer confined to experimentation.
Across sectors, it is shaping real-world outcomes at scale.
Accountability
From capability to consequence
Leaders are increasingly accountable not for what AI can do, but for what it does - and what it displaces.
Risk
From optimisation to ownership
Efficiency gains are no longer sufficient justification when downstream impacts affect people, rights, and institutions.
Trust
From intent to evidence
Appeals to good intent are giving way to demands for explainability, traceability, and defensible decision-making.
This shift is structural, not cyclical - and it changes what leadership requires.
Writing, research, and institutions
I’m the author of Making Data Work, Value Driven Data, and The Values of Artificial Intelligence, where I explore why AI value is ultimately a leadership and governance challenge, not a technical one.
I’m also the Founding Steward of the AI Values Institute, a space for serious engagement on AI value, governance, accountability, and human impact - before consequences become unavoidable.
Alongside this, I serve as Strategic Architect of the AI Value OS, focused on helping organisations make AI trade-offs visible and actionable across value, risk, and trust.
Who this is for
My work is most relevant for:
- board directors
- C-suite leaders
- senior policymakers
- transformation and risk leaders
Especially those who sense that:
“Some of the AI decisions being made today will be difficult to justify tomorrow.”
A final note
If you’re looking for speed, reassurance, or validation of decisions already taken, this may not be the work you need.
But if you’re trying to navigate the harder questions - about value, responsibility, and long-term consequence - then you’re in the right place.
If you’d like to explore this work further, you’ll find writing, tools, and ongoing reflections - all shared to help leaders think more clearly about AI value, responsibility, and consequence.