A figure jumped out at me from a recent industry briefing. Of all the companies pouring money into artificial intelligence, only one in a hundred would describe themselves as "AI-mature". One percent. Everyone else is somewhere between curious and confused, often spending heavily while quietly wondering when the promised returns will arrive.
That number sits inside a much bigger one. Global enterprise spending on AI is on track to hit $665 billion this year, according to analysis released by ExcelMindCyber Institute. And roughly 73% of those investments fail to deliver the return their boards were promised. So the question is no longer whether to invest. It is why so much investment leads to so little outcome.
The answer is uncomfortable, because it is not really about the technology.
Most enterprises I work with assume their AI problem is technical. They want better tools, faster integrations, smarter agents. So they buy. Then they hire. Then they announce a transformation. Six months later, the dashboards look impressive and the work feels much the same.
Tolulope Michael, who leads ExcelMindCyber, puts it plainly. He says the failure point sits with the systems, people and structures wrapped around the model. The real question, he argues, is "who controls it, what risk is acceptable, and how quickly decisions can be made without breaking what matters." That is a governance question, not a procurement one.
The numbers back him up. Just 43% of organisations have a formal AI governance policy, according to the PEX Report 2025/26. Most of those deploying autonomous AI, the kind that triggers workflows and executes decisions without a human signing off in real time, have no agreed framework for accountability or risk. When something goes wrong, the question "who approved that?" has no clean answer. Errors compound silently. Trust erodes quickly.
When I hear "AI governance," I often picture a fifty-page policy that lives on an intranet and gets read once, by the person who wrote it. That version of governance is theatre. It satisfies an audit and changes nothing about how decisions are made on a Tuesday morning.
Real governance is closer to plumbing. It is woven into how teams work, how risks are escalated, how decisions are recorded, and how people are trained to spot when an AI output should not be trusted. It needs to be specific to the business, because what counts as risk in a hospital does not look like risk in a hedge fund.
This is also where the regulatory clock starts to tick. The EU AI Act's high-risk compliance requirements activate this year, with fines reaching up to 7% of global turnover, and over 1,100 AI-related bills were introduced across the United States in 2025 alone. Whatever your view on regulation, "we will get to it later" is no longer a viable position.
The leaders making real progress tend to share three habits. They start with workflows, not tools. They map where decisions are actually made, who is accountable, and where AI is being asked to take on judgement that humans should be retaining. Only then do they pick the technology.
They build in human-in-the-loop checkpoints where the cost of being wrong is high. That actually speeds them up, because mistakes are caught early rather than discovered in a board paper three quarters later.
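The checkpoint idea is simple enough to sketch in a few lines. This is a minimal illustration, not a product: the risk score, threshold, and function names are all hypothetical, and a real deployment would tie the threshold to the organisation's own risk appetite.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # what the AI proposes to do
    risk_score: float  # 0.0 (trivial) to 1.0 (severe cost of being wrong)

# Assumption for illustration: low-risk actions proceed unreviewed.
AUTO_APPROVE_THRESHOLD = 0.3

def route(decision: Decision) -> str:
    """Route a proposed AI action: execute low-risk, escalate the rest."""
    if decision.risk_score < AUTO_APPROVE_THRESHOLD:
        return "executed"
    # High-cost-of-error decisions wait for a named human approver,
    # which is where mistakes get caught early rather than in a board paper.
    return "queued_for_human_review"

print(route(Decision("send renewal reminder", 0.1)))  # executed
print(route(Decision("deny insurance claim", 0.9)))   # queued_for_human_review
```

The point of the sketch is that the gate sits in the workflow itself, not in a policy document: the routing decision is made, and recorded, at the moment the AI proposes an action.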
And they invest in the literacy of the people using the systems, not just the people building them. Training is treated as infrastructure, not a tick-box exercise. In one programme I worked on, daily AI use among non-technical staff rose to 86% inside six months, because people had been taught how to think with the tools rather than just operate them.
One thing to try this week: Pick a single AI-assisted decision your organisation made in the last month. Ask three questions about it. Who approved it? What would have flagged it as wrong? And could you reconstruct that trail tomorrow if a regulator asked? If any of those answers are awkward, you have found your starting point. The $665 billion is going to be spent either way. The companies that pull ahead will be the ones who decided, early, that governance is the product.
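Those three questions can even be run as a mechanical check against whatever record of the decision exists. The sketch below assumes a hypothetical decision record; the field names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DecisionRecord:
    decision: str
    approver: Optional[str] = None          # who signed off
    validation_check: Optional[str] = None  # what would have flagged it as wrong
    evidence_trail: list = field(default_factory=list)  # artefacts a regulator could inspect

def audit_gaps(rec: DecisionRecord) -> list:
    """Return the governance questions this record cannot answer."""
    gaps = []
    if not rec.approver:
        gaps.append("Who approved it?")
    if not rec.validation_check:
        gaps.append("What would have flagged it as wrong?")
    if not rec.evidence_trail:
        gaps.append("Could you reconstruct the trail tomorrow?")
    return gaps

# An undocumented decision fails all three questions.
print(audit_gaps(DecisionRecord("auto-deny claim")))
```

If the function returns anything at all for a real decision your organisation made last month, that gap is the starting point.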
If those answers feel uncertain, the AI Capability Scorecard is a useful next step. It benchmarks exactly those governance dimensions, from strategy and skills to data and accountability, so you can see where you stand before the spend decides for you.