
Deloitte's 2026 State of AI Report: Governance Is the Differentiator

Jamie Bykov-Brett · 09 April 2026 · 5 min read

Here is a number that should keep a few executive teams awake tonight: two-thirds of organisations are now reporting productivity gains from AI, but only one in five has translated that into actual revenue growth.

That is a significant gap. And it tells us something important about where we are in the AI adoption curve.

If you have spent the past two years rolling out copilots, automating workflows, and watching your teams get faster at tasks they were already doing, you are not alone. Most organisations have done the same. The tools work. People are saving time. Reports get written faster. Data gets summarised more quickly. Emails get drafted with less friction.

But here is the uncomfortable question: where is that time going? And what is it producing?

The productivity gains are real. I have seen them first-hand in the organisations I work with. A team that used to spend four hours preparing a weekly brief now does it in forty minutes. That is genuine. What is less clear is whether anyone has thought carefully about what that team should now be doing with the other three hours and twenty minutes. In many cases, the answer is more of the same. Faster hamster wheels, but hamster wheels nonetheless.

This is not a technology problem. It is a structural one.

The governance gap

Deloitte's findings on governance are worth sitting with. The report makes clear that organisations where senior leadership actively shapes AI governance achieve significantly greater business value than those that delegate it to technical teams.

That sentence sounds obvious when you read it quickly. But think about what it implies.

If your AI strategy is being driven primarily by IT, or by a technical function running experiments in a sandbox, you may be building capability without building coherence. You are getting faster at things without asking whether those things are the right things to be fast at. And you are probably not having the harder conversations about which decisions should stay with humans, which processes need redesigning rather than automating, and what governance looks like when machines start making choices that used to require judgement.

This is not about slowing things down. It is about pointing them in the right direction before you accelerate.

I have worked with leadership teams who were genuinely surprised to discover that their AI initiatives had no clear line of accountability. The tools had been deployed. People were using them. But no one had defined what good looked like, or what would happen when something went wrong. That is not a failure of the technology. It is a failure of oversight.

Beyond surface-level adoption

The report breaks organisations into three categories: those making surface-level changes, those redesigning processes around AI, and those using AI to create entirely new products or reinvent their business models. Only about a third fall into that last group. The rest are, in one way or another, bolting AI onto existing ways of working.

There is nothing wrong with starting there. But staying there is a problem.

If you are using generative AI to make your existing workflows marginally faster, you are likely to see marginal returns. If you are using it to ask harder questions about what work should exist in the first place, you are in a different conversation entirely.

This is where governance becomes strategic, not administrative. The question is not just "how do we manage risk?" It is "who decides what AI is for, and how do we hold them accountable for the answer?"

Skills are still the barrier

The report also names insufficient worker skills as the biggest barrier to AI integration. That tracks with what I see on the ground. Organisations are investing in tools but underinvesting in the literacy required to use them well.

And I do not mean technical literacy. I mean the ability to think clearly about what a task requires, to understand what AI can and cannot do, and to know when human judgement is non-negotiable. That is not a training course. It is a capability that needs to be built over time, with intention.

The organisations doing this well are not just running workshops. They are redesigning jobs. They are creating space for people to experiment and fail safely. They are treating AI fluency as a leadership competency, not a technical skill to be outsourced to the enthusiasts.

What this means for you

If you are a senior leader reading this, the question is not whether your organisation is using AI. It probably is. The question is whether you are actively shaping how it is governed, where it is applied, and what success looks like.

If that work is happening somewhere else in your organisation, without your input, this report suggests you are likely to see the same pattern: productivity up, revenue flat, and a growing sense that something is missing.

The tool is not the differentiator. The thinking behind it is.


Jamie Bykov-Brett

Listed as one of Engatica's World's Top 200 Business and Technology Innovators, Jamie is an AI and automation consultant who helps organisations move from curiosity to confident daily use. As founder of Bykov-Brett Enterprises and co-founder of the Executive AI Institute, he designs AI upskilling programmes that have delivered 86% daily adoption rates and a 9.7/10 NPS. His work sits at the intersection of technology implementation and human development, with a focus on responsible governance, practical tooling, and making AI accessible to every level of an organisation.
