
Why AI Governance Now Belongs to Learning Leaders, Not Just IT

Jamie Bykov-Brett · 08 April 2026 · 5 min read

Here's what I'm seeing in most organisations right now: paralysis. Everyone knows AI is transforming work. Everyone has concerns about security, quality, bias, job displacement. And those concerns have been so overwhelming that many leadership teams have done precisely nothing. No policy. No training plan. No clear guidance.

Meanwhile, employees got on with it.

Shadow AI is now a massive headache that most organisations can't get rid of. People have been using ChatGPT, Claude, Gemini, and a dozen other tools for months. They've developed preferences. They've built workflows. They've figured out what works for them. And now the corporate offer arrives, usually Copilot, and it often doesn't stand up to the models people have gotten used to. The result is predictable: employees nod politely at the approved tool and keep using what actually helps them do their job.

This is the context that iVentiv's 2026 L&D trends report drops into. The report argues that AI governance is becoming a core L&D responsibility. That learning culture prioritisation among Chief Learning Officers jumped from 16% in 2022 to 48% in 2025. That L&D leaders are positioning themselves at the centre of how organisations adapt and decide.

I read that and thought: which organisations are they talking about?

Because in my experience, most learning officers don't have a training plan for AI. Not a real one. They might have a compliance module somewhere. They might have added "AI awareness" to the leadership development catalogue. But a coherent strategy for building AI capability across the organisation? A framework for helping people use these tools well? That's rare. And the reason it's rare is that L&D leaders, like everyone else, have been immobilised by the same concerns that froze their executive teams.

The iVentiv report suggests the AI conversation has moved from "what can AI do?" to "what should it do?" That's true in conference rooms and strategy documents. But on the ground, most organisations are still stuck on "what are people already doing with AI that we don't know about?"

This matters because the window for shaping AI use is closing. When shadow AI becomes entrenched, you're no longer designing governance. You're negotiating with reality. You're trying to bring order to practices that have already taken root. That's harder. It requires admitting that the organisation didn't move fast enough.

L&D strategist Kristy Kitson argues in the report that L&D's value lies in designing "conditions where capability keeps renewing", shaping structures, norms, and workflows. She's right about the destination. But too many learning leaders are still standing at the starting line, waiting for permission or clarity that isn't coming.

The uncomfortable truth is that foresight was needed eighteen months ago. The organisations that moved early, that started experimenting, that gave people frameworks for thinking about AI use, are now in a position to govern thoughtfully. The ones that didn't are playing catch-up while employees have already made their own decisions.

So where does that leave L&D leaders who want to step into this space?

First, accept reality. Shadow AI isn't a problem to be eliminated. It's information about what your people actually need. Instead of trying to shut it down, find out what tools people are using and why. That tells you where the official offering falls short.

Second, stop waiting for perfect policy. You do not need a comprehensive AI governance framework to help people use these tools more effectively. Begin with two practical steps: clear guidance and a pilot programme.

For practical guidance, start with three questions: what information is appropriate to share with AI systems? How do you verify AI outputs? When should you trust the system, and when should you double-check? These questions don't require board approval; they need someone to initiate the discussion.

For the pilot, run AI in low-risk teams or departments where sensitive data is unlikely to be involved. Let these teams test, learn what works and what doesn't, and identify what yielded the best results. Crucially, share both the successes and the lessons from failures. Your organisation needs insights on how AI best supports business operations from real users, not just from the IT guy.

Third, acknowledge the gap between aspiration and capability. If your team doesn't understand AI well enough to train others, that's the first problem to solve. You can't build organisational capability from a position of uncertainty.

The iVentiv report suggests CLOs must now influence "how the organisation works," not just "how people learn." That's a significant expansion of scope. But it only happens if learning leaders actually step forward. Not with perfect strategies, but with the willingness to engage with messy reality.

The organisations that figure this out won't be the ones with the best policies. They'll be the ones where someone had the nerve to start before they had all the answers.

Jamie Bykov-Brett

Listed as one of Engatica's World's Top 200 Business and Technology Innovators, Jamie is an AI and automation consultant who helps organisations move from curiosity to confident daily use. As founder of Bykov-Brett Enterprises and co-founder of the Executive AI Institute, he designs AI upskilling programmes that have delivered 86% daily adoption rates and a 9.7/10 NPS. His work sits at the intersection of technology implementation and human development, with a focus on responsible governance, practical tooling, and making AI accessible to every level of an organisation.
