Here is a statistic that should worry anyone responsible for workforce capability. According to D2L's analysis of workplace learning trends, AI is emerging as a core training theme but without clear implementation roadmaps. Organisations are rolling out tools. What they are not rolling out is purpose.
This is the problem I see constantly in my work with leadership teams. Someone in procurement signed off on enterprise licences. IT provisioned the accounts. HR sent an email. And then everyone waited to see what would happen next.
What happened next, in most cases, was nothing useful.
The permission gap
Giving someone access to a tool is not the same as giving them permission to use it. Access is technical. Permission is cultural. And the gap between the two is where AI adoption goes to die.
Think about what it actually means to use AI in your job without clear guidance. You do not know if you are allowed to paste client data into a prompt. You do not know if the output counts as your work or needs to be disclosed. You do not know if experimenting will be seen as innovative or as slacking off. You do not know if your manager even understands what you are trying to do.
So you do nothing. Or you do something quietly, without telling anyone, which is arguably worse from a governance perspective.
The research bears this out. D2L's findings show that organisations are treating AI training as a box to tick rather than a capability to build. There is urgency in the conversation but inconsistency in execution. And that inconsistency is not random. It follows a pattern. Organisations that lack clear learning infrastructure struggle to turn any strategic priority into actual behaviour change. AI is just the latest example.
This is not a technology problem
I have said this before and I will keep saying it until it no longer needs saying. AI is not separate from us. It is a mirror. It reflects the clarity or confusion of the organisation deploying it.
If your people do not know what they are supposed to do with AI, that is not because AI is complicated. It is because your strategy is unclear, your governance is absent, or your culture punishes experimentation. Probably all three.
The organisations making progress, the 36% that D2L classifies as "career development champions", share something in common. They have embedded learning into talent strategy. They have invested in infrastructure that scales. And they have built enablement structures that help managers and employees actually engage with programmes rather than just acknowledge they exist.
Those three things sound obvious. But most organisations have done none of them for AI.
What managers are not being told
Here is where the gap becomes painful. Managers are the primary mechanism through which learning actually happens in most organisations. They coach, they allocate time, they signal what matters. But they are not being equipped to do any of this for AI.
If your managers cannot answer basic questions about what AI tools are approved, what use cases are encouraged, and what boundaries exist, they will default to silence. Silence from a manager reads as discouragement. Discouragement reads as prohibition.
So your AI adoption strategy, such as it is, gets killed in the middle layer of your organisation by people who were never told what to do.
The uncomfortable question
I keep coming back to a question I ask leaders in every engagement. Did you give your people permission to use these tools? Not access. Permission.
Permission means clarity about expectations. It means explicit guidance on what good looks like. It means protected time to learn and fail. It means psychological safety to ask questions without looking incompetent. It means managers who can model the behaviour because they have been trained themselves.
If you have not done those things, you have not enabled AI adoption. You have created the conditions for expensive confusion.
What actually works
The organisations I see succeeding are not the ones with the fanciest tools or the biggest budgets. They are the ones that treated AI capability the way they would treat any other strategic priority. They asked what behaviour change they needed. They built the infrastructure to support it. They trained the managers. They measured outcomes, not just participation.
Most importantly, they made a decision. Not a vague commitment to "exploring AI" but a clear articulation of what they expected people to do differently and why it mattered.
That is what is missing in most AI rollouts. Not technology. A decision.
So before you run another prompt engineering workshop or add another tool to your stack, ask yourself a simpler question. Do your people know what you want them to do? And have you made it safe for them to try?