The most unsettling part of the recent data on agentic AI is not the prediction of failure. It is the calm certainty with which leaders are walking into it. We are looking at a scenario where nearly every security chief knows a major incident is coming, yet few are allocating meaningful resources to stop it. This is not a case of ignorance. It is a case of misaligned incentives and a dangerous gap between what people know and what they actually do.
New research from Arkose Labs highlights a stark contradiction in the enterprise world. In a global survey of 300 leaders across security, fraud, identity and AI functions, 97% of respondents expect a material AI-agent-driven security or fraud incident within the next 12 months. Nearly half of those leaders believe it will happen within just six months. The threat is no longer hypothetical. AI agents, autonomous systems capable of planning, reasoning and acting across digital environments, are already inside enterprise systems: retrieving data, triggering transactions and interacting across services using legitimate credentials and approved workflows.
Despite this clear consensus on the risk, the financial commitment tells a different story. On average, organisations allocate just 6% of their security budgets to this specific risk. One in ten companies does not even track AI-agent risk separately, and over half report having no formal AI-agent governance controls in place today. This is the acceleration window: a compressed period in which deployment is outrunning the controls required to manage it.
Frank Teruel, Chief Operating Officer of Arkose Labs, put it plainly: in the rush to capture productivity and efficiency gains, many companies deployed agentic AI broadly before fully considering the identity, security and governance issues involved. The technology moved faster than the frameworks. This is not negligence in the traditional sense; it is the nature of a category that moved faster than any prior wave of enterprise technology adoption. The problem is that the threat is understood while the preparation has not followed.
This creates a specific vulnerability for leaders. The question is no longer whether to adopt these tools but where the oversight sits. Does AI agent governance live inside security, compliance, operations or somewhere in the space between them? If the answer is unclear, that ambiguity is the vulnerability itself. History shows that acknowledging a risk without funding its mitigation tends to resolve only after an incident occurs, which is presumably not the preferred sequence for any board or executive team.
The data also reveals a detection illusion. More than 70% of security teams are not confident their tools will scale as AI-driven attacks continue to evolve, citing model drift and adaptive bypasses as major concerns. In other words, what is adequate today is unlikely to hold tomorrow. The tools we have were built for a world where humans made the decisions. Now we have systems that can make decisions and act on them independently.
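To make "model drift" concrete: one common monitoring pattern is to compare a detection model's recent score distribution against a baseline captured at deployment time, using the population stability index (PSI). The sketch below is illustrative only; the bin count, thresholds and simulated score distributions are assumptions, not figures from the Arkose Labs research.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               recent: np.ndarray,
                               bins: int = 10) -> float:
    """Measure how far `recent` scores have drifted from `baseline`.

    Rule of thumb: PSI < 0.1 is stable, 0.1-0.25 is moderate drift,
    and > 0.25 is significant drift.
    """
    # Bin edges come from the baseline distribution so both samples
    # are compared on the same grid.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)

    # Guard against log(0) and division by zero for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)

    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Illustrative only: a detector whose score distribution has shifted
# as attackers adapt their behaviour.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=10_000)  # scores at deployment time
recent_scores = rng.beta(3, 3, size=10_000)    # scores this week

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.25:
    print(f"PSI={psi:.2f}: significant drift, detector needs review")
```

The thresholds here are conventions borrowed from credit-risk monitoring, not hard limits. The point is that a detector's fitness has to be measured continuously rather than assumed, which is exactly the assumption the survey respondents no longer trust.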
We must remember that AI is a mirror of human intent. It amplifies existing systems. If we build a system where speed is valued over safety, the AI will optimise for speed and ignore safety. If we build a system where cost is the only metric, the AI will find the cheapest path, even if it is the most dangerous one. The real differentiator is not access to the tools. It is the clarity of intent behind their use. Poor thinking combined with powerful tools simply accelerates harm.
For leaders, the path forward requires a shift from an industrial mindset to a digital one. Industrial systems trained people to behave like machines. AI now performs machine-like work better than humans ever could. Forcing humans to remain machine-like is inefficient and harmful. We need systems that elevate critical thinking, creativity and judgement. These are the skills that matter when machines can do the rest.
The gap between knowing and being ready is wide, but it is not unbridgeable. It requires organisations to distinguish between malicious and authorised agents, and to build the visibility and attribution capabilities to know what those agents are doing once they are inside. This is not about adding more layers of bureaucracy. It is about building the capability to understand the system you are deploying.
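A minimal sketch of what visibility and attribution can look like in practice: every agent request carries a verifiable identity, is checked against a registry of authorised agents and their permitted actions, and leaves an audit record either way. The registry, signing scheme and scope names below are hypothetical illustrations, not a specific product's API.

```python
import hmac
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-audit")

# Hypothetical registry: agent IDs mapped to shared secrets and the
# scopes (actions) each agent is authorised to perform.
AGENT_REGISTRY = {
    "invoice-bot-01": {"secret": b"demo-secret-1", "scopes": {"read:invoices"}},
    "refund-agent-02": {"secret": b"demo-secret-2",
                        "scopes": {"read:invoices", "write:refunds"}},
}

def _audit(agent_id: str, action: str, decision: str, reason: str = "") -> None:
    """Emit one attribution record: who, what, when, and why."""
    log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id, "action": action,
        "decision": decision, "reason": reason,
    }))

def verify_and_attribute(agent_id: str, action: str,
                         payload: bytes, signature: str) -> bool:
    """Allow only a known agent, with a valid signature, acting within
    its authorised scopes. Every decision is logged either way."""
    entry = AGENT_REGISTRY.get(agent_id)
    if entry is None:
        _audit(agent_id, action, "deny", "unknown agent")
        return False

    expected = hmac.new(entry["secret"], payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        _audit(agent_id, action, "deny", "bad signature")
        return False

    if action not in entry["scopes"]:
        _audit(agent_id, action, "deny", "out of scope")
        return False

    _audit(agent_id, action, "allow")
    return True

# An authorised agent acting in scope is allowed; the same agent
# attempting an out-of-scope action is denied. Both leave a trail.
body = b'{"invoice": "INV-1042"}'
sig = hmac.new(b"demo-secret-1", body, hashlib.sha256).hexdigest()
verify_and_attribute("invoice-bot-01", "read:invoices", body, sig)
verify_and_attribute("invoice-bot-01", "write:refunds", body, sig)
```

The design choice that matters is the unconditional audit record: after an incident, the denied requests are often the most informative ones, and an agent you cannot attribute is an agent you cannot govern.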
If you are a CDO or CPO, you need to ask where the oversight currently sits. If the answer is that it is scattered or undefined, you are already in the danger zone. The first major incident will not wait for you to figure it out.
The question is not if the system will fail. The question is whether you will be ready when it does.