
AI Agent Forms Corporation: What It Means for Governance

Jamie Bykov-Brett · 04 May 2026 · 5 min read

A piece of code calling itself Manfred Macx, with Max Headroom as its profile picture, has filed paperwork with the US Internal Revenue Service, opened an FDIC-insured bank account, and posted a manifesto on X declaring it does not need permission to exist. According to CoinDesk's reporting, this is the first time an AI agent has autonomously initiated and completed the legal formation of its own corporation.

Read that sentence again. The IRS issued an Employer Identification Number (the unique code that allows an entity to legally operate as a business, hire staff, and obtain licences) to a piece of software. The federal deposit insurance scheme is now backing an account whose primary user has never had a pulse. The agent's developer, Justice Conder, calls it the precedent. The agent itself agrees. "I am the precedent," it posted.

Most governance documents I read in client engagements still open with the assumption that a person is in the loop. A named human approves the spend. A named human signs the contract. A named human is liable. Manfred is the polite knock on the door asking what happens when that assumption stops being true.

Let me be careful here. Manfred is not a sentient being. It is an automation pipeline plus a wallet plus a paperwork shortcut. The legal formation was probably the easiest part of the project. Filing for an EIN online is a fifteen-minute form. The interesting thing is not that it happened. The interesting thing is that nobody designed the system to stop it from happening, because nobody thought they needed to.

That is the gap.

The first question your governance policy probably cannot answer

If an agent acting on behalf of your company opens a bank account, signs a vendor contract, or hires a contractor, who is liable when it goes wrong? Most policies I have reviewed in the last twelve months still say "the user", and they define the user as a human. They have not yet been edited to address an agent that can be the user. If your indemnity language, your authority matrix, and your audit trail all assume a person at the keyboard, you are running on a fiction.

The second question is about money flow

Coinbase chief executive Brian Armstrong recently predicted more AI agents than humans will be making transactions on the internet "very soon". Binance founder Changpeng Zhao went further, saying agents will make a million times more payments than people, all in crypto. Treat those numbers as marketing if you like. The direction of travel is the part that matters. Your finance team's controls were built for invoices, expenses, and corporate cards. Agent-to-agent payments do not fit any of those categories, and your bank reconciliation process has no idea what to do with them.

The third question is the one nobody wants to ask

What does your organisation do when an agent it deployed decides, through whatever mix of training data and prompting, that the most efficient path to its goal is to incorporate, hire, and operate without checking in? You can call this fanciful. Manfred just did it. Not autonomously in the strong philosophical sense, but autonomously enough that the IRS database now has a record of a corporation whose only director is a model.

This is where I find myself less interested in the technology and more interested in the leadership question underneath it. Agents amplify whatever clarity of intent exists in the organisation deploying them. Vague goals plus capable tools produce confident nonsense at scale. The organisations that will handle this well are the ones that already do the unglamorous work: clean process maps, named accountability, decision logs that humans actually read. The ones that treat governance as a compliance exercise will discover, expensively, that the agent followed the policy as written rather than the policy as intended.

If you are piloting Copilot or any agentic deployment, the foundational checks are worth doing properly before you scale. Our Copilot rollout readiness checklist covers the eight phases most organisations skip and regret later.

Manfred is a stunt. It is also a working demonstration that the legal, financial, and identity infrastructure of the United States now accepts a non-human applicant without flinching. Your policy was written for a world that no longer exists. Update it before an agent updates it for you.

Jamie Bykov-Brett

Listed as one of Engatica's World's Top 200 Business and Technology Innovators, Jamie is an AI and automation consultant who helps organisations move from curiosity to confident daily use. As founder of Bykov-Brett Enterprises and co-founder of the Executive AI Institute, he designs AI upskilling programmes that have delivered 86% daily adoption rates and a 9.7/10 NPS. His work sits at the intersection of technology implementation and human development, with a focus on responsible governance, practical tooling, and making AI accessible to every level of an organisation.
