
AI Irony: Misinformation Expert Delivers Testimony Laced with AI Misinformation

Jamie Bykov-Brett · 11 December 2024 · 2 min read

Stanford professor Jeff Hancock is a recognised expert on misinformation, and charges $600 an hour for that expertise. So it's more than a little awkward that an affidavit he drafted with the help of GPT-4o turned out to contain fabricated citations, in testimony about misinformation itself.

What happened

Hancock submitted the affidavit in a Minnesota court case concerning the state's 2023 ban on using deepfakes to influence elections. Its purpose was to show how deepfakes amplify misinformation and erode trust in democratic institutions. Plaintiffs' attorneys spotted the problem: two cited articles simply didn't exist, and a third misattributed the authorship of a real study.

How the errors crept in

Hancock later admitted he had used GPT-4o and Google Scholar to assist with research and drafting. He had included placeholder tags ("[cite]") as reminders to insert correct references, but inadvertently allowed GPT-4o to generate fabricated citations in their place. "I express my sincere regret for any confusion this may have caused," he wrote, adding that he stood by the substantive points of the affidavit.
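The failure mode here is mechanical, and so is the fix: verify every reference against a bibliographic database before a document leaves your desk. Below is a minimal sketch of that check using Crossref's public REST API; the `citation_exists` helper, the substring-matching heuristic, and the sample reference are my own illustrative choices, not anything from the case, and a real workflow would also verify DOIs and author names.

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def citation_exists(title: str, author: str | None = None) -> bool:
    """Return True if a cited title closely matches a real Crossref record."""
    params = {"query.bibliographic": title, "rows": 5}
    if author:
        params["query.author"] = author
    resp = requests.get(CROSSREF_API, params=params, timeout=10)
    resp.raise_for_status()
    items = resp.json().get("message", {}).get("items", [])
    # Require the cited title to appear in a returned record's title,
    # so a loose keyword hit doesn't count as verification.
    wanted = title.lower().strip()
    return any(
        wanted in t.lower()
        for item in items
        for t in item.get("title", [])
    )

# Flag every reference that can't be matched before filing.
references = [
    ("An entirely plausible-sounding paper title", "J. Doe"),  # hypothetical
]
for ref_title, ref_author in references:
    if not citation_exists(ref_title, ref_author):
        print(f"UNVERIFIED: {ref_title!r}")
```

A hallucinated citation fails a lookup like this in seconds, which is precisely why leaving "[cite]" placeholders for a language model to fill, without a verification pass afterwards, is so dangerous.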

The irony is hard to miss

A misinformation expert's affidavit, in a case about deepfakes no less, undermined by misinformation. This isn't merely embarrassing; it's a clear warning about unchecked reliance on AI in high-stakes settings. When a model hallucinates citations in a case designed to address digital deception, the consequences go well beyond irony.

This isn't an isolated case

Hancock's mistake is not unique. Many professionals have integrated AI into their workflows only to find it confident, capable, and occasionally wrong in ways that matter. In legal documents and expert reports, those errors carry serious repercussions.

Transparency and accountability

The case sparked a wider debate: should experts disclose when AI assists in drafting authoritative documents? Increasingly, the answer is yes: several courts now require parties to disclose when generative AI has been used to prepare filings. As AI becomes embedded in professional workflows, robust guidelines and ethical frameworks are essential, not optional.

The teaching angle

Hancock teaches "COMM 1: Introduction to Communication" and "COMM 324: Language and Technology" at Stanford, where he emphasises proper citation, partly as a way of broadening whose work is represented in the field. His students were quick to note the irony of their professor being caught out by AI-generated fabrications. Even experts are not immune to AI's limitations.

The takeaway

Hancock's experience is a vivid illustration of AI's complex role in professional life. As AI becomes a standard part of how we work, transparency and clear disclosure aren't just good practice; they're necessary safeguards. The balance between innovation and responsibility demands honest conversation about where AI falls short.


Jamie Bykov-Brett

Listed as one of Engatica's World's Top 200 Business and Technology Innovators, Jamie is an AI and automation consultant who helps organisations move from curiosity to confident daily use. As founder of Bykov-Brett Enterprises and co-founder of the Executive AI Institute, he designs AI upskilling programmes that have delivered 86% daily adoption rates and a 9.7/10 NPS. His work sits at the intersection of technology implementation and human development, with a focus on responsible governance, practical tooling, and making AI accessible to every level of an organisation.
