
AI Irony: Misinformation Expert Delivers Testimony Laced with AI Misinformation

In a world where Artificial Intelligence (AI) seems to be at our beck and call—like an eager but occasionally misguided intern—we're constantly entangled in the paradox of its brilliance and blunders.

Picture this: Stanford's Jeff Hancock, a communication professor and renowned expert on misinformation, billing $600 an hour (great work if you can get it), tasks ChatGPT (the GPT-4o model) with helping to draft a legal affidavit. The result? A declaration riddled with inaccuracies and "hallucinated" citations, some of which, ironically, concerned misinformation itself. It's a plot twist worthy of a courtroom drama, and not the good kind. When the expert on misinformation ends up inadvertently spreading it, the stakes couldn't be higher, especially in a case involving deepfakes, the poster child for digital deception.

Hancock's affidavit, submitted in a Minnesota court case over the state's 2023 ban on using deepfakes to influence elections, was intended to illustrate how deepfakes amplify misinformation and erode trust in democratic institutions. Yet plaintiffs' attorneys pointed out that the declaration cited two articles that simply don't exist, and that another citation misattributed the authorship of a real study, a scholarly faux pas that's hard to overlook.

Hancock later admitted to the oversight, explaining in a court filing that he had used GPT-4o and Google Scholar to assist with research and drafting. He’d included placeholder tags (“[cite]”) in his initial draft to remind himself to add correct references but inadvertently allowed GPT-4o to generate its own fabricated citations. “I express my sincere regret for any confusion this may have caused,” he wrote, clarifying that he stood firmly behind the substantive points in the affidavit. Despite the apology, the episode highlights the ethical tightrope we walk when integrating AI into critical workflows.
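
For what it's worth, the failure mode Hancock describes is mechanically easy to guard against. Here is a minimal, hypothetical sketch in Python (the function names, regexes, and workflow are my own assumptions, not anything Hancock actually used) that flags leftover "[cite]" placeholders and any parenthetical citation that hasn't been checked against a hand-verified reference list:

```python
import re

# Hypothetical pre-filing check (illustrative only): flag leftover "[cite]"
# placeholders and any parenthetical citations not on a hand-verified list.
PLACEHOLDER = re.compile(r"\[cite\]", re.IGNORECASE)
CITATION = re.compile(r"\(([A-Z][A-Za-z'-]+(?: et al\.)?),\s*(\d{4})\)")

def check_draft(draft_text: str, verified_refs: set[tuple[str, str]]) -> list[str]:
    """Return warnings for a draft before it is filed."""
    warnings = []
    if PLACEHOLDER.search(draft_text):
        warnings.append("Draft still contains '[cite]' placeholders.")
    for author, year in CITATION.findall(draft_text):
        if (author, year) not in verified_refs:
            warnings.append(f"Unverified citation: ({author}, {year})")
    return warnings

if __name__ == "__main__":
    draft = "Deepfakes erode trust (Hancock, 2024) and demand scrutiny [cite]."
    verified = {("Hancock", "2024")}  # only entries a human has actually checked
    for warning in check_draft(draft, verified):
        print(warning)
```

It's a toy check, of course; a real pipeline would parse the document format and reconcile citations against a reference-manager export. But the principle is the point: nothing the model generates goes out the door unverified.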

And let’s not miss the irony here: a misinformation expert’s affidavit—about deepfakes, no less—undermined by misinformation itself. It’s the kind of scenario that would make Alanis Morissette rewrite "Ironic". This isn’t just an embarrassing oops; it’s a neon sign flashing the dangers of unchecked AI reliance in high-stakes environments. When AI conjures hallucinated citations in a case meant to tackle the very essence of digital deception, it’s not just ironic—it’s alarming.

Hancock's mistake, while humbling, is not unique. Many professionals have shared the stage with AI, only to find it behaves like a confused stage manager: brilliant at organising props, but prone to improvising lines that were never in the script. That tendency becomes particularly perilous in high-stakes contexts such as legal documents and expert reports, where errors carry serious repercussions.

The broader implications are equally thought-provoking. Hancock's case sparked a debate about transparency and accountability in AI usage. Should experts disclose when AI assists in drafting authoritative documents? The answer is a resounding "yes", and some courts have already begun requiring parties to disclose when generative AI has been used to prepare filings and expert opinions. As AI systems become embedded in professional workflows, the need for robust guidelines and ethical frameworks is more pressing than ever.

Interestingly, Hancock's role as a professor underscores the paradox. He teaches "COMM 1: Introduction to Communication" and "COMM 324: Language and Technology" at Stanford, courses in which he stresses the importance of proper citation, partly as a way to broaden representation in the field, and his students found it "ironic" that he'd been caught out by fabricated, AI-generated citations. It's a vivid reminder that even experts are not immune to AI's quirks and limitations.

As we navigate the choppy waters of the AI paradox, talking openly about responsible AI use is paramount. Working with AI is like dancing at a party: exciting, but you need to stay mindful of your partner. Transparency has to run through every interaction with these tools if we want to avoid stepping on ethical toes. Striking the balance between innovation and responsibility demands collaboration, dialogue, and, dare I say, a touch of humility.

Hancock's experience is a modern-day fable about AI's complex role in the professional world. As we edge towards a future in which AI becomes an indispensable co-passenger, we must wrestle honestly with its challenges. Keep the conversation going, share ideas, and together we can build a culture in which innovation and responsibility coexist.

I invite you, dear reader, to chime in with your own AI anecdotes or thoughts on its role in professional life. How do you envision a regulatory landscape that embraces AI's progress while safeguarding against its pitfalls? Your insights are central to this discourse as we work towards a more harmonious future with AI.