Stanford professor Jeff Hancock is a recognised expert on misinformation, and charges £600 an hour for that expertise. So it's more than a little awkward that an affidavit he drafted with ChatGPT-4o turned out to contain fabricated citations about misinformation itself.
Hancock submitted the affidavit in a Minnesota court case concerning the state's 2023 ban on using deepfakes to influence elections. Its purpose was to show how deepfakes amplify misinformation and erode trust in democratic institutions. Plaintiffs' attorneys spotted the problem: two cited articles simply didn't exist, and a third misattributed the authorship of a real study.
Hancock later admitted he had used GPT-4o and Google Scholar to assist with research and drafting. He had included placeholder tags ("[cite]") as reminders to insert correct references, but inadvertently allowed GPT-4o to generate fabricated citations in their place. "I express my sincere regret for any confusion this may have caused," he wrote, adding that he stood by the substantive points of the affidavit.
A misinformation expert's affidavit about deepfakes, undermined by AI-generated misinformation. This is more than embarrassing; it is a clear warning about unchecked reliance on AI in high-stakes settings. When a model hallucinates citations in a case designed to address digital deception, the consequences go well beyond irony.
Hancock's mistake is not unique. Many professionals have integrated AI into their workflows only to find it confident, capable, and occasionally wrong in ways that matter. In legal documents and expert reports, those errors carry serious repercussions.
The case sparked a wider debate: should experts disclose when AI assists in drafting authoritative documents? The answer is surely yes, and some courts have already begun requiring parties to disclose AI involvement in their filings. As AI becomes embedded in professional workflows, robust guidelines and ethical frameworks are essential, not optional.
Hancock teaches "COMM 1: Introduction to Communication" and "COMM 324: Language and Technology" at Stanford, where he emphasises proper citations as a means of broadening representation in communications. His students noted the irony of their professor being caught out by AI-generated fabrications. Even experts are not immune to AI's limitations.
Hancock's experience is a vivid illustration of AI's complex role in professional life. As AI becomes a standard part of how we work, transparency and clear disclosure are not just good practice; they are necessary safeguards. Balancing innovation with responsibility demands honest conversation about where AI falls short.