Did Anthropic’s own AI generate a ‘hallucination’ in its legal defense in a song lyrics copyright case?


A federal judge has ordered Anthropic to respond to claims that a data scientist it employed relied on a fictitious academic article, likely generated by artificial intelligence, in a court filing against major music publishers.

During a hearing on Tuesday (May 13), US Magistrate Judge Susan van Keulen described the situation as “a very serious and grave issue,” Reuters reported the same day.

Van Keulen directed Anthropic to respond by Thursday (May 15).

The disputed filing, lodged April 30, is part of an ongoing copyright lawsuit brought by Universal Music Group, Concord, and ABKCO against the AI company.

The music publishers are accusing Anthropic of using copyrighted song lyrics without permission to train its AI chatbot Claude.

The plaintiffs filed an amended copyright infringement complaint on April 25, about a month after they were dealt a setback in their initial proceedings against Anthropic.

Subsequently, on April 30, Anthropic data scientist Olivia Chen submitted a filing citing an academic article purportedly published in the journal The American Statistician. The citation formed part of Anthropic’s argument that a limited sample of Claude interaction records would be sufficient to show how often users prompted the chatbot for copyrighted lyrics.

“I understand the specific phenomenon under consideration involves an exceptionally rare event: the incidence of Claude users requesting song lyrics from Claude.”

Olivia Chen, Anthropic

“I understand the specific phenomenon under consideration involves an exceptionally rare event: the incidence of Claude users requesting song lyrics from Claude. I understand that this event’s rarity has been substantiated by manual review of a subset of prompts and outputs in connection with the parties’ search term negotiations and the prompts and outputs produced to date,” Chen said in the filing.

In the eight-page declaration, Chen provided a justification for that sample size, citing statistical formulas and multiple academic sources. Among them was the now-disputed citation: an article purportedly published in The American Statistician in 2024 and attributed to academic authors.
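For context, sample-size justifications of this kind typically rest on standard survey-statistics formulas. The sketch below is purely illustrative, assuming a textbook normal-approximation estimate of a rare proportion; the function, the assumed rates, and the resulting figure are hypothetical and are not the formula or numbers from Chen’s declaration.

```python
# Illustrative only: a textbook sample-size calculation for estimating the
# prevalence of a rare event (hypothetically, lyric-seeking prompts) within
# a large pool of chat logs. Not the method or figures from the declaration.
import math

def sample_size_for_proportion(p_expected: float, margin: float, z: float = 1.96) -> int:
    """Minimum sample size to estimate a proportion within +/- margin
    at roughly 95% confidence (z = 1.96), via the normal approximation."""
    return math.ceil((z ** 2) * p_expected * (1 - p_expected) / margin ** 2)

# Hypothetical example: if about 0.5% of prompts were assumed to request
# lyrics and the estimate should be accurate to within 0.25 percentage points:
print(sample_size_for_proportion(0.005, 0.0025))  # -> 3058 sampled prompts
```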

“We do believe it is likely that Ms. Chen used Anthropic’s AI tool Claude to develop her argument and authority to support it.”

Matt Oppenheim, Oppenheim + Zebrak

Oppenheim + Zebrak’s Matt Oppenheim, an attorney representing the music publishers, told the court he had contacted one of the named authors and the journal directly and confirmed that no such article existed.

Oppenheim said he did not believe Chen acted with intent to deceive but suspected she had used Claude to assist in writing her argument, possibly relying on content “hallucinated” by the AI itself, according to Reuters’ report.

The link provided in the filing points to another study with a different title.

“We do believe it is likely that Ms. Chen used Anthropic’s AI tool Claude to develop her argument and authority to support it,” Oppenheim was quoted by Reuters as saying.

“Clearly, there was something that was a mis-citation, and that’s what we believe right now.”

Sy Damle, Latham & Watkins

Meanwhile, Anthropic’s attorney, Sy Damle of Latham & Watkins, disputed the claim that the citation was fabricated by AI, saying the plaintiffs were “sandbagging” them by not flagging the accusation earlier. Damle also argued the citation likely pointed to the wrong article.

“Clearly, there was something that was a mis-citation, and that’s what we believe right now,” Damle reportedly said.

However, Judge van Keulen pushed back, saying there was “a world of difference between a missed citation and a hallucination generated by AI.”

The alleged “hallucination” in Anthropic’s declaration is the latest in a string of court cases in which attorneys have been criticized or sanctioned for citing fictitious cases.

In February, Reuters reported that US personal injury law firm Morgan & Morgan sent an urgent email to its more than 1,000 lawyers warning that those who relied on AI-“hallucinated” case citations could be fired.

The warning came after a judge in Wyoming threatened to sanction two lawyers at the firm for including fictitious case citations in a lawsuit against Walmart, according to Reuters.


Universal Music Group, Concord, and ABKCO sued Anthropic in 2023, alleging that the company trained its AI chatbot Claude on lyrics from at least 500 songs by artists including Beyoncé, the Rolling Stones, and the Beach Boys, without permission.

In March, the music publishers said they “remain very confident” about ultimately winning against Anthropic.

Music Business Worldwide
