The RIAA, NMPA, and six other key music industry groups have thrown their weight behind the music publishers suing Anthropic over AI copyright infringement.
The coalition, which also includes A2IM, SoundExchange, SONA, BMAC, the Music Artists Coalition, and the Artist Rights Alliance, filed an amicus brief on Monday (March 30) urging a federal court to reject Anthropic’s fair use defense in the case brought by Universal Music Publishing Group, Concord Music Group, and ABKCO in October 2023.
A second amicus brief, filed by the Association of American Publishers (AAP), News/Media Alliance, the International Association of Scientific, Technical & Medical Publishers (STM), and the Authors Guild, makes similar arguments from the perspective of the book, news, and academic publishing industries.
Both briefs, filed in the Northern District of California, support the publishers’ motion for partial summary judgment in the original Concord v. Anthropic case, filed last week, which asks Judge Eumi K. Lee to rule that Anthropic’s unlicensed copying of copyrighted song lyrics to train its Claude AI chatbot is not fair use.
The music industry brief, obtained by MBW, was filed by Pryor Cashman LLP on behalf of the RIAA et al. It zeroes in on the fourth fair use factor — market harm — which the Supreme Court has called “undoubtedly the single most important element of fair use.”
Its core argument is twofold: that AI-generated music already acts as a direct market substitute for human-created works, and that a functioning licensing market for AI training already exists — one that Anthropic has chosen not to participate in.
On the substitution point, the brief draws on data that will be familiar to MBW readers. It cites Deezer‘s disclosure that the streaming platform was receiving over 50,000 fully AI-generated tracks per day by late 2025 (as MBW reported in January, that figure has since risen to over 60,000 per day, with synthetic content now accounting for roughly 39% of all music delivered to the platform daily). The brief also references a Deezer/Ipsos survey finding that 97% of listeners could not distinguish AI-generated music from human-made tracks.
On licensing, the brief documents a string of recent deals to argue that a workable market is not hypothetical but real. These include UMG and Warner Music Group‘s agreements with Stability AI and Udio; KLAY Vision’s licensing deals with all three major label groups; Kobalt’s agreement with ElevenLabs; Merlin‘s partnership with Udio; and Musixmatch’s lyric licensing deals with Sony Music Publishing, Universal Music Publishing Group, and Warner Chappell Music.
Anthropic, it notes, is raising money at a $380 billion valuation following a $30 billion Series G round — and “indisputably has the means to compensate copyright owners.”
The amicus filings arrive at a moment of intensifying legal and commercial pressure on AI companies over the use of copyrighted material.
As MBW has reported, the original Concord v. Anthropic case — filed in October 2023 over 499 works — has since been joined by a second, far larger lawsuit filed in January 2026 covering more than 20,000 songs and seeking over $3 billion in statutory damages.
BMG also filed its own separate suit against Anthropic earlier this month.
Meanwhile, the broader music industry has moved rapidly to establish licensed AI partnerships — creating precisely the kind of market infrastructure that the amicus briefs argue Anthropic is undermining by refusing to participate.
The publishing coalition’s brief reinforces this point from outside the music world, presenting an extensive chart of AI licensing deals for textual works involving companies including OpenAI, Google, Meta, Microsoft, Amazon, and Perplexity — naming dozens of publisher partners from the Associated Press and the New York Times to Condé Nast and HarperCollins.
It characterizes Anthropic as a “holdout” that has opted to scrape copyrighted content rather than license it like its peers.
That brief also cites recent academic research from early 2026 challenging AI companies’ claims that training data is not retained in their models, pointing to studies showing that LLMs memorize copyrighted texts and can reproduce them — including one paper finding that fine-tuning can reactivate verbatim recall of copyrighted books even when safety measures are in place.
Both briefs push back on the reasoning in Bartz v. Anthropic, an earlier case in which Judge Alsup compared AI training to teaching schoolchildren to write — an analogy the music industry brief calls inadequate, citing Judge Chhabria’s observation in Kadrey v. Meta that the two scenarios are “not remotely” comparable.