Google was once at the vanguard of ethical AI music development. That may no longer be the case.

MBW Views is a series of op/eds from eminent music industry people… with something to say.

The following MBW op/ed comes from Ed Newton-Rex, CEO of the ethical generative AI company Fairly Trained and the former VP Audio at Stability AI.

Newton-Rex announced in November that he was quitting his role at Stability AI over his disagreement with the company’s position that training generative AI models on copyrighted works is ‘fair use’. Prior to joining Stability AI in 2022, Newton-Rex founded the pioneering music-making AI platform Jukedeck before selling it to TikTok/ByteDance in 2019.

He subsequently became Product Director of TikTok’s in-house AI Lab, before becoming Chief Product Officer at music app Voisey (sold to Snap in late 2020).

Billboard reported last month that Google’s powerful music generation model, Lyria, was trained on copyrighted music without rights-holders’ permission. There is a danger we are becoming inured to stories like this, given the lawsuits piling up in the text and image generation worlds. But, if true, this is a particularly interesting case, because it would mark a reversal in approach for a company that was once at the vanguard of fair music generation models.

To understand how Google’s approach to training data in AI music generation has changed, we have to journey back to 2016. AI music was in its infancy. I ran one of a tiny handful of startups in the space. Out of nowhere, Google announced Magenta, a group working on AI creativity in a number of domains, including music. Before long, they were releasing interactive demos, making Ableton Live plugins, publishing papers and open-sourcing their code. They played a huge part in bringing AI creativity into the public consciousness.

And here’s the thing – like other AI music startups at the time, they were respectful of creators’ rights. Their models were trained on data they had the rights to use, be it classical music that had entered the public domain or music datasets they commissioned themselves and released to the public. I know several members of the Magenta team from back then, and it’s clear they took this approach because of a deep respect for musicians themselves, drawn in part from their own considerable experience making music.

But something, somewhere in the company, seems to have altered this philosophy.

In November 2023, Google announced Lyria, their latest AI music generation effort. Rumours had been circulating for a while about this secretive music model that had astonished everyone who had heard it – and it didn’t disappoint. Lyrics generation, vocal generation, high-quality instrumentals, style transfer – it had it all. 2023 had already felt like a turning point for AI music, and Lyria seemed to confirm that this would be remembered as the year everything changed.

One of the interesting things about Lyria’s announcement was the fact that music industry partners were so prominent. Artists from Charlie Puth to John Legend licensed their voices, and YouTube’s Music AI Incubator, which counts Universal Music Group as a partner, was involved in its development. The messaging was clear: Google was the company doing this ethically. Didn’t these partnerships prove that?

Enter the Billboard story. Four sources report that “Google trained its model on a large set of music — including copyrighted major-label recordings — and then went to show it to rights holders, rather than asking permission first”.

In my opinion – and I imagine this is shared by many in the creative industries – this ‘ask for forgiveness, not permission’ approach is not how to go about acquiring training data for generative AI. It doesn’t give rights-holders a level playing field for negotiations: “Negotiating with a company as massive as YouTube was made harder because it had already taken what it wanted”, according to sources familiar with the ensuing label discussions who spoke to Billboard.

It also raises the question of whether Google retrained the model after removing data from rights-holders who said no. The permission-agnostic approach they apparently took to the initial training, along with the company’s documented argument that generative AI training is covered by fair use, doesn’t inspire a great deal of confidence.

These are not academic concerns. While Lyria isn’t widely accessible yet, it is available in beta in DreamTrack, a YouTube Shorts experiment, and demos of a much wider set of functionalities have been prominently and intentionally shared around the web, no doubt shoring up investors’ faith that Google remains a leader in AI.

It is hard to argue that it is not already being used for commercial purposes. And when you loudly announce and demo a product, and take it into beta, you put further pressure on the rights-holders on the other side of the negotiating table. Any rights-holder who opts out is denying consumers something they’ve already been promised.

Lyria was presented by Google as responsible generative AI: the announcement claimed they were “protecting music artists and the integrity of their work” and “[setting] the standard for the responsible development and deployment of music generation tools”. I suspect people might view these claims differently following Billboard’s revelations.

Why the volte-face in Google’s position? How does a company go from being one of the most respectful of musicians’ rights to one that takes this approach in just seven years? It could be a combination of factors.

The Google Brain team behind Magenta was merged with DeepMind earlier in 2023, so it’s possible that the initial core of the Magenta team became diluted, along with their musician-centric philosophy. Competitive pressure may have led the team to feel they had to sacrifice training data ethics to stay ahead. Or, perhaps most likely, they just saw teams in other domains, either internally or externally, training on copyrighted work and getting away with it (for now), and the company’s assessment of the legal risks changed.

Whatever the case, they’ve obviously decided it’s worth the risk. I’m sure I’m not the only one who hopes this is a decision they ultimately reverse.

There are many AI music generation companies today that have stayed true to Magenta’s original philosophy of respecting musicians’ rights – we have certified eight of them at Fairly Trained. For individuals and companies who care about human musicians, the choice of which models to use right now should be obvious. And I suspect the artists who partnered with Google for the launch of Lyria aren’t thrilled about Billboard’s revelations, either.

Google is not the worst offender when it comes to music training data. In their defence, they are at least making some effort to get licences before making their platform widely available, which can’t be said of every music AI company out there (naming no names). But their claim that they are setting the standard for responsible development of these tools is not one that stands up to scrutiny.

I have huge respect for the technological progress the music AI researchers at Google have made over the years. But I hope that those who have been there since the Magenta days can fight what may be formidable internal opposition and return the company to Magenta’s original philosophy, which truly did set the standard for responsible AI music development.
