Major record companies hate AI voice-cloning platforms that don’t pay. The one they hate most was created by a 20-year-old UK student.

MBW Explains is a series of analytical features in which we explore the context behind major music industry talking points – and suggest what might happen next. Only MBW+ subscribers have unlimited access to these articles.
What’s happened?

For years, the Recording Industry Association of America (RIAA)’s annual submission to the Office of the US Trade Representative’s ‘Review of Notorious Markets for Counterfeiting and Piracy’ has consisted of a long list of pirate sites known to rip off copyrighted music.

However, in its latest submission, for the USTR’s 2023 report, the RIAA has added a new category of copyright infringer: AI vocal cloning services.

“The year 2023 saw an eruption of unauthorized AI vocal clone services that infringe not only the rights of the artists whose voices are being cloned but also the rights of those that own the sound recordings in each underlying musical track,” stated the RIAA submission, which can be read in full here.

“This has led to an explosion of unauthorized derivative works of our members’ sound recordings which harm sound recording artists and copyright owners.”

The report specifically names only one such service: the UK-registered Voicify.


“This site markets itself as the ‘#1 platform for making high-quality AI covers in seconds!’ and includes AI vocal models of sound recording artists, including Michael Jackson, Justin Bieber, Ariana Grande, Taylor Swift, Elvis Presley, Bruno Mars, Eminem, Harry Styles, Adele, Ed Sheeran, and others, as well as political figures including Donald Trump, Joe Biden, and Barack Obama,” the report states.

“The service stream-rips the YouTube video selected by the user, copies the acapella from the track, modifies the acapella using the AI vocal model, and then provides to the user unauthorized copies of the modified acapella stem, the underlying instrumental bed, and the modified remixed recording.

“This unauthorized activity infringes copyright as well as infringing the sound recording artist’s rights of publicity.”

In naming Voicify in particular, the RIAA has, in essence – when it comes to voice-cloning AI, anyway – identified the recording industry’s public enemy No.1.

This status may primarily be down to Voicify’s popularity: according to the RIAA’s research, the site had 8.8 million visitors over the past year.

As you’ll read more about later in this piece, Voicify wasn’t created by a veteran serial copyright infringer, nor by an international criminal enterprise.

It was made by a 20-year-old British computer science student.

Searching for a Taylor Swift voice clone on Voicify

What’s the context?

The RIAA isn’t exaggerating when it says we’ve seen an “eruption” of AI-cloned vocals this year.

Some of this has been legitimate (if still somewhat controversial) work, such as using AI to “extricate” the late John Lennon’s vocals from a low-quality cassette for a “new” Beatles recording.

Some of it has been borderline, such as French DJ David Guetta’s Eminem-style track, which used one AI algorithm to write the lyrics and another to generate the cloned vocals. (“Let me introduce you to… Emin-AI-Em!” Guetta quipped on Twitter.)

But what worries the industry is all the unauthorized activity – from the viral “fake Drake” track, featuring the cloned vocals of Drake and The Weeknd, to an unauthorized cover of Beyoncé’s Cuff It “performed” by Rihanna, to a cloned Bad Bunny-Rihanna mash-up.

The music industry appears to be standing on the precipice of a potential new wave of piracy, where prominent artists’ vocals – and potentially even their own visual identity – are stolen to create content for which these artists (and other rightsholders) are never paid.

What happens next?

The big question within the music business today is: can unauthorized AI clones of artists be monetized by music’s rightsholders? Can there be a way for Drake to earn royalties from the “fake Drake” track (or others like it on services like Voicify)?

That’s the approach increasingly favored by many in the industry, including Warner Music Group CEO Robert Kyncl, who – in his previous role as Chief Business Officer at YouTube – saw firsthand how Alphabet’s video streaming service partnered with music companies to monetize unauthorized copyrighted content uploaded by users.

At YouTube “we made a very important decision, which was to go above and beyond the law, and build a fingerprinting software that allowed us to track the copyright on our platform,” Kyncl said last month at the Code Conference in California.

“Out of that we built a multi-billion-dollar business, which now is a multi-billion-dollar business per year. And it was an incredible new revenue stream for everyone. AI is that with new super tools.”

Kyncl was referring to YouTube’s Content ID system, which identifies copyrighted material in user-uploaded videos, alerts the copyright owner/s, and gives them the chance to monetize each video or have it removed.

Interestingly, this “fingerprinting” approach to AI-cloned vocals in music also seems to be favored by Ghostwriter, the handle of the composer behind the landmark “fake Drake” track, Heart On My Sleeve.

In a new interview with Billboard, Ghostwriter – who chooses to remain anonymous – comments: “The Ghostwriter project — if people will hopefully support it — is about not throwing art in the trash. I think there’s a way for artists to help provide that beauty to the world without having to put in work themselves. They just have to license their voices.”


Numerous tech firms are working to develop AI-detection tools, not least YouTube owner Alphabet, whose Google division recently released an AI image detector.

YouTube itself is striking partnerships with major music companies, in what appear to be the first steps to developing commercial partnerships around new AI music tools.

YouTube and Universal Music Group announced a deal in August to jointly develop AI tools that offer “safe, responsible and profitable” opportunities to music rights holders.


At the time, Sir Lucian Grainge, Universal Music Group CEO & Chairman, said of Universal and YouTube’s joint objective: “Central to our collective vision is taking steps to build a safe, responsible and profitable ecosystem of music and video — one where artists and songwriters have the ability to maintain their creative integrity, their power to choose, and to be compensated fairly.”

On top of an “AI music incubator” that will involve feedback and guidance from UMG-signed artists, YouTube also announced a set of guiding principles for AI development that will “include appropriate protections and unlock opportunities for music partners.”

A final thought…

It seems likely that, over the coming months or years, Voicify will face one or more legal challenges from music rightsholders.

The RIAA says in its ‘Notorious Markets’ submission that it believes Voicify’s owner/registrant “is a UK resident”. But in truth, it’s not very hard to find out more about him.

Aditya Bansal is credited as Voicify’s founder on LinkedIn. A computer science student at Southampton University, Bansal even confirmed this fact to the Financial Times in May.

“It’s a lot…”

Aditya Bansal, creator of Voicify, on the amount of money the platform had generated as of May this year (speaking to the FT)

Aged just 20, Bansal said that he’d already seen the popularity of Voicify go “worldwide”.

Bansal claimed that multiple record labels had contacted him wanting to make models of their own artists for demo tracks, which the FT said were intended to be used “as sketches before the full recording process”.

A subscription to Voicify costs UK users anywhere from £7.99 to £89.99 per month.

The FT asked Bansal in May how much he was earning from Voicify at that point. “It’s a lot,” he replied – accompanied by what the publication reported as a “smile shading from bashful to gleeful”.

If the record industry does decide to legally pursue Bansal, the bigger question will be precisely what they’re pursuing him for.

The RIAA’s statement on Voicify and similar services makes it clear that it sees the cloning of artists’ voices as a violation of the right of publicity.

This refers to an intellectual property right that protects against the unauthorized use of a person’s likeness, voice or other aspects of their identity.

The problem here is that – unlike copyright laws, which exist in most jurisdictions – the right of publicity isn’t uniformly recognized under the law worldwide.

In the US, for example, it’s a matter of state law, and those state laws vary widely.

Of the 50 US states, 19 – including California, New York and Florida – have a law explicitly recognizing the right of publicity in some form, while another 11 states have recognized publicity rights as a matter of common law.

In the UK, the right of publicity isn’t directly enshrined in law. However, UK copyright law allows for people to assert a copyright over the use of their own likeness.

It’s this inconsistency of law that likely prompted Universal Music Group’s General Counsel and Executive VP for Business and Legal Affairs, Jeffrey Harleston, to call for a national US law on the right of publicity in the summer.

The RIAA’s submission, which does not name Mr Bansal as the site’s registrant

Any legal challenge to Voicify’s activities will likely involve “venue shopping” – the practice of filing a lawsuit in a particular jurisdiction to take advantage of favorable laws – and will be experimental to an extent, as it’s still largely unknown how courts will apply copyright law to unauthorized AI-generated works.

More importantly, the last quarter-century of digital piracy has taught the music business that fighting individual copyright-infringing companies and individuals in court rarely does much to halt piracy as a whole. If Voicify loses in court, there will always be another service ready to take its place.

Instead, the most effective approach – when it comes to an explosion of user-generated activity using music copyrights – is to monetize unauthorized content via the platforms that host it.

At the end of the day, the cooperation of major platforms like YouTube with music rightsholders could be all that’s needed to ensure copyrights are sufficiently protected and policed in the age of widespread generative AI.

If that optimistic outcome arrives, the cooperation of individual music-AI disruptors – the Voicifys and Ghostwriters of the world – may simply cease to matter.
