YouTube is clamping down on AI-generated clones of superstars

Photo credit: Chubo / Shutterstock

Watch out, Ghostwriter, FlowGPT and others creating AI-generated clones of musical artists – YouTube's ban hammer is waiting.

Alphabet-owned YouTube announced on Tuesday (November 14) that it’s developing a system for music partners to request the removal of content on its platform that “mimics an artist’s unique singing or rapping voice.”

For the time being, this new takedown request system will be available to labels or distributors who represent artists participating in YouTube’s AI music experiments… but YouTube says it plans to expand access to other labels and music distributors in the coming months.

YouTube formed a partnership with Universal Music Group (UMG) this past August to jointly develop AI music tools, with plans to “include appropriate protections and unlock opportunities for music partners.”

As part of this effort, YouTube set up an "AI Music Incubator" where artists will work with YouTube developers to create new AI tools. Participants in the incubator include Brazilian star Anitta (recently signed to UMG's Republic Records), Björn Ulvaeus of ABBA fame, producer and hitmaker Louis Bell, fast-rising artist d4vd, the Frank Sinatra estate and neo-classical composer Max Richter.

Additionally, YouTube’s parent company, Google, was reported to be in talks with both UMG and Warner Music Group (WMG) to develop a product that would enable rights holders to collect payment on revenue generated by fan-created AI deep fakes.

YouTube indicated that takedown requests made under its new system won’t be granted automatically.

“In determining whether to grant a removal request, we’ll consider factors such as whether content is the subject of news reporting, analysis or critique of the synthetic vocals,” YouTube Product Management Vice Presidents Jennifer Flannery O’Connor and Emily Moxley wrote in a blog post.

Well before announcing its new takedown system, YouTube was cooperating with at least some rights holders’ requests for takedowns of AI-generated music that mimicked known artists. For instance, in April, the DSP issued a copyright strike against user Grandayy over an AI-generated video that mimicked Eminem singing about cats.

Not long thereafter, YouTube was among the many DSPs that pulled down Heart On My Sleeve, the notorious "fake Drake" track featuring AI-mimicked vocals of Drake and The Weeknd, apparently after a request from UMG, with which both artists are affiliated.

Additionally, YouTube announced a new policy that will require creators to label AI-generated content as such when uploading it to the streaming site.

“We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools,” O’Connor and Moxley stated in the blog post.

“When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.”

“Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties.”


YouTube will add a label in the video description panel that will alert viewers that the content is AI-generated or synthetic, and for videos involving “sensitive topics,” the label will appear on the video panel itself. (This policy is not unlike the one announced in September by TikTok, which has asked creators to label AI-generated content uploaded to the platform.)


The streaming service also said it’s working on a system to allow individuals to request takedowns of videos that feature “synthetic or altered content that simulates an identifiable individual, including their face or voice.” Those requests will be made via the platform’s existing privacy request process.

YouTube noted that it won't necessarily honor all takedown requests, and will evaluate each one against a set of criteria, which "could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar."

Those standards closely echo the considerations courts weigh when determining whether unauthorized use of copyrighted material is acceptable under fair use provisions.


Notably, YouTube didn’t mention in its blog post whether it’s working on technology that could automatically detect AI-generated content, as TikTok has said it’s doing.

Music distribution company Believe announced on an earnings call last month that it has developed AI-detection tools that can recognize a deepfake audio file with 93% accuracy.

A YouTube spokesperson told The Verge that the platform is “investing in the tools to help us detect and accurately determine if creators have fulfilled their disclosure requirements when it comes to synthetic or altered content.”

In their blog post, O'Connor and Moxley said YouTube is still "at the beginning of our journey to unlock new forms of innovation and creativity on YouTube with generative AI," and the platform is "taking the time to balance these benefits [of AI] with ensuring the continued safety of our community at this pivotal moment."

Music Business Worldwide
