YouTube extends deepfake detection tool access to celebrities and talent agencies

Credit: FotoField/Shutterstock

YouTube is expanding its AI likeness detection tool to the wider entertainment industry, opening access to celebrities and talent agencies for the first time.

The expansion comes six months after YouTube started rolling out the AI detection tool in October; at the time, access was limited to a select group of creators with YouTube channels.

The company announced Tuesday (April 21) that talent agencies, management companies, and the celebrities they represent are now eligible to enroll, regardless of whether they have a YouTube channel. The expansion was developed with support from talent agencies and management companies, including Creative Artists Agency (CAA), United Talent Agency (UTA), WME, and Untitled Management.

YouTube had earlier announced in December 2024 that it brought in talent from CAA to help it build the AI deepfake detection tool. CAA provided feedback that helped YouTube build the system and refine its controls, the tech company said at the time. The agency represents numerous artists including Ariana Grande, Beyoncé, Bob Dylan, Bruce Springsteen, Charli XCX, Dua Lipa, Miley Cyrus, Peso Pluma, Queen, Sabrina Carpenter, Shaboozey, and The Weeknd, among many others.

YouTube explained on Tuesday that the detection system works similarly to YouTube’s Content ID, which rightsholders use to flag unauthorized use of copyrighted material.

“This works similarly to Content ID, except the scan searches for a creator’s likeness rather than copyrighted content,” according to YouTube’s Help page.

YouTube said the tool performs a one-time scan of newly uploaded videos to identify content that could contain the face of each creator who has enrolled in Likeness detection.

“To find matches of these creators’ faces, our system will scan and detect faces of other individuals who are present, including adults and children, in addition to any signed-up creators,” YouTube said. The company noted that scans are deleted immediately afterward and cannot be used to identify anyone other than creators who have signed up for the feature; non-enrolled individuals are not identified by the technology.
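YouTube has not published how its matching works, but the process it describes — scanning faces in an upload, comparing them against enrolled creators' templates, and discarding everything else — resembles standard face-embedding comparison. The following is a hypothetical sketch only: the threshold, function names, and toy 3-D vectors are illustrative assumptions, not YouTube's implementation (real systems use high-dimensional embeddings from a face-recognition model).

```python
import math

# Assumed cutoff for a "match" -- purely illustrative.
SIMILARITY_THRESHOLD = 0.9

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def scan_upload(detected_faces, enrolled_templates):
    """Flag faces in an upload that match an enrolled creator's template.

    detected_faces: list of face embeddings found in the video.
    enrolled_templates: dict mapping creator name -> enrolled embedding.
    Non-matching (non-enrolled) faces are simply not retained,
    mirroring the behavior YouTube describes.
    """
    flagged = []
    for face in detected_faces:
        for creator, template in enrolled_templates.items():
            if cosine_similarity(face, template) >= SIMILARITY_THRESHOLD:
                flagged.append(creator)
    return flagged

# Toy example: one enrolled creator, two faces detected in a video.
templates = {"enrolled_creator": [0.9, 0.1, 0.2]}
faces_in_video = [
    [0.88, 0.12, 0.21],  # close to the enrolled template -> flagged
    [0.10, 0.90, 0.30],  # a non-enrolled bystander -> ignored
]
print(scan_upload(faces_in_video, templates))  # ['enrolled_creator']
```

The one-time-scan design means an upload is checked once against enrolled templates rather than continuously monitored, which is consistent with YouTube's statement that scan data is deleted immediately afterward.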

Enrollment requires a government-issued ID and a brief selfie video, which YouTube uses to verify identity and build a facial likeness template. The verification process takes up to five days. Once enrolled, participants can authorize agents, managers, or other representatives to review flagged content on their behalf without going through verification themselves.

The platform noted that it stores likeness templates and identity information for up to three years from an enrolled person’s last login, or until they withdraw consent or delete their account.

YouTube noted that the tool is still experimental, “and we’re still tuning the software.” The company acknowledged that it may not show every instance of AI-generated likeness content and encouraged participants to report misses through YouTube’s privacy complaint process.

However, audio detection is not yet part of the feature. YouTube says it is working to extend likeness detection to voice in 2026.

“Currently, the feature is only used to detect visual matches of an enrolled creator’s face. We aim to extend Likeness detection to audio in the near future,” the company said.

The AI likeness detection tool extends YouTube’s existing privacy tools into the deepfake era. In July 2024, YouTube updated its privacy policies to allow people to request the removal of AI-generated content that simulates their appearance or voice.

“If someone has used AI to alter or create synthetic content that looks or sounds like you, you can ask us to remove it. To qualify for removal, the content should show a realistic altered or synthetic version of your likeness,” according to its updated privacy guidelines.

In 2023, YouTube announced that it was developing a system to enable its music partners to request the removal of content that “mimics an artist’s unique singing or rapping voice.”

That came in the wake of a number of musical deepfakes going viral in 2023, including the infamous “fake Drake” track that garnered hundreds of thousands of streams before it was pulled down by media platforms.

The wider music industry has been clamping down on deepfakes in recent months. Last month, Sony Music asked streaming platforms to take down more than 135,000 songs it says were created by fraudsters using generative AI to impersonate artists on its roster.

Dennis Kooker, President of Global Digital Business & US Sales at Sony Music Entertainment, said, as reported by the BBC, that “deepfakes” cause “direct commercial harm to legitimate recording artists… In the worst cases, [the deepfakes] potentially damage a release campaign or tarnish the reputation of an artist.”

Meanwhile, Spotify last month said it was piloting a new opt-in feature that lets artists review and approve eligible releases before they go live. The move is designed to give artists a new way to protect their profiles from AI deepfakes and misattribution.

Music Business Worldwide
