Taylor Swift has applied to register her voice and likeness as federal trademarks in the United States, in an apparent effort to combat future AI-generated deepfakes.
The artist has been targeted by deepfakes in recent years. In 2024, X blocked searches for “Taylor Swift” on the platform after pornographic deepfake images of the artist circulated online, making Swift the most high-profile victim of deepfakes at the time.
The trademark applications, first spotted by intellectual property attorney Josh Gerben of Gerben IP, include voice recordings of Swift saying “Hey, it’s Taylor” and “Hey, it’s Taylor Swift,” as well as an image showing her holding a pink guitar with a black strap and wearing a multi-colored iridescent bodysuit with silver boots. The image was used as a poster for her Eras Tour concert series.
Swift’s trademark applications, submitted to the US Patent and Trademark Office on Friday (April 24), list TAS Rights Management, LLC, a Tennessee-based entity, as the owner. The filings also include a signed consent from Taylor Swift to register her “likeness (including but not limited to signature, voice and image) as a trademark or as a part of a trademark in the US and abroad.”

The applications come as AI tools have made it easier to generate deepfakes of artists and celebrities, with Swift being a recurring target.
In 2024, lawmakers described explicit deepfakes targeting Swift as “sexual exploitation.” New York Representative Joe Morelle said in an X post at the time: “The spread of AI-generated explicit images of Taylor Swift is appalling — and sadly, it’s happening to women everywhere, every day.”
Separately, New York Rep. Yvette Clarke wrote: “What’s happened to Taylor Swift is nothing new. For yrs, women have been targets of deepfakes w/o their consent. And w/ advancements in AI, creating deepfakes is easier & cheaper.”
“This is an issue both sides of the aisle & even Swifties should be able to come together to solve.”

Swift isn’t the first celebrity to trademark their voice and image. In January, The Wall Street Journal reported that actor Matthew McConaughey had eight trademark applications approved by the USPTO featuring him staring, smiling and talking.
Lawyers for the actor said the trademarks are designed to prevent AI apps or their users from generating copies of McConaughey’s voice or likeness without permission. According to the report, the trademarks include a seven-second clip of the actor on a porch, a three-second clip of him sitting in front of a Christmas tree, and an audio clip of McConaughey saying, “Alright, alright, alright,” a line from the 1993 film Dazed and Confused.
McConaughey told the WSJ in an email: “My team and I want to know that when my voice or likeness is ever used, it’s because I approved and signed off on it. We want to create a clear perimeter around ownership with consent and attribution the norm in an AI world.”
Jonathan Pollack, one of McConaughey’s attorneys, told the newspaper: “In a world where we’re watching everybody scramble to figure out what to do about AI misuse, we have a tool now to stop someone in their tracks or take them to federal court.”
YouTube recently expanded its AI likeness detection tool to the wider entertainment industry, opening access to celebrities and talent agencies for the first time. The detection system works similarly to YouTube’s Content ID, which rightsholders use to flag unauthorized use of copyrighted material.
The expansion came six months after YouTube began rolling out the AI detection tool in October, when access was limited to a specific set of creators with YouTube channels.
The tool was developed with support from talent agencies and management companies, including Creative Artists Agency (CAA), United Talent Agency (UTA), WME, and Untitled Management.
In the music industry, deepfakes have become prevalent in recent years. In 2023, the infamous “fake Drake” track garnered hundreds of thousands of streams before it was pulled from streaming services.
As MBW previously reported, the wider music industry has been clamping down on deepfakes in recent months. Last month, Sony Music asked streaming platforms to take down more than 135,000 songs it says were created by fraudsters using generative AI to impersonate artists on its roster.
Meanwhile, Spotify last month said it was piloting a new opt-in feature that lets artists review and approve eligible releases before they go live. The move is designed to give artists a new way to protect their profiles from AI deepfakes and misattribution.
Music Business Worldwide



