Sony Music has asked streaming platforms to take down more than 135,000 songs it says were created by fraudsters using generative AI to impersonate artists on its roster.
The news was first reported by the BBC, citing Dennis Kooker, President of Global Digital Business & US Sales at Sony Music Entertainment. The targeted artists reportedly include Beyoncé, Queen and Harry Styles, with Bad Bunny, Miley Cyrus and Mark Ronson also likely affected.
Sony’s disclosure came Wednesday (March 18) at the launch of the music industry’s Global Music Report by the International Federation of the Phonographic Industry (IFPI).
Kooker said, according to the BBC, that the “deepfakes” cause “direct commercial harm to legitimate recording artists… In the worst cases, [the deepfakes] potentially damage a release campaign or tarnish the reputation of an artist.
“The problem with deepfakes are they are a demand-driven event. They are taking advantage of the fact an artist is out there promoting their music.”
Sony says the 135,000 tracks it has identified so far are likely only a fraction of what has actually been uploaded. Since March 2025 alone, the company has flagged roughly 60,000 songs falsely attributed to artists on its roster, according to the report. In an earlier submission to the UK government’s consultation on AI and copyright law, obtained by the Financial Times and The Sunday Times at the time, Sony flagged more than 75,000 AI-generated deepfakes.
“The problem with deepfakes are they are a demand-driven event. They are taking advantage of the fact an artist is out there promoting their music.”
Dennis Kooker, Sony Music Entertainment
The volume is rising as AI tools become cheaper and more accessible, the BBC said.
The IFPI reported that recorded music revenues grew 6.4% YoY in 2025 to $31.7 billion — an improvement on the 4.7% growth posted in 2024 — marking the global industry’s eleventh consecutive year of growth.
The report highlighted two key themes shaping the industry’s next chapter: AI innovation and streaming fraud.
IFPI CEO Victoria Oakley is quoted by the BBC as saying: “I think we’ve seen a lot of governments really grappling with this issue because they are trying to square a circle: They are trying to protect creativity and at the same time encourage innovation.
“I’m very optimistic that… in the UK, they [have] decided to pause and think again.”
Streaming fraud, also called streaming manipulation, involves uploading songs under “fake” artists to platforms such as Spotify, YouTube, Instagram and Apple Music, then artificially inflating play counts to collect royalty payments. The IFPI says AI has “supercharged” the practice, the BBC reported.
Unofficially, the industry believes up to 10% of content across all streaming platforms is fraudulent. Oakley says she wants streaming platforms to deploy tools that can detect AI-generated or fake music at the point of upload. “I hate to say it, but it’s very simple to fix,” she said.
“I think we’ve seen a lot of governments really grappling with this issue because they are trying to square a circle: They are trying to protect creativity and at the same time encourage innovation.”
Victoria Oakley, IFPI
She added: “The challenge of identifying and labelling AI material is absolutely the next critical challenge.”
Sony’s Kooker noted that French streaming company Deezer already has software doing this. The executive said Deezer now categorizes 34% of songs submitted to its platform as AI-generated.
“Transparency shouldn’t be optional, it’s the foundation of a fair and sustainable music ecosystem.”
Dennis Kooker, Sony Music Entertainment
“Is it perfect? I’m sure it’s not, but it’s open and it’s transparent, and it allows people to understand what is happening,” Kooker was quoted by the BBC as saying.
“Without proper identification, fans can’t distinguish between genuine human creativity versus unauthorized, AI‑generated content, which risks creating confusion, undermining trust, and impacting user experiences.”
“Transparency shouldn’t be optional, it’s the foundation of a fair and sustainable music ecosystem,” Kooker added.
Last year, Deezer filed two patents for an AI detection tool, which it said can discover “fully AI-generated tracks.” Deezer has since published periodic updates on how many tracks the tool has flagged.
Nearly two months ago, Deezer said it was receiving over 60,000 fully AI-generated tracks every day, and that it is moving to license its AI detection tool to the wider music industry.
Meanwhile, researchers at Sony Music Group‘s parent company, Japan-headquartered Sony Group, have reportedly developed technology to identify copyrighted music embedded in AI-generated tracks. Financial news outlet Nikkei Asia reported last month that the methods outlined in the research could open a path for songwriters to claim compensation when their work is used without authorization.
The deepfake takedowns underline Sony’s hard line on the unlicensed use of its music by AI companies. Sony Music was among the major music companies that sued AI music generators Suno and Udio in 2024 for “mass infringement” of copyrighted material.
While Universal Music Group and Warner Music Group have since settled their lawsuits with Udio and partnered with the AI firm on licensing, Sony has remained notably silent. Warner Music also settled with Suno in November.
Music Business Worldwide