
The music industry’s response to AI so far, like its response to Napster all those years ago, has been marked by anxiety and fear.
Meanwhile, gen AI companies, with millions of dollars of investment, have already built large businesses on the backs of our songwriters – without paying independent music publishers a penny. (No other sector of entertainment treats songwriters quite like the tech industry!)
Does anyone truly believe that gen AI companies and their lawyers thought they had a ‘fair use’ defense when their entire enterprise was built on the presumption they could use our songs without permission or compensation to build billion-dollar businesses?
Regardless, we now stand at the threshold of a whole new income stream built solely on the music of human creators… and most songwriters I talk to are extremely fearful. They have a right to be. Nobody is giving them enough assurance that we have their backs. We’re at a complicated fork in the road – and we need to ramp up our understanding of how gen AI actually works.
There are two income sources in the equation, and each brings its own complexities.
1) Training the models
Royalties will be paid for the use of human-written songs to build the generative models that power AI-generated outputs. Yet a critical sub-issue here is whether gen AI companies should be permitted to use synthetic data – artificially created information that mimics the patterns of real songs – to develop their models in the first place.
Synthetic data makes it far harder to identify which songs were used in any given output. And make no mistake: every AI-generated output is derived from human songs – their melody, rhythm, lyrics, and harmonies. If human music weren’t essential to these models, Suno and Udio wouldn’t have persisted in using songs without permission to build models. Every AI output is, in some form, a derivative work. If we can’t trace it back to the source songs, compensating songwriters and publishers becomes difficult, if not impossible.
2) Paying for outputs
The second income area is compensation for the AI outputs themselves – the tracks generated by users. Each of these outputs draws on some combination of songs baked into the model. But the only way to accurately determine which human songs are present is through an attribution engine: technology that can analyze how the system generated each output… and what percentage of which songs contributed to it.
This raises a threshold question: how many ‘source songs’ should be considered for attribution? The top three? Five? More? Taken to its logical extreme – attributing an output to every song that seeded it – no one would receive a meaningful royalty. One approach might be percentage-based: allocate, say, the top 60% of attribution to the songwriters and publishers whose works make up that share. This demands serious consideration.
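To make the percentage-based approach concrete, here is a minimal sketch of how a cutoff might work, assuming the attribution engine returns a contribution score per source song. The song names, scores, and royalty pool below are invented for illustration; real engines may score and rank outputs quite differently.

```python
def allocate_royalty(scores, pool, cutoff=0.60):
    """Pay the songs that make up the top `cutoff` share of attribution,
    renormalized so the selected songs split the full royalty pool."""
    # Rank source songs by their attribution score, highest first.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0.0
    for song, score in ranked:
        selected.append((song, score))
        cumulative += score
        if cumulative >= cutoff:
            break  # stop once the top songs cover the cutoff share
    total = sum(s for _, s in selected)
    # Renormalize: the selected songs divide the whole pool pro rata.
    return {song: pool * score / total for song, score in selected}

# Invented attribution scores for five hypothetical source songs (sum to 1.0).
scores = {"A": 0.35, "B": 0.20, "C": 0.15, "D": 0.18, "E": 0.12}
payouts = allocate_royalty(scores, pool=100.0)
```

Note the design question hiding in the renormalization step: songs below the cutoff get nothing, and their share is redistributed upward, which rewards the biggest contributors at the expense of the long tail.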
Users of these platforms can generate thousands of outputs per month. If even one of these tracks becomes popular, the songwriters who contributed to it should be compensated. I don’t believe gen AI companies care particularly who they pay; accurate payment is our concern, which is why we should be fully invested in understanding every attribution engine on the market. Remember, songs are owned fractionally – often unequally – across multiple songwriters (and their publishers).
For example: an AI output might draw 60% from one song with 12 publishers in varying shares, 25% from another with two publishers split equally, and 15% from a third with fourteen publishers in unequal shares. Not only must we accurately identify the source songs, but we must also then ensure the proportional shares are correctly divided for every single output. The computation isn’t conceptually difficult, but it must be set up right – and when mistakes happen, publishers need the ability to audit.
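The split arithmetic described above can be sketched in a few lines. The song-level contribution percentages (60/25/15) come from the example; the publisher names and per-song splits are invented stand-ins (three publishers instead of twelve or fourteen, to keep the sketch short).

```python
# Song-level contributions from the example above (sum to 1.0).
song_contributions = {"song_x": 0.60, "song_y": 0.25, "song_z": 0.15}

# Hypothetical publisher splits within each song; each song's splits sum to 1.0.
publisher_splits = {
    "song_x": {"pub_x1": 0.50, "pub_x2": 0.30, "pub_x3": 0.20},  # stand-in for 12 publishers
    "song_y": {"pub_y1": 0.50, "pub_y2": 0.50},                  # two equal shares
    "song_z": {"pub_z1": 0.40, "pub_z2": 0.35, "pub_z3": 0.25},  # stand-in for 14 publishers
}

def payout_per_publisher(pool):
    """Each publisher's payment = pool x song contribution x publisher share."""
    payments = {}
    for song, weight in song_contributions.items():
        for pub, share in publisher_splits[song].items():
            payments[pub] = payments.get(pub, 0.0) + pool * weight * share
    return payments

payments = payout_per_publisher(1000.0)  # e.g. a $1,000 royalty pool for one output
```

The multiplication itself is trivial; the hard part, as the article notes, is doing it accurately for thousands of outputs per month, with correct source identification and auditable records behind every figure.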
The process risks becoming completely unmanageable if AI outputs are released into the wild. The “walled garden” concept – keeping gen AI outputs strictly within a gen-AI company’s own platform – is a key negotiating point. UMG and WMG have secured this from Udio in their licensing deals with the service, and it matters enormously.
If AI outputs flood onto Spotify, Apple Music and the rest, they will dilute the royalty pool for every human songwriter on those platforms. In addition, transferring these complex payment formulas (songwriter splits etc.) across systems would be extraordinarily difficult. Note: one reason Universal and Sony have been unable to reach an agreement with Suno (at the time of going to press, anyway!) is that Suno refuses to keep its outputs within a walled environment.
Learning from attribution pioneers
Several attribution engines are now on the market. One gaining particular traction is sureel.ai, which STIM, the Swedish performing rights society, has adopted as its attribution partner for a pioneering AI licensing framework.
The results of this testing should be available soon, giving us real-world data on how accurately these engines identify ‘source’ songs – and sort out correct publishing shares. It may also surface new issues we haven’t yet anticipated.
Bottom line: we need to prioritize learning as much as we can about gen AI technology and then act on that knowledge. The biggest mistake we made during the Napster-era physical-to-digital transition was refusing to embrace new technology for too long, creating licensing problems we still deal with today. We cannot make the same mistake twice.

Music Business Worldwide
