MBW Views is a series of exclusive op/eds from eminent music industry people… with something to say. The following comes from Andy Chatterley (pictured inset), the founder and CEO of MUSO, a London-headquartered technology firm providing anti-piracy services and market analytics for music and entertainment companies.
It’s interesting how much heat the Ghostwriter track is generating this week.
So a couple of questions:
- How can we be certain the ‘fake Drake’ track is AI and not a canny marketing tool? We only have ‘Ghostwriter’s’ word for it and no evidence of what program was used. The source data is self-evident – but that’s all we know.
- If this is indeed AI, then it leads to some really interesting philosophical and, inevitably, legal debate. Ultimately, if musicians and/or content creators are being used as source data for an AI model, should they be compensated?
The problem of course is that MILLIONS of individual splices of music are already being used to hone AI music modelling.
Unless it’s blindingly obvious or self-disclosed for marketing purposes – as in the case of the Ghostwriter track – how do you prove, as a creator, that your work has been used as source material for AI?
On the possibility of AI-made music legally infringing human copyright: as far as I am aware there is currently no law to stop an actual human who is a soundalike making a completely new track that has production in the style of another artist.
Indeed, there was a booming industry for a while of near-identical covers of current hits – but with cheaper licensing costs – being used in shops, restaurants etc.
The music industry is basically a business built on influences.
A modern example: Celeste sounds uncannily (and wonderfully) like a cross between Billie Holiday and Nina Simone, even going so far as to sample Simone in her hit Stop This Flame. What’s the difference between Celeste’s work wearing its influence on its sleeve and an AI literally doing the same?
Further, who owns the AI in any given case? And how do you sue something that has no name, no social security number and no company number?
Do you sue the prompt engineer who inputs the command to make the track? Will every human now need to assume some kind of guarantor guardianship for every computer they own?
Something is either artificial or it’s not.
There may be a grey area here that talented legal minds could tackle, and that’s the question of intent: could it be argued that something was made via AI with the express intent of depriving a ‘source’ artist of their rightful income?
At MUSO we frequently deal with ‘artist hijacking’ situations in today’s music business, where it’s abundantly clear that bad actors are attempting to divert streams/revenue on digital services by pretending to be the official artists.
Whether it’s by AI or by a human, fake advertising of artists’ names on DSP accounts (‘hijacking’) constitutes an infringement and/or trademark issue.
Then there is the current trend (especially on TikTok) of remixers speeding up or slowing down well-known tracks and uploading them online as new copyrights.
Again, these are straightforward breaches of copyright and we have ways to remedy them quickly.
The problem with AI, as I see it right now, is the familiar story of the music industry. We are bolting the stable door when the horse is already long gone.
Attempting to ban AI services from accessing music on DSPs for modelling doesn’t address the wide availability of music outside of the major platforms – not to mention how AI actually works.
The many millions of records, songs, symphonies and such that have already been fed into data-hungry AI models are not publicly identified; the process is shrouded in mystery and fully understood only by behind-the-scenes programmers.
We have no idea whose work has been used, or how much. We’ll probably never know (even though all of us in some way are this data).
Yet how different is this AI replication process from how human creators produce their work?
Nobody could ever tally the full list of things that influences artists or their art; all we know is that when something really connects it’s because the sum is greater than its parts.
AI replication in music, therefore, is not a cut-and-dried copyright issue; rather, it is a new form of interpolation. Yet, much like sampling, it will likely lead, ultimately, to a requirement for some form of rightsholder clearance.
One could argue that AI tools offer us the democratisation of music, so that the workaday tools of track construction are available to more people than ever before – at a fraction of the cost.
One could also argue, however, that this will lead to a crisis of homogeneity in the creative arts.
(Some might say we are there already – with or without artificial intelligence.)
Things are moving at an unprecedented pace in the world of AI.
Still, while anyone who has seen it will tell you that ABBA Voyage is an extraordinary live experience, they usually add that they “can’t believe it’s all done with avatars”.
Meanwhile, Taylor Swift – who to the best of my knowledge is still a human – is selling out enormous shows in record time.
Humans still gonna human.
The younger generation may well lean more into AI’s relationship with music as technology improves.
Anecdotally, my 12-year-old hip-hop aficionado son heard the Ghostwriter track and said: “The beat’s great, tho the compression was off. Drake verse went hard, Weeknd part was low-key kinda mid, and the Metro Boomin producer tag was the old one from 2016 in The Life of Pablo.”
In 12-year-old, that means it’s pretty good but not incredible.
I asked him whether he thought AI would kill production and creativity; whether people would bother with the grunt work of music-making.
He was adamant the opposite would happen – that the fun, the whole magic of making music was in playing around with sounds and beats, making things come to life. AI will hopefully just facilitate this even more.
While nobody can predict the future, we would do well not to panic.
AI is just beginning and how we exploit it will fundamentally influence how it exploits us. From a production standpoint, it feels very much like MIDI or sampling did in the 1980s, but with the potential to do much, much more.
From a copyright point of view, labels will have to embrace AI, and be proactive as opposed to reactive. Rather than try to kill AI, they will have to find a way for AI music to co-exist with existing catalogue.
I anticipate a raft of acquisitions ahead, with major music companies buying AI companies to control the technology in the way that they took stakes in Spotify.
However, I doubt whether the music industry can, in reality, manage to monopolise and therefore control music AI modelling companies.
Eventually, in the same way that Auto-Tune is everywhere in modern music, AI will exist in some form or another.
In many ways this is a metadata issue and as such can be enforced that way. Can you even copyright the sound of a voice? Or copyright the tone of a voice? How do we legally define the difference?
Right now somebody could be typing into ChatGPT, “Write me an article in the style of Tim Ingham from Music Business Worldwide about how AI is going to affect the music industry.”
How would you know? How do you know that what you’re reading at this very moment isn’t that?
The music industry figured out a way to monetise Crazy Frog. They’ll figure out a way to monetise AI.
The question is how long it will take them and how much money they will leave on the table while they play catch-up with technology.