Music-generating AI: Official art? Or artificial?

MBW Views is a series of exclusive op/eds from eminent music industry people… with something to say. The following comes from Nick Eziefula, partner at Simkins, the UK-based entertainment, media and commercial law firm. Eziefula has a background as a songwriter and recording artist, and advises clients across the entertainment and media sector on commercial contracts and intellectual property.

For the music industry, technology tends to be a double-edged sword. File-sharing decimated traditional business models, yet paved the way for the streaming era, which has revived the sector. But, just as the music industry seems finally to be prevailing in that struggle, a new battleground is opening up.

Sophisticated music-making AI can be a powerful tool to augment and streamline the creative process, generating new opportunities. However, its increasingly widespread use poses difficult questions. Who owns material written by AI? What protection is there for artists where AI mimics the style of their work, or their voice? At what point does an AI ‘tribute’ amount to unlawful false endorsement? Does the law need to change in order to address such questions in this rapidly changing landscape?

As its name suggests, copyright law fundamentally protects against unauthorised copying. Anyone reproducing a substantial part of another’s copyright work without their permission will likely be infringing, as only the copyright holder has the exclusive right to reproduce, or to permit others to do so.

“It is hard to deny the sophistication of the technology, and the likelihood that it will continue to develop at a rapid pace.”

Music rights holders feel that this exclusive right to reproduce (and also potentially other exclusive rights such as the rights to create derivative works, to distribute to the public and to perform publicly) will be infringed where AI ingests and analyses recordings, compositions and lyrics as training data, in order to generate material.

Universal Music Group, for example, recently issued a statement appealing to DSPs to help ensure that AI-generated material, trained on copyright works without permission, will be removed from their platforms, and encouraging all stakeholders to ask themselves whether they wish to be “on the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation”.

Within days of the statement, an AI-powered track mimicking Drake’s voice appeared to be the subject of copyright takedown action, having been removed from several DSPs after racking up some 15 million views on TikTok, 625,000 plays on Spotify and over 250,000 plays on YouTube.

This follows a number of high-profile examples of the power of music-generating AI.

David Guetta has used AI tools to both write and perform an artificial, yet strikingly convincing, Eminem vocal. Nick Cave criticised a song written in his style by ChatGPT as “a grotesque mockery of what it is to be human”. 

Whilst the artistic merit of AI-generated music is debated, it is hard to deny the sophistication of the technology, and the likelihood that it will continue to develop at a rapid pace.

Swift technological change typically prompts speculation as to whether the law must adapt in order to keep pace.

Last year, the UK Government laid out proposals to stimulate the development of AI by implementing a copyright exception for text and data mining, effectively permitting free access to, and use of, copyright works by AI in certain circumstances. Much to the relief of the music industry, and the wider creative sector, those proposals were recently scrapped, with the Government now looking to rethink its approach to the new challenges, through a ‘pro-innovation’ lens.

Yet, as we look to the future, we hear echoes of the past.

One of the key legal challenges of the web 2.0 era was to balance the benefits of next-level communication via social platforms, and broad access to information through search engines, against the need to protect the creative sector’s ability to control and monetise its valuable intellectual property.

“Could we see a new generation of musical innovators arise, who will be to AI tools what J Dilla was to the sampler?”

Similarly, with the rise of generative AI, lawmakers are tasked with treading a fine line between safeguarding creativity and supporting technological development. Could AI-generated content and deepfakes be to the 2020s what file-sharing and stream-ripping were to the preceding two decades?

If so, in seeking solutions, perhaps we should learn from experience.

Online infringement was increasingly curbed throughout the noughties and 2010s, not by one particular method, but through the combined effects of legislative change, legal enforcement and the development and widespread adoption of consumer-friendly, lawful forms of access.

As a result, the illegal file-sharing debate is so last century – that ship has sailed, keeping pirates at bay, without safe harbour. As we chart a new course, we should remember that journey. Solutions are likely to require industry-wide, consolidated efforts, not just on how to clamp down on unlawful use, but also on how to implement licensing regimes to enable, and monetise, legitimate use of generative AI, in a consumer-friendly way.

Will we see artists ‘franchising’ their identity to those keen to replicate it, and copyright owners offering API-like facilities to enable AI-powered creators to leverage their works to create new material?

In such a context, AI tools could be seen as the next evolutionary step in music sampling technology, whereby the original material is not merely reproduced, but reimagined.

Early sample-based work was criticised as not being ‘real’ music, in much the same way as many are currently unimpressed by a ‘fake Drake’. Yet could we see a new generation of musical innovators arise, who will be to AI tools what J Dilla was to the sampler?

An AI-powered approach to music-making could also herald the next generation of legal disputes, where familiar bones of contention are fleshed out in new ways.

Human songwriters do not simply create new material out of nowhere; they are informed by the music that has come before them. Hence we have long struggled with Blurred Lines between inspiration and infringement, even before the Marvin Gaye estate’s famously successful copyright claim against Robin Thicke over a hit of that name.

Where two works are similar, it is not always easy to determine whether one has been copied from the other. Ed Sheeran gave a statement following his successful defence of a recent infringement claim relating to his hit song Shape of You, contrasting the limited number of musical notes available with the huge volume of new material released via streaming platforms each day, and stressing that similarities between songs are therefore bound to occur.

“Where two works are similar, it is not always easy to determine whether one has been copied from the other.”

Sheeran’s case turned on a finding that he had not, in fact, accessed the allegedly infringed work, rather than on an analysis of the similarities. But another recent case involves a seemingly intentional similarity, and demonstrates that you can mimic an artist even without using AI. Rapper Yung Gravy is being pursued in connection with a track that interpolates Rick Astley’s hit Never Gonna Give You Up.

Whilst the use of the underlying composition was properly licensed, and the original Astley recording was not used, the relevant parts of the Yung Gravy track have, seemingly deliberately, been performed and recorded in a manner that is highly reminiscent of Astley’s original recording.

Astley’s claim is therefore based more on concepts of false endorsement (effectively, the unsanctioned misappropriation of his brand and creative identity) than on copyright infringement.

With increasingly widespread use of generative AI to mimic artists’ voices and writing styles in a similar way, disputes such as this could be interesting test cases. Perhaps laws on false endorsement (such as the English law doctrine of passing-off and consumer regulation in respect of misleading marketing communications) could be just as relevant to AI-related disputes as copyright laws.

Whilst technology disrupts, it may also resolve. If AI technology can create machine-generated music, surely it can be used to detect it, and help us to distinguish the official art from the artificial. And in an AI-saturated marketplace, blockchain technology’s power to verify digital assets could prove invaluable in authenticating the origin of content and identifying the underlying creative material. So perhaps we’ll meet soon in a metaverse where our avatars can party to AI-generated hits, verified through NFTs, so that the original creators and owners can be appropriately, and programmatically, remunerated in digital currency.

If current reports are to be believed, we’ll have plenty of time to kill anyway, as AI will already have taken our jobs, and will be doing them far better than we mere humans ever could.