Why the power of AI may hold the key to distributing wealth throughout the music industry (Part 1)

MBW Views is a series of exclusive op/eds from eminent music industry people… with something to say. The following op/ed comes from Ran Geffen Levy, CEO of Amusica Song Management in Israel and Chief Future Officer at OG.studio, an extended reality content and business development agency facilitating transitions to web3.


Music Wars: Armageddon.

It is a time of great unrest in the world of intellectual property.

The beat of war drums echoes from both sides: The developers of AI music, image, and video generation tools, and the powerful copyright owners who seek to control the use of their creative works.

In courtrooms across the land, the battle rages on, with lawsuits flying back and forth like deadly blaster fire. But amidst the chaos, a new threat is emerging – one that could spell the end of copyright ownership as we know it. The fate of intellectual property hangs in the balance.

Now, the battleground is filled with Avatars, Synthetics, Cyborgs, Creators, legions of tech titans and major companies. It is a clash between old ways and new, between tradition and progress.

The AI companies work tirelessly, utilizing cutting-edge technology to scan millions of images and recordings owned by major copyright owners. They compare notes, and the results are staggering: Countless works of art and music, generating billions of dollars in revenue, are found to have incorporated the creations of other artists without proper credit or compensation.

The AI companies know they have a bombshell on their hands – explosive information that could shake the very foundations of the industry. And so, they devise a plan to leak this information to the public, to expose the truth behind the facade of creative ownership. A website is launched, designed to track such activities and hold those responsible accountable for their actions. The stage is set for a battle of epic proportions – one that will decide the fate of copyright ownership for generations to come. Armageddon.

In the midst of the chaos, a call goes out to a powerful few: Coldplay, Sir Paul McCartney, Mark Zuckerberg, and Rishi Sunak are summoned to step forward and form a new Jedi council. The world waits with bated breath for their next move, as they forge a path towards a better future. May the force be with them.


Episode I: The AI Menace

From Utopia to Dystopia, heaps of words have been written about the role of AI in the music industry. Mostly by humans. The philosopher Nick Bostrom is the head of the Future of Humanity Institute at Oxford University. I asked ChatGPT to summarise the risk of AI replacing humans in the creation of music, based solely on Bostrom’s work:

“As AI technology continues to advance, AI-powered music composition and performance systems are developed. These systems are able to create and perform music at a level that is indistinguishable from that of human musicians. As a result, they become more and more popular, with musicians and music producers turning to AI for songwriting, composing, and live performance.

“As the demand for human musicians decreases, many lose their jobs, and those who remain are forced to adapt to the new AI-driven landscape. They have to learn how to compose and perform music that complements the capabilities of AI, rather than trying to compete with it.

“As AI systems continue to evolve, they eventually become capable of creating music that is not only indistinguishable from that of humans but also better, making it clear that human musicians are no longer necessary. Music production and performance become almost entirely automated and are performed by AI systems.

“The intelligence explosion that Bostrom talks about occurs when AI reaches a level where it can autonomously improve and reprogram itself, allowing it to evolve at a faster pace than human creators. This leads to AI-generated music that is more expressive, creative, and original than anything humans are capable of producing.”

Bostrom contributed to the formation of the 23 AI ethical principles signed by the likes of Bill Gates, Elon Musk, and Ilya Sutskever – the latter being a Co-Founder and the Chief Scientist of OpenAI, the company behind ChatGPT and DALL-E. This was in 2017, when OpenAI was still a non-profit organization; it has since transitioned to a capped for-profit structure, with returns capped at 100x on any investment. Backed by Microsoft, it is now valued at $29 billion, with a projection of $1 billion in revenue as soon as 2024.

Musk and other leaders from tech, science, and academia have recently called “to immediately pause for at least six months the training of AI systems more powerful than GPT-4”. Currently, no one from Microsoft or OpenAI is on the list of signatories. The showdown is set: Gates, whose Microsoft is now OpenAI’s biggest backer, calls a pause on AI “impractical”; Musk is one of OpenAI’s original founders. The battle lines between the tech titans are drawn. First we take Italy, then we take the USA. If the proposed halt to AI development went ahead, it would only apply in the West. Nobody is going to stop AI development in China. If anything, China will push to gain an advantage while the rest of the world is regrouping. What would be the outcome? One possibility is “the end of the internet as we know it”, according to Emily Taylor, the CEO of Oxford Information Labs, as quoted in The Guardian.

A glimpse into the confusing and erratic future ahead of us can be seen in the actions taken by Shutterstock: filing a lawsuit against Stability AI while integrating the DALL-E art generation tool into its website. On one hand, Shutterstock’s CEO Paul Hennessy stated: “I think there are two choices in this world, be the blacksmiths that are saying, ‘Cars are going to put us out of the horseshoe-making business,’ or be the technical leaders that bring people, maybe kicking and screaming, into the new world.” On the other hand, the company is kicking and screaming at the solution.

Possible next moves? Shutterstock owns an AI-driven music platform called Amper Music. Combining ChatGPT with an AI-based voice recognition system would create an ecosystem that lets you create art and music with your voice, powered by synthetic data.

In the words of the (cy)Borg, from the Star Trek universe, “Resistance is futile.”


Episode II: The Rise Of Avatars

The word Avatar derives from a Sanskrit word meaning “descent,” and when it first appeared in English in the late 18th century, it referred to the descent of a deity to earth. In the age of technology, the avatar has developed another sense — it can now be used for the image that a person chooses as their “embodiment” in an electronic medium.

The OGs of virtual bands in Western music are, to the best of my knowledge, Damon Albarn and Jamie Hewlett. Gorillaz are one of the most successful virtual bands ever, with two Brit Awards and two Grammy Awards, touring as holograms and lately as humans with an all-new immersive show. ABBA took the same Voyage, backwards. Standing in front of young, vibrant, AI-driven Benny, Björn, Agnetha, and Anni-Frid seems nothing short of witnessing a deity descending to earth.

As far as the music industry is concerned, avatar is the new black. As an extension of a live performing human, it has the potential to upend the practices and revenue models at the base of the business. If anything points to the inevitability of growth in this area, it is that both Roblox and Epic Games (the maker of Fortnite) count the majors among their shareholders.

So, soon artists will be able to take a vacation, use their AI replicant voices to sing, deepfake avatars to release videos, and AI text generators to draft social media posts. As a result, they will have more time to create, without the never-ending pressure to manufacture content and fan engagement. What a relief. Or is it?

Thanks to the same technology, artists may soon find their image and voice saying things they never said, singing songs they never sang, and doing things they never did. Search for deepfake celebrity videos and see how far this can go.


Episode III: Attack of the Synthetics

The Synthetics (Synths) are a fictional robotic species that appeared in the Canadian TV show Odyssey 5 (2002). In the music business, we call them virtual artists: non-humans, hyper-real avatars made with artificial intelligence.

From the signing (and subsequent dropping) of FN MEKA by Capitol Records to the development of KINGSHIP, a Bored Ape supergroup from Universal‘s Web3 label 10:22 PM, the Western music industry, in the post-ApeHype era, is on a learning curve when it comes to Synths. To see what’s coming, it needs to look to Asia, where the future is now.

MBW has extensively covered developments in the AI space: Sony Japan’s PRISM Project is developing synthetic and hybrid talents (humans with AI avatars); Tencent has released 1,000 songs performed by a Synth; HYBE acquired Supertone; ByteDance is on an AI shopping and hiring spree; and South Korea-based Pulse9 launched Eternity, an 11-member K-pop girl group made up entirely of virtual characters. Where does that leave the humans?

In the words of Park Jieun, the woman behind Eternity, and Pulse9’s CEO: “The scandal created by real human K-pop stars can be entertaining, but it’s also a risk to the business. The advantage of having virtual artists is that, while K-pop stars often struggle with physical limitations or even mental distress because they are human beings, virtual artists can be free from these. The business we are making with Eternity is a new business. I think it’s a new genre.” Check out Kakao’s Mave and follow the heavyweights’ moves into this space.

If you take the humans out of the performers’ equation, where does that leave the rest of them? What is the compensation for the humans who write and perform the music? Will musicians receive recognition for their work, or will they remain anonymous, working under NDAs and buyouts? Are we on the verge of creating musician sweatshops? Hey, at some point, AI will be able to replace them all.


Episode IV: The Cyborgs Awaken

A cyborg is a human, animal, or other being with electronic or bionic prostheses. The term was coined by Austrian neuroscientist Manfred Clynes in 1960 during a NASA convention. It applies to an organism that has restored function or, especially, enhanced abilities due to the integration of some artificial component or technology that relies on some sort of feedback.

Who are the present cyborgs in the music space? If you have used the enhanced abilities of AI in the process of music creation, you are one. For years, musicians have provided the feedback that perfects AI music-creation tools simply by using them. Today, AI has accumulated enough data to create music with minimal human intervention. Every Darth Vader started as Anakin Skywalker.

Who are the cyborgs of the future? Practically everyone. This is the time of the ‘Generation Generation’, who will generate bespoke art rather than create it directly.

Look to the Middle East for the future: Anghami, a MENA-based music streaming service, offered its users in the GCC region the ability to create a unique song based on the country they supported in the World Cup games and their taste in music as stored in the Anghami database. According to the company, over 170,000 songs were created by its users in the GCC. According to Statista, at the end of 2020 Anghami had 1.6 million users in the GCC region. Let’s assume that number doubled by 2022. That would mean roughly 1 in 20 users engaged in the creation of bespoke music using AI. If we extrapolate to the 616.2 million people who subscribed to music streaming services at the end of H1 2022, it’s reasonable to envisage a world where 30 million songs could have been created. In two weeks.
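For readers who want to see the arithmetic, here is the back-of-the-envelope extrapolation behind that figure, sketched as a short Python calculation. It simply restates the estimates above; the doubling of Anghami’s GCC user base is an assumption, and none of these are audited numbers.

```python
# Back-of-the-envelope extrapolation behind the "30 million songs" figure,
# using the estimates quoted in the text (not audited numbers).

gcc_songs_created = 170_000          # bespoke songs created by Anghami users in the GCC
gcc_users_2020 = 1_600_000           # Anghami users in the GCC at the end of 2020 (Statista)
gcc_users_2022 = gcc_users_2020 * 2  # assumption: the user base doubled by 2022

participation_rate = gcc_songs_created / gcc_users_2022   # ~0.053, roughly 1 in 20

global_streaming_subs = 616_200_000  # global music streaming subscribers, end of H1 2022
extrapolated_songs = global_streaming_subs * participation_rate

print(f"Participation: about 1 in {round(1 / participation_rate)} users")
print(f"Extrapolated songs worldwide: about {extrapolated_songs / 1e6:.0f} million")
# Output: about 1 in 19 users; about 33 million songs, in line with the
# roughly 30 million figure cited above.
```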

Mubert, the company that provided the technology to generate these songs, defines itself as “a limitless source of music co-created by humans and AI. Often described as ‘Google for music creation’”. The company has operated an AI music streaming service since 2015 and claims to have generated 21 million AI tracks that were streamed 62 million times by the end of 2021. What is the compensation for the humans who co-created the music?

The licensing agreement provided by Mubert states that Mubert is the sole owner of all economic rights to the remix: the so-called “master rights” to the recording, the so-called neighbouring/performer rights that may accrue to those who perform on the recording, and the rights to the musical composition embodied in the recording.

The writing is on the wall.


Episode V: The Return of the Creators

The rights of songwriters outlast the rights in masters. Some legends’ recordings from the 1950s and later are in the public domain, and stems from an original public domain recording can be used to create new recordings. Take, for instance, PNAU’s remix of Elvis Presley’s Suspicious Minds, released in July 2022 as Don’t Fly Away.

A mashup of Mark James’ composition and Elvis’ voice could have been created using the vocal stems from the original recording, released in 1969 (thus, in the public domain in the US). This new piece of music would require approval from the writer, the publisher, and Elvis Presley Enterprises (for publicity, image, and likeness). RCA, the record label, is not on the list, since its ownership has lapsed into the public domain.

This paves the way for a potential re-balancing of the entire music business. All it would take is for songwriters to join forces with artists to utilize recordings in the public domain, or to create new recordings of their back catalogue using their own voices with the help of AI, and then license the new recordings with a 50/50 split between songwriters and performers. They would be the new master owners, creating a new paradigm in the music industry that mends the wrongs of the past. With a direct link to their fandom, they can push their version of the recordings.

It’s here that music labels could quickly find themselves on the back foot. It’s not a big stretch to imagine an AI DAW plugin able to generate new, original music in the style of a specific, famous artist. The plugin could then be marketed and sold under the name of that musician. A license would be required to use the musician’s likeness and image rights. Crucially, these rights are typically owned by the musicians themselves or their estates – not the labels or publishers.

This, therefore, would be a unique opportunity for artists, who could request a percentage of the income generated by such a plugin, use AI tools to identify potential hit songs, and become more involved in the production process. They could also collaborate with their fanbase to create music and content. Then, by eliminating the need for record or publishing companies, musicians could keep more of the profits for themselves. Thus begins the journey from creator to curator.

The companies creating these music plugins would become the new “labels” for successful musicians. As a result, digital audio workstations (DAWs) would become the streaming services of the plugin world, offering a significant opportunity for musicians to monetize their music in a different way. Check out Ed Sheeran’s vocal preset plugin, add new technology, and there it is. I wonder what the revenue split on each sale is, and who licensed it.

Next in MusicWars Part II: The Attack of the Majors, The Titans Strike Back, The Last Jedis, A New Hope.

My thanks to Google’s algorithmic news feed for keeping me updated on my areas of interest; ChatGPT for being a creative wingbot; AI21’s Wordtune for helping me to be more concise; Microsoft’s AI-powered Bing for helping with final fact-checking; Endel’s brain entertainment for helping me focus; Meta’s Horizon Workrooms for allowing me to work on top of a skyscraper in space from my office in Tel Aviv; and Omer Luz, aka Peter Spacy, who shared with me his knowledge of AI-based music production and art generation.

Music Business Worldwide
