Rise of the Music-Making Machines

Darius Van Arman (photo credit: Ye Fan)

MBW Views is a series of op-eds from eminent music industry people… with something to say. The following MBW op-ed was written by Darius Van Arman, CEO of Secretly Distribution and Co-Founder/Co-CEO of Secretly Group. Here, Van Arman explores the threat of artificial intelligence to the music-making community and how independents can best position themselves against the inevitable and rising tide of AI…


Almost as soon as humans created mythology, they envisioned machines that could think.

In The Iliad, Homer described “servants made of gold…possessing minds, hearts with intelligence, vocal cords, and strength” who “learned to work from the immortal gods.”

These were the automaton helpers of the Olympian god Hephaestus, who was himself a laborer and craftsman known for making tools of great beauty and power.

Almost three millennia later, as electronic computers were being developed in the 1940s, British mathematician Alan Turing imagined a simple blind test to determine whether machine intelligence was approaching that of humans. If a computer’s outputs could fool us into believing we were communicating with a human, it would pass this so-called Turing Test.

Turing suggested that passing such a test indicated the computer could think like a human. However, while machines may excel at mimicking human intelligence (or be very good at predicting what humans would perceive as human-like), the question remains whether machines are truly capable of human understanding.

I’ve been reflecting on this question and another one related to music-making: are machines on the verge of achieving, or even replacing, human creativity?

This feels especially relevant now because, as large amounts of training data become accessible to artificial intelligence models, conditions are ripe for ‘generative artificial intelligence’ to fully blossom. Experts are already noticing how generative AI tools are integrating into existing creative workflows (such as digital audio workstations like Pro Tools, Ableton, and FL Studio) and vice versa, as generative AI companies incorporate editing features into their platforms.

There is now a growing sense that generative AI will not only significantly impact the quality and volume of digital media available to the public but also reshape how we create, experience culture, and connect as humans.

“There is now a growing sense that generative AI will not only significantly impact the quality and volume of digital media available to the public but also reshape how we create, experience culture, and connect as humans.”

Darius Van Arman

I currently co-lead a group of independent music companies, all operating under the Secretly banner. Like many other similar indies, we aim to broaden or diversify what is considered “mainstream.” We champion outlier artists working on the fringes and avoid investing in any creative effort that strives to be average or appeals to the lowest common denominator.

While we don’t shy away from commercial success, we are more driven by the desire to make a lasting and positive impact on culture than by the goal of maximizing streams or record sales. Therefore, it’s no surprise that, as independents, we’re less than thrilled about a future where more and more art is derived from patterns of the past. For us, living in this kind of “cultural hospice” feels like waving the white flag on human progress, signaling through our collective body language that we accept (or are resigned to) the current state of the world.

Instead, as independents—including the artists we partner with—we are motivated to find the best way to position ourselves against this inevitable and rising tide of AI. Our hope is to mitigate or defend against its worst potential outcomes. I have some ideas about this, which I will share later. But first, I want to give some shape to the AI threat heading our way and discuss some of the companies involved.


“It’s not really enjoyable to make music now.”

Human artistry has evolved over a much longer time span than the one previously mentioned; songs existed long before humans like Homer contemplated automatons or machine learning. As early human societies devoted fewer waking hours to hunting, gathering, and farming, people found more time to appreciate beauty and seek meaning and purpose.

We were the first creatures to turn storytelling into song, doing so thousands of years ago. But only in the last 150 years have we been able to create permanent audio recordings of these musical poems. Relatively speaking, the recording industry is very young—a baby still in its cradle compared to other creative fields—and the legal framework it depends on to be economically viable, such as copyright, is barely older.

Several AI startups have recently emerged with the goal of generating music. These companies have collectively received billions of dollars in investments. Some of them are focused (for now) on creating background, functional, cinematic, or orchestral music, such as Soundraw, AIVA, Beatoven.ai, Mubert, or Endel. Others, like Suno, Udio, ElevenLabs, Stability AI, Boomy, Soundful, and KLAY, have a broader range of potential uses, with some developing music products or experiences that remain within “walled gardens,” while others enable the creation of new recordings that can be distributed anywhere.

“Relatively speaking, the recording industry is very young—a baby still in its cradle compared to other creative fields—and the legal framework it depends on to be economically viable, such as copyright, is barely older.”

Darius Van Arman

Additionally, several well-established companies are expanding their current music-related businesses into this generative AI music space, including digital streaming service Spotify and sample library company Splice. (And let’s not forget incumbents like Google, OpenAI, Anthropic, Meta, Amazon, and even Apple, all of whom are likely motivated to add generative music AI features to their offerings.)

While some of these companies have taken a cooperative approach with the music industry by licensing music rights, others, most notably Suno, have chosen to directly challenge the copyright system. These “music launderers” argue that using existing works to train large language, diffusion, and transformer models to generate new music is “fair use” and is no different than what a human does when they listen to hundreds of records to learn how to write and perform songs.

Suno, in its messaging to the world, appears to be taking an even bolder step, not only challenging copyright laws but also questioning the value of the time we, as humans, spend learning how to create and perform music.

“It’s not really enjoyable to make music now,” said Suno CEO Mikey Shulman on the venture capital podcast 20VC. “It takes a lot of time, it takes a lot of practice, you need to get really good at an instrument or really good at a piece of production software. I think the majority of people don’t enjoy the majority of the time they spend making music.”

Of course, Shulman’s perspective here is either hyperbolic or out of touch; unlike what he and other technologists might imagine is in store for humanity, a majority of people are much more like Louis Armstrong when he had the chance to pick up a trumpet, or Stevie Nicks when she can’t help but break into song while getting her makeup done backstage, or nearly any young child when they first pick up a musical instrument—and then keep coming back to it again and again.



Many music creators—whether amateur or professional—find great joy in the everyday journey of music performance and production (and practice!), just as much as they do when they finish writing or mixing a song they are immensely proud of. It’s similar to how a marathon runner feels ecstatic at the end of a race—not because they’ve reached the finish line, but because they’ve experienced every bit of the distance between Marathon and Athens to get there.

Also, music, at its core, is a language of connection—not only through its connective structure of notes, chords, rhythms, and sounds, but also in how humans perform together, listen to music and bond over it as a group, and in the very way we feel connected to ideas and feelings that are powerfully conveyed in the body of song. Ultimately, what makes us human is our affinity for connection and meaning, and music—like all great art forms—is a vital conduit for them.

Let’s set aside Shulman’s words for now. (If we’re being generous, these AI CEOs must be under so much pressure to sound impressive to their venture capitalist bros.) The real, important issue is what is becoming increasingly clear as more and more AI companies march forward at breakneck speed. It is what they prioritize above everything else: rapid product development, growth and earnings goals, and gaining control of various market segments.

It is also what these AI companies are neglecting: the preservation of dignity and joy in labor and craftsmanship, responsible stewardship of the environment, proper respect for democratic systems, the effective free will of people in public discourse, the economic rights of other industries, and ensuring public safety—whether at an individual level (such as users’ mental health) or societal level (like a real-life re-enactment of the opening scene of Terminator 2).



A larger bet than any other in human history

In all fairness to the AI industry, we should acknowledge that nearly all commercial companies prioritize profit above almost everything else, reflecting the values of our current capitalist system. Although the corporate world has introduced neat three-letter concepts like CSR (“Corporate Social Responsibility”), ESG (“Environmental, Social, and Governance”), and CSV (“Creating Shared Value”) to add a marketing gloss to how corporations conduct business, the market has always had one true ruler—shareholder value—and this ruler is rarely swayed by anything other than significant changes to the bottom line.

One key difference, however, is how high the stakes have become for the AI industry as a whole, given the stratospherically high level of investment already made, partly due to the massive energy consumption required to operationalize AI. Various analysts estimate that between 30 and 44% of the S&P 500’s total market value comes from AI-related companies.

“In all fairness to the AI industry, we should acknowledge that nearly all commercial companies prioritize profit above almost everything else, reflecting the values of our current capitalist system.”

Darius Van Arman

As a global economy, we’ve placed a larger bet than any other in human history on the commercial success of AI. If the returns on AI investments are not sufficiently profitable, we could face a major market correction that might trigger a global economic crisis (and the real-world suffering that would follow).

So, as a microcosm of this economic pressure cooker, let’s imagine how decision-making might unfold in the C-Suite of a generative AI startup. In the fictional dialogue below, Vik is the high-flying, impatient CEO of Music Labyrinth, and Tait is the cautious CTO (chief technology officer) who previously worked for a large music publisher.

“Vik, I’m looking at the latest stress tests for the audio watermarking. It’s still failing basic MP3 compression tests. If we go live now, any track generated by the Music Labyrinth servers becomes effectively anonymous the second it’s shared on social media.”

“I think this glitch is a rounding error, Tait. We’ve delayed twice already. While you’re worrying about inaudible artifacts, our competitors are locking down partnerships and integrating with every major audio production environment.”

“The watermarking is not the only issue. The fingerprinting is broken too. If our model accidentally creates a copyrighted melody and we can’t flag it, we are creating legal jeopardy for our users.”

“How long will it take to fix these issues?”

“Two sprints. One month?”

“Ugh, one month is an eternity. This market we’re chasing might hit a billion dollars this year, and that’s just the beginning. We can’t wait for ‘perfect’ while our competitors capture all of the growth.”

“Without the watermark, our users can’t even prove they didn’t steal the stems. We’re leaving them legally exposed.”

“They’ll take the risk for the tools we’re giving them. We’re launching tomorrow morning. Tell your team to stop tinkering and to focus on server stability.”

“This feels reckless.”

“If we get too close to the sun on this one, so be it. We’ll either be too big to shut down or out of business by the time any lawsuits play out.”


Daedalus and son Icarus (also known as Taitale and Vikare) in Jakob Peter Gowy’s The Fall of Icarus (1635-1637)

This hypothetical story might be overly simple, even cartoonish, but it still rings true because, given what’s rewarded in the tech industry, it reflects rational behavior. Plus, it closely resembles the real-life choices we’ve seen AI leaders make.


“where we constantly face pressures to set aside what matters most”

Instead of negotiating training licenses with artists, songwriters, and music companies, Suno and Udio chose to illegally stream-rip vast amounts of copyrighted recordings and lyrics from platforms like YouTube, Genius, and Musixmatch. This led the three majors—Warner Music Group, Sony, and Universal—to sue the two generative AI companies in the summer of 2024. Many other non-music AI companies have also taken similar shortcuts. For example, instead of slowing down and asking for permission from various constituencies, OpenAI chose to harvest massive amounts of data from the internet—including copyrighted content, user data, and potentially sensitive or confidential information—often bypassing safety measures designed to prevent website scraping.

Historically, executives in the tech industry have been rewarded rather than punished when they follow the “move fast and break things” playbook coined by Facebook founder Mark Zuckerberg (or its corollary, “it’s better to beg forgiveness than ask for permission”). And that, in a nutshell, is the systemic threat to creative communities everywhere. Whenever there is a potential trade-off between the speed of AI progress and proceeding ethically and fairly (including respecting the rights of artists and copyright owners), there is little confidence that AI executives won’t choose progress every time.

This strong bias toward progress over safety is also supported by various insider accounts. For example, Mrinank Sharma, a senior AI safety researcher who led the Safeguards Research Team at Anthropic (currently in the midst of a capital raise at a $380 billion valuation, and home of Claude), very recently left that company. He explained his resignation in a tweet. Here is an excerpt from it:

  • “I’m especially proud of my recent efforts to help us live our values via internal transparency mechanisms; and also my final project on understanding how AI assistants could make us less human or distort our humanity. Nevertheless … throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions. I’ve seen this within myself, within the organization, where we constantly face pressures to set aside what matters most…”

The passages about AI assistants distorting our humanity, and about pressures to set aside what matters most, are worth sitting with for a moment.

Now imagine a scenario where Suno’s executive team is deciding whether to keep their product focused solely on assisting music creators or to develop it further into a tool that could fully replace human experts and artists, such as songwriters, singers, instrumentalists, and recording engineers.

This would be analogous to the ruling Olympian gods replacing the disabled artisan-god Hephaestus, whom I mentioned earlier, with the automatons he built and trained, instead of letting his helpers continue assisting him with his ingenious creations.

Remember Suno CEO Mikey Shulman’s out-of-touch claim that humans don’t really enjoy making music now. Think again about the enormous financial pressures facing the AI industry that we discussed earlier. Given all this, would you have any faith that Suno would voluntarily hold itself back from replacing the human aspect of music-making if such a refrain meant missing out on huge profits?

The answer is no.


A new era of “algorithmic determinism”

When it comes to prognosticating AI’s eventual impact on society, I see two distinct camps. The first anticipates the worst-case scenario: AI and algorithms will replace or eliminate our most meaningful jobs, which give us a sense of purpose. These technologies will also render our cherished tools of art-making and creativity irrelevant, tools we rely on to find connection and meaning in this lonely universe.

This future resembles the one depicted in the Pixar film Wall-E, where society has become even more passively inclined than it is now. Technology has enabled those who escaped Earth after environmental catastrophe to enjoy boundless leisure time. But instead of using this time to rebuild, rethink, or create, these privileged survivors just consume, relax, and seek simple pleasures, as algorithms constantly reinforce the idea that these pursuits are the total measure of their lives.



This possible future ties into a concept that author Meghan O’Gieblyn eloquently describes in her masterpiece God, Human, Animal, Machine, published in 2021, in the midst of the pandemic. As we become more and more inured to predictive models—whether it’s a model like Suno’s that creates music it predicts will appeal to humans, or other AI models that forecast what words we expect to see next, where crimes might occur, who we should date, or what careers we should pursue, and so on—we will begin to enter a new era of “algorithmic determinism.”

In O’Gieblyn’s words, “Because predictive models rely on past behavior and decisions—not just of the individual but of others who share the same demographics—people become trapped within the mirror of their digital reflection.” (Interestingly, the late, great Toni Morrison also highlighted the same dynamic nearly twenty years earlier in her essay “The Foreigner’s Home”, when she wrote, “…young people’s behavior is said to be an echo of what the screen offers; the screen is said to echo, represent, youthful interests and behavior—not create them.”)

O’Gieblyn then takes this concept further by exploring the knock-on societal implications.

Building on the ideas of Sapiens author Yuval Noah Harari, O’Gieblyn synthesizes this powerful passage in her book:

“Critics have speculated about what this economy of prediction might become in the future, once the technology becomes more powerful and we as citizens are more inured to its intrusions. As Yuval Noah Harari points out, we already defer to machine wisdom to recommend books and restaurants and potential dates. It’s possible that once corporations realize their earnest ambition to know the customer better than she knows herself, we will accept recommendations on whom to marry, what career to pursue, whom to vote for. Harari argues that this would officially mark the end of liberal humanism, which depends on the assumption that an individual knows what is best for herself and can make rational decisions about her best interests. ‘Dataism,’ which he believes is already succeeding humanism as a ruling ideology, invalidates the assumption that individual feelings, convictions, and beliefs constitute a legitimate source of truth.”

Meghan O’Gieblyn, God, Human, Animal, Machine

The other camp isn’t made up of doomsayers at all, and it comes in two distinct flavors. The first includes techno-futurists who are extremely confident in the potential of AI and other technologies to lead some of us into a new utopia. (One person’s utopia is another person’s prison yard, especially if the former flatters the worldview of the billionaire class.) The second flavor consists of skeptics who believe AI is much less powerful and society-changing than advertised. Some in this second flavor also believe that many techno-futurists have either lost objectivity (confirmation bias is a helluva drug) or have become fraudsters (receiving staggeringly high levels of investment dollars is a helluva drug).

Who knows what fate has in store for humanity, or which camp or flavor above will be closest to the mark? Yet, borrowing a trick from the school of Dataism, we recognize that history has an opinion. It has shown us time and time again that the future rarely resembles what either the doomsayers or the utopians foresee. Sometimes the skeptics are right; for example, our collective fears about the Y2K bug were greatly exaggerated. However, more often than not, greater upheaval lies ahead than we can see in the present. We also often realize, with the benefit of hindsight, that we had more opportunities to change course than we understood at the time.


To AI, or not to AI 

If you look only at the independent music sector, indies are a small part of the overall music industry; indies themselves are also made up of various, diverse constituencies. The music industry, in turn, is just a small part of the larger entertainment industry—which includes film, television, radio, book publishing, video games, podcasts, theme parks, sports, and adult entertainment—and this entire entertainment industry is much smaller than the tech sector.

“Whether ’tis nobler in the mind to suffer / The slings and arrows of outrageous fortune, / Or to take arms against a sea of troubles / And by opposing end them.”

William Shakespeare, Hamlet (Act 3, Scene 1)

So, what power do we really have as independent music companies like ours—including the artists we partner with—facing the AI juggernaut coming our way? Honestly, not much. That was my main thought recently when Secretly Distribution—where I serve as CEO—was offered the opportunity to enter into a license with the generative AI company ElevenLabs.

First, I should give some background on ElevenLabs. Founded by two Polish technologists—an ex-Google machine learning engineer and a former Palantir deployment strategist—it’s a new player in the generative AI music space. The company initially focused on AI-assisted text-to-speech software, and, like other AI firms, its history of obtaining permission to use its training data is not spotless (see Vacker v. ElevenLabs, Inc.).

It has received support from prominent venture capital firms, including Andreessen Horowitz and Sequoia Capital, both of which have significant defense industry investments. However, when ElevenLabs entered the field of generative music AI, it committed to properly licensing the use of existing music for its training. As a result, in late 2024, it signed a first-of-its-kind training license with indie rights agency Merlin and music publisher Kobalt.

At the time, none of the majors had entered into any licenses with either Suno or Udio, the two leading music generative AI companies. Instead, the majors were suing these companies. So the ElevenLabs deal was big news, as both Merlin and Kobalt are regarded as major players in the music industry, and this license represented a potential new direction for rights holders. (Merlin is sometimes called the fourth major, and Kobalt is considered one of the major music publishers.)

Quite different from ElevenLabs, the Secretly companies I am involved with have no venture capital support and have grown organically over thirty years through self-financing. We’ve built our businesses by partnering with influential artists such as Bon Iver, Mitski, Hayley Williams, and Phoebe Bridgers, as well as important labels behind groundbreaking artists like Sufjan Stevens, Mac DeMarco, and Godspeed You! Black Emperor. Anticipating the rising impact of AI on culture, we deliberately included the word “human” in Secretly Distribution’s mission statement, which is to put “human artists and the companies that support them in the best position possible to make meaningful positive impact on individuals, communities, and entire cultures…”.

So when we first had the opportunity to enter into this ElevenLabs AI training license, it was unfamiliar and uncomfortable territory. Not only did we wonder what bargaining power an indie collective like Merlin would have when negotiating with an AI company like ElevenLabs, but many other questions came up as well.

For example, if we participate in this license, are we helping ElevenLabs users create new artificial music that directly competes with music released by the artists and labels we already support? Would these artists and labels face the same threat of increased competition even if we don’t participate in this license? In a world where many companies license their rights to ElevenLabs, is it better to also license to gain some value rather than none at all? (The race to the bottom!) Or should we, on principle, hold out as a statement of our values, but also in the hope that our non-participation reduces the quality of what ElevenLabs offers?

After considering questions like those above, we ultimately decided to make the ElevenLabs license negotiated by Merlin available to the labels we work with. This has been our standard approach as a distributor, based on the principle that, whenever possible, we should empower our label partners to make the best and right decisions for their own businesses, rather than making decisions for them. (We applied the same approach with Spotify’s controversial and non-transparent steering mechanism, Discovery Mode.) However, we did not encourage our distributed labels to participate in the ElevenLabs license, and we also shared our view that artist consent should be obtained for any recordings submitted to ElevenLabs for training purposes.

Out of the hundreds of labels we work with and the thousands of artists they collaborate with, only a total of three labels and seven artists dipped their toes into the ElevenLabs experiment. This wasn’t surprising, considering the values and perspectives of the Secretly community and their artist partners. But as someone connected to the indie community who deeply values human artistic expression over AI derivatives, I’ll share a sentiment that might seem controversial or counterintuitive at first: I’m glad this ElevenLabs license exists, and that these labels and artists were willing to give it a shot.


“Training licenses are the whole game!”

Why, you might ask, would anyone who supports human creativity be glad that AI training licenses exist, and that artists, labels, and other rights holders are opting into them? It’s because, if you had to pick your poison as a human creator, a world with AI training licenses is much better than one where licenses are not required for AI companies to train on works created by humans. A major fear among creative industries and artist groups is that powerful, well-funded AI interests will convince lawmakers and regulators worldwide that AI training is “fair use” (remember what music launderers like Suno argue). As one respected major label executive recently exclaimed at a meeting with other music industry participants about legislative priorities, “Training licenses are the whole game!”

It is no secret that, in the United States, the Trump administration has pushed for a moratorium on state regulation of AI so that AI-friendly regulations can be implemented at the federal level. It also remains intensely focused on the AI arms race with China. Just one major economic downturn could open the floodgates of deregulation, weakening protections that copyright interests currently depend on.

A fact pattern that could lead to such a negative outcome is the absence of any “willing buyer, willing seller” AI training licenses in the market. In that scenario, AI companies might turn to the government or lawmakers and say, “We’re doing everything we can to compete with Chinese AI companies, but copyright owners aren’t meeting us halfway.” (China has much looser copyright protections than the U.S.) This could, in theory, result in the creation of safe harbors, similar to the one established by the U.S. Congress in the Digital Millennium Copyright Act, which has helped propel the rise of user-generated content platforms like YouTube and TikTok and has transformed the music economy. (An interesting side note: before the recent rise of the AI threat, companies like YouTube and TikTok also “moved fast and broke things,” often to the detriment of copyright owners.)

So, fortunately (yes, it feels odd to say that), the ElevenLabs license now exists, and since then, Universal, Warner Music Group, and Merlin have also entered into licenses with Udio. At the very least, it’s now harder for generative music AI companies to argue that governmental intervention is necessary.

Another potential benefit of these AI training licenses is ensuring that independents and organizations like Merlin, which represent indie interests, have a seat at the licensing table. I’ve previously discussed the threat of market concentration, especially in the recording industry, and how dominance in one sector can lead to dominance in another. Therefore, a major concern for independents is that the largest companies will negotiate licenses with AI companies in a way that creates an uneven playing field.

“If the future revenues of the music industry become increasingly dependent on income from AI-generated content, and only the biggest companies hold licenses with leading AI companies, this would be another example of market concentration in one sector strengthening concentration in another.”

Darius Van Arman

For example, a large company might leverage its negotiating power with an AI company to insist that a specific methodology or third-party service is used to assign “attribution” to outputs generated by an AI model. (“Attribution” refers to the idea that you can identify the sources of copyrighted material used to create a training model’s output, such as “this song that the AI model created was 25% influenced by Miles Davis’s trumpet playing on Kind of Blue.”) Many AI experts argue that attempting this kind of simple attribution fundamentally misunderstands how the underlying generative AI models actually work. However, such an approach can be valuable if all parties in a license agree on what appears reasonable; it enables the calculation and payment of royalties to creators based on how AI-generated tracks are used in the market. The concern, of course, is that if a larger player has sufficient leverage, they might be tempted to require an AI company to adopt an attribution method that systematically favors their interests in the overall royalty calculations.
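To make the stakes of an attribution methodology concrete, here is a minimal sketch of how agreed-upon attribution shares could translate into royalty splits. All names and numbers are hypothetical; the weights stand in for whatever the licensing parties have contractually accepted as “reasonable,” not a claim about how generative models actually determine influence.

```python
def allocate_royalties(pool_cents, attribution_weights):
    """Split a royalty pool pro rata across attributed rights holders.

    pool_cents: total royalties (in cents) earned by an AI-generated track.
    attribution_weights: dict mapping rights holder -> agreed influence share
        (need not sum to 1; the remainder is treated as unattributed).
    """
    if pool_cents < 0:
        raise ValueError("royalty pool cannot be negative")
    if sum(attribution_weights.values()) > 1.0:
        raise ValueError("attribution shares exceed 100%")
    # Each holder receives its agreed share of the pool, rounded to the cent.
    payouts = {
        holder: round(pool_cents * share)
        for holder, share in attribution_weights.items()
    }
    # Whatever is left over is unattributed (and, under many proposed terms,
    # would be retained by the platform or pooled for later distribution).
    payouts["(unattributed)"] = pool_cents - sum(payouts.values())
    return payouts

# Hypothetical example: a track the model reports as 25% influenced by one
# catalogue and 10% by another, earning $100.00 in the market.
splits = allocate_royalties(10_000, {"Catalogue A": 0.25, "Catalogue B": 0.10})
```

The concern in the passage above is visible even in this toy version: whoever controls the `attribution_weights` methodology effectively controls the payout table, which is why independents want the methodology negotiated at a table where they are present.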

Alternatively, since training models might not need the full breadth of available copyrighted material to produce high-quality outputs, a larger player could attempt to have smaller competitors excluded from a given AI platform. If the future revenues of the music industry become increasingly dependent on income from AI-generated content, and only the biggest companies hold licenses with leading AI companies, this would be another example of market concentration in one sector strengthening concentration in another.


Our position against the rising threat of AI

Earlier, I highlighted the enormous pressures driving AI progress, the behavior that is incentivized and rewarded within the tech industry, and the regulatory and commercial realities surrounding AI training licenses. I’ve examined various implications, especially from the perspective of culture-driven independent music companies that also care about structural issues like market concentration and the potentially irreversible decline of human agency and artistry.

As I reflect on all this, I can’t help but feel pessimistic. The political and financial forces working against human creative interests often seem unstoppable. However, I’ve seen some sparks of hope. I’ve noticed a growing chorus of conscientious humanists who not only have renewed conviction but also a clear understanding of what needs to be done to protect what matters most. Additionally, there’s increasing acceptance among these humanists that their hands might have to get dirty. To have a realistic chance of preventing the worst outcomes of the AI surge (and other tech excesses), compromise can’t be ruled out. Humanists may sometimes need to choose the “least worst” option, given the power and resource gap between the two sides of this fight.

“To have a realistic chance of preventing the worst outcomes of the AI surge (and other tech excesses), compromise can’t be ruled out.”

Darius Van Arman

This combination of passionate resolve and pragmatism motivates me, despite my pessimism. While we can’t be certain which economic, legal, legislative, political, or social actions will best protect human creativity—or even if creative communities and the allies they muster will have enough power to prevent an eventual WALL-E-like outcome for our world—we know one thing for sure: doing nothing amounts to acquiescence.

Inspired by this mix of passion and pragmatism, I now propose five key imperatives. While I don’t speak for the entire independent music sector, I hope that something like the following can be embraced as the independent position against the rising threat of AI.

  1. Independents firmly hold that human creativity and artistic expression are irreplaceable, and we will fight for the long-term viability of human creative endeavors.
  2. Independents will advocate for the existence and availability of AI training licenses, because without a licensing framework, human creative works could end up included in AI models with neither permission nor compensation.
  3. Independents will take the necessary steps to secure a seat at the licensing table, ensuring AI licensing remains a level playing field and that the growth of AI music does not lead to increased market concentration.
  4. Independents will work with other sectors, especially fellow participants in the creative industry, including the majors. We recognize the limits of the independent music sector’s political and economic power on its own, and we embrace that any meaningful fight for human artistry requires a united effort against powerful tech and financial interests.
  5. Independents will endorse the requirement that artist consent must be obtained before AI companies can use the works or likenesses, including voice likenesses, of artists in their training models.

The second, third, and fourth imperatives encapsulate the themes I discussed earlier and reflect the possibility of necessary compromises. The fifth imperative is implied in the core idea that copyright, and by extension, the labor rights of creators, must be respected.

Recently, the UK’s Council of Music Makers, a group of artist organizations, issued a letter titled “The music-maker perspective on the music industry’s AI deals.” It criticized rights owners, including the majors, for entering AI agreements without ensuring adequate protections for artists. It specifically called out Universal for pledging to secure creator consent only in two limited cases. The letter explains, “It is not enough to just seek consent when an artist’s voice or songs are key components of an AI output; explicit consent is also required whenever music is used for training on the input.”

The issue here involves derivative works. Customarily, in the music industry, an artist’s approval is required when their recording is used to create a new work, such as when a sample license is granted from one rights holder to another. While training AI models isn’t the same as sampling—like when M.I.A. creatively and transformatively used the Clash’s “Straight to Hell” in her hit “Paper Planes”—artist groups still maintain that an artist’s consent is necessary when an AI model is trained on their recordings.

Some rights holders argue that the complicated process of obtaining consents from all the artists and producers involved in a large catalog is either impractical or too costly. They claim that to meet the moment, they need to efficiently issue blanket licenses to AI companies for a broad range of rights, all at once. But doesn’t this sound familiar? Isn’t this exactly the kind of justification that some AI companies used when they decided to skip the effort and expense of properly securing licenses, instead scraping recordings from YouTube?

This idea that, as independents, we will respect artists’ rights to give consent is fundamental. It means having the artists’ backs. It’s not only a logical extension of our commitment to support human creative labor that I mentioned earlier, but it also aligns with the fourth imperative to collaborate with fellow participants in the creative industry as we confront the challenges of rising AI. If we are not willing to support our artist partners in this way, why should we expect them to support us when we need it?


Heaven is a place on earth

During times of great social upheaval, movements emerge that look toward the distant future or even an afterlife for salvation. Right now, if you’re educated and somewhat technologically inclined—and regardless of your political views—it’s hard to look at the world around us without feeling cynical, given the numerous problems we face. So it’s easy to see why so many are drawn to the promise that AI and other technologies will lead us to Eden (whether in the distant future or on Mars!).

But even if you’re inclined to think differently as a natural response to overly zealous tech evangelism in your midst, technology itself has never been the problem. Without it, we wouldn’t have enough free time to write books or create and perform music. There would be no guitars, pianos, computers, or word processors, or the many other tools and instruments that writers and artists rely on every day to create. Essentially, technology enables our pursuit of connection and meaning, often through artistic expression, which enriches our humanity. Even the so-called “bogey person” of artificial intelligence has many helpful uses.

“Essentially, technology enables our pursuit of connection and meaning, often through artistic expression, which enriches our humanity.”

Darius Van Arman

Recently, I attended a music show in Bushwick, Brooklyn, with my friend Rob Sheffield, a well-known music writer. It was freezing outside. As we tried to figure out how to stay warm on our way to the after-show meet-up with musician Lucie Lozinski, her band, and her parents, we started talking about artificial intelligence.

I mentioned that I recently used an AI app on my phone to record and transcribe my interview with Lucie. Rob shared with me that transcribing interviews is one of his least favorite tasks as a writer, and that in the past, this tedious job was often assigned to unpaid interns at various publications he worked with.

We both agreed that AI-assisted transcription might be a less problematic use of AI (apart from environmental concerns and questions about how the software was trained). It didn’t threaten any jobs that anyone truly cared about, and at a time when many writers are struggling financially, it could help lower business costs.

Like most issues, whether AI will ultimately be seen as a force for good or evil comes down to a question of balance. For the environmentalist, the important trade-off is whether the benefits AI generates outweigh its drawbacks, such as energy consumption. In the creative world, the key question becomes where to draw the line between AI supporting creatives and AI replacing them: do the automaton helpers continue to assist the artisan god Hephaestus, or do they become his replacement? (In the example above, AI-assisted transcription doesn’t come close to replacing the writer.)

Many creatives will have different views on where to draw this line. The musician Holly Herndon, a pioneer at the intersection of music and machine learning, has her own opinion on this. So does music and culture writer Grayson Haver Currin, along with many others. Ultimately, the broader creative community must lead this debate and collectively answer this question.

However, for independents, there is one more task. We can help ensure that only creatives answer this question, not technologists, and by doing so, show that heaven is a place on earth.

Music Business Worldwide