MBW Views is a series of exclusive op/eds from eminent music industry people… with something to say. The following comes from Jessica Powell (pictured), the co-founder and CEO of Audioshake. An ex-Google exec, Powell’s award-winning Audioshake uses proprietary AI to sift through recordings to create individual stems – separate instrumental and vocal tracks – that can then be used by creators to build new musical works. Audioshake has to date partnered with rightsholders including Hipgnosis Songs Fund, as well as large distributors such as CD Baby. This piece originally appeared on Powell’s Substack.
You’ve probably heard about the Drake x The Weeknd track, “Heart on My Sleeve,” which was supposedly made 100% with AI and has been streamed millions of times since its release last week.
The artists’ label, UMG, has issued takedown requests, but given the whack-a-mole nature of takedowns, the track remains easy to find at any given moment.
The track is definitely catchy, but it’s unlikely that it was 100% made by AI. In fact, if you talk to those of us who work on AI voice and music systems, you get a different take—one that runs counter to most media coverage of the controversy.
Most likely, “Heart on My Sleeve” was cobbled together with various AI technologies, with a human hand not just piecing things together, but also potentially first singing and rapping the track, before using voice conversion. But let’s put that aside. It might not be soup-to-nuts AI today, but that technology will eventually be available.
In any case, the 100% label was a good way to fuel initial attention, which in turn fueled a music industry freak-out, which in turn fueled even more streams. Fans love it, and Discord is full of AI covers at the moment.
It’s uncool to slam “Heart on My Sleeve,” the same way it was uncool in the ‘00s to slam legendary records like J. Dilla’s Donuts (with an estimated 100 samples) or Danger Mouse’s Grey Album (an artful mash-up of Jay-Z and the Beatles).
Renegade things are cool things, and remixes used to be renegade, and hated by the music industry. Nowadays, remixes are seen more optimistically: many A&R execs and artists acknowledge the point that DJs have long made, which is that remixes extend the life and relevance of a song, driving new listeners to the original version in the process.
Should one therefore think of these AI covers as just the latest take on remixing? There are valid parallels. Just as in the early days of hip hop, most remixers and samplers are huge fans of the tracks they are using, not looking to make a quick buck or defraud or disrespect the original artist.
As more than one Twitter user said, this Drake x The Weeknd track was the collab everyone was waiting for.
It’s also worth considering the most likely outcome of these AI creations.
Most of them will act like fan fiction, and most of them will generate few streams.
The few that go viral, like “Heart on My Sleeve,” will be subjected to the same process remixes go through today, in which labels, acting on behalf of their artists, can issue a takedown request to streaming platforms. Over time, in a process similar to what happened with content on YouTube, labels and artists might recognize the marketing and economic value of these tracks, and look to monetize them rather than take them down.
At the same time, the comparison to remixes is a little disingenuous. It leverages a Cool Kid argument to justify the appropriation of someone’s voice—an inherently more intimate and invasive act than sampling. In that sense, these are not remixes, these are deep fakes.
As a society, we have not looked kindly on the proliferation of deep fakes with politicians—for example, President Biden singing “Baby Shark.” While the tech in “Heart on My Sleeve” doesn’t involve video, much of the same foundational tech or AI concepts are being leveraged here. And eventually, it won’t be hard to have a video Deep Fake Drake singing along to his fake music—the tech already exists and is getting better each day.
There are many people who would decry a technology used to suggest that a public figure was portrayed falsely, yet would defend the use of AI music covers. Certainly, the spirit in which these different fakes are made is relevant—again, think of fan fiction or parody versus an intent to deceive.
But one would have to find a way to codify that legally (maybe the “right of publicity” has everyone covered here–I’ll leave that to the lawyers), as well as ensure that the distribution of any AI cover like “Heart on My Sleeve” provides proper credit and remuneration to the artist (if they want it to remain up).
What will surely happen:
- Fake tracks are going to get easier and more accessible to create. You will not need to piece together various AI services and models; nor, eventually, will you need to involve humans as much, if at all, in the process.
- Fake tracks will not be easy to detect as the AI gets better. But then neither are remixes, and that broken ecosystem persists (which is a whole separate issue), relying largely on takedowns and post-release claims by labels.
- We will see labels and artists announce AI voice remixing, with embedded technology that will allow an authorized version to be distinguished from a fake, thereby 1) creating a new revenue stream for these artists, and 2) providing an easier-to-use alternative to piracy. It will surely not eliminate deep fake music, but should be able to cut into it, just as iTunes did with Napster.
- Grimes will come out saying she loves this and is all for it. Oh wait, she already did.
Music Business Worldwide