With AI-generated content proliferating across the digital media world, one key request from policymakers and civil society groups has been for that content to be labeled.
The idea is to prevent AI-generated content from misleading audiences, for example with “deepfakes” that can make it appear that a person said something they never actually said, or make it appear that a musical artist performed a song that they never actually recorded (see the infamous “fake Drake” controversy from earlier this year).
Now, TikTok has become the latest media company to take action in this regard, announcing on Tuesday (September 19) that it’s rolling out a new tool that will enable creators to label AI-generated content that they upload to the social media platform.
TikTok also said it’s testing new automated tools to label content “that we detect was edited or created with AI.”
TikTok developed the new rules in conjunction with its various Safety Advisory Councils around the world, and settled on the term “AI generated” for its label because that term is “widely understood… across different demographic groups globally.”
Additionally, TikTok says it will rename the filters it offers users for their video uploads, to make clear which ones make use of AI technology. Going forward, those filters will have “AI” in their name.
The social media site stirred some controversy earlier this year with a filter called “Bold Glamour” that makes video subjects look younger and more ‘attractive’ than they really are.
TikTok’s announcement of AI labels for content follows its release of new community standards earlier this year, which drew the line between acceptable and unacceptable use of AI on content uploaded to the platform.
Under those rules, users are required to label any content that was created with or significantly edited by AI technology. The rules also ban certain types of AI-generated content, such as any video that uses the likeness (video or audio) of any real private figure, as well as content that has been manipulated “in a way that may mislead a person about real-world events.”
The policy does allow the use of AI-generated content featuring public figures, but not if it’s used for political or commercial endorsements, or if it violates any other TikTok content policy.
“AI enables incredible creative opportunities, but can potentially confuse or mislead viewers if they’re not aware content was generated or edited with AI,” TikTok said in a statement. “Labeling content helps address this, by making clear to viewers when content is significantly altered or modified by AI technology.”
TikTok’s AI policy is similar to, but less stringent than, the policies laid out by its sister app in China, Douyin.
Those policies – which followed on the heels of Chinese government regulations meant to crack down on the potential harms caused by AI content – include a labeling requirement for AI content, but also require content creators to register any “virtual person” used on the platform with Douyin; forbid the use of AI to create copyright-infringing content; and ban the use of AI to create “content that violates scientific common sense, falsifies, and spreads rumors.”
The idea of labeling AI content is gathering steam, within both the business and political spheres.
The European Union’s AI Act, currently making its way through the legislative process, would make it a legal requirement to label any AI-generated content. That law isn’t expected to come into force until 2026. In the meantime, the European Commission has asked tech giants to enforce AI labeling rules on a voluntary basis.
Alphabet Inc. announced earlier this month that its Google service will require political advertisers to clearly label any AI-generated image, video or audio content they use in their ads. The policy will apply across all Google platforms.
That follows a controversy earlier this year in which the presidential campaign of Republican contender Ron DeSantis used deepfake images of rival candidate Donald Trump in ads that appeared on Twitter.
Google has also unveiled a new tool to detect AI-generated images, while OpenAI, the Microsoft-backed developer of the uber-popular AI chatbot ChatGPT, has released a tool to detect AI-generated text.
Detecting AI-generated content has also become a concern for the music industry. Industry insiders have repeatedly voiced concerns about the use of AI to create fraudulent music tracks that siphon off streaming royalties from legitimate artists.
The music industry is also concerned about the specter of AI-generated deepfakes of real artists, for instance the aforementioned “fake Drake” track that went viral earlier this year, accumulating hundreds of thousands of views and streams across social media platforms (including TikTok) before it was eventually pulled down.
Streaming service Deezer announced earlier this year that it’s developing a set of tools that will enable it to detect AI-generated content on its platform, and aims to develop a system for tagging music that has been created by generative AI, starting with songs using synthetic voices of existing artists.
Deezer told MBW that, in 2022, around 7% of streams on the platform were detected as fraudulent.
“You have technologies out there in the market today that can detect an AI-generated track with 99.9% accuracy, versus a human-created track.”
Denis Ladegaillerie, Believe
Meanwhile, Believe CEO Denis Ladegaillerie said this past spring that his digital music company, which also operates digital music distribution service TuneCore, aims to stop the distribution of AI-generated music, and that the tools to detect this content already exist.
“We have deployed a number of quality controls in our business and we aim not to distribute any content that is 100% created by AI, whether that’s through Believe or through TuneCore,” he told analysts on the firm’s Q1 earnings call in April.
“You have technologies out there in the market today that can detect an AI-generated track with 99.9% accuracy, versus a human-created track,” Ladegaillerie said.
“Something… we feel very good [about] is the fact that the ability to control [AI uploads] is there. Now it needs to be deployed everywhere.”