EU calls on tech giants to label AI-generated content


The European Union is working to get ahead of the rapid proliferation of AI technology, and its latest move involves asking tech giants like Google, Meta, TikTok and Microsoft to start labeling AI-generated content on their services, as part of efforts to combat misinformation online.

Members of the European Commission, the EU’s executive arm, on Monday (June 5) called for tech giants to start labeling AI content on a voluntary basis, well in advance of legislation that would make it obligatory.

The EU is currently working on an AI Act that would set rules for the use of AI technology in the 27-country union. It faces a key vote in the European Parliament next week, but even if it were to pass quickly, its provisions likely wouldn’t come into force before 2026, Bloomberg reports.

In the meantime, the Commission’s VP for Values and Transparency, Vera Jourova, said she will ask the 44 organizations that have signed up to the EU’s voluntary Code of Practice for combating misinformation to develop separate guidelines for dealing with AI-generated misinformation.

“Signatories who integrate generative AI into their services like Bingchat for Microsoft, Bard for Google should build in necessary safeguards that these services cannot be used by malicious actors to generate disinformation,” Jourova said, as quoted by Politico.

“Signatories who have services with a potential to disseminate AI-generated disinformation should in turn put in place technology to recognize such content and clearly label this to users.”

Signatories of the Code of Practice include Google, Facebook and Instagram owner Meta, Microsoft, TikTok and Twitch.

Among the many concerns the EU aims to address is the creation of “deepfakes” that could feature prominent people or private citizens saying or doing things that they didn’t say or do in real life. A good example of this is a deepfake of former President Barack Obama warning about the dangers of deepfakes – in words he never spoke.

Late last month, an apparently AI-generated image of smoke near the Pentagon in Washington, DC, accompanied by a claim that an explosion had taken place at the military facility, caused a brief panic on the stock markets.

For the music business, one area of immediate concern is the proliferation of music that appropriates a known artist’s voice in an AI-generated song that the artist never performed. In one such example, an AI-generated track featuring vocals from Drake and The Weeknd went viral earlier this year.

It’s unclear whether all the operators of search engines and social media sites like Facebook and TikTok have the necessary tools to identify AI-generated content when it appears, but it is clear that many are working rapidly to develop that capability.

At its I/O conference in May, Google unveiled a new tool that allows users to check whether a picture has been generated by AI, thanks to hidden data embedded in AI-generated images. That tool is expected to roll out to the public this summer.

Image editing software maker Adobe is implementing a tool called “content credentials” that, among other things, is able to detect when an image has been altered by AI.

Similar efforts are underway among music business companies. Believe CEO Denis Ladegaillerie said in May that the company is working with AI companies to deploy AI detection mechanisms on Believe’s platforms, and those tools should be in place within one or two quarters.


Additionally, Twitter announced last Tuesday (May 30) that it’s rolling out a “Notes on Media” feature that will allow trusted users to add information to an image, such as a warning that the image is AI-generated. That note will appear even on duplicates of the image hosted on other Twitter accounts. Twitter cited “AI-generated images” and “duplicate videos” as its reasons for the move.

However, unlike Adobe and Google, Twitter is not a signatory of the EU’s Code of Practice. Owner Elon Musk reportedly pulled the social media site out of the group last month, drawing a harsh response from some EC executives.

“Obligations remain,” Thierry Breton, the EU’s Commissioner for the Internal Market, said in a tweet on May 26, telling Twitter that “you can run but you can’t hide.”

“We believe this is a mistake from Twitter,” Jourova added on Monday, as quoted by Politico. “They chose confrontation, which was noticed very much in the Commission.”

Breton noted that, as of August 25, the Code of Practice will no longer be voluntary, but a legal obligation under the EU’s new Digital Services Act (DSA).

Under the DSA, very large online platforms (VLOPs) like Twitter and TikTok, and widely used search engines like Google and Bing, will have to identify deepfakes — be they images, audio or video — with “prominent markings” or face large fines.

The European Parliament is working on similar rules to apply to companies generating AI content as part of the AI Act, Politico reports.

Participants in the Code of Practice will be required to release reports in mid-July detailing their efforts to stop misinformation on their networks and their plans to prevent AI-generated misinformation from spreading through their platforms or services, Politico added.

Music Business Worldwide
