What does the EU’s new AI Act mean for tech startups and rightsholders?

MBW Explains is a series of analytical features in which we explore the context behind major music industry talking points – and suggest what might happen next.
WHAT’S HAPPENED?

The European Union has taken a significant step towards becoming the first major jurisdiction with a comprehensive law governing the development of AI – and in the process, it has potentially set itself up for a fight with US tech companies.

The European Parliament, the EU’s legislative body, on Wednesday voted in favor of the AI Act, a sweeping set of new rules that – among other things – would place restrictions on generative AI tools like ChatGPT.

The bill would also ban a number of practices made possible by AI, including real-time facial recognition, predictive policing tools, and social scoring systems of the kind used in China to give citizens scores based on their public behavior.

“[This is] the first-ever horizontal legislation on AI in the world, which we are confident will set a true model for governing these technologies with the right balance between supporting innovation and protecting fundamental values,” said Brando Benifei, a Member of the European Parliament (MEP) from Italy, as quoted by Politico.

Under the EU’s proposed law, AI use would be assessed according to the degree of risk involved.

For “high risk” uses – such as operating critical infrastructure like energy and water, or applications in the legal system, hiring, border control, education, and the delivery of public services and government benefits – developers of AI tech will have to run risk assessments, in a process the New York Times likens to the rules for approving new drugs.

As for day-to-day AI apps like ChatGPT, the law doesn’t automatically regulate their use, but it does require developers of “foundation models” – AI systems trained on enormous amounts of data – to declare whether copyrighted materials were used to train the AI.

However, as Time Magazine notes, the regulation falls short of some activists’ expectations, as it doesn’t require AI developers to declare whether personal information was used in the training of AI models.


WHAT’S THE CONTEXT?

Since ChatGPT exploded on the scene at the end of last year, governments around the world have been scrambling to adapt to the reality that widespread artificial intelligence technology isn’t just around the corner – it’s here, and in the hands of businesses and consumers the world over.

However, while some governments, like that of the US, are essentially starting from scratch on AI legislation, the EU has been working on the issue for more than two years at this point.

But that doesn’t make it the first out of the gate with regulations. In April, the Cyberspace Administration of China released its second set of rules guiding the development and use of AI.

Under the first set of rules, any AI-generated content has to be clearly labeled, and if anyone’s image or voice is used, the AI user has to get permission beforehand.

The second set of rules would require tech firms to submit security assessments of their AI technologies to a “national network information department” before their AI services can be offered to consumers. The rules also create a mechanism for consumer complaints about AI.

In this context, the US – where so much of generative AI technology is being developed – appears to be falling behind. According to the Washington Post, legislators are only beginning to work on the issue, and aren’t expected to begin talks on specific legislation until the autumn.

In the meantime, the US’s executive branch has taken some tentative steps forward, with the Biden administration releasing some ideas for an “AI bill of rights,” and the US Copyright Office launching an initiative to examine the copyright implications of AI.

While it’s likely that AI regulations in different countries will see some convergence as they’re developed, one exception appears to be Japan, which hopes to become a major player in AI by taking a more lax approach to regulating the field.

At a public hearing in late April, Japan’s Minister for Education, Culture, Sports, Science and Technology, Keiko Nagaoka, stated that, in the government’s view, Japan’s copyright laws don’t forbid training AI on copyrighted materials.

It’s a sign that Japan may be employing some game-theory principles to attract businesses that are developing AI. Giving AI developers more leeway than they might have in the US or Europe could prompt them to set up shop in Japan.


WHAT HAPPENS NOW?

The EU’s proposed AI Act will now head to the “trilogue” stage of EU lawmaking, in which the European Parliament will negotiate a final form of the law with the European Commission, which represents the bloc’s executive branch, and the European Council, which represents individual EU member states.

That process will need to be completed by January if the law is to come into force before next year’s EU parliamentary elections. In the meantime, the bill is likely to pick up both supporters and opponents.

Among the likely supporters are music recording companies, some of which have recently voiced their concerns about AI models using copyrighted tracks to train themselves to create music.

They are likely to back the part of the law that requires AI developers to disclose the use of copyrighted materials when training AI models. However, the rule only requires disclosure – it doesn’t outright ban the use of copyrighted materials for training – which means some rights holders may push for tougher restrictions on AI development in the future.


But this same rule could put the EU on a course towards conflict with some AI developers. Sam Altman, the CEO of ChatGPT maker OpenAI, warned last month that his company could pull out of Europe if the proposed law is too stringent. However, he walked back those comments a few days later.

Nonetheless, it’s no secret that large language models – the foundational technology behind generative AI apps – train on large volumes of material, and it could be difficult for developers to distinguish copyrighted from non-copyrighted source materials.

Besides rights holders and tech firms, there are other stakeholders who will want a say in the legislation before it’s passed. As Time reports, the European Council is expected to advocate on behalf of law enforcement agencies, who want an exemption from the risk-assessment rules in the EU AI Act for their own uses of AI tech.


A FINAL THOUGHT…

The EU’s new rules have generated a lot of chatter about Europe’s emerging role as the global leader in developing digital policy.

The vote on the AI Act “solidifies Europe’s position as the de facto global tech regulator, setting rules that influence tech policymaking around the world and standards that will likely trickle down to all consumers,” the Washington Post declared.

“This moment is hugely significant,” Access Now senior policy analyst Daniel Leufer told Time. “What the European Union says poses an unacceptable risk to human rights will be taken as a blueprint around the world.”

This reputation for setting the trend in digital law really began with the EU’s General Data Protection Regulation (GDPR), a set of rules meant to safeguard people’s privacy online that went into effect in 2018. Though it formally protects only people in the EU, in the borderless online world it effectively required businesses and organizations the world over to adapt their privacy policies to EU law – and most did.

However, AI regulation is uncharted territory, and some in the tech industry worry that the EU could be overregulating the sector, which in turn would push AI businesses out of Europe and towards jurisdictions with more lax rules, such as Japan and, potentially, the US.

“What I worry about is the way [the law is] constructed,” Robin Rohm, co-founder and CEO of Berlin-headquartered AI startup Apheris, told Sifted in a recent interview. “We’ll put a lot of unnecessary bureaucracy over companies that are innovating quickly.”

Piotr Mieczkowski, managing director of Digital Poland, put it like this: “Startups will go to the US, they’ll develop in the US, and then they’ll come back to Europe as developed companies, unicorns, that’ll be able to afford lawyers and lobbyists… Our European companies won’t blossom, because no one will have enough money to hire enough lawyers.”

If the AI Act does indeed cause Europe to fall behind in the development of AI and other advanced digital technologies, that reputation for being the global rule-setter may fall by the wayside.

But in the meantime, stakeholders looking to influence the development of AI law may want to book a flight, not to Washington, but to Brussels.
