Sam Altman, CEO of ChatGPT maker OpenAI, to call for regulation of artificial intelligence in the US


As AI technology takes the world by storm, lawmakers around the world are taking the first steps towards developing regulatory control over the technology.

And now, the CEO of one of the world’s most prominent AI firms is adding his voice to growing calls for regulation.

Sam Altman, the CEO of OpenAI – the company behind the now-famous ChatGPT app – is set to testify in front of the US Congress on Tuesday (May 16), and according to his prepared statement, seen in advance by several media outlets, he plans to call on the US government to require the licensing of AI developers.

“The regulation of AI is essential,” Altman’s prepared statement says, as quoted by the Financial Times.

Altman will tell members of the Senate Judiciary Committee’s subcommittee on privacy, technology and the law that he is “eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits.”

According to Reuters, Altman will advocate for “a governance regime flexible enough to adapt to new technical developments” and for regular updates to safety standards as AI technology evolves.

Altman’s congressional hearing comes as the use of AI technology spreads at breakneck speed, following the high-profile arrival of the ChatGPT app at the end of last year.

As of the end of March, it was estimated that ChatGPT's various versions had around 100 million users worldwide. And that doesn't include the various competitors to ChatGPT that have popped up since, such as Chatsonic and Google's Bard.

The music industry has been both struggling with – and working with – AI technology for some time. Earlier this year, Universal Music Group took a stance against generative AI tools’ use of copyrighted music to “train” themselves to write music and lyrics.

UMG, via the Recording Industry Association of America (RIAA), is a founding member of the Human Artistry Campaign, a new industry group dedicated to ensuring that AI won’t “erode” or replace human talent in music.

At the same time, many music companies – from Tencent Music Entertainment to streaming service Deezer to UMG itself – have become involved with AI technology, either to lower the cost of content creation, or to better market and deliver their human artists’ products.


And as various industries adapt their business models to the existence of AI technology, governments are beginning to develop regulations to guide the technology’s evolution.

For the time being, China appears to have taken the lead, with the country currently developing its second round of regulations to guide the use of AI in media.

Among other things, China’s rules require any AI-generated content to be labeled as such, and require the source material used in AI-generated content to be traceable.

But while China focuses on rules for AI-generated content, the European Union is moving forward with a broader set of regulations regarding the use of AI technology.

The European Parliament’s Internal Market Committee adopted a draft mandate last Thursday (May 11) that would see member states create national accreditation bodies to assess AI technologies.

Certain behaviors by AI would be banned outright, such as using subliminal, manipulative or deceptive techniques to alter people’s behavior; exploiting vulnerabilities of individuals or groups of people; and using AI for social scoring or determining trustworthiness.

While the US has yet to move forward with legislation on AI, some government agencies are taking action. The Federal Trade Commission (FTC) recently said that it is on the lookout for uses of AI that could violate antitrust and consumer protection laws, and warned that AI applications are not exempt from those laws.
