White House secures voluntary safety pledges from Meta, Amazon, Google, others to manage AI risks


The Biden-⁠Harris Administration has secured voluntary safety commitments from a number of companies operating in the AI field, including Amazon, Google, Meta, Microsoft, and OpenAI.

The voluntary commitments aim to manage the risks posed by the rapid advancement of AI technology and ensure that innovation does not come at the expense of Americans’ rights and safety, the White House said Friday (July 21).

The White House brought together seven AI companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — to make commitments underscoring three principles: safety, security, and trust in AI development.

The companies pledged to ensure that products are safe before they reach the public, committing to rigorous internal and external security testing of AI systems prior to release.

To enhance transparency, they will share information with governments, civil society, and academia to manage potential risks and adopt best practices.

The tech giants also vowed to build secure systems, investing in cybersecurity and insider-threat safeguards to protect proprietary and unreleased models.

“These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered,” the White House said.

The companies further pledged to facilitate third-party discovery and reporting of vulnerabilities in their AI systems to promptly address any issues.

The firms also committed to earning the public’s trust. To achieve this, they plan to develop robust technical mechanisms, such as watermarking systems, to identify AI-generated content and prevent fraud and deception. 

Publicly reporting their AI systems’ capabilities, limitations, and appropriate and inappropriate use areas is also part of the commitment. This includes addressing potential societal risks like fairness, bias, and privacy protection.

The White House said the US is determined to lead the way in responsible AI innovation. To do this, the Biden-Harris Administration is developing an executive order and pursuing bipartisan legislation to ensure the safe and responsible development of AI technology.

The White House has invited its allies to establish a strong international framework to govern the development and use of AI.

It has consulted with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.

The commitments by AI companies in the US follow the European Union’s efforts to regulate AI.

In May, the EU took a groundbreaking step by adopting a draft negotiating mandate for what they said would be the “world’s first rules” on AI.

These rules are crafted to prioritize the safety, transparency, non-discrimination, traceability, and environmental friendliness of AI systems, and they emphasize the need for human oversight in the deployment of AI technology.

The rapid advancement of AI technologies has challenged many industries, including the music sector. One area that has stirred controversy is the surge in popularity of AI-generated music, which features computer-generated vocals imitating real artists.

These AI-generated tracks have garnered millions of streams on digital platforms, triggering debates about copyright infringement and ethical implications.

Music Business Worldwide
