Elon Musk and Apple co-founder Steve Wozniak amongst hundreds of tech and science leaders calling for 6-month pause on ‘training of AI systems more powerful than GPT-4’

Elon Musk is an external advisor of the Future Of Life Institute (Credit: Trevor Cokley/Apex/Alamy)

This is a big moment in the story of AI, and how it affects – and will affect – both the entertainment business and all of our lives.

Over 1,000 people – including hundreds of tech, science, and academic leaders – have signed an open letter calling on all AI labs around the world “to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”.

It adds: “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Signatories of the letter, which was instigated by the Future Of Life Institute, include Elon Musk (CEO of SpaceX, Twitter, and Tesla) and Apple co-founder Steve Wozniak.

Other notable signatories include Emad Mostaque, the CEO of Stability AI (home of popular text-to-image generator Stable Diffusion), plus Evan Sharp, the co-founder of Pinterest, and Chris Larsen, the co-founder of Ripple.

Elsewhere, the letter’s backers include three team members at Alphabet/Google’s experimental AI hub, DeepMind: Victoria Krakovna (Research Scientist and co-founder of the Future of Life Institute); Zachary Kenton (Senior Research Scientist); and Ramana Kumar (Research Scientist).

Musk’s involvement is particularly significant, considering he was a co-founder (and helped fund the creation) of OpenAI Incorporated, which launched GPT-4 on March 14.

San Francisco-headquartered OpenAI, which accepted a USD $10 billion investment package from Microsoft in January this year, began life as a non-profit entity but transitioned to a for-profit model in 2019. (Musk resigned from its board in 2018.)

Today, OpenAI calls GPT-4 (widely referred to as ChatGPT-4) “the latest milestone in [our] effort in scaling up deep learning.”

It boasts that “while [GPT-4 is] less capable than humans in many real-world scenarios, [it] exhibits human-level performance on various professional and academic benchmarks”.

In their open letter, titled simply “Pause Giant AI Experiments”, Musk and the other signatories write: “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

They continue: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Elsewhere in the letter, the signatories discuss generative AI’s potential impact on specific areas of business, politics, the economy and beyond.

They write: “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

“In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”


The suggestion that the world needs AI products to carry “watermarking systems to help distinguish real from synthetic” will resonate with the music business.

Earlier this month, over 30 groups representing and/or associated with the music business launched the Human Artistry Campaign, which aims to ensure that AI will not replace or “erode” human culture and artistry.

Its signatories included the Recording Industry Association of America (RIAA), the Recording Academy, SAG-AFTRA and SoundExchange.

Music Business Worldwide