A number of prominent tech leaders penned a letter calling for a six-month pause in artificial intelligence experiments, warning of “profound risks to society and humanity.”


What You Need To Know

  • SpaceX, Tesla and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang and other prominent tech figures called for a six-month pause in artificial intelligence experiments, warning of “profound risks to society and humanity”

  • The letter urges companies to stop training AI systems more powerful than GPT-4, the latest version of the generative AI system from OpenAI

  • If a pause cannot be agreed to quickly, the letter reads, then governments should intervene and impose a moratorium

  • A number of governments are already working to regulate high-risk AI tools; the United Kingdom released a paper Wednesday outlining its approach, which it said “will avoid heavy-handed legislation which could stifle innovation”

The letter, signed by SpaceX, Tesla and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang and other experts, urges companies to stop training AI systems more powerful than GPT-4 (Generative Pre-trained Transformer 4), the latest version of the generative AI system from OpenAI. The release of OpenAI's widely used ChatGPT chatbot platform helped spark a race between tech giants Microsoft and Google to unveil similar applications.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter reads. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

“Such decisions must not be delegated to unelected tech leaders,” the letter continues. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The leaders pointed to a recent statement from OpenAI, which acknowledged that, at some point in the future, it may be important to get “independent review before starting to train future systems.”

“We agree,” they wrote. “That point is now. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors.”

If a pause cannot be agreed to quickly, they continued, then governments should intervene and impose a moratorium.

The letter was released by the Future of Life Institute, a nonprofit that seeks to “steer transformative technologies” — like artificial intelligence, as well as biotechnologies and nuclear technology — “away from extreme, large-scale risks and towards benefiting life.” Members of the group’s board include Jaan Tallinn, co-founder of Skype; Musk is one of the external advisers to the organization. 

Musk, an OpenAI co-founder and early investor, has long expressed concerns about AI's existential risks.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter reads. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

A number of governments are already working to regulate high-risk AI tools. The United Kingdom released a paper Wednesday outlining its approach, which it said “will avoid heavy-handed legislation which could stifle innovation.” Lawmakers in the 27-nation European Union have been negotiating passage of sweeping AI rules.

The letter concludes on an optimistic note, saying that “humanity can enjoy a flourishing future with AI,” adding: “Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.”

“Society has hit pause on other technologies with potentially catastrophic effects on society,” they concluded. “We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.”

The Associated Press contributed to this report.