1,300+ experts demand halt on superintelligence until safety proven

A group of prominent AI researchers and industry leaders, including Geoffrey Hinton and Yoshua Bengio, has signed a petition calling for a halt to superintelligence development until safety measures can be established. The statement, published by the Future of Life Institute and signed by more than 1,300 people, warns that unregulated competition to build superintelligent AI could lead to “human economic obsolescence” and even “potential human extinction.”

What they’re asking for: The petition demands a prohibition on superintelligence development until two key conditions are met.

  • Researchers must reach “broad scientific consensus that it will be done safely and controllably.”
  • There must be “strong public buy-in” for moving forward with the technology.

Who’s backing this: The petition attracted signatures from across tech, academia, and politics.

  • AI pioneers Geoffrey Hinton and Yoshua Bengio, both Turing Award winners known as “Godfathers of AI,” signed the statement.
  • Other notable signatories include Apple cofounder Steve Wozniak, Virgin Group founder Sir Richard Branson, computer scientist Stuart Russell, and author Yuval Noah Harari.
  • Even political figures like former Trump strategist Steve Bannon and commentator Glenn Beck added their names.

The public agrees: A recent poll by the Future of Life Institute found widespread concern about superintelligent AI among Americans.

  • 64% of the 2,000 adults surveyed believe “superhuman AI should not be developed until it is proven safe and controllable, or should never be developed.”

What superintelligence actually means: The term refers to hypothetical AI that could outperform humans on any cognitive task, though definitions remain fuzzy.

  • Meta launched Superintelligence Labs in June as an internal R&D division focused on building the technology.
  • OpenAI CEO Sam Altman has argued that superintelligence is imminent, despite previously calling “superhuman machine intelligence” the “greatest threat to the continued existence of humanity” in a 2015 blog post.
  • The concept was popularized by Oxford philosopher Nick Bostrom’s 2014 book Superintelligence: Paths, Dangers, Strategies, which warned about self-improving AI systems escaping human control.

In plain English: Think of superintelligence as an AI system that could outthink humans at everything—from solving math problems to writing poetry to making strategic decisions.

  • Unlike today’s AI tools that excel at specific tasks, superintelligence would be like having a digital Einstein that’s better than any human at every mental activity imaginable.

The stark warning: The petition outlines catastrophic risks if superintelligence development continues unchecked.

  • Potential consequences range from “human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control” to “national security risks and even potential human extinction.”

Why previous efforts failed: This isn’t the first time experts have called for AI development slowdowns.

  • Many of the same signatories, including Bengio, Russell, and Wozniak, signed a 2023 open letter calling for a six-month pause on training powerful AI models.
  • Despite media attention and public debate, the momentum to commercialize AI models ultimately overpowered calls for a moratorium.
  • The AI race has intensified as competition has spread beyond Silicon Valley onto the international stage, with President Trump and Sam Altman framing it as a geopolitical contest between the US and China.

Industry response: Safety researchers at major AI companies have issued their own calls to monitor AI models for risky behavior.

  • Companies including OpenAI, Anthropic, Meta, and Google have published smaller-scale statements about the importance of AI safety as the field evolves.