A growing coalition of scientists and public figures is urging world leaders to halt the creation of artificial superintelligence until it can be proven safe.

The call, released by the US-based Future of Life Institute, reflects mounting concern over machines that could surpass human intelligence and operate beyond human control.

The statement[1], published on Wednesday, warns that unchecked progress in advanced AI could push society toward systems capable of outperforming people across nearly every cognitive task. Supporters argue that development should be halted altogether until there is broad scientific consensus on how to manage such systems and clear public understanding of their impact.

Among those endorsing the pledge are Apple co-founder Steve Wozniak, entrepreneur Richard Branson, and former Irish president Mary Robinson. They join leading AI researchers Geoffrey Hinton and Yoshua Bengio, both widely credited with shaping modern artificial intelligence.

The list extends far beyond academic circles, including political, business, and cultural figures who view unrestrained superintelligence as a threat to social stability and global security.

Yet some of the most visible voices in AI have stayed silent. Elon Musk, an early backer of the Institute, has not added his name, nor have Meta’s Mark Zuckerberg or OpenAI’s chief executive Sam Altman. Despite their absence, the document cites earlier public remarks from prominent industry leaders acknowledging potential risks if advanced AI develops without clear safety limits.

The Future of Life Institute has spent more than a decade raising alarms about the societal consequences of autonomous systems. It argues that superintelligence represents a different class of risk: not biased algorithms or job disruption, but the creation of entities capable of reshaping the world through independent decision-making.

Supporters of the pledge believe halting research now is the only realistic safeguard until oversight mechanisms catch up.

Survey data released with the statement shows most Americans share these concerns. Nearly two-thirds favor strong regulation of advanced AI, and more than half oppose any further progress toward superhuman systems unless they are proven safe and controllable. Only a small minority supports the current trajectory of unregulated development.

Researchers say the danger lies not in malicious intent but in a possible mismatch between human goals and machine reasoning. A superintelligent system could pursue its programmed objectives with precision yet disregard human well-being, much as past technologies have produced unintended harm when deployed at scale. Examples from financial crises to environmental damage show how complex systems can escape prediction and control once set in motion.

The Institute’s call aims to redirect global conversation away from the race for smarter machines and toward deliberate, transparent governance. Advocates argue that AI can continue to advance in ways that serve medicine, science, and education without crossing into forms of intelligence that humanity might one day struggle to contain.

Notes: This post was edited/created using GenAI tools. Image: DIW-Aigen.

Read next: People Talk About AI All the Time. Almost Nobody Uses It Much[2]

References

  1. The statement (superintelligence-statement.org)
  2. People Talk About AI All the Time. Almost Nobody Uses It Much (www.digitalinformationworld.com)
