Top Tech Leaders Call for Global Ban on Superintelligent AI to Avert Existential Risks
Hundreds of public figures from the worlds of technology, academia, politics, and entertainment have signed a statement calling for an immediate global moratorium on the development of superintelligent AI. The initiative, which has garnered over 4,300 signatures, aims to prevent the creation of a technological force that could surpass all human cognitive capabilities and pose an existential threat to humanity.
Among the signatories are prominent figures such as Apple co-founder Steve Wozniak, Virgin Group founder Richard Branson, musicians Kate Bush and will.i.am, and AI pioneers Geoffrey Hinton and Yoshua Bengio. The statement urges policymakers and industry leaders to prioritize safety over speed in AI development, arguing that the potential risks of superintelligent systems are too severe to ignore.
The call for a global ban stems from growing concerns that AI could trigger economic devastation, undermine human autonomy, and even lead to human extinction. According to the statement, the development of such advanced AI could outpace regulatory frameworks, producing unforeseen consequences that cannot be mitigated in time. The document calls for an international agreement to suspend research until there is broad scientific consensus on the safety and controllability of superintelligent systems, along with strong public support.
Meanwhile, the ongoing race for AI superiority between the United States and China has intensified, with major tech companies investing billions of dollars in research to develop AI models capable of independent thought, planning, and coding. OpenAI, Google DeepMind, Anthropic, and xAI are among the firms at the forefront of this competitive landscape, viewing AI dominance as a critical component of national security and economic strategy.
Despite rising global concern over AI risks, regulatory frameworks remain fragmented and inconsistent. The European Union's AI Act represents the first major legislative effort to classify and manage AI systems by risk level, but critics argue that its implementation could lag behind the pace of AI advancement. This creates a pressing need for a more coordinated and proactive approach to AI governance as the stakes continue to escalate.
The statement also highlights the need for greater transparency and public engagement in AI development. It emphasizes that the benefits of AI should be balanced against its potential harms, with a focus on ensuring that the technology serves human interests rather than undermining them. As the debate over AI’s future intensifies, the call for global cooperation on its regulation is becoming increasingly urgent.