Influential Figures Demand Halt on Superintelligent AI Amidst Global Safety Fears

Published on: October 22, 2025
Author: AI News Revolution Team

When over 850 public figures decide something needs to stop, maybe it's time to listen. A massive coalition has signed an open letter demanding a halt on superintelligent AI development.

We're talking Apple co-founder Steve Wozniak, Virgin's Richard Branson, and even Prince Harry and Meghan Markle. Not exactly your typical tech bros panicking over their stock options.

The signatories read like a who's who of people who actually know things. AI pioneers Yoshua Bengio, Geoffrey Hinton, and Stuart Russell – often called the "fathers of modern AI" – put their names on this thing.

Former Joint Chiefs chairman Mike Mullen and ex-national security adviser Susan Rice joined in, too. When the very people who helped create AI say "pump the brakes," that's worth noting.

Here's where it gets interesting. Conservative media personalities Steve Bannon and Glenn Beck signed alongside progressive figures. Celebrities like Joseph Gordon-Levitt and will.i.am threw their support behind it too.

When Steve Bannon and progressive activists find common ground on anything, you know the threat feels genuinely existential.

This isn't partisan politics – it's widespread fear dressed up as measured concern.

The demands are straightforward: stop developing superintelligent AI until scientists agree it's safe, get public buy-in before moving forward, and create actual regulations with teeth.

The letter focuses specifically on AI that surpasses human abilities across all tasks. Pretty reasonable requests, considering the alternative might be human extinction.

The concerns aren't subtle either. Mass unemployment from automation. Loss of freedoms to opaque AI systems. National security nightmares. Economic chaos. Oh, and the small matter of potentially wiping out humanity. No big deal.

What's striking is how this crosses every sector imaginable. Technology, academia, politics, entertainment, religion, military – everyone's freaking out together.

Nobel laureates and faith leaders don't usually agree on lunch orders, let alone existential threats. Meanwhile, major tech companies like OpenAI and Meta's Superintelligence Labs continue racing toward advanced AI despite these warnings.

Public polling backs the signatories up. Sixty-four percent of Americans think superintelligent AI shouldn't be developed until it's proven safe, and seventy-three percent want robust regulation.

Only five percent support continuing under current conditions – basically no oversight whatsoever. This is the Future of Life Institute's second major intervention, after its 2023 open letter failed to slow development.

Experts who helped create these systems warn that superintelligent AI could become uncontrollable if its goals aren't properly aligned with human values.

When this many smart people from different worlds agree something's dangerous, ignoring them seems pretty stupid.
