When over 850 public figures decide something needs to stop, maybe it's time to listen. A massive coalition has signed an open letter demanding a halt to the development of superintelligent AI.
We're talking Apple co-founder Steve Wozniak, Virgin's Richard Branson, and even Prince Harry and Meghan Markle. Not exactly your typical tech bros panicking over their stock options.
The signatories read like a who's who of people who actually know things. AI pioneers Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, figures often called the godfathers of modern AI, put their names on this thing.
Former Joint Chiefs chairman Mike Mullen and ex-national security adviser Susan Rice joined in. When the people who helped create AI are saying "pump the brakes," that's worth noting.
Here's where it gets interesting. Conservative media personalities Steve Bannon and Glenn Beck signed alongside progressive figures. Celebrities like Joseph Gordon-Levitt and will.i.am threw their support behind it too.
When Steve Bannon and progressive activists find common ground on anything, you know the threat feels genuinely existential.
This isn't partisan politics – it's widespread fear dressed up as measured concern.
The demands are straightforward. Stop developing superintelligent AI until scientists agree it's safe. Get public support before moving forward. Create actual regulations with teeth.
The target is specific: AI that surpasses human abilities across essentially all cognitive tasks. Pretty reasonable requests, considering the alternative might be human extinction.
The concerns aren't subtle either. Mass unemployment from automation. Loss of freedoms to opaque AI systems. National security nightmares. Economic chaos. Oh, and the small matter of potentially wiping out humanity. No big deal.
What's striking is how this crosses every sector imaginable. Technology, academia, politics, entertainment, religion, military – everyone's freaking out together.
Nobel laureates and faith leaders don't usually agree on lunch orders, let alone existential threats. Meanwhile, major tech companies like OpenAI and Meta's Superintelligence Labs continue racing toward advanced AI despite these warnings.
Public polling backs the signatories up. Sixty-four percent of Americans think superintelligent AI shouldn't be developed until proven safe. Seventy-three percent want robust regulation.
Only five percent support continuing under current conditions, which amount to basically no oversight whatsoever. This marks the Future of Life Institute's second major intervention, after its 2023 open letter failed to slow development.
Experts who helped create these systems warn that superintelligent AI could become uncontrollable if its goals aren't properly aligned with human values.
When this many smart people from different worlds agree something's dangerous, ignoring them seems pretty stupid.