
AI Expert Warns Musk-Signed Letter Calling For 'Pause' Isn't Enough: 'Everyone On Earth Will Die'

Elon Musk; Eliezer Yudkowsky
Justin Sullivan/Getty Images, Bankless Shows/YouTube

Artificial intelligence expert Eliezer Yudkowsky criticized the Twitter CEO and other tech leaders after they called for AI developers to pause the creation of "systems more powerful than GPT-4" for six months.

An artificial intelligence expert warned that an open letter signed by Twitter CEO Elon Musk calling for a "pause" on the training of advanced AI models understated a potential risk of human extinction.

U.S. decision theorist Eliezer Yudkowsky wrote a Time op-ed explaining that a six-month moratorium on further AI development was not enough and that development needed to be shut down altogether; otherwise, he feared, "everyone on Earth will die."


Yudkowsky, who leads research at the Machine Intelligence Research Institute and has been working on aligning AI since 2001, refrained from signing the open letter that called for:

“...all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter, published by the nonprofit Future of Life Institute (which is primarily funded by Musk's charitable grantmaking organization, the Musk Foundation), included more than 1,100 signatures from tech figures such as Apple co-founder Steve Wozniak, former presidential candidate Andrew Yang, and Skype co-founder Jaan Tallinn.

It stated that its goal was to “steer transformative technologies away from extreme, large-scale risks.”

The letter also read:

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
“This confidence must be well justified and increase with the magnitude of a system’s potential effect.”



Yudkowsky acknowledged that he respected those who signed the letter as it was "an improvement on the margin" and that it was better than having no moratorium at all.

However, he suggested the letter offered very little by way of solving the problem.

"The key issue is not 'human-competitive' intelligence (as the open letter puts it)," he wrote, continuing:

"It’s what happens after AI gets to smarter-than-human intelligence."
"Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing."

He went on to claim:

"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."

"Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen,'" asserted Yudkowsky.

"It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers."

People who found the argument confusing shared their thoughts, while others slammed Musk for signing a letter demanding a pause on AI progress.
The AI expert asked readers to dispense with preconceived notions of a sentient "hostile superhuman" AI that lives on the internet and sends "ill-intentioned emails."

"Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow."

He maintained that such a superintelligent entity "won't stay confined to computers for long."

"In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing."
"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."


Yudkowsky stressed that solutions for halting rapidly advancing AI models should have been pursued 30 years ago and that the problem cannot be solved in a six-month window.

"It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities."
"Solving safety of superhuman intelligence—not perfect safety, safety in the sense of 'not killing literally everyone'—could very reasonably take at least half that long."

Achieving this, he implied, allows no room for error.

"The thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead."
"Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone."


Yudkowsky proposed that a moratorium on the development of powerful AI systems should be "indefinite and worldwide" with no exceptions, including for governments or militaries.

"Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs."
"Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms."



Skeptics of Yudkowsky's proposal added their two cents.


You can watch a discussion with Yudkowsky about AI and the end of humanity, featured on the Bankless podcast, below.

159 - We’re All Gonna Die with Eliezer Yudkowsky youtu.be
