An artificial intelligence expert has warned that an open letter signed by Twitter CEO Elon Musk, which calls for a "pause" on the training of advanced AI models, understates the risk of human extinction.
U.S. decision theorist Eliezer Yudkowsky wrote a Time op-ed arguing that a six-month moratorium on further AI development was not enough and that development needed to be shut down altogether; otherwise, he feared, "everyone on Earth will die."
Yudkowsky, who leads research at the Machine Intelligence Research Institute and has been working on aligning AI since 2001, refrained from signing the open letter that called for:
"...all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
The letter, published by the non-profit Future of Life Institute (which is primarily funded by Musk's charity grantmaking organization, the Musk Foundation), included 1,100 signatures from other tech figures, among them Apple co-founder Steve Wozniak, former presidential candidate Andrew Yang, and Skype co-founder Jaan Tallinn.
It stated that its goal was to "steer transformative technologies away from extreme, large-scale risks."
Lorenzo Green (@mrgreen) tweeted: "BREAKING: A petition is circulating to PAUSE all major AI developments. e.g. No more ChatGPT upgrades & many others. Signed by Elon Musk, Steve Wozniak, Stability AI CEO & 1000s of other tech leaders. Here's the breakdown: 👇"
The letter also read:
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
"This confidence must be well justified and increase with the magnitude of a system's potential effect."
Reuters (@Reuters) tweeted: "Elon Musk joined artificial intelligence experts and industry executives worried about AI's impact on society. He is among the signatories to an open letter calling for a six-month pause in developing systems stronger than OpenAI's GPT-4 https://t.co/ZLhbXdsUTl"
Yudkowsky acknowledged that he respected everyone who signed the letter, calling it "an improvement on the margin" and better than having no moratorium at all.
However, he suggested the letter offered very little by way of solving the problem.
"The key issue is not 'human-competitive' intelligence (as the open letter puts it)," he wrote and further stated:
"Itβs what happens after AI gets to smarter-than-human intelligence."
"Key thresholds there may not be obvious, we definitely canβt calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing."
He went on to claim:
"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."
"Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen,'" asserted Yukowsky.
"Itβs not that you canβt, in principle, survive creating something much smarter than you; itβs that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers."
People who found the argument confusing shared their thoughts, and some slammed Musk for adding his signature to a demand to pause AI development.
One reply to @mrgreen read: "It's interesting how a lot of the signatories have competitive AI business (ahem Tesla) who would love to see OpenAI and other leaders pause to give them time to 'catch up.'"
A reply to @Reuters said: "The guy who owns a company working to implant computer chips in people's brains wants us to pump the brakes on conversational AI? Sounds good!"
Another reply to @Reuters read: "I admit I don't have a clue what they are talking about. I can guess we are going to be in for a massive shift in our perceptions at the very least. I myself never thought I would have lived long enough to experience it. Having positive thoughts."
Another user replied to @mrgreen: "Change is always uncomfortable and uncertain. If few masterminds are putting all their force to stop AI. The other half are forcing it to get bigger."
The AI expert asked readers to dispense with the preconceived notion of a hostile superhuman AI as something that merely lives on the internet and sends "ill-intentioned emails."
"Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computersβin a world of creatures that are, from its perspective, very stupid and very slow."
He maintained that such a superintelligent entity "won't stay confined to computers for long."
"In todayβs world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing."
"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter."
A reply to @lexfridman and @ESYudkowsky read: "When listening I was wondering about how much we understand about human consciousness. The neural and chemical pathways and how they work but why. Just wondering would we need to understand how or why an AI became sentient to know that it did?"
Another reply said: "I feel that the fact that I can only definitively confirm the existence of my own consciousness makes proving AI sentience impossible. But Skynet is almost an absolution regardless."
A third reply read: "All these threats (AI, advanced Aliens, another dangerous guides-'avatars') will lead us to ultimate necessity to become smart, much more smart then now."
Yudkowsky stressed that the work of making rapidly advancing AI models safe should have begun 30 years ago and cannot be completed in a six-month window.
"It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach todayβs capabilities."
"Solving safety of superhuman intelligenceβnot perfect safety, safety in the sense of 'not killing literally everyone'βcould very reasonably take at least half that long."
Achieving this, he implied, allows no room for error.
"The thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead."
"Humanity does not learn from the mistake and dust itself off and try again, as in other challenges weβve overcome in our history, because we are all gone."
A reply to @lexfridman and @ESYudkowsky read: "Gotta be honest, although Eliezer has some very good points I do believe some of his beliefs (especially those pertaining to AGI) are not based on any actual fact and promote extreme sensationalism for no good reason."
A reply to @JeffLadish read: "End of humanity is more important but I'd note there's another issue: Who can be 'allowed' to own AI technology? Is it a fundamental part of science- like say electromagnetic radiation, a fundamental discovery used for all proprietary tech, OR- is it a thing owned by one company…"
Yudkowsky proposed that a moratorium on the development of powerful AI systems should be "indefinite and worldwide" with no exceptions, including for governments or militaries.
"Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs."
"Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms."
Skeptics of Yudkowsky's proposal added their two cents.
One reply to @JeffLadish read: "Global ban on AGI development that anyone would actually adhere to is as much likely as a global ban on CO2 emissions. Sad truth."
Another said: "If we can get China and others on board too, for sure yes."
A reply to @mrgreen added: "Sadly there is no way to turn it back into the box 😢"
Another reply to @JeffLadish read: "'Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.' Putin is *not* slowing down"
One user replied: "I don't think there's any reason to have an indefinite ban on AGI, but certainly a ban until we can figure out what the hell is going on"
A reply to @mrgreen said: "might as well ask Earth to pause spinning. AI companies wouldn't and shouldn't voluntarily stop doing the most important work of their lives. government regulation won't catch up in time. other markets won't abide by this petition. do you want an AGI developed by the CCP? because…"
Another user wrote: "I don't like the idea because there is no way to enforce it. Foreign and domestic entities and bad actors will ignore it. It will be framed in a different way to allow it anyway. Just like gain of function. We will be tooling ourselves."
You can watch a discussion with Yudkowsky about AI and the end of humanity featured on Bankless Podcast below.
159 - We're All Gonna Die with Eliezer Yudkowsky (youtu.be)