Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Robin Hanson & Liron Shapira Debate AI X-Risk, published by Liron on July 10, 2024 on LessWrong.
Robin and I just had an interesting 2-hour AI doom debate. We picked up where the Hanson-Yudkowsky Foom Debate left off in 2008, revisiting key arguments in the light of recent AI advances.
My position is similar to Eliezer's: P(doom) on the order of 50%.
Robin's position remains shockingly different: P(doom) < 1%.
I think we managed to illuminate some of our cruxes of disagreement, though by no means all. Let us know your thoughts and feedback!
Topics
AI timelines
The "outside view" of economic growth trends
Future economic doubling times
The role of culture in human intelligence
Lessons from human evolution and brain size
Intelligence increase gradient near human level
Bostrom's Vulnerable World Hypothesis
The optimization-power view
Feasibility of AI alignment
Will AI be "above the law" relative to humans?
Where To Watch/Listen/Read
YouTube video
Podcast audio
Transcript
About Doom Debates
My podcast, Doom Debates, hosts high-quality debates between people who don't see eye-to-eye on the urgent issue of AI extinction risk.
All kinds of guests are welcome, from luminaries to curious randos. If you're interested in being part of an episode, DM me here or contact me via Twitter or email.
If you're interested in the content, please subscribe and share it to help grow its reach.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org