Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Dwarkesh's Podcast with Leopold Aschenbrenner, published by Zvi on June 10, 2024 on LessWrong.
Previously: Quotes from Leopold Aschenbrenner's Situational Awareness Paper
Dwarkesh Patel talked to Leopold Aschenbrenner for about four and a half hours.
The central discussion covered the theses of his paper, Situational Awareness, which I offered quotes from earlier, with a focus on the consequences of AGI rather than on whether AGI will happen soon. They also cover a variety of other topics.
Thus, for the relevant sections of the podcast I am approaching this by roughly accepting the technological premise on capabilities and timelines, since they don't discuss that. So the background is that we presume straight lines on graphs will hold to get us to AGI and ASI (superintelligence), and that this will allow us to generate a 'drop-in AI researcher' that can then assist with further work. Then things go into a 'slow' takeoff.
I am changing the order of the sections a bit, putting the pure AI stuff first and most of the rest afterwards.
The exception is the section on What Happened at OpenAI.
I am leaving that part out because I see it as distinct and requiring a different approach. It is important and I will absolutely cover it. I want to do that in its proper context, together with other events at OpenAI, rather than together with the global questions raised here. Also, if you find OpenAI events relevant to your interests, that section is worth listening to in full, because it is absolutely wild.
Long post is already long, so I will let this stand on its own and not combine it with people's reactions to Leopold or my more structured response to his paper.
I have strong disagreements with Leopold, only some of which I detail here. In particular, I believe he is dangerously wrong and overly optimistic about alignment, existential risks and loss of control, in ways that are highly load bearing and could cause sign errors in interventions. I also worry that the new AGI fund may make our situation worse rather than better. But most of all, I want to say: Thank you.
Leopold has shown great courage. He stands up for what he believes in even at great personal cost. He has been willing to express views very different from those around him, when everything around him was trying to get him not to do that. He has thought long and hard about issues very hard to think long and hard about, and is obviously wicked smart. By writing down, in great detail, what he actually believes, he allows us to compare notes and arguments, and to move forward. This is The Way.
I have often said I need better critics. This is a better critic. A worthy opponent.
Also, on a great many things he is right, including many highly important things where both the world at large and those at the labs are deeply wrong, often things where Leopold's position was not even being considered before. That is a huge deal.
The plan is to then do a third post, where I will respond holistically to Leopold's model, and cover the reactions of others.
Reminder on formatting for Podcast posts:
1. Unindented first-level items are descriptions of what was said and claimed on the podcast unless explicitly labeled otherwise.
2. Indented second-level items and beyond are my own commentary on that, unless labeled otherwise.
3. Time stamps are from YouTube.
The Trillion Dollar Cluster
1. (2:00) We start with the trillion-dollar cluster. It's coming. Straight lines on a graph at half an order of magnitude a year, a central theme throughout.
2. (4:30) Power. We'll need more. American power generation has not grown for decades. Who can build a 10 gigawatt data center, let alone a 100 gigawatt one? Leopold thinks 10 gigawatts was so six months ago and we're on to 100. The trillion-dollar cluster is a bit farther out. (Quick arithmetic on these growth rates follows after this list.)
3. (6:15) Distinction between cost of cluster versus rental...
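As a quick back-of-the-envelope illustration, mine rather than anything worked through on the podcast, of what half an order of magnitude per year compounds to:

$$10^{0.5} \approx 3.16\times \text{ per year}, \qquad \left(10^{0.5}\right)^{4} = 10^{2} = 100\times \text{ over four years.}$$

On that straight-line extrapolation, two orders of magnitude, roughly the jump from a $10 billion cluster to a trillion-dollar one, takes about four years, and going from a 10 gigawatt data center to a 100 gigawatt one takes about two.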