Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Notes on Dwarkesh Patel's Podcast with Demis Hassabis, published by Zvi on March 2, 2024 on LessWrong.
Demis Hassabis was interviewed twice this past week.
First, he was interviewed on Hard Fork. Then he had a much more interesting interview with Dwarkesh Patel.
This post covers my notes from both interviews, mostly the one with Dwarkesh.
Hard Fork
Hard Fork was less fruitful, because they mostly asked what for me are the wrong questions and mostly got answers I presume Demis has given many times. So I only noticed two things, neither of which is ultimately surprising.
They do ask about The Gemini Incident, although only about the particular issue with image generation. Demis gives the generic 'it should do what the user wants and this was dumb' answer, which I buy he likely personally believes.
When asked about p(doom) he expresses dismay about the state of discourse and says around 42:00 that 'well Geoffrey Hinton and Yann LeCun disagree so that indicates we don't know, this technology is so transformative that it is unknown. It is nonsense to put a probability on it.
What I do know is it is non-zero, that risk, and it is worth debating and researching carefully… we don't want to wait until the eve of AGI happening.' He says we want to be prepared even if the risk is relatively small, without saying what would count as small. He also says he hopes in five years to give us a better answer, which is evidence against him having super short timelines.
I do not think this is the right way to handle probabilities in your own head. I do think it is plausibly a smart way to handle public relations around probabilities, given how people react when you give a particular p(doom).
I am of course deeply disappointed that Demis does not think he can differentiate between the arguments of Geoffrey Hinton and Yann LeCun, or weigh the accomplishments, and thus implied credibility, of the people making them. He did not get where he is, or win Diplomacy championships, thinking like that. I also don't think he was being fully genuine here.
Otherwise, this seemed like an inessential interview. Demis did well but was not given new challenges to handle.
Dwarkesh Patel
Demis Hassabis also talked to Dwarkesh Patel, which is of course self-recommending. Here you want to pay attention, and I paused to think things over and take detailed notes. Five minutes in I had already learned more interesting things than I did from the entire Hard Fork interview.
Here is the transcript, which is also helpful.
(1:00) Dwarkesh first asks Demis about the nature of intelligence, whether it is one broad thing or the sum of many small things. Demis says there must be some common themes and underlying mechanisms, although there are also specialized parts. I strongly agree with Demis. I do not think you can understand intelligence, of any form, without some form of the concept of G.
(1:45) Dwarkesh follows up by asking why, then, doesn't lots of data in one domain generalize to other domains? Demis says often it does, such as coding improving reasoning (which also happens in humans), and he expects more such transfer.
(4:00) Dwarkesh asks what insights neuroscience brings to AI. Demis points to many early AI concepts. Going forward, questions include how brains form world models or memory.
(6:00) Demis thinks scaffolding via tree search or AlphaZero-style approaches for LLMs is super promising. He notes they're working hard on search efficiency in many of their approaches so they can search further.
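To make the idea concrete: scaffolding of this kind wraps a generator in an outer search loop that expands promising candidates and keeps the best one found, with search efficiency determining how deep you can go on a fixed budget. Here is a minimal best-first sketch of that pattern; the function names and the toy digit-string task are my own illustration, not anything from the interview, and a real system would use an LLM to expand states and a learned value model to score them.

```python
import heapq

def tree_search(root, expand, score, budget=20):
    """Best-first search sketch: repeatedly expand the highest-scoring state.

    expand(state) -> list of child states (stand-in for sampling continuations)
    score(state)  -> higher is better (stand-in for a learned value function)
    budget        -> number of expansions allowed (the "search efficiency" knob)
    """
    counter = 0  # tie-breaker so the heap never compares states directly
    frontier = [(-score(root), counter, root)]  # max-heap via negated scores
    best = root
    while frontier and budget > 0:
        neg_score, _, state = heapq.heappop(frontier)
        budget -= 1
        if -neg_score > score(best):
            best = state
        for child in expand(state):
            counter += 1
            heapq.heappush(frontier, (-score(child), counter, child))
    return best

# Toy usage: build a 3-digit string close to 999 by appending digits.
expand = lambda s: [s + d for d in "0123456789"] if len(s) < 3 else []
score = lambda s: -abs(int(s) - 999) if s else -999
print(tree_search("", expand, score, budget=200))  # → 999
```

The more efficiently each expansion is scored and pruned, the further the same budget reaches, which is presumably why DeepMind cares so much about search efficiency here.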
(9:00) Dwarkesh notes that Go and Chess have clear win conditions, real life does not, asks what to do about this. Demis agrees this is a challenge, but that usually 'in scientific problems' there are ways to specify goals. Suspicious dodge?
(10:00) Dwarkesh notes humans are super sample efficient, Demis says it ...