Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Lex Fridman's Second Podcast with Altman, published by Zvi on March 25, 2024 on LessWrong.
Last week Sam Altman spent two hours with Lex Fridman (transcript). Given how important it is to understand where Altman's head is at and learn what he knows, this seemed like another clear case where extensive notes were in order.
Lex Fridman overperformed, asking harder questions than I expected and going deeper than I expected, and succeeded in getting Altman to give a lot of what I believe were genuine answers. The task is 'get the best interviews you can while still getting interviews' and this could be close to the production possibilities frontier given Lex's skill set.
There was not one big thing that stands out given what we already have heard from Altman before. It was more the sum of little things, the opportunity to get a sense of Altman and where his head is at, or at least where he is presenting it as being. To watch him struggle to be as genuine as possible given the circumstances.
One thing that did stand out to me was his characterization of 'theatrical risk' as a tactic to dismiss potential loss of human control. I do think that we are underinvesting in preventing loss-of-control scenarios around competitive dynamics that lack bad actors and are far less theatrical than those typically focused on, but the overall characterization here seems like a strategically hostile approach. I am sad about that, whereas I was mostly happy with the rest of the interview.
I will follow my usual format for podcasts of a numbered list, each with a timestamp.
(01:13) They open with the Battle of the Board. Altman starts with how he felt rather than any details, and drops this nugget: "And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety." If he truly believed that, why did he not go down a different road? If Altman had come out strongly for a transition to Murati and for searching for a new outside CEO, that presumably would have been fine for AI safety.
So this then is a confession that he was willing to put that into play to keep power.
(2:45) He notes he expected something crazy at some point and it made them more resilient. Yes from his perspective, but potentially very much the opposite from other perspectives.
(3:00) And he says 'the road to AGI should be a giant power struggle… not should… I expect that to be the case.' Seems right.
(4:15) He says he was feeling really down and out of it after the whole thing was over. That certainly is not the picture others were painting, given he had his job back. This suggests that he did not see this outcome as such a win at the time.
(5:15) Altman learned a lot about what you need from a board, and says 'his company nearly got destroyed.' Again, his choice. What do you think he now thinks he needs from the board?
(6:15) He says he thinks the board members were well-meaning people 'on the whole' and under stress and time pressure people make suboptimal decisions, and everyone needs to operate under pressure.
(7:15) He notes that corporate boards are supposed to be powerful but are answerable to shareholders, whereas non-profit boards answer to no one. Very much so. This seems like a key fact about non-profits and a fundamentally unsolved problem. The buck has to stop somewhere. Sam says he would like the board to 'answer to the world as a whole,' insofar as that is practical. So, WorldCoin elections? I would not recommend it.
(8:00) What was wrong with the old board? Altman says insufficient size and experience. For new board members, the criteria are now more carefully considered, including different expertise on a variety of fronts, and different perspectives on how AI will impact society and help people. He says track record is a big deal for board members, much more than for other positions, ...