Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI: Leaks Confirm the Story, published by Zvi on December 12, 2023 on LessWrong.
Previously: OpenAI: Altman Returns, OpenAI: The Battle of the Board, OpenAI: Facts from a Weekend, additional coverage in AI#41.
We have new stories from The New York Times, from Time, from the Washington Post and from Business Insider.
All paint a picture consistent with the central story told in OpenAI: The Battle of the Board. They confirm key facts, especially Altman's attempted removal of Toner from the board via deception. We also confirm that Altman promised to help with the transition when he was first fired, so we have at least one very clear-cut case of Altman saying that which was not.
Much uncertainty remains, especially about the future, but past events are increasingly clear.
The stories also provide additional color and key details. This post is for those who want that, and to figure out what to think in light of the new details.
The most important new details are that NYT says the board proposed and was gung ho on Bret Taylor, and that D'Angelo suggested Summers and, together with Altman, grilled Summers before they both agreed to him as the third board member. And that the new board is remaining quiet while it investigates, echoing the old board, and in defiance of the Altman camp and its wish to quickly clear his name.
The New York Times Covers Events
The New York Times finally gives its take on what happened, by Tripp Mickle, Mike Isaac, Karen Weise and the infamous Cade Metz (so treat all claims accordingly).
As with other mainstream news stories, the framing is that Sam Altman won, and this shows the tech elite and big money are ultimately in charge. I do not see that as an accurate description of what happened or its implications, yet both the tech elite and its media opponents want it to be true and are trying to make it true through the magician's trick of saying that it is true, because often power resides where people believe it resides.
I know that at least one author did read my explanations of events, and also I talked to a Times reporter not on the byline to help make everything clear, so they don't have the excuse that no one told them. Didn't ultimately matter.
Paul Graham is quoted as saying Altman is drawn to power more than money, as an explanation for why Altman would work on something that does not make him richer. I believe Graham on this, but also I think there are at least three damn good other reasons to do it, making the decision overdetermined.
If Altman wants to improve his own lived experience and those of his friends and loved ones, building safe AGI, or ensuring no one else builds unsafe AGI, is the most important thing for him to do. Altman already has all the money he will ever need for personal purposes; more would not much improve his life. His only option is to instead enrich the world, and ensure humanity flourishes and also doesn't die.
Indeed, notice the rest of his portfolio includes a lot of things like fusion power and transformational medical progress. Even if Altman only cares about himself, these are the things that make his life better - by making everyone's life better.
Power and fame and prestige beget money. Altman does not have relevant amounts of equity in OpenAI, but he has used his position to raise money, to get good deal flow, and in general to be where the money resides. If Altman decided what he cared about was cash, he could easily turn this into cash. To be clear, I do not at all begrudge him this in general. I am merely not a fan of some particular projects, like 'build a chip factory in the UAE.'
AGI is the sweetest, most interesting, most exciting challenge in the world. Also the most important. If you thought your contribution would increase the chance things went well, why would you want to be working on anything ...