Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI: Helen Toner Speaks, published by Zvi on May 30, 2024 on LessWrong.
Helen Toner went on the TED AI podcast, giving us more color on what happened at OpenAI. These are important claims to get right.
I will start with my notes on the podcast, including the second part where she speaks about regulation in general. Then I will discuss some implications more broadly.
Notes on Helen Toner's TED AI Show Podcast
This seems like it deserves the standard detailed podcast treatment. By default, each note's main body is description; any second-level notes are my commentary.
1. (0:00) Introduction. The host talks about OpenAI's transition from non-profit research organization to de facto for-profit company. He highlights the transition from 'open' AI to closed as indicative of the problem, whereas I see this as the biggest thing they got right.
He also notes that he was left with the (I would add largely deliberately created and amplified by enemy action) impression that Helen Toner was some kind of anti-tech crusader, whereas he now understands that this was about governance and misaligned incentives.
2. (5:00) Interview begins and he dives right in and asks about the firing of Altman. She dives right in, explaining that OpenAI was a weird company with a weird structure, and a non-profit board supposed to keep the company on mission over profits.
3. (5:20) Helen says for years Altman had made the board's job difficult via withholding information, misrepresenting things happening at the company, and 'in some cases outright lying to the board.'
4. (5:45) Helen says she can't share all the examples of lying or withholding information, but to give a sense: the board was not informed about ChatGPT in advance and learned about it on Twitter; Altman failed to inform the board that he owned the OpenAI startup fund despite claiming to be an independent board member; he gave false information about the company's formal safety processes on multiple occasions; and, in the wake of her research paper, Altman started lying to other board members in order to push Toner off the board.
1. I will say it again. If the accusation about Altman lying to the board in order to change the composition of the board is true, then in my view the board absolutely needed to fire Altman. Period. End of story. You have one job.
2. As a contrasting view, the LLMs I consulted thought that firing the CEO should be considered, but it was plausible this could be dealt with via a reprimand combined with changes in company policy.
3. I asked for clarification given the way it was worded in the podcast, and can confirm that Altman withheld information from the board regarding the startup fund and the launch of ChatGPT, but he did not lie about those.
4. Repeatedly outright lying about safety practices seems like a very big deal?
5. It sure sounds like Altman had a financial interest in OpenAI via the startup fund, which means he was not an independent board member, and that the company's board was not majority independent despite OpenAI claiming that it was. That is… not good, even if the rest of the board knew.
5. (7:25) Toner says that for any given incident Altman could offer an explanation, but the cumulative weight meant they could not trust him. And they had been considering firing Altman for over a month.
1. If they were discussing firing Altman for at least a month, that raises questions about why they weren't better prepared, or why they timed the firing so poorly given the tender offer.
6. (8:00) Toner says that Altman was the board's main conduit of information about the company. They had been trying to improve processes going into the fall, these issues had been long standing.
7. (8:40) Then in October two executives went to the board and said they couldn't trust Altman, that the atmospher...