Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI: Fallout, published by Zvi on May 28, 2024 on LessWrong.
Previously: OpenAI: Exodus (contains links at top to earlier episodes), Do Not Mess With Scarlett Johansson
We have learned more since last week. It's worse than we knew.
How much worse? In which ways? With what exceptions?
That's what this post is about.
The Story So Far
For years, employees who left OpenAI were explicitly threatened with confiscation of their vested equity and the inability to sell it, and were given short timelines to sign documents or else. Those documents contained highly aggressive NDA and non-disparagement (and non-interference) clauses, with the NDA itself preventing anyone from revealing that these clauses existed.
No one knew about this until recently, because until Daniel Kokotajlo everyone signed, and then they could not talk about it. Then Daniel refused to sign, Kelsey Piper started reporting, and a lot came out.
Here is Altman's statement from May 18, with its new community note.
Evidence strongly suggests the above post was, shall we say, 'not consistently candid.'
The linked article includes a document dump and other revelations, which I cover.
Then there are the other recent matters.
Ilya Sutskever and Jan Leike, the top two safety researchers at OpenAI, resigned, part of an ongoing pattern of top safety researchers leaving OpenAI. The team they led, Superalignment, had been publicly promised 20% of secured compute going forward, but that commitment was not honored. Jan Leike expressed concerns that OpenAI was not on track to be ready for the safety needs of even the next generation of models.
OpenAI created the Sky voice for GPT-4o, which evoked consistent reactions that it sounded like Scarlett Johansson, who voiced the AI in the movie Her, Altman's favorite movie. Altman asked her twice to lend her voice to ChatGPT. Altman tweeted 'her.' Half the articles about GPT-4o mentioned Her as a model. OpenAI executives continue to claim that this was all a coincidence, but have taken down the Sky voice.
(Also six months ago the board tried to fire Sam Altman and failed, and all that.)
A Note on Documents from OpenAI
The source for the documents from OpenAI that are discussed here, and the communications between OpenAI and its employees and ex-employees, is Kelsey Piper in Vox, unless otherwise stated.
She went above and beyond, and shares screenshots of the documents. For superior readability and searchability, I have converted those images to text.
Some Good News, But There Is a Catch
OpenAI has indeed made a large positive step. They say they are releasing former employees from their nondisparagement agreements and promising not to cancel vested equity under any circumstances.
Kelsey Piper: There are some positive signs that change is happening at OpenAI. The company told me, "We are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations."
Bloomberg confirms that OpenAI has promised not to cancel vested equity under any circumstances, and to release all employees from one-directional non-disparagement agreements.
And we have this confirmation from Andrew Carr.
Andrew Carr: I guess that settles that.
Tanner Lund: Is this legally binding?
Andrew Carr: I notice they are also including the non-solicitation provisions as not enforced.
(Note that certain key people, like Dario Amodei, plausibly negotiated two-way agreements, which would mean theirs would still apply. I would encourage anyone in that category who is now free of the clause, even if they have no desire to disparage OpenAI, to simply say 'I am under no legal obligation not to disparage OpenAI.')
These actions by OpenAI are helpful. They are necessary.
They are no...