Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: RAISE post-mortem, published by toonalfrink on LessWrong.
Edit November 2021: there is now the Cambridge AGI Safety Fundamentals course, which promises to be successful. It is enlightening to compare that project with RAISE. Why is it succeeding where this one did not? I'm quite surprised to find that the answer isn't so much about more funding, more senior people to execute it, more time, etc. They're simply using existing materials instead of creating their own. This makes it orders of magnitude easier to produce the thing: you can just focus on delivery. Why didn't I, or anyone around me, think of this? I'm honestly perplexed. It's worth thinking about.
Since June, RAISE has stopped operating. I’ve taken some time to process things, and now I’m wrapping up.
What was RAISE again
AI Safety is starved for talent. I saw a lot of smart people around me who wanted to do the research. Their bottleneck seemed to be finding good education (and hero licensing). The plan was to alleviate that need by creating an online course about AI Safety (with nice diplomas).
How did it go
We spent a total of ~2 years building the platform. It started out as a project based on volunteers creating the content. Initially, many people (more than 80) signed up to volunteer, but we did not manage to get most of them to show up consistently. We gradually pivoted to paying people instead.
We received a lot of encouragement for the project. Most of the enthusiasm came from people wanting to learn AI Safety. Robert Miles joined as a lecturer. When we reached out to some AI Safety researchers for suggestions on which topics to cover, we readily received helpful advice. We also received some funding from a couple of prominent AIS organizations who thought the project could be high value, at least in expectation.
The stream of funding was large enough to sustain about 1 FTE working for a relatively low wage. Obtaining it was a struggle: our runway was never longer than 2 months. This created a large attention sink that made it much harder to create things. Nearly all of my time was spent on overhead while others were creating the content; I did not have the time to review much of it.
About 1 year into the project, we escaped this poverty trap by moving to the EA Hotel and starting a content development team there. We went up to about 4 FTE, and the production rate shot up, leading to an MVP relatively quickly.
How did it end
Before launch, the best way to secure funding seemed to be to just create the damn thing, make sure it’s good, and let it advocate for itself. After launch, a negative signal could not be dismissed as easily.
We got two clear negative signals: one from a major AIS research org (that has requested not to be named), and one from the LTF fund. The former declined to continue their experimental funding of RAISE. The latter declined a grant request. These were clear signals that people in the establishment of AI Safety did not deem the project worth funding, so I reached out for a conversation.
The question was this: “what version of RAISE would you fund?” The answer was roughly that while they agreed strongly with the vision for RAISE, our core product sadly wasn’t coming together in a way that suggested it would be worth it for us to keep working on it. I was tentatively offered a personal grant if I spent it on taking a step back to think hard and figure out what AI Safety needs (I ended up declining for career-strategic reasons).
In another conversation, an insider told us that AI Safety needs to grow in quality more than quantity. There is already a lot of low-quality research. We need AI Safety to be held to high standards. Lowering the bar for a research-level understanding will not solve that.
I decided to quit. I was out of runway, updated towards RAI...