Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: EA residencies as an outreach activity, published by Buck on the AI Alignment Forum.
[This was partially inspired by some ideas of Claire Zabel's. Thanks to Jessica McCurdy, Neel Nanda, Kuhan Jeyapragasan, Rebecca Baron, Joshua Monrad, Claire Zabel, and the people who came on my Slate Star Codex roadtrip for helpful comments.]
A few months ago, some EAs and I went on a trip to the East Coast to attend a bunch of Slate Star Codex meetups. I'm going to quote that post in its entirety here (with a couple of edits):
Our goals are:
- to meet promising people at the SSC meetups and move them into the EA recruiting pipeline
- to spend some time with promising new EAs, eg those at student groups, in the hope that a few hours of focused one-on-one time with one of us will help get them more into EA. For example, I think 80K finds people who are excited about AI safety but aren't very knowledgeable about it yet; those people can maybe get a lot out of a few hours' conversation with a few people who have worked on this stuff professionally.
- to visit EAs who are "in holding", doing things like PhDs or earning-to-give tech jobs, with possible good outcomes being that they'll be more fired up about EA and more likely to do really impactful EA work on a timescale of a year or so, or that their improved connections (to the Bay Area and to professional EA) will make it easier for them to spot good opportunities or move into doing more impactful work.
- (less primary) to talk to hardcore EAs and swap arguments and get to know each other better
Here's why I think it's worth us talking to various promising new EAs and enthusiastic EAs who haven't worked in the EA scene full time:
- There are a lot of accumulated arguments about EA topics which I think it’s really helpful to think about but which are hard to access when you only know EAs on the internet, because those arguments haven't been written up clearly or at all, or because their writeups are hard to find and rely on background knowledge that you don't know how to acquire.
- A lot of the time, EAs present versions of arguments that are strong enough to convince you to tentatively think it's worth engaging seriously with the possibility that the conclusion is true, but which have a bunch of holes that require substantial thinking to fill. Sometimes EAs (eg me) make the mistake of conflating these two levels of argument strength, and act as if people should be persuaded by the initial sketch. One way I notice when I'm making this mistake is by getting into arguments with people who've thought about the topic more than I have. I hope that talking to more knowledgeable EAs might help some of the EAs we hang out with spot holes in their understanding, which could improve both their knowledge and their epistemics.
Here is a reason I think having SF Bay Area EAs talk to rationalists in these cities at SSC meetups is plausibly worthwhile:
When smart people are skeptical of some of my weird beliefs, eg that AI x-risk is really important, or that they should consider working on EA stuff, or that long term we should consider radically restructuring the world to make it better for animals, a lot of the time their disagreement stems from something true about the world that the arguments they've seen didn't address. This is hard to avoid because if you try to write an argument that addresses all the potential concerns, it will be incredibly long. But this makes me think that it's often really high impact for people who have thought a lot about these arguments to talk to people who have heard of them but felt very unpersuaded.
My predictions mostly matched my impressions of what happened.
But I think you might be able to get many of these benefits more efficiently by doing something more like a residency, where you spend a relatively long time in each city.