Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Help me find the crux between EA/XR and Progress Studies, published by jasoncrawford on the Effective Altruism Forum.
I'm trying to get to the crux of the differences between the progress studies (PS) and the EA / existential risk (XR) communities. I'd love input from you all on my questions below.
The road trip metaphor
Let me set up a metaphor to frame the issue:
Picture all of humanity in a car, traveling down the highway of progress. Both PS and EA/XR agree that the trip is good, and that as long as we don't crash, faster would be better. But:
XR thinks that the car is out of control and that we need a better grip on the steering wheel. We should not accelerate until we can steer better, and maybe we should even slow down in order to avoid crashing.
PS thinks we're already slowing down, and so wants to put significant attention into re-accelerating. Sure, we probably need better steering too, but that's secondary.
(See also @Max_Daniel's recent post)
My questions
Here are some things I don't really understand about the XR position (granted that I haven't read the literature on it extensively yet, but I have read a number of the foundational papers).
(Edit for clarity: these questions are not proposed as cruxes. They are just questions I am unclear on, related to my attempt to find the crux)
How does XR weigh costs and benefits?
Is there any cost too high to pay for any level of XR reduction? Is XR willing to significantly increase global catastrophic risk—one notch down from XR in Bostrom's hierarchy—in order to decrease XR? I do get that impression. XR folks seem to talk about any catastrophe short of full human extinction as, well, not that big a deal.
For instance, suppose that if we accelerate progress, we can end poverty (by whatever standard) one century earlier than otherwise. In that case, failing to do so, in itself, should be considered a global catastrophic risk, or close to it. If you're willing to accept GCR in order to slightly reduce XR, then OK—but it feels to me that you've fallen for a Pascal's Mugging.
Eliezer has specifically said that he doesn't accept Pascal's Mugging arguments in the x-risk context, and Holden Karnofsky has indicated the same. The only counterarguments I've seen conclude “so AI safety (or other specific x-risk) is still a worthy cause”—which I'm fine with. I don't see how you get to “so we shouldn't try to speed up technological progress.”
Does XR consider tech progress default-good or default-bad?
My take is that tech progress is default good, but we should be watchful for bad consequences and address specific risks. I think it makes sense to pursue specific projects that might increase AI safety, gene safety, etc. I even think there are times when it makes sense to put a short-term moratorium on progress in an area in order to work out some safety issues—this has been done once or twice already in gene safety.
When I talk to XR folks, I sometimes get the impression that they want to flip it around, and consider all tech progress to be bad unless we can make an XR-based case that it should go forward. That takes me back to point (1).
What would moral/social progress actually look like?
There's an idea that it's more important to make progress in non-tech areas: epistemics, morality, coordination, insight, governance, whatever. I actually sort of agree with that, but I'm not at all sure that what I have in mind there corresponds to what EA/XR folks are thinking. Maybe this has been written up somewhere, and I haven't found it yet?
Without understanding this, it comes across as if tech progress is on indefinite hold until we somehow become better people and thus have sufficiently reduced XR—although it's unclear how we could ever reduce it enough, because of (1).
What does XR think about the large numbers of people who do...