Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: S-Risks: Fates Worse Than Extinction, published by A.G.G. Liu on May 4, 2024 on The Effective Altruism Forum.
Cross-posted from LessWrong
In this Rational Animations video, we discuss s-risks (risks from astronomical suffering), which involve an astronomical number of beings suffering terribly. Researchers on this topic argue that s-risks have a significant chance of occurring and that there are ways to lower that chance.
The script for this video was a winning submission to the Rational Animations Script Writing contest (https://forum.effectivealtruism.org/posts/p8aMnG67pzYWxFj5r/rational-animations-script-writing-contest). The first author of this post, Allen Liu, was the primary script writer with the second author (Writer) and other members of the Rational Animations writing team giving significant feedback. Outside reviewers, including authors of several of the cited sources, provided input as well.
Production credits are at the end of the video. You can find the script of the video below.
Is there anything worse than humanity being driven extinct? When considering the long-term future, we often come across the concept of "existential risks" or "x-risks": dangers that could effectively end humanity's future with all its potential. But these are not the worst possible dangers we could face. Risks of astronomical suffering, or "s-risks", involve outcomes even worse than extinction, such as the creation of an incredibly large number of beings suffering terribly.
Some researchers argue that taking action today to avoid these most extreme dangers may turn out to be crucial for the future of the universe.
Before we dive into s-risks, let's make sure we understand risks in general. As Swedish philosopher Nick Bostrom explains in his 2013 paper "Existential Risk Prevention as Global Priority",[1] one way to categorize risks is by their "scope" and their "severity". A risk's "scope" refers to how large a population the risk affects, while its "severity" refers to how badly that population is affected.
To use Bostrom's examples, a car crash may be fatal to the victim and devastating to their friends and family, yet go unnoticed by most of the world. The scope of the car crash is therefore small, though its severity is high for those few people. Conversely, some tragedies could have a wide scope but be comparatively less severe.
If a famous painting were destroyed in a fire, it could negatively affect millions or billions of people in the present and future who would have wanted to see that painting in person, but the impact on those people's lives would be much smaller.
In his paper, Bostrom analyzes risks that have both a wide scope and an extreme severity, including so-called "existential risks" or "x-risks". Human extinction would be such a risk: it would affect the lives of everyone who would have otherwise existed from that point on, and it would forever prevent all the joy, value, and fulfillment they ever could have produced or experienced.
Other such risks might include humanity's scientific and moral progress permanently stalling or reversing, or humanity squandering some resource that could have helped us immensely in the future.
S-risk researchers take Bostrom's categories a step further. If x-risks are catastrophic because they affect everyone who would otherwise exist and prevent all their value from being realized, then an even more harmful type of risk would be one that affects more beings than would otherwise exist and that makes their lives worse than non-existence. In other words, it would be a risk with an even broader scope and an even higher severity than a typical existential risk: a fate worse than extinction.
David Althaus and Lukas Gloor, in their 2016 article "Reducing Risks of Astronomical Suffering: A Neglected Priority"...