Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Some global catastrophic risk estimates, published by Tamay on the effective altruism forum.
In October of 2018, I developed a question series on Metaculus related to extinction events spanning risks from nuclear war, bio-risk, risks from climate change and geo-engineering, Artificial Intelligence risk, and risks from nanotechnology failure modes. Since then, these questions have accrued nearly 2,000 predictions.
A catastrophe was defined as a reduction in the human population of at least 10% in any period of 5 years or less. (Near) extinction was defined as an event that reduces the human population by at least 10% within 5 years, and by at least 95% within 25 years.
Here's a summary of the results as they stand today.
Global catastrophic risk                  Chance of catastrophe by 2100   Chance of (near) extinction by 2100
Nuclear war                               4.18%                           0.29%
Biotechnology or bioengineered pathogens  4.18%                           0.17%
Artificial Intelligence failure modes     3.99%                           1.88%
Climate change or geo-engineering         1.71%                           0.02%
Nanotechnology failure modes              0.57%                           n/a
These predictions are generated by aggregating forecasters' individual predictions based on their track records. Specifically, the predictions are weighted by a function of the forecasters' level of 'skill', where 'skill' is estimated with data on relative performance on a number (typically many hundreds) of resolved forecasts.
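Metaculus's exact weighting function is not described in the post, so the following is only an illustrative sketch of skill-weighted aggregation: the function name, the skill scores, and the use of a simple weighted mean are all assumptions, not the platform's actual formula.

```python
def aggregate_forecasts(predictions, skills):
    """Aggregate probability forecasts, weighting each forecaster by 'skill'.

    predictions: list of probabilities in [0, 1]
    skills: list of non-negative skill scores (hypothetical units,
            standing in for estimated track-record performance)
    """
    total_weight = sum(skills)
    if total_weight == 0:
        raise ValueError("at least one forecaster must have positive skill")
    return sum(p * w for p, w in zip(predictions, skills)) / total_weight

# Three forecasters predict the same event; the more skilled forecasters
# pull the aggregate toward their own estimates.
probs = [0.03, 0.05, 0.10]
skills = [2.0, 1.5, 0.5]
print(aggregate_forecasts(probs, skills))  # 0.04625
```

Note how the result sits closer to the high-skill forecasters' 3–5% estimates than a plain average (6%) would.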
If we assume that these events are independent, the predictions suggest that there's at least a 13.85% chance of catastrophe, and a 2.34% chance of (near) extinction by the end of the century. Admittedly, independence is likely to be an inappropriate assumption, since, for example, some catastrophes could exacerbate other global catastrophic risks. Moreover, the risks might be higher than these numbers suggest, given that there are other sources of global catastrophic risk besides the ones in the list.
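The combined figures follow from the independence assumption: the chance that at least one event occurs is one minus the product of the chances that each does not. A minimal check using the rounded values from the table:

```python
# Per-risk probabilities, taken from the table above (rounded to 2 decimals).
catastrophe = [0.0418, 0.0418, 0.0399, 0.0171, 0.0057]
extinction = [0.0029, 0.0017, 0.0188, 0.0002]  # nanotechnology: n/a

def at_least_one(probs):
    """P(at least one event) under independence: 1 - prod(1 - p_i)."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

print(f"{at_least_one(catastrophe):.2%}")  # 13.85%
print(f"{at_least_one(extinction):.2%}")   # 2.35%
```

From these rounded inputs the extinction figure comes out at about 2.35% rather than the 2.34% reported in the post, presumably because the post's calculation used unrounded per-risk values.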
Interestingly, the predictions indicate that although nuclear war and bioengineered pathogens are the risks most likely to result in a major catastrophe, an AI failure mode is by far the biggest source of extinction-level risk—it is at least six times more likely to cause near extinction than the second most likely source (namely, nuclear war).
Links to all the questions on which these predictions are based may be found here.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.