Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EA Survey: Cause Prioritization, published by Jamie Elsey on May 15, 2024 on The Effective Altruism Forum.
Summary
In this post we report findings from the 2022 EA Survey (EAS 2022) and the 2023 Supplemental EA Survey (EAS Supplement 2023)[1], covering:
Ratings of different causes, as included in previous EA Surveys (EAS 2022)
New questions about ideas related to cause prioritization (EAS 2022)
A new question about what share of resources respondents believe should be allocated to each cause (EAS Supplement 2023)
New questions about whether people would have gotten involved in EA, had EA supported different causes (EAS Supplement 2023)
Overall cause prioritization
Global Poverty and AI Risk were the highest-rated causes, closely followed by Biosecurity
Ratings of AI Risk, Biosecurity, Nuclear Security, and Animal Welfare have all increased in recent years
Prioritizing longtermist over neartermist causes was predicted by higher levels of engagement, male gender, white ethnicity, and younger age (when accounting for the influence of these and several other variables in the same model).
63% of respondents gave their highest rating to a longtermist cause, while 47% gave a neartermist cause their highest rating (the total exceeds 100% because 26% of respondents gave their highest ratings to both a longtermist cause and a neartermist cause). Splitting these out, we see 38% gave only a longtermist cause their highest rating, 21% a neartermist cause only, 26% both, and 16% neither a near- nor longtermist cause (e.g., Animal Welfare).
This suggests that although almost twice as many respondents exclusively prioritize a longtermist cause as exclusively prioritize a neartermist one (38% vs. 21%), there is considerable overlap between the two groups.
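As a quick consistency check (not part of the original report), these figures reconcile under the standard inclusion-exclusion identity, up to one-point rounding differences:
longtermist or neartermist = 63% + 47% - 26% (both) = 84%
neither = 100% - 84% = 16%
longtermist only = 63% - 26% = 37% (reported as 38%; rounding)
neartermist only = 47% - 26% = 21%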
Philosophical ideas related to cause prioritization
"I'm comfortable supporting low-probability, high-impact interventions": 66% of respondents agreed, 19% disagreed.
"It is more than 10% likely that ants have valenced experience (e.g., pain)": 56% of respondents agreed, 17% disagreed
"The impact of our actions on the very long-term future is the most important consideration when it comes to doing good": 46% of respondents agreed, 37% disagreed
"I endorse being roughly risk-neutral, even if it increases the odds that I have no impact at all." 37% of respondents agreed, 41% disagreed
"Most expected value in the future comes from digital minds' experiences, or the experiences of other nonbiological entities." 27% of respondents agreed, 43% disagreed
"The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views." 23% of respondents agreed, 46% disagreed
The following statements were strongly associated with each other and, taken together, predicted support for longtermist over neartermist causes quite well: "Long-term future", "Low probability high impact", "Risk neutral", and "Digital minds".
Allocation of resources to causes
On average, respondents allocated the following shares to each cause: Global Health and Development (29.7%), AI Risks (23.2%), Farm Animal Welfare (FAW) (18.7%), Other x-risks (16.0%), Wild Animal Welfare (5.1%), and Other causes (7.3%). Results did not differ dramatically when looking only at highly engaged respondents.
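Note that these average shares total exactly 100% (29.7 + 23.2 + 18.7 + 16.0 + 5.1 + 7.3 = 100.0), presumably because responses were required or normalized to sum to 100% (an inference from the figures, not stated in the source).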
Compared to actual allocations as estimated in 2019, the average survey allocations are lower for GHD, higher for AI / x-risk overall, and higher for animal welfare; but compared to the 2019 Leaders' Forum allocations, the survey assigns larger shares to GHD and FAW and smaller shares to AI and x-risk.
Causes and getting involved in EA
Respondents were strongly inclined to report that "the key philosophical ideas and principles of EA in the abstract" were more important than "Specific causes that EA focuses on"
However, results were much more mixed when asked whether they would still have gotten involved in EA if the community was n...