Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The social disincentives of warning about unlikely risks, published by Lucius Caviola on June 17, 2024 on The Effective Altruism Forum.
If you knew about a potential large-scale risk that, although unlikely, could kill millions, would you warn society about it? You might say yes, but many people are reluctant to warn.
In ten studies, Matt Coleman, Joshua Lewis, Christoph Winter, and I explored a psychological barrier to warning about low-probability, high-magnitude risks.
In short, we found that people are reluctant to warn because they could look bad if the risk doesn't occur. And while unlikely risks probably won't happen, they should still be taken seriously if the stakes are large enough. For example, it's worth wearing a seat belt because, even though a car crash is unlikely, its consequences would be so severe. Unfortunately, reputational incentives are often not aligned with what's most beneficial for society.
People would rather keep quiet and hope nothing happens than be seen as overly alarmist.
Below, I summarize some of our studies, discuss the underlying psychology of the phenomenon, and suggest possible strategies to encourage risk warning in society. If you want more information about the studies, you can check out
our research paper (including all data, materials, scripts, and pre-registrations).
Reputational fears of warning about unlikely risks
In Study 1, we asked 397 US online participants to imagine they were biological risk experts and believed there was a 5% chance of a new, extremely dangerous virus emerging within the next three years. They could warn society about the risk and recommend a $10-billion investment to develop a vaccine that would prevent all possible harm. If no vaccine were developed and the virus emerged, it would kill millions of people and cause billions of dollars in economic damage.
But if the virus didn't emerge, nobody would be harmed, and the money invested in developing the vaccine would have been wasted. Participants were then asked how likely or unlikely they would be to warn society about the risk, and how concerned they would be that society would blame them for warning about the virus.
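The vignette's numbers make warning the better bet in expectation, even though the most likely single outcome is that the $10 billion looks wasted. A minimal sketch of that comparison, where the 5% probability and $10-billion vaccine cost come from the study and the $1-trillion total-harm figure is purely an illustrative assumption (the vignette only says "millions of people" and "billions of dollars"):

```python
# Expected-value comparison for the Study 1 scenario.
# p_virus and cost_vaccine are from the vignette; harm_if_unprepared
# is an assumed illustrative figure, not a number from the study.
p_virus = 0.05
cost_vaccine = 10e9        # $10 billion, spent whether or not the virus emerges
harm_if_unprepared = 1e12  # assumed $1 trillion in combined damages

expected_cost_warn = cost_vaccine              # vaccine prevents all harm
expected_cost_silent = p_virus * harm_if_unprepared

print(f"Warn:   ${expected_cost_warn / 1e9:.0f}B expected cost")
print(f"Silent: ${expected_cost_silent / 1e9:.0f}B expected cost")
```

Under these assumptions, staying silent carries five times the expected cost of warning — yet in 95% of worlds the warner is the one who looks foolish, which is exactly the reputational asymmetry the studies probe.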
We hypothesized that people would be reluctant to warn about an unlikely risk due to fear of blame. If true, they should be more willing to warn anonymously. So, we told half the participants their identity would be public if they warned, and the other half that their identity would remain anonymous. We assured both groups that key decision-makers would take their warnings seriously. As expected, participants were less likely to warn society about the risk publicly than anonymously (M = 4.56 vs.
M = 5.25, on a scale from 1 to 7, p < .001). And they were more concerned about being blamed for warning about the risk publicly than anonymously (M = 4.12 vs. M = 2.90, p < .0001).
Warning disincentives are specific to unlikely risks
If you warn about a low-probability risk, the most likely outcome is that the risk won't materialize, and you'll look naive or overly alarmist. In contrast, if you warn about a high-probability risk, the risk probably will materialize, and you'll look smart. Thus, we hypothesized that people would be particularly reluctant to publicly warn about unlikely risks compared to likely ones, since for the latter their prediction will probably turn out to be true.
To test this, in Study 2a, 539 US participants imagined believing that an extremely damaging storm could emerge within the next three months. They were planning to warn society about the risk and recommend that the government invest in minimizing harm from the storm. To test our hypothesis, we randomly varied the likelihood and severity of the storm. It had either a 1% chance of killing 100 million people or a 99% chance ...