Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Folly of "EAs Should", published by Davidmanheim on the AI Alignment Forum.
I've seen and heard many discussions about what EAs should do. William MacAskill has ventured a definition of Effective Altruism, and I think it is instructive. Will notes that "Effective altruism consists of two projects, rather than a set of normative claims." One consequence is that if there are no normative claims, any supposition about what ought to happen based on EA ideas is invalid. This is a technical point, and one that might seem irrelevant to practical concerns, but I think some of the normative claims that do get made have pernicious consequences.
So I think we should discuss why it is often harmful for "Effective Altruism" to imply that there are specific and clearly preferable options for "Effective Altruists". Will's careful definition avoids that harm, and I think it should be taken seriously in that regard.
Mistaken Assertions
Claiming something normative under moral uncertainty, i.e. given that we may be incorrect, is hard to justify. There are approaches to moral uncertainty that allow a resolution, but if EAs should cooperate, I argue that it may be useful, regardless of normative goals, to avoid normative statements that exclude some viewpoints. This is not because such statements cannot be justified, but because they can be strategic mistakes. Specifically, we should be wary of making the project exclusive rather than inclusive.
EA is Young, Small, and Weird
EA is very young. Some find this unsurprising - aren't most radical movements young? Aren't the people most willing to embrace new ideas young? - but I disagree. Many of the most popular movements sweep across age groups. Environmentalism, gay rights, and animal welfare all skewed young, but were increasingly adopted by people of all ages. In part, that is because they allow people to embrace them. There is no widespread belief among environmentalists that doctors have wasted their careers focusing on saving lives at the retail level rather than saving the world. There is little reason anyone would hesitate to raise the pride flag because they are not doing enough for the movement. But effective altruism is often perceived differently.
To the extent that EAs embrace a single vision (a very limited extent, to be clear), they often exclude those who differ on details, intentionally or not. "Failing" to embrace longtermism, or ("worse"?) disagreeing about impartiality, is enough to start arguments. Is it any wonder that we have so few people with well-established lives and worldviews willing to consider our project, "the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources"? Nothing about the project is exclusive - it is the community that creates exclusion. And it would be a shame for people to feel useless and excluded.
Of course, allowing more diversity will allow the ideas of effective altruism to spread - but it will also reduce the tension that seems to exist around disagreeing with the orthodoxy. People debate whether EA should be large and welcoming or small and weird. But as John Maxwell suggests, large and weird might be a fine compromise. We have seen this before: the LGBT movement, now widely embraced, famously suggests that people "let your freak flag fly," but the phrase dates back to the 60s counterculture. Neither stayed small and weird, and despite each leading to a culture war, each seems to have been, at least in retrospect, very widely embraced. And neither needed to develop a single coherent worldview to get there; no one would claim that LGBT groups agree on every issue. And despite the fragmentation and arguments, the key messages came through to the broader public just fine.
EA is Already Fragmented
It may come as a surprise to readers of the forum...