Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the rationalist and EA communities into audio.
This is: Illegible impact is still impact, published by G Gordon Worley III on the Effective Altruism Forum.
In EA we focus a lot on legible impact. At a tactical level, it's the thing that often separates EA from other altruistic efforts. Unfortunately, I think this focus on impact legibility, when taken to extremes and applied in situations where it doesn't adequately account for value, leads to bad outcomes for EA and the world as a whole.
Legibility is the idea that only what can easily be explained and measured within a model matters. Anything that doesn't fit neatly in the model is therefore illegible.
In the case of impact, legible impact is that which can be measured easily in ways that a model predicts are correlated with outcomes. Examples of legible impact measures for altruistic efforts include counterfactual lives saved, QALYs, DALYs, and money donated; examples of legible impact measures for altruistic individuals include the preceding plus things like academic citations and degrees, jobs at EA organizations, and EA Forum karma.
Some impact is semi-legible, like social status among EAs, claims of research progress, and social media engagement. Semi-legible impact either involves fuzzy measurement procedures or low-confidence models of how the measure correlates with real-world outcomes.
Illegible impact is, by comparison, invisible. Examples include helping a friend who, without your help, might have been too depressed to get a better job and donate more money to effective charities, or filling a seat in the room at an EA Global talk so that the speaker feels marginally more rewarded for having done the work they are talking about and is marginally incentivized to do more of it. Illegible impact is either hard or impossible to measure, or there's no agreed-upon model suggesting the action is correlated with impact. And the examples I gave are not maximally illegible, because they had to be legible enough for me to explain them to you; the really invisible stuff is like dark matter: we can see signs of its existence (good stuff happens in the world) but we can't say much about what it is (we have no model of how the good stuff happened).
The alluring trap is thinking that illegible impact is not impact and that legible impact is the only thing that matters. If that doesn't resonate, I recommend checking out the links above on legibility to see when and how focusing on the legible to the exclusion of the illegible can lead to failure.
One place we risk failing to adequately appreciate illegible impact is in work on far future concerns and existential risk. This comes with the territory: it's hard to validate our models of what will happen in the far future, and the feedback cycle is so long that it may be thousands or millions of lifetimes before we get data back that lets us know if an intervention, organization, or person had positive impact, let alone if that impact was effectively generated.
Another place we risk failing to appreciate illegible impact is in dealing with non-humans, since there remains great uncertainty in many people's minds about how to value the experiences of animals, plants, and non-living dynamic systems like AI. Yes, people who care about non-humans are often legible to each other because they share enough assumptions that they can share models and believe measures in terms of those models, but outside these groups interventions to help non-humans can seem broadly illegible, up to interpreting the interventions, like those addressing wild animal suffering, as silly or incoherent rather than potentially positively impactful.
Beyond these two examples, there's one place where I think the problem of illegible impact is especially neglected yet easily tractable if we bother to acknowledge it. It's one EAs are already familiar with, though likely no...