Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Relative Impact of the First 10 EA Forum Prize Winners, published by NunoSempere on the Effective Altruism Forum.
Summary
We don’t normally estimate the value of small to medium-sized projects.
But we could!
If we could do this reliably and scalably, it might lead us to choose better projects.
Here is a small & very speculative attempt.
My estimates are very uncertain (they range over several orders of magnitude), but they still seem useful for comparing projects. Nonetheless, the reader is advised not to take them too seriously.
Introduction
The EA Forum, as well as local groups, has seen a decent number of projects, but few are evaluated for impact. This makes it difficult to choose between projects beforehand, beyond using personal intuition (however good it might be), a connection to a broader research agenda, or other rough heuristics. Ideally, we would have something more objective and more scalable.
As part of QURI’s efforts to evaluate and estimate the impact of things in general, and projects QURI itself might carry out in particular, I tried to evaluate the impact of 10 projects I expected to be fairly valuable.
Methodology
I chose to evaluate the first 10 posts that won the EA Forum Prize, back in 2017 and 2018. For each of the 10 posts, the estimate has a structure like the one below. Note that not all estimates include every element:
Title of the post
Background information: What are some salient facts about the post?
Theory of change: If this isn’t clear, how is this post aiming to have an impact?
Reasoning about my estimate: How do I arrive at my estimate of impact given what I know about the world?
Guesstimate model: Verbal reasoning can be particularly messy, so I also provide a Guesstimate model (a sketch of this kind of calculation follows this list)
Ballpark: A verbal estimate
Estimate: A numerical estimate of impact
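To make the Guesstimate step concrete, here is a minimal Python sketch of the kind of Monte Carlo calculation such a model performs. The inputs (a hypothetical project's value and the share of that value attributable to the writeup) and all numbers are illustrative assumptions, not figures from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo samples

def lognormal_from_90ci(low, high, size):
    """Sample a lognormal whose 5th/95th percentiles are approximately (low, high)."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # 1.645 ~ z-score of the 95th percentile
    return rng.lognormal(mu, sigma, size)

# Hypothetical inputs: the underlying project's value (in mQARPs), and the
# fraction of that value attributable to the writeup itself.
project_value = lognormal_from_90ci(10, 1000, N)  # 90% CI: 10 to 1000 mQARPs
writeup_share = rng.uniform(0.05, 0.3, N)         # 5% to 30% of the project's value

writeup_value = project_value * writeup_share

print("median:", np.median(writeup_value))
print("90% interval:", np.percentile(writeup_value, [5, 95]))
```

Reporting a median and a 90% interval roughly mirrors how Guesstimate summarizes a sampled distribution.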
If a writeup refers to a project distinct from the writeup, I generally try to estimate the impact of both the project and the writeup.
Where possible, I estimated their impact on an ad-hoc scale, Quality-Adjusted Research Papers (QARPs for short), whose levels correspond to the following:
| Value | Description | Example |
|---|---|---|
| ~0.1 mQARPs | A thoughtful comment | A thoughtful comment about the details of setting up a charity |
| ~1 mQARPs | A good blog post, a particularly good comment | What considerations influence whether I have more influence over short or long timelines? |
| ~10 mQARPs | An excellent blog post | Humans Who Are Not Concentrating Are Not General Intelligences |
| ~100 mQARPs | A fairly valuable paper | Categorizing Variants of Goodhart's Law |
| ~1 QARP | A particularly valuable paper | The Vulnerable World Hypothesis |
| ~10-100 QARPs | A research agenda | The Global Priorities Institute's Research Agenda |
| ~100-1000+ QARPs | A foundational popular book on a valuable topic | Superintelligence, Thinking Fast and Slow |
| ~1000+ QARPs | A foundational research work | Shannon's "A Mathematical Theory of Communication" |
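Since the scale is multiplicative (1 QARP = 1,000 mQARPs, with each row roughly 10x the previous one), a raw numeric estimate can be placed in its nearest reference class by comparing orders of magnitude. The helper below is my own illustration, not something from the post:

```python
import math

# Reference points from the table above, in mQARPs (1 QARP = 1,000 mQARPs).
# For rows given as ranges, a single representative value is used.
SCALE = [
    (0.1,       "a thoughtful comment"),
    (1,         "a good blog post"),
    (10,        "an excellent blog post"),
    (100,       "a fairly valuable paper"),
    (1_000,     "a particularly valuable paper"),
    (10_000,    "a research agenda"),
    (100_000,   "a foundational popular book"),
    (1_000_000, "a foundational research work"),
]

def nearest_reference(mqarps: float) -> str:
    """Return the description of the scale row closest in log space."""
    return min(SCALE, key=lambda row: abs(math.log10(row[0]) - math.log10(mqarps)))[1]

print(nearest_reference(250))  # -> "a fairly valuable paper"
```

Working in log space matches the spirit of the scale: what matters is the order of magnitude of an estimate, not its exact value.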
Ideally, this scale would have both relative meaning (i.e., I claim that an average thoughtful comment is worth less than an average good post) and absolute meaning (i.e., after thinking about it, a factor of 10x between an average thoughtful comment and an average good post seems roughly right). In practice, the second part is a work in progress. In an ideal world, the estimates would also be cause-independent, but cause comparability is not a solved problem, and in practice the scale is aimed more at long-term-focused projects.
To elaborate on cause independence: upon reflection, we may find that a fairly valuable paper on AI Alignment is 20 times as valuable as a fairly valuable paper on Food Security, and give both of their impacts in a common unit. But we are uncertain about their actual relative impacts, which will depend not only on empirical uncertainty but also on moral preferences and values (e.g., weight given to animals, weight given to pe...