Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Possible misconceptions about (strong) longtermism, published by jackmalde on the AI Alignment Forum.
Overview
In this post I provide a brief sketch of The case for strong longtermism as put forward by Greaves and MacAskill, and then raise and address possible misconceptions that people may have about strong longtermism. Some of these misconceptions I have come across myself, whilst others I simply suspect may be held by some people in the EA community.
The goal of this post isn’t to convert people, as I think there remain valid objections to strong longtermism to grapple with, which I touch on at the end of this post. Instead, I simply want to address potential misunderstandings, and point out nuances that may not be fully appreciated by some in the EA community. I think it is important for the EA community to appreciate these nuances, which should hopefully aid the goal of figuring out how we can do the most good.
EDIT: I realise this is a long post. Feel free to read just the misconceptions you are interested in rather than the whole post!
NOTE: I certainly do not consider myself to be any sort of authority on longtermism. I partly wrote this post to push myself to engage with the ideas more deeply than I already had. No one read through this before I posted it, so it’s certainly possible that there are inaccuracies or mistakes, and I look forward to any of these being pointed out! I’d also appreciate ideas for other possible misconceptions that I have not covered here.
Defining strong longtermism
The specific claim that I want to address possible misconceptions about is that of axiological strong longtermism, which Greaves and MacAskill define in their 2019 paper The case for strong longtermism as the following:
Axiological strong longtermism (AL): “In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.”
Put more simply (and phrased in a deontic way that assumes what we should do is what will result in the best consequences), one might say that:
“In most of the choices (or, most of the most important choices) we face today, what we ought to do is mainly determined by possible effects on the far future.”
Greaves and MacAskill note that an implication of axiological strong longtermism is that:
“for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”
I think most people would agree that this is a striking claim.
Sketch of the strong longtermist argument
The argument made by Greaves and MacAskill (2019) begins with a plausibility argument that goes roughly as follows:
Plausibility Argument:
1. In expectation, the future is vast in size (in terms of the expected number of beings).
2. All consequences matter equally (i.e. it doesn’t matter when a consequence occurs, or whether it was intended or not).
3. Therefore it is at least plausible that the amount of ex ante good we can generate by influencing the expected course of the very long-run future exceeds the amount of ex ante good we can generate by influencing the expected course of short-run events, even after taking into account the greater uncertainty of further-future effects.
4. Also, because of the near-term bias exhibited by the majority of existing actors, we should expect tractable longtermist options (if they exist) to be systematically under-exploited at the current margin.
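The expected-value reasoning behind the plausibility argument can be illustrated with a toy calculation. All of the numbers below are hypothetical, chosen purely for illustration and not taken from Greaves and MacAskill's paper; the point is only to show how a sufficiently vast future can dominate the comparison even after heavy discounting for uncertainty:

```python
# Toy ex ante (expected value) comparison between a short-run and a
# long-run intervention. All figures are invented for illustration.

def expected_value(prob_of_success: float, value_if_successful: float) -> float:
    """Ex ante value of an intervention: probability times payoff."""
    return prob_of_success * value_if_successful

# Short-run intervention: a modest benefit achieved with near certainty,
# e.g. a 90% chance of saving 1,000 lives.
short_run = expected_value(prob_of_success=0.9, value_if_successful=1_000)

# Long-run intervention: a tiny chance of influencing a vast future,
# e.g. a one-in-a-million chance of affecting 10^15 future beings.
long_run = expected_value(prob_of_success=1e-6, value_if_successful=1e15)

print(short_run)  # 900.0
print(long_run)   # 1000000000.0 (a billion, dwarfing the short-run option)
```

Whether real longtermist interventions actually have expected values of this shape is, of course, exactly what the intractability objection below contests.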
The authors then consider the intractability objection: that it is essentially impossible to significantly influence the long-term future ex ante, perhaps because the magnitude of the effects of one’s actions (in expected value-d...