Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Scope-sensitive ethics: capturing the core intuition motivating utilitarianism, published by richard_ngo on the AI Alignment Forum.
Classical utilitarianism has many advantages as an ethical theory. But there are also many problems with it, some of which I discuss here. A few of the most important:
The idea of reducing all human values to a single metric is counterintuitive. Most people care about a range of things, including both their conscious experiences and outcomes in the world. I haven’t yet seen a utilitarian conception of welfare which describes what I’d like my own life to be like.
Concepts derived from our limited human experiences will lead to strange results when they’re taken to extremes (as utilitarianism does). Even for things which seem robustly good, trying to maximise them will likely give rise to divergence at the tails between our intuitions and our theories, as in the repugnant conclusion.
Utilitarianism doesn’t pay any attention to personal identity (except by taking a person-affecting view, which leads to worse problems). At an extreme, it endorses the world destruction argument: that, if given the opportunity to kill everyone who currently exists and replace them with beings with greater welfare, we should do so.
Utilitarianism is post hoc on small scales; that is, although you can technically argue that standard moral norms are justified on a utilitarian basis, it’s very hard to explain why these moral norms are better than others. In particular, it seems hard to make utilitarianism consistent with caring much more about people close to us than about strangers.
I (and probably many others) think that these objections are compelling, but none of them defeat the core intuition which makes utilitarianism appealing: that some things are good, and some things are bad, and we should continue to want more good things and fewer bad things even beyond the parochial scales of our own everyday lives. Instead, the problems seem like side effects of trying to pin down a version of utilitarianism which provides a precise, complete guide for how to act. Yet I’m not convinced that this is useful, or even possible. So I’d prefer that people defend the core intuition directly, at the cost of being a bit vaguer, rather than defending more specific utilitarian formalisations which have all sorts of unintended problems. Until now I’ve been pointing to this concept by saying things like “utilitarian-ish” or “90 percent utilitarian”. But it seems useful for coordination purposes to put a label on the property which I consider to be the most important part of utilitarianism; I’ll call it “scope-sensitivity”.
My tentative definition is that scope-sensitive ethics consists of:
Endorsing actions which, in expectation, bring about more intuitively valuable aspects of individual lives (e.g. happiness, preference-satisfaction), or fewer intuitively disvaluable aspects of individual lives (e.g. suffering, betrayal).
A tendency to endorse actions much more strongly when those actions increase (or decrease, respectively) those things much more.
I hope that describing myself as caring about scope-sensitivity conveys the most important part of my ethical worldview, without implying that I have a precise definition of welfare, or that I want to convert the universe into hedonium, or that I’m fine with replacing humans with happy aliens. Now, you could then ask me which specific scope-sensitive moral theory I subscribe to. But I think that this misses the point: as soon as we start trying to be very precise and complete, we’ll likely run into many of the same problems as utilitarianism. Instead, I hope that this term can be used in a way which conveys a significant level of uncertainty or vagueness, while also being a strong enough position that if you accept scope-sensitivity...