The Nonlinear Library: EA Forum
Education
EA - Solution to the two envelopes problem for moral weights by MichaelStJules
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Solution to the two envelopes problem for moral weights, published by MichaelStJules on February 19, 2024 on The Effective Altruism Forum.

Summary

- When taking expected values, the results can differ radically based on which common units we fix across possibilities. If we normalize relative to the value of human welfare, then other animals will tend to be prioritized more than by normalizing by the value of animal welfare or by using other approaches to moral uncertainty.
- For welfare comparisons and prioritization between different moral patients like humans, other animals, aliens, and artificial systems, I argue that we should fix and normalize relative to the moral value of human welfare, because our understanding of the value of welfare is based on our own experiences of welfare, which we directly value.
- Uncertainty about animal moral weights is about the nature of our experiences and the extent to which other animals have capacities similar to those that ground our value, and so it is empirical uncertainty, not moral uncertainty (more).
- I revise the account in light of the possibility of multiple different human reference points between which we don't have fixed, uncertainty-free comparisons of value, like pleasure vs belief-like preferences (cognitive desires) vs non-welfare moral reasons, or specific instances of these.
- If and because whatever moral reasons we apply to humans, (similar or other) moral reasons aren't too unlikely to apply with a modest fraction of the same force to other animals, then the results would still be relatively animal-friendly (more).
- I outline why this condition plausibly holds across moral reasons and theories, so that it's plausible we should be fairly animal-friendly (more).
- I describe and respond to some potential objections:
  - There could be inaccessible or unaccessed conscious subsystems in our brains that our direct experiences and intuitions do not (adequately) reflect, and these should be treated like additional moral patients (more).
  - The approach could lead to unresolvable disagreements between moral agents, but this doesn't seem any more objectionable than any other disagreement about what matters (more).
  - Epistemic modesty about morality may push for also separately normalizing by the values of nonhumans, or against these comparisons altogether, but this doesn't seem to particularly support the prioritization of humans (more).
- I consider whether similar arguments apply in cases of realism vs illusionism about phenomenal consciousness, moral realism vs moral antirealism, and person-affecting views vs total utilitarianism, and find them less compelling for these cases, because value may be grounded in fundamentally different things (more).

How this work has changed my mind: I was originally very skeptical of intertheoretic comparisons of value/reasons in general, including across theories of consciousness and the scaling of welfare and moral weights between animals, because of the two envelopes problem (Tomasik, 2013-2018) and the apparent arbitrariness involved. This lasted until around December 2023, and some arguments here were originally going to be part of a piece strongly against such comparisons for cross-species moral weights, which I now respond to here along with positive arguments for comparisons.

Acknowledgements

I credit Derek Shiller and Adam Shriver for the idea of treating the problem like epistemic uncertainty relative to what we experience directly. I'd also like to thank Brian Tomasik, Derek Shiller, and Bob Fischer for feedback. All errors are my own.

Background

On the allocation between the animal-inclusive and human-centric near-termist views, specifically, Karnofsky (2018) raised a problem:

The "animal-inclusive" vs. "human-centric" divide could be interpreted as being about a form of "normative uncertainty": un...
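The Summary's first point, that expected values can flip depending on which unit is fixed across possibilities, can be made concrete with a small numerical sketch. The probabilities and welfare ratios below are hypothetical illustrations, not figures from the post:

```python
# Two equally likely hypotheses about how an animal's welfare capacity
# compares to a human's (hypothetical numbers for illustration):
#   H1: animal welfare is worth 0.5x human welfare
#   H2: animal welfare is worth 2x human welfare
hypotheses = [(0.5, 0.5), (0.5, 2.0)]  # (probability, animal/human ratio)

# Normalize by human welfare (human = 1 unit under every hypothesis):
# expected moral weight of the animal, measured in human units.
animal_in_human_units = sum(p * r for p, r in hypotheses)  # 0.25 + 1.0 = 1.25

# Normalize by animal welfare (animal = 1 unit under every hypothesis):
# expected moral weight of the human, measured in animal units.
human_in_animal_units = sum(p * (1 / r) for p, r in hypotheses)  # 1.0 + 0.25 = 1.25

# Each normalization makes the *other* party's welfare matter more in
# expectation: the same credences yield opposite priorities depending on
# which unit is held fixed across hypotheses.
print(animal_in_human_units)  # 1.25
print(human_in_animal_units)  # 1.25
```

This is the two envelopes structure: under human normalization the animal's expected weight exceeds 1, and under animal normalization the human's expected weight also exceeds 1, so the choice of fixed unit drives the conclusion. The post's proposal is to resolve this by fixing the human unit, on the grounds that our grasp of welfare's value comes from our own experiences.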