Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Comment on "Endogenous Epistemic Factionalization", published by Zack_M_Davis on LessWrong.
In "Endogenous Epistemic Factionalization" (due in a forthcoming issue of the philosophy-of-science journal Synthese), James Owen Weatherall and Cailin O'Connor propose a possible answer to the question of why people form factions that disagree on multiple subjects.
The existence of persistent disagreements is already kind of a puzzle from a Bayesian perspective. There's only one reality. If everyone is honestly trying to get the right answer and we can all talk to each other, then we should converge on the right answer (or an answer that is less wrong given the evidence we have). The fact that we can't do it is, or should be, an embarrassment to our species. And the existence of correlated persistent disagreements is an atrocity: not only do I say "top" when you say "bottom" even after we've gone over all the arguments for whether it is in fact the case that top or bottom, but furthermore, the fact that I said "top" lets you predict that I'll probably say "cold" rather than "hot" even before we go over the arguments for that. (Not hyperbole. Thousands of people are dying horrible suffocation deaths because we can't figure out the optimal response to a new kind of coronavirus.)
Correlations between beliefs are often attributed to ideology or tribalism: if I believe that Markets Are the Answer, I'm likely to propose Market-based solutions to all sorts of seemingly-unrelated social problems, and if I'm loyal to the Green tribe, I'm likely to selectively censor my thoughts in order to fit the Green party line. But ideology can't explain correlated disagreements on unrelated topics that the content of the ideology is silent on, and tribalism can't explain correlated disagreements on narrow, technical topics that aren't tribal shibboleths.
In this paper, Weatherall and O'Connor exhibit a toy model that proposes a simple mechanism that can explain correlated disagreement: if agents disbelieve in evidence presented by those with sufficiently dissimilar beliefs, factions emerge, even though everyone is honestly reporting their observations and updating on what they are told (to the extent that they believe it). The paper didn't seem to provide source code for the simulations it describes, so I followed along in Python. (Replication!)
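The mistrust mechanism just described can be sketched as a weighting function on reported evidence. The linear falloff and the `mistrust` parameter below are my own illustrative choices, not necessarily the exact functional form Weatherall and O'Connor use:

```python
def evidence_weight(my_credence, their_credence, mistrust=2.0):
    """How much weight to place on another agent's reported evidence.

    Shrinks linearly to zero as our credences diverge; the linear form
    and the `mistrust` parameter are illustrative assumptions, not
    necessarily the paper's exact rule.
    """
    distance = abs(my_credence - their_credence)
    return max(1.0 - mistrust * distance, 0.0)
```

With `mistrust=2.0`, an agent fully trusts someone with identical beliefs and entirely discounts anyone whose credence differs by half or more.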
In each round of the model, our little Bayesian agents choose between repeatedly performing one of two actions, A or B, that can "succeed" or "fail." A is a fair coin: it succeeds exactly half the time. As far as our agents know, B is either slightly better or slightly worse: the per-action probability of success is either 0.5 + ɛ or 0.5 − ɛ, for some ɛ (a parameter to the simulation). But secretly, we the simulation authors know that B is better.
```python
import random

ε = 0.01  # the simulation parameter: how much better (or worse) B is

def b():
    # Perform action B once: succeeds with probability 0.5 + ε
    return random.random() < 0.5 + ε
```
The agents start out with a uniformly random probability that B is better. The ones who currently believe that A is better repeatedly do A (and don't learn anything, because they already know that A is exactly a coinflip). The ones who currently believe that B is better repeatedly do B, but keep track of and publish their results in order to help everyone figure out whether B is slightly better or slightly worse than a coinflip.
```python
class Agent:
    def __init__(self, trial_count):
        self.trial_count = trial_count  # how many times to try B per round

    def experiment(self):
        results = [b() for _ in range(self.trial_count)]
        return results
```
If $H_+$ represents the hypothesis that B is better than A, and $H_-$ represents the hypothesis that B is worse, then Bayes's theorem says

$$P(H_+ \mid E) = \frac{P(E \mid H_+)\,P(H_+)}{P(E \mid H_+)\,P(H_+) + P(E \mid H_-)\,P(H_-)}$$
where $E$ is the record of how many successes we got in how many times we tried action B. The likelihoods $P(E \mid H_+)$ and $P(E \mid H_-)$ can be calculated from the probability mass function of the binomial distribution, so the agents have all the information they need to update their beliefs.
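Concretely, that update might be written like this in Python. The function names and the use of `math.comb` are my own choices here, not code from the post:

```python
from math import comb

ε = 0.01  # same parameter as in the simulation above

def binomial_pmf(n, k, p):
    """Probability of exactly k successes in n trials, each succeeding with probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def updated_credence(prior, successes, trials):
    """Posterior probability that B is the better action, given `successes`
    out of `trials` tries of B, starting from `prior` credence in H+."""
    likelihood_better = binomial_pmf(trials, successes, 0.5 + ε)  # P(E|H+)
    likelihood_worse = binomial_pmf(trials, successes, 0.5 - ε)   # P(E|H-)
    evidence = likelihood_better * prior + likelihood_worse * (1 - prior)
    return likelihood_better * prior / evidence
```

Note that 50 successes out of 100 leaves the credence unchanged (the two likelihoods are equal by symmetry), while any surplus of successes pushes it toward $H_+$.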