Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Ideological Bayesians, published by Kevin Dorst on February 26, 2024 on LessWrong.
TLDR: It's often said that Bayesian updating is unbiased and converges to the truth - and, therefore, that biases must emerge from non-Bayesian sources. That's not quite right. The convergence results require updating on your total evidence - but for agents at all like us, that's impossible. Instead, we must selectively attend to certain questions, ignoring others.
Yet correlations between what we see and what questions we ask - "ideological" Bayesian updating - can lead to predictable biases and polarization.
Professor Polder is a polarizing figure.
His fans praise him for his insight; his critics denounce him for his aggression.
Ask his fans, and they'll supply you with a bunch of instances when he made an insightful comment during discussions. They'll admit that he's sometimes aggressive, but they can't remember too many cases - he certainly doesn't seem any more aggressive than the average professor.
Ask his critics, and they'll supply you with a bunch of instances when he made an aggressive comment during discussions. They'll admit that he's sometimes insightful, but they can't remember too many cases - he certainly doesn't seem any more insightful than the average professor.
This sort of polarization is, I assume, familiar.
But let me tell you a secret: Professor Polder is, in fact, perfectly average - he has an unremarkably average number of both insightful and aggressive comments.
So what's going on?
His fans are better at noticing his insights, while his critics are better at noticing his aggression. As a result, their estimates are off: his fans think he's more insightful than he is, and his critics think he's more aggressive than he is. Each is correct about individual bits of the picture - when they notice aggression or insight, he really is being aggressive or insightful. But neither group is correct about the overall picture.
This source of polarization is also, I assume, familiar. It's widely appreciated that background beliefs and ideology - habits of mind, patterns of salience, and default forms of explanation - can lead to bias, disagreement, and polarization. In this broad sense of "ideology", we're familiar with the observation that real people - especially fans and critics - are often ideological.[1]
But let me tell you another secret: Polder's fans and critics are all Bayesians. More carefully: they all maintain precise probability distributions over the relevant possibilities, and they always update their opinions by conditioning their priors on the (unambiguous) true answer to a partitional question.
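To make "conditioning on the true answer to a partitional question" concrete, here's a minimal sketch - my illustration, not code from the post - using a toy four-state space over whether a single comment is insightful and/or aggressive. The agent learns which cell of a partition is actually true and renormalizes their prior within it:

```python
from fractions import Fraction

# Toy state space: (insightful?, aggressive?) for a single comment.
# Uniform prior, purely for illustration.
states = [(i, a) for i in (True, False) for a in (True, False)]
prior = {s: Fraction(1, len(states)) for s in states}

def condition(dist, cell):
    """Condition on one cell of a partition: zero out states outside
    the cell and renormalize the rest."""
    total = sum(p for s, p in dist.items() if s in cell)
    return {s: (p / total if s in cell else Fraction(0))
            for s, p in dist.items()}

# The question "Was it insightful?" partitions the states into two cells;
# the agent updates on whichever cell is true.
insightful = {s for s in states if s[0]}
posterior = condition(prior, insightful)
# Mass is now split evenly over the two insightful states.
```

Nothing here is biased on its own: each update is ordinary Bayesian conditioning on a true proposition.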
How is that possible? Don't Bayesians, in such contexts, update in unbiased[2] ways, always converge to the truth, and therefore avoid persistent disagreement?
Not necessarily. The trick is that which question they update on is correlated with what they see - they have different patterns of salience. For example, when Polder makes a comment that is both insightful and aggressive, his fans are more likely to notice (just) the insight, while his critics are more likely to notice (just) the aggression. This can lead to predictable polarization.
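Here's a hypothetical Monte Carlo sketch of that mechanism (the numbers are made up, and this is my simplification, not the model below). Every answer each observer receives is true; but which question gets asked is correlated with the comment itself, so the answers they accumulate are a biased sample of Polder's behavior:

```python
import random

random.seed(0)

P_INSIGHTFUL = 0.3   # Polder's true insight rate (hypothetical number)
P_AGGRESSIVE = 0.3   # his true aggression rate - equal, so he's average
N_COMMENTS = 100_000

def observe(p_notice_insight_when_both):
    """One observer watches N_COMMENTS comments. For each, they ask one
    yes/no question and receive the true answer. When a comment is both
    insightful and aggressive, they ask about the insight with the given
    probability; otherwise they pick a question by a fair coin flip."""
    insight_yes = insight_asked = 0
    aggro_yes = aggro_asked = 0
    for _ in range(N_COMMENTS):
        insightful = random.random() < P_INSIGHTFUL
        aggressive = random.random() < P_AGGRESSIVE
        if insightful and aggressive:
            ask_insight = random.random() < p_notice_insight_when_both
        else:
            ask_insight = random.random() < 0.5
        if ask_insight:
            insight_asked += 1
            insight_yes += insightful   # the answer received is always true
        else:
            aggro_asked += 1
            aggro_yes += aggressive
    return insight_yes / insight_asked, aggro_yes / aggro_asked

fan = observe(p_notice_insight_when_both=0.9)
critic = observe(p_notice_insight_when_both=0.1)
print("fan:    insight~%.2f aggression~%.2f" % fan)
print("critic: insight~%.2f aggression~%.2f" % critic)
```

With Polder's true rates at 0.3 each, the fan's tallies come out around 0.35 for insight and 0.25 for aggression, and the critic's are mirrored. An agent who conditions only on the answers - treating which question got asked as uninformative - ends up with estimates near those skewed frequencies, even though every individual update was on a truth.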
I'm going to give a model of how such correlations - between what you see, and what questions you ask about it - can lead otherwise rational Bayesians to diverge from both each other and the truth. Though simplified, I think it sheds light on how ideology might work.
Limited-Attention Bayesians
Standard Bayesian epistemology says you must update on your total evidence.
That's nuts. To see just how infeasible that is, take a look at the following video. Consider the question: what happens to the exercise ball?
I assume you noticed that the exercise ball disappeared. Did you also notice that the Christmas tree gained lights, the bowl changed c...