Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Trust, published by johnswentworth on December 7, 2023 on LessWrong.
"Trust", as the word is typically used, is a… weird concept, to me. Like, it's trying to carve the world in a way which I don't naturally carve it myself. This post is my attempt to convey what I find weird about "trust", and what adjacent concepts I personally find more natural instead.
The Weirdness Of "Trust"
Here are some example phrases which make "trust" feel like a weird/unnatural concept to me:
"I decided to trust her"
"Should I trust him?"
"Trust me"
"They offered me their trust"
To me, the phrase "I decided to trust her" throws an error. It's the "decided" part that's the problem: beliefs are not supposed to involve any "deciding". There's priors, there's evidence, and if it feels like there's a degree of freedom in what to do with those, then something has probably gone wrong. (The main exception here is self-fulfilling prophecy, but that's not obviously centrally involved in whatever "I decided to trust her" means.)
Similarly with "trust me". Like, wat? If I were to change my belief about some arbitrary thing, just because somebody asked me to change my belief about that thing, that would probably mean that something had gone wrong.
"Should I trust him?" is a less central example, but… "should" sounds like it has a moral/utility element here. I could maybe interpret the phrase in a purely epistemic way - e.g. "should I trust him?" -> "will I end up believing true things if I trust him?" - but also that interpretation seems like it's missing something about how the phrase is actually used in practice? Anyway, a moral/utility element entering epistemic matters throws an error.
The thing which is natural to me is: when someone makes a claim, or gives me information, I intuitively think "what process led to them making this claim or giving me this information, and does that process systematically make the claim/information match the territory?". If Alice claims that moderate doses of hydroxyhopytheticol prevent pancreatic cancer, then I automatically generate hypotheses for what caused Alice to make that claim.
Sometimes the answer is "Alice read it in the news, and the reporter probably got it by misinterpreting/not-very-carefully-reporting a paper which itself was some combination of underpowered, observational, or in vitro/in silico/in a model organism", and then I basically ignore the claim. Other times the answer is "Alice is one of those friends who's into reviewing the methodology and stats of papers", and then I expect the claim is backed by surprisingly strong evidence.
Note that this is a purely epistemic question - simplifying somewhat, I'm asking things like "Do I in fact think this information is true? Do I in fact think that Alice believes it (or alieves it, or wants-to-believe it, etc)?". There's no deciding whether I believe the person. Whether I "should" trust them seems like an unnecessary level of meta-reasoning. I'm just probing my own beliefs: not "what should I believe here", but simply "what do I in fact believe here".
As a loose general heuristic, if questions of belief involve "deciding" things or answering "should" questions, then a mistake has probably been made. The rules of Bayesian inference (or logical uncertainty, etc.) do not typically involve "deciding" or "shouldness"; those enter at the utility stage, not the epistemic stage.
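The "what process generated this claim?" question above can be sketched as a toy Bayesian update. All the numbers here are illustrative assumptions (not from the post): what matters is the shape of the calculation, where the source's reliability enters only through the likelihoods, with no "deciding" step anywhere.

```python
# Toy Bayesian update: how much should hearing Alice assert a claim
# move my belief in it? All probabilities below are made-up for illustration.

def posterior(prior, p_assert_if_true, p_assert_if_false):
    """P(claim is true | Alice asserts it), by Bayes' rule."""
    joint_true = prior * p_assert_if_true
    joint_false = (1 - prior) * p_assert_if_false
    return joint_true / (joint_true + joint_false)

# An Alice who repeats news headlines: asserts claims nearly as often
# when they're false as when they're true, so her assertion is weak evidence.
careless = posterior(prior=0.05, p_assert_if_true=0.3, p_assert_if_false=0.2)

# An Alice who reviews methodology and stats: rarely asserts false claims,
# so the same assertion is much stronger evidence.
careful = posterior(prior=0.05, p_assert_if_true=0.3, p_assert_if_false=0.01)

print(f"posterior given careless Alice: {careless:.2f}")
print(f"posterior given careful Alice:  {careful:.2f}")
```

The same words from the same mouth carry very different evidential weight depending on the process behind them; the update is mechanical once the likelihoods are pinned down.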
What's This "Trust" Thing?
Is there some natural thing which lines up with the concept of "trust", as it's typically used? Some toy model which would explain why "deciding to trust someone" or "asking for trust" or "offering trust" make sense, epistemically? Here's my current best guess.
Core mechanism: when you "decide to trust Alice", you believe Alice's claims but, crucially, if you later find that Alice's claims were false then you'll...