Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Gears in understanding, published by Valentine on LessWrong.
Some (literal, physical) roadmaps are more useful than others. Sometimes this is because of how well the map corresponds to the territory, but sometimes it's because of features of the map itself, irrespective of the territory. E.g., maybe the lines are fat and smudged such that you can't tell how far a road is from a river, or maybe it's unclear which road a name is trying to indicate.
In the same way, I want to point at a property of models that isn't about what they're modeling. It interacts with the clarity of what they're modeling, but only in the same way that smudged lines in a roadmap interact with the clarity of the roadmap.
This property is how deterministically interconnected the variables of the model are. There are a few tests I know of to see to what extent a model has this property, though I don't know if this list is exhaustive and would be a little surprised if it were:
Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
How incoherent is it to imagine that the model is accurate but that a given variable could be different?
If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?
I think this is a really important idea that ties together a lot of different topics that appear here on Less Wrong. It also acts as a prerequisite frame for a bunch of ideas and tools that I'll want to talk about later.
I'll start by giving a bunch of examples. At the end I'll summarize and gesture toward where this is going as I see it.
Example: Gears in a box
Let's look at this collection of gears in an opaque box:
(Drawing courtesy of my colleague, Duncan Sabien.)
If we turn the lefthand gear counterclockwise, our model of the gears inside leaves it open whether the righthand gear turns clockwise or counterclockwise. The model we're able to build for this system of gears does poorly on all three tests I named earlier:
The model barely pays rent. If you speculate that the righthand gear turns one way and you discover it turns the other way, you can't really infer very much. All you can meaningfully infer is that if the system of gears is pretty simple (e.g., nothing that makes the righthand gear alternate as the lefthand gear rotates counterclockwise), then the direction the righthand gear turns determines whether the total number of gears is even or odd (see the sketch after this list).
The gear on the righthand side could just as well go either way. Your expectations aren't constrained.
Right now you don't know which way the righthand gear turns, and you can't derive it.
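To make that parity point concrete, here is a minimal sketch (in Python, not from the original post) of the inference under the "simple chain" assumption: external gears meshed in a single chain, where each mesh reverses the direction of rotation.

```python
# Minimal sketch of the parity inference for a simple chain of meshed gears:
# each mesh reverses the direction of rotation, so the last gear's direction
# depends only on whether the number of gears is even or odd.

def last_gear_direction(first_direction: str, num_gears: int) -> str:
    """Direction of the last gear, given the first gear's direction."""
    opposite = {"clockwise": "counterclockwise", "counterclockwise": "clockwise"}
    return first_direction if num_gears % 2 == 1 else opposite[first_direction]

# Running the inference backwards: seeing the last gear turn clockwise while
# the first turns counterclockwise only tells you the gear count is even.
for n in range(1, 6):
    print(n, last_gear_direction("counterclockwise", n))
```

That is all the model buys you: a single bit about the parity of the gear count, and nothing about how many gears there are or how they're arranged.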
Suppose that Joe peeks inside the box and tells you "Oh, the righthand gear will rotate clockwise." You imagine that Joe is more likely to say this if the righthand gear turns clockwise than if it doesn't, so this seems like relevant evidence that the righthand gear turns clockwise. This gets stronger the more people like Joe who look in the box and report the same thing.
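The update described here is an ordinary likelihood-ratio update. The post gives no numbers, so the prior and likelihoods below are made-up illustrations, and treating the reports as independent is an extra assumption.

```python
# Toy Bayesian update on "the righthand gear turns clockwise" given reports
# like Joe's. All probabilities are illustrative, not from the post.

def update(prior: float, p_report_if_true: float, p_report_if_false: float) -> float:
    """Posterior P(clockwise | someone reports clockwise), via Bayes' rule."""
    numerator = prior * p_report_if_true
    return numerator / (numerator + (1 - prior) * p_report_if_false)

p = 0.5  # before anyone peeks, either direction seems equally likely
for i in range(3):  # three people peek in the box and all report "clockwise"
    p = update(p, p_report_if_true=0.9, p_report_if_false=0.2)
    print(f"after report {i + 1}: P(clockwise) = {p:.3f}")
```

Each additional reporter like Joe pushes the posterior further toward "clockwise", which is the sense in which the evidence gets stronger the more people look in the box and report the same thing.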
Now let's peek inside the box:
…and now we have to wonder what's up with Joe.
The second test stands out for me especially strongly. There is no way that the obvious model of what's going on here could be right and Joe also be right. And in terms of the logic of this statement, it doesn't matter how many people agree with Joe: either all of them are wrong, or your model is wrong. This logic is immune to social pressure. It means that there's a chance that you can accumulate evidence about how well your map matches the territory here, and if that converges on your map being basically correct, then you are on firm epistemic footing to disregard the opinion of lots of other people. Gathering evidence about the map/territory correspondence has higher leverage for seeing the truth here.