Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: How To Write Quickly While Maintaining Epistemic Rigor, published by johnswentworth on LessWrong.
There’s this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they’re not really sure if it’s true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole. Eventually, they give up and never actually publish the piece.
This post is about how to avoid that, without sacrificing good epistemics.
There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it.
I claim that this promotes better epistemics overall than always researching everything in depth.
Why?
It’s About The Process, Not The Conclusion
Suppose I have a box, and I want to guess whether there’s a cat in it. I do some tests - maybe shake the box and see if it meows, or look for air holes. I write down my observations and models, record my thinking, and on the bottom line of the paper I write “there is a cat in this box”.
Now, it could be that my reasoning was completely flawed, but I happen to get lucky and there is in fact a cat in the box. That’s not really what I’m aiming for; luck isn’t reproducible. I want my process to robustly produce correct predictions. So when I write up a LessWrong post predicting that there is a cat in the box, I don’t just want to give my bottom-line conclusion with some strong-sounding argument. As much as possible, I want to show the actual process by which I reached that conclusion. If my process is good, this will better enable others to copy the best parts of it. If my process is bad, I can get feedback on it directly.
Correctly Conveying Uncertainty
Another angle: describing my own process is a particularly good way to accurately communicate my actual uncertainty.
An example: a few years back, I wondered if there were limiting factors on the expansion of premodern empires. I looked up the peak size of various empires, and found that the big ones mostly peaked at around the same size: ~60-80M people. Then, I wondered when the US had hit that size, and whether anything remarkable had happened around then that might suggest why earlier empires broke down. Turns out, the US crossed the 60M threshold in the 1890 census. If you know a little bit about the history of computers, that may ring a bell: when the time came for the 1890 census, it was estimated that tabulating the data would be so much work that it wouldn't even be done before the next census in 1900. It had to be automated. That sure does suggest a potential limiting factor for premodern empires: managing more than ~60-80M people runs into computational constraints.
Now, let’s zoom out. How much confidence should I put in this theory? Obviously not very much - we apparently have enough evidence to distinguish the hypothesis from entropy, but not much more.
On the other hand, what if I had started with the hypothesis that computational constraints limited premodern empires? What if, before looking at the data, I had hypothesized that modern nations had to start automating bureaucratic functions precisely when they hit the same size at which premodern nations collapsed? Then this data would be quite an impressive piece of confirmation! It’s a pretty specific prediction, and the data fits it surprisingly well. But this only works if I already had enough evidence to put forward the hypothesis, before seeing the data.
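This next bit isn't in the post itself, but the point can be made concrete in Bayesian terms: the strength of the evidence (the likelihood ratio) depends on the process that produced the observation, not just on the observation. Here's a minimal sketch in Python, with every number made up purely for illustration:

```python
# Made-up numbers for illustration only; nothing here is from the post.
# H: computational constraints cap premodern empires at ~60-80M people.
# data: the US had to automate its census right around that same size.

p_data_given_h = 0.5  # H predicts the coincidence fairly strongly

# Process A: the specific prediction was made *before* looking at the data.
# Under not-H, hitting this one particular coincidence would be unlikely.
p_data_given_not_h = 0.01
bayes_factor_preregistered = p_data_given_h / p_data_given_not_h  # 50.0

# Process B: the hypothesis was fished out of the data afterwards. Under
# not-H, *some* equally striking coincidence would probably have turned up
# anyway, so the observation is far less surprising.
p_some_coincidence_given_not_h = 0.3
bayes_factor_post_hoc = p_data_given_h / p_some_coincidence_given_not_h  # ~1.7

print(bayes_factor_preregistered, bayes_factor_post_hoc)
```

Same observation, wildly different evidential weight - and the only way a reader can tell which case they're in is if the write-up describes the process.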
Point is: the amount of uncertainty I should assign depends on the details of my process.