Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Originality vs. Correctness, published by alkjash on December 6, 2023 on LessWrong.
I talk with Alkjash about valuing original thinking vs. getting things right. We discuss a few main threads:
What are the benefits of epistemic specialization? What about generalism?
How much of the action in an actual human mind is in tweaking your distribution over hypotheses, and how much is in making sure you're considering good hypotheses?
If your hope is to slot into an epistemic process that figures out what's correct in part through you coming up with novel ideas, will processes that are out to get you make you waste your life?
Intellectual generals vs supersoldiers
Over time I've noticed that I care less and less about epistemic rationality - i.e., being correct - and more and more about being original. Of course the final goal is to produce thoughts that are original AND correct, but I find the originality condition more stringent and worth optimizing for. This might be a feature of working in mathematics, where verifying correctness is cheap and reliable.
Huh, that feels like an interesting take. I don't have a super strong take on originality vs. correctness, but I do think I live my life with more of a "if you don't understand the big picture and your environment well, you'll get got, and also the most important things are 10,000x more important than the median important thing, so you really need to be able to notice those opportunities, which requires an accurate map of how things work in aggregate" mindset. Which, like, isn't in direct conflict with what you're saying, though maybe it is.
I think I have two big sets of considerations that make me hesitant to optimize for originality over correctness (and also a bunch pointing the other way, but I'll argue for one side here first):
The world itself is really heavy-tailed, and having a good understanding of how most of the world works, while sacrificing deeper understanding of how a narrower slice of the world works, seems worth it: behind every part of reality that you haven't considered, a crucial consideration might lurk that completely shifts what you want to be doing with your life.
The obvious example from an LW perspective is encountering the arguments for AI Risk vs. not and some related considerations around "living in the most important century". But also broader things like encountering the tools of proof and empirical science and learning how to program.
The world is adversarial in the sense that if you are smart and competent, there are large numbers of people and institutions optimizing to get you to do things that are advantageous to them, ignoring your personal interests.
Most smart people "get got" and end up orienting their lives around some random thing they don't even care about that much, because their OODA loop has been captured by some social environment that makes it hard for them to understand what is going on or learn much about what they actually want to do with their lives. I think navigating an adversarial environment like this requires situational awareness and broad maps of the world, and prioritizing originality over correctness IMO makes one substantially more susceptible to a large set of attacks.
Some quick gut reactions that I'll reflect/expand on:
I think the world is not as heavy-tailed for most human utility functions as you claim. Revealed preferences suggest that saving the world is probably within an OOM as good (to me and most other people) as living ten years longer, or something like this. Same with the difference between $1m and $1b.
One of the core heuristics I have is that your perspective (which is one that seems predominant on LW) is one of very low trust in "the intellectual community," leading to every individual doing all the computations from the ground up for themselves. It feels to...