Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: The Coordination Frontier: Sequence Intro, published by Raemon on LessWrong.
Sometimes, groups of humans disagree about what to do.
We also sometimes disagree about how to decide what to do.
Sometimes we even disagree about how to decide how to decide.
Among the philosophically unsophisticated, there is a sad, frustrating way this can play out: people resolve "how to decide" with yelling, or bloodshed, or, if you’re lucky, charismatic leaders assembling coalitions. This can leave lots of value on the table, or actively destroy value.
Among the extremely philosophically sophisticated, there are different sad, frustrating ways this can play out: people have very well-thought-out principles informing their sense of "how to coordinate well." But their principles are not the same, and they don’t have good meta-principles on when or how to compromise. They spend hours (or years) arguing about how to decide. Or they burn a lot of energy in conflict. Or they end up walking away from what could have been a good deal, if only people were a bit better at communicating.
I’ve gone through multiple iterations on this sequence intro, some optimistic, some pessimistic.
Optimistic takes include: “I think rationalists are in a rare position to actually figure out good coordination meta-principles, because we are smart, and care, and are in positions where good coordination actually matters. This is exciting, because coordination is basically the most important thing [citation needed]. Anyone with a shot at pushing humanity’s coordination theory and capacity forward should do that.”
Pessimistic takes include: “Geez louise, rationalists are all philosophical contrarians with weird, extreme, self-architected psychology who are a pain to work with”, as well as “Actually, the most important facets of coordination to improve are maybe more like ‘slightly better markets’ than like ‘figuring out how to help oddly specific rationalists get along’.”
I started writing this post several years ago because I was annoyed at, like, 6 particular people, many of them smarter and more competent than me, many of whom were explicitly interested in coordination theory, who nonetheless seemed to despair at coordinating with rationalists-in-particular (including each other). The post grew into a sequence. The sequence grew into a sprawling research project. My goal was “provide a good foundation to get rationalists through the Valley of Bad Coordination”. I feel like we’re so close to being able to punch above our weight at coordination and general competence.
I think my actual motivations were sort of unhealthy. “If only I could think better and write really good blogposts, these particular people I’m frustrated with could get along.”
I’m currently in a bit of a pessimistic swing, and do not expect that writing sufficiently good blogposts will fix the things I was originally frustrated by. The people in question (probably) have decent reasons for having different coordination strategies.
Nonetheless, I think “mild irritation at something not quite working” is pretty good as motivations go. I’ve spent the past few years trying to reconcile the weirdly-specific APIs of different rationalists who each were trying to solve pretty real problems, and who had developed rich, complex worldviews along the way that point towards something important. I feel like I can almost taste the center of some deeper set of principles that unite them.
Since getting invested in this, I’ve come to suspect “If you want to succeed at coordination, ‘incremental improvements on things like markets’ is more promising than ‘reconcile weird rationalist APIs’.” But frustration with weird rationalist APIs was the thing that got me on this path, and I think I’m just going to see that through to the end.
So.
Here is this sequence, and he...