Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Complex systems research as a field (and its relevance to AI Alignment), published by Nora Ammann on December 2, 2023 on LessWrong.
I have this high prior that complex-systems-type thinking is usually a trap. I've had a few conversations about this, but still feel kind of confused, and it seems good to have a better written record of my thoughts and yours here.
At a high level, here are some thoughts that come to mind for me when I think about complex systems stuff, especially in the context of AI Alignment:
A few times I ended up spending a lot of time trying to understand what some complex systems people are trying to say, only to end up thinking they weren't really saying anything. I think I got this feeling from engaging a bunch with the Santa Fe stuff and Simon DeDeo's work (like this paper and this paper).
A part of my model of how groups of people make intellectual progress is that one of the core ingredients is having a shared language and methodology that allows something like "the collective conversation" to make incremental steps forward. Like, you have a concept of experiment and statistical analysis that settles an empirical issue, or you have a concept of proof that settles an issue of logical uncertainty, and in some sense a lot of interdisciplinary work is premised on the absence of a shared methodology and language.
While I feel more confused about this recently, I still have a pretty strong prior towards something like g or the positive manifold: there are methodological foundations that are important for people to be able to talk to each other, but most of the variance in people's ability to contribute to a problem is grounded in how generally smart, competent, and knowledgeable they are, and expertise is usually overvalued (for example, it's not that rare for a researcher to win a Nobel prize in two fields).
A lot of interdisciplinary work (not necessarily complex systems work, but some of the generator that I feel like I see behind PIBBSS) seems to put a greater value on intellectual diversity here than I would.
Ok, so starting with one high-level point: I'm definitely not willing to die on the hill of 'complex systems research' as a scientific field as such. I agree that there is a bunch of bad or kinda hollow work happening under the label. (I think the first DeDeo paper you link is a decent example of this: feels mostly like having some cool methodology and applying it to some random phenomena without really an exciting bigger vision of a deeper thing to be understood, etc.)
That said, there are a bunch of things that one could describe as fitting under the complex systems label that I feel positive about, let's try to name a few:
I do think, contra your second point, that complex systems research (at least its better examples) has enough shared methodology to benefit from the same epistemic error-correction mechanisms you described. Historically it really comes out of physics, network science, dynamical systems, etc. The main move was to say that, rather than indexing the boundaries of a field on the natural phenomena or domain it studies (e.g. biology, chemistry, economics), you instead index it on a set of methods of inquiry, with the premise that you can usefully apply these methods across different types of systems/domains and gain valuable understanding of underlying principles that govern these phenomena across systems (e.g.
I think a (typically) complex systems angle is better at accounting for environment-agent interactions. There is a failure mode of naive reductionism that starts by fixing the environment so as to hone in on which system-internal differences produce which differences in the phenomena, and then concludes that all of what drives the phenomena is system-internal, while forgetting tha...