This is: Dialogue on What It Means For Something to Have A Function/Purpose, published by johnswentworth on July 16, 2024 on LessWrong.
Context for LW audience: Ramana, Steve and John regularly talk about stuff in the general cluster of agency, abstraction, optimization, compression, purpose, representation, etc. We decided to write down some of our discussion and post it here. This is a snapshot of us figuring stuff out together.
Hooks from Ramana:
Where does normativity come from?
Two senses of "why" (from Dennett): How come? vs What for? (The latter is more sophisticated, and less resilient. Does it supervene on the former?)
An optimisation process is something that produces/selects things according to some criterion. The products of an optimisation process will have some properties related to the optimisation criterion, depending on how good the process is at finding optimal products.
The products of an optimisation process may or may not themselves be optimisers (i.e. be a thing that runs an optimisation process itself), or may have goals themselves. But neither of these are necessary.
Things get interesting when some optimisation process (with a particular criterion) is producing products that are optimisers or have goals. Then we can start looking at what the relationship is between the goals of the products, or the optimisation criteria of the products, vs the optimisation criterion of the process that produced them.
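The distinction above can be made concrete with a toy sketch. This is purely illustrative (not anyone's proposed model): an outer optimisation process selects according to one criterion, and its products may be inert, or may themselves be optimisers running a search with a different inner criterion.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def optimise(propose, criterion, steps=200):
    """A bare-bones optimisation process: repeatedly propose candidates
    and keep whichever scores highest on `criterion`."""
    best = propose()
    for _ in range(steps):
        candidate = propose()
        if criterion(candidate) > criterion(best):
            best = candidate
    return best

# Outer process selecting an inert product: a number close to 10.
product = optimise(lambda: random.uniform(0, 20), lambda x: -abs(x - 10))

# The outer process could instead select over *optimisers*: here, over step
# sizes for an inner hill-climbing search. Note the inner criterion
# (maximise f) is distinct from the outer one (yield a good final answer).
def hill_climb(step):
    f = lambda x: -(x - 3) ** 2   # inner criterion: maximise f
    x = 0.0
    for _ in range(50):
        if f(x + step) > f(x):
            x += step
    return x

# Outer criterion: produce an inner optimiser whose result lands near 3.
best_step = optimise(lambda: random.uniform(0.01, 2.0),
                     lambda s: -abs(hill_climb(s) - 3))
```

How well the products reflect either criterion depends entirely on how good each search is, which is the point made above: products of optimisation inherit properties from the criterion only to the extent that the process actually finds optima.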
If you're modeling "having mental content" as having a Bayesian network, at some point I think you'll run into the question of where the (random) variables come from. I worry that the real-life process of developing mental content mixes up creating variables with updating beliefs a lot more than the Bayesian network model lets on.
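The worry can be made vivid with a toy agent (an illustrative sketch, not a claim about how minds work): Bayesian conditioning moves probability mass around within a fixed hypothesis space, while creating a new variable changes the space itself. These are different kinds of moves, and the standard Bayesian-network picture only formalises the first.

```python
from itertools import product

class FixedVariableAgent:
    """A tiny discrete 'Bayesian agent' whose hypotheses are joint settings
    of a fixed set of binary variables."""

    def __init__(self, variables):
        self.variables = list(variables)
        worlds = list(product([0, 1], repeat=len(self.variables)))
        self.belief = {w: 1 / len(worlds) for w in worlds}  # uniform prior

    def update(self, var, value):
        """Standard Bayesian conditioning on an observation var=value:
        redistributes mass over the existing worlds."""
        i = self.variables.index(var)
        total = sum(p for w, p in self.belief.items() if w[i] == value)
        self.belief = {w: (p / total if w[i] == value else 0.0)
                       for w, p in self.belief.items()}

    def add_variable(self, var):
        """A different kind of move entirely: this changes the hypothesis
        space itself rather than the distribution over it."""
        self.variables.append(var)
        self.belief = {w + (v,): p / 2
                       for w, p in self.belief.items() for v in (0, 1)}

agent = FixedVariableAgent(["rain"])
agent.update("rain", 1)          # conditioning: mass moves within the space
agent.add_variable("sprinkler")  # ontology change: the space itself grows
```

Real concept formation plausibly interleaves these two moves constantly, which is what the fixed-network model obscures.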
A central question regarding normativity for me is "Who/what is doing the enforcing?", "What kind of work goes into enforcing?"
Also to clarify, by normativity I was trying to get at the relationship between some content and the thing it represents. Like, there's a sense in which the content is "supposed to" track, or be like, the thing it represents. There's a normative standard on the content. It can be wrong, it can be corrected, etc. It can't just be. If it were just being, which is how things presumably start out, it wouldn't be representing.
Intrinsic Purpose vs Purpose Grounded in Evolution
Steve
As you know, I totally agree that mental content is normative - this was a hard lesson for philosophers to swallow, or at least for the ones who tried to "naturalize" mental content (make it a physical fact) by turning to causal correlations. Causal correlation was a natural place to start, but the problem with it is that, intuitively, mental content can misrepresent - my brain can represent Santa Claus even though (sorry) it can't have any causal relation with Santa.
(I don't mean my brain can represent ideas or concepts or stories or pictures of Santa - I mean it can represent Santa.)
Ramana
Misrepresentation implies normativity, yep.
In the spirit of recovering a naturalisation project, my question is: whence normativity? How does it come about? How did it evolve?
How do you get some proto-normativity out of a purely causal picture that's close to being contentful?
Steve
So one standard story here about mental representation is teleosemantics: roughly, something in my brain can represent something in the world by having the function to track that thing. It may be a "fact of nature" that the heart is supposed to pump blood, even though in fact hearts can fail to pump blood.
This is already contentious, that it's a fact the heart is supposed to pump blood - but if so, it may similarly be a fact of nature that some brain state is supposed to track something in the world, even when it fails to. So teleology introduces the possibility of misrepresentation.