Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Woods' new preprint on object permanence, published by Steven Byrnes on March 8, 2024 on LessWrong.
Quick poorly-researched post, probably only of interest to neuroscientists.
The experiment
Justin Wood at Indiana University has, over many years and with great effort, developed a system for raising baby chicks such that all the light hitting their retinas is experimentally controlled right from when they're embryos - the chicks are incubated and hatched in darkness, then moved to a room with video screens, head-tracking, and so on. For a much better description of how this works and how he got into this line of work, check out his recent appearance on the Brain Inspired podcast.
He and collaborators posted a new paper last week:
"Object permanence in newborn chicks is robust against opposing evidence" by Wood, Ullman, Wood, Spelke, and Wood. I just read it today. It's really cool!
In their paper, they are using the system above to study "object permanence", the idea that things don't disappear when they go out of sight behind an occluder. The headline result is that baby chicks continue to act as if object permanence is true, even if they have seen thousands of examples where it is false and zero where it is true over the course of their short lives.
They describe two main experiments. Experiment 1 is the warmup, and Experiment 2 is the headline result I just mentioned.
In Experiment 1, the chicks are raised in a VR visual world where they never see anything occlude anything, ever. They only see one virtual object move around an otherwise-empty virtual room. The chicks of course imprint on the object. This phase lasts 4 days. Then we move into the test phase.
The test initializes when the chick moves towards the virtual object, which starts in the center of the room. Two virtual opaque screens appear on the sides of the room.
In the easier variant of the test, the object moves behind one of the screens, and then nothing else happens for a few minutes. The experimenters measure which screen the chick looks at more. The result: all 8 chicks looked at the screen the virtual object would be behind more than at the other screen, at above-chance rates, at least for the first 30 seconds or so after the object disappeared from view.
In the harder variant, one of the screens moves to the object, occludes it, then moves back to its starting point. Again, the experimenters measure which screen the chick looks at more. Here, 7 of the 8 chicks looked above-chance toward the screen the virtual object would be behind, at least for 15ish seconds.
Moving on to Experiment 2, the test phase was the same as the easier variant above - the object moved behind one of the two opaque virtual screens on the sides.
But the preceding 4-day training phase was different for these chicks: instead of never seeing any occlusion events, they witnessed thousands of them. The object would go behind a virtual opaque screen, and then, after a variable amount of time (0-20 seconds), the screens would lower to reveal that the object either was where we might expect it (for the "natural world" chicks) or had magically teleported to behind the "wrong" screen (for the "unnatural world" chicks).
(There was no randomization - each chick lived its whole training-phase in either the natural or unnatural world.)
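To make the two training regimes concrete, here is a minimal sketch of a single training-phase occlusion event as I understand it from the description above. This is illustrative pseudocode in Python, not the paper's actual experimental software; the function and field names are my own invention.

```python
import random

def occlusion_trial(world, rng):
    """One hypothetical training-phase occlusion event.

    The object hides behind one of two screens; after a variable delay
    (0-20 s, per the paper's description), the screens lower and the object
    is revealed either where it hid ("natural" world) or behind the other
    screen ("unnatural" world).
    """
    hide = rng.choice(["left", "right"])
    delay_s = rng.uniform(0, 20)  # variable occlusion duration
    if world == "natural":
        reveal = hide  # object permanence holds
    elif world == "unnatural":
        reveal = "right" if hide == "left" else "left"  # object "teleports"
    else:
        raise ValueError(f"unknown world: {world!r}")
    return {"hide": hide, "delay_s": delay_s, "reveal": reveal}

rng = random.Random(0)
natural = [occlusion_trial("natural", rng) for _ in range(1000)]
unnatural = [occlusion_trial("unnatural", rng) for _ in range(1000)]
```

The key point of the design is visible in the two branches: a natural-world chick sees thousands of reveals consistent with object permanence, while an unnatural-world chick sees thousands of reveals that contradict it, and (per the previous paragraph) each chick lives entirely in one world or the other.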
Remarkably, all four chicks in the "natural world" and all four chicks in the "unnatural world" spent above-chance time looking at the screen the object had disappeared behind rather than the other one, at least for the first 15-30 seconds. In fact, there was no difference between the natural-world and unnatural-world chicks!
How do we make sense of these results?
It's always worth asking: maybe the experiment is garbage? I'm far from an expert, but the methodol...