Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Cached Selves, published by AnnaSalamon on LessWrong.
by Anna Salamon and Steve Rayhawk (joint authorship)
Related to: Beware identity
Update, 2021: I believe a large majority of the priming studies failed replication, though I haven't looked into it in depth. I still personally do a great many of the "possible strategies" listed at the bottom, and they subjectively seem useful to me; but if you end up believing that, it should not be on the basis of the claimed studies.
A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."
Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do in the absence of obvious outside pressure, can hijack your self-concept for the medium- to long-term future.
To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing. So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards. Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.
For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself. If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends. If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.
All familiar phenomena, right? You probably already discount other people's views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas. But if you're like me, you might not have looked carefully into the mechanisms behind these phenomena. And so you might not realize how much arbitrary influence consistency and commitment is having on your own beliefs, or how you can reduce that influence. (Commitment and consistency isn't the only mechanism behind the above phenomena; but it is a mechanism, and it's one that's more likely to persist even after you decide to value truth.)
Consider the following research.
In the classic 1959 study by Festinger and Carlsmith, test subjects were paid to tell others that a tedious experiment had been interesting. Those who were paid $20 to tell the lie continued to believe the experiment boring; those paid a mere $1 to tell the lie were liable later to report the experiment interesting. The theory is that the test subjects remembered calling the experiment interesting, and either:
Honestly figured they must have found the experiment interesting -- why else would they have said so for only $1? (This interpretation is called self-perception theory.), or
Didn’t want to think they were the type to lie for just $1, and so deceived themselves into thinking their lie had been true. (This interpretation is one strand within cognitive dissonance theory.)
In a follow-up, Jonathan Freedman used threats to convince 7- to 9-year-old boys not to play with an attractive, battery-operated robot. He also told each boy that such play was "wrong". Some boys were given big threats, or were kept carefully su...