Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Assumed Intent Bias, published by silentbob on November 6, 2023 on LessWrong.
Summary: when thinking about the behavior of others, people seem to have a tendency to assume clear purpose and intent behind it. In this post I argue that this assumption of intent is quite often incorrect, and that a lot of behavior exists in a gray area where it is easily influenced by subconscious factors.
This consideration is not new at all and relates to many widely known effects such as the typical mind fallacy, the false consensus effect, black-and-white thinking, and the concept of trivial inconveniences. It still seems valuable to me to clarify this particular bias with some graphs, and to have it available as a post one can link to.
Note that "assumed intent bias" is not an established term; as far as I know, there is no commonly used name for the bias I'm referring to.
The Assumed Intent Bias
Consider three scenarios:
When I quit my previous job, I was allowed to buy my work laptop from the company for a low price, and I did so. In principle the company's admins should have made sure to wipe my laptop beforehand, but they left that to me, apparently reasoning that if I had had any intent whatsoever to do something shady with the company's data, I could easily have made a copy beforehand anyway. They further assumed that anyone without a clear intention of stealing the company's data would surely do the right thing and wipe the device themselves.
At a different job, we continuously A/B-tested changes to our software. One development team decided to change a popular feature so that using it required a double click instead of a single mouse click. They reasoned that this shouldn't affect feature usage, because anyone who wants to use the feature can still easily do so, and nobody in their right mind would say "I will use this feature if I have to click once, but two clicks are too much for me!" (The A/B test data later showed that usage of that feature had dropped quite significantly due to that change.)
In debates about gun control, gun enthusiasts sometimes make an argument roughly like this: gun control doesn't increase safety, because potential murderers who want to shoot somebody will find a way to get their hands on a gun anyway, whether or not guns are easily and legally available.
[1]
These three scenarios all have a similar shape: some person or group (the admins; the development team; gun enthusiasts) makes a judgment about the potential behavior (stealing sensitive company data; using a feature; shooting someone) of somebody else (departing employees; users; potential murderers), and assumes that the behavior in question happens or doesn't happen with full intentionality.
According to this view, if you plotted the number of people that have a particular level of intent with regards to some particular action, it may look somewhat like this:
This graph would represent a situation where practically every person either has a strong intention to act in a particular way (the peak on the right) or not to act in that way (the peak on the left).
And indeed, in such a world, relatively weak interventions such as "triggering a feature on double click instead of single click" or "making it more difficult to buy a gun" may not end up being effective: while such interventions would move the action threshold slightly to the right or left, this wouldn't actually change people's behavior, as everyone stays on the same side of the threshold. So everybody would still act the same way they would otherwise.
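The threshold argument can be made concrete with a small simulation. The sketch below is purely illustrative, with made-up numbers not taken from the post: it compares a bimodal population (everyone firmly decided one way or the other) with a "gray area" population (intent spread smoothly across the middle), and checks how many people change behavior when a trivial inconvenience nudges the action threshold slightly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical "intent" scores in [0, 1]; a person acts if their
# intent exceeds the action threshold. All parameters are invented
# for illustration.

# Bimodal world: nearly everyone is firmly decided.
bimodal = np.concatenate([
    rng.normal(0.1, 0.03, n // 2),   # strong intent NOT to act
    rng.normal(0.9, 0.03, n // 2),   # strong intent to act
])

# Gray-area world: intent varies smoothly across the population.
gray = rng.normal(0.5, 0.2, n)

def acting_fraction(intent, threshold):
    """Fraction of people whose intent exceeds the action threshold."""
    return np.mean(intent > threshold)

# A weak intervention nudges the threshold from 0.50 to 0.55.
for name, pop in [("bimodal", bimodal), ("gray area", gray)]:
    before = acting_fraction(pop, 0.50)
    after = acting_fraction(pop, 0.55)
    print(f"{name}: {before:.1%} act before, {after:.1%} act after the nudge")
```

In the bimodal world the nudge changes essentially nothing, because nobody's intent sits near the threshold; in the gray-area world the same nudge shifts the behavior of a meaningful fraction of the population, which is the pattern the A/B test in the second scenario exhibited.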
However, I think that in many, if not most, real-life scenarios, the graph actually looks more like this:
Or even this:
In these cases, only a relatively small number of people have a clear and strong intention with regards to the behavior, and a lot of people are...