Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: Simulacrum 3 As Stag-Hunt Strategy, published by johnswentworth on the LessWrong.
Reminder of the rules of Stag Hunt:
Each player chooses to hunt either Rabbit or Stag
Players who choose Rabbit receive a small reward regardless of what everyone else chooses
Players who choose Stag receive a large reward if-and-only-if everyone else chooses Stag. If even a single player chooses Rabbit, then all the Stag-hunters receive zero reward.
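The rules above can be sketched as a payoff function. This is a minimal illustration, not from the post; the specific reward values (SMALL = 1, LARGE = 10) are arbitrary assumptions chosen only to make "small" and "large" concrete.

```python
# Illustrative Stag Hunt payoffs. SMALL and LARGE are assumed values,
# not given in the post -- only their ordering matters.
SMALL, LARGE = 1, 10

def payoff(my_choice, others):
    """One player's reward, given their choice and the other players' choices.

    my_choice: "stag" or "rabbit"
    others: list of the other players' choices
    """
    if my_choice == "rabbit":
        return SMALL  # small reward regardless of what everyone else does
    # Stag pays off if-and-only-if every other player also chooses Stag
    return LARGE if all(c == "stag" for c in others) else 0

payoff("stag", ["stag", "stag"])    # everyone hunts Stag: large reward
payoff("stag", ["stag", "rabbit"])  # one defector: Stag-hunters get zero
payoff("rabbit", ["stag", "stag"])  # Rabbit is safe no matter what
```

Note the asymmetry this creates: a single Rabbit-chooser zeroes out everyone else's Stag payoff, which is why the safe choice is the default Schelling point.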
From the outside, the obvious choice is for everyone to hunt Stag. But in real-world situations, there’s lots of noise and uncertainty, and not everyone sees the game the same way, so the Schelling choice is Rabbit.
How does one make a Stag hunt happen, rather than a Rabbit hunt, even though the Schelling choice is Rabbit?
If one were utterly unscrupulous, one strategy would be to try to trick everyone into thinking that Stag is the obvious right choice, regardless of what everyone else is doing.
Now, tricking people is usually a risky strategy at best - it’s not something we can expect to work reliably, especially if we need to trick everyone. But this is an unusual case: we’re tricking people in a way which (we expect) will benefit them. Therefore, they have an incentive to play along.
So: we make our case for Stag, try to convince people it’s the obviously-correct choice no matter what. And... they’re not fooled. But they all pretend to be fooled. And they all look around at each other, see everyone else also pretending to be fooled, and deduce that everyone else will therefore choose Stag. And if everyone else is choosing Stag... well then, Stag actually is the obvious choice. Just like that, Stag becomes the new Schelling point.
We can even take it a step further.
If nobody actually needs to be convinced that Stag is the best choice regardless, then we don’t actually need to try to trick them. We can just pretend to try to trick them. Pretend to pretend that Stag is the best choice regardless. That will give everyone else the opportunity to pretend to be fooled by this utterly transparent ploy, and once again we’re off to hunt Stag.
This is simulacrum 3: we’re not telling the truth about reality (simulacrum 1), or pretending that reality is some other way in order to manipulate people (simulacrum 2). We’re pretending to pretend that reality is some other way, so that everyone else can play along.
In The Wild
We have a model for how-to-win-at-Stag-Hunt. If it actually works, we’d expect to find it in the wild in places where economic selection pressure favors groups which can hunt Stag. More precisely: we want to look for places where the payout increases faster-than-linearly with the number of people buying in. Economics jargon: we’re looking for increasing marginal returns.
Telecoms, for instance, are a textbook example. One telecom network connecting fifty cities is far more valuable than fifty networks which each only work within one city. In terms of marginal returns: the fifty-first city connected to a network contributes more value than the first, since anyone in the first fifty cities can reach a person in the fifty-first. The bigger the network, the more valuable it is to expand it.
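The increasing-marginal-returns claim can be made concrete with a toy model (my illustration, not the post's): value a network by the number of city-to-city connections it enables, n(n-1)/2 for n cities. Under that assumption, the marginal value of the n-th city is n-1, which grows with network size.

```python
# Toy model (assumed, not from the post): a network's value is the number
# of distinct city pairs it connects.
def network_value(n_cities):
    return n_cities * (n_cities - 1) // 2  # one potential link per pair

def marginal_value(n_cities):
    # Value added by connecting the n-th city: it gains a link to each
    # of the n-1 cities already on the network.
    return network_value(n_cities) - network_value(n_cities - 1)

marginal_value(2)   # the second city adds just one connection
marginal_value(51)  # the fifty-first city adds fifty connections
network_value(50)   # one 50-city network: 1225 connections
50 * network_value(1)  # fifty one-city networks: 0 connections
```

This is the quantitative shape behind the textbook claim: one network of fifty cities beats fifty isolated networks, and each new city is worth more than the last.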
From an investor’s standpoint, this means that a telecom investment is likely to have better returns if more people invest in it. It’s like a Stag Hunt for investors: each investor wants to invest if-and-only-if enough other investors also invest. (Though note that it’s more robust than a true Stag Hunt - we don’t need literally every investor to invest in order to get a big payoff.)
Which brings us to this graph, from T-Mobile’s 2016 annual report (second page):
Fun fact: that is not a graph of those numbers. Some clever person took the numbers, and stuck them as labels on a completely unrelated graph. Those numbers are actually near-perfectly linear, ...