Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Explaining Impact Markets, published by Saul Munn on January 31, 2024 on LessWrong.
Let's say you're a billionaire. You want to have a flibbleflop, so you post a prize:
Make a working flibbleflop - $1 billion.
There begins a global effort to build working flibbleflops, and you see some teams of brilliant people starting to work on flibbleflop engineering. But it doesn't take long for you to notice that the teams keep running into one specific problem: they need money to get started (to buy flobble juice, hire deeblers, etc.) - money they don't have.
So, the people who want to build the flibbleflop go and pitch to investors. They offer investors a chunk of their prize money if they end up winning, in exchange for cold hard cash right now to get started building. If the investors think that the team is likely to build a successful flibbleflop and win the billion dollar prize, they invest. If not, not.
If you squint, you could replace "flibbleflop" with highly capable LLMs, quantum computers, or any number of cool and potentially lucrative technologies. But if you stop squinting, and instead add the adjective "altruistic" before "billionaire," you could replace "flibbleflop" with "malaria vaccine." Let's see what happens:
Make a working malaria vaccine - $1 billion.
There begins a global effort to build working malaria vaccines, and you see some teams of brilliant people starting to work on vaccine engineering. But it doesn't take long for you to notice that the teams keep running into one specific problem: they need money to get started (to buy lab equipment, hire researchers, etc.) - money they don't have.
So, what should they do?
Obviously, the people who want to build the vaccine should go and pitch to investors. They should offer investors a chunk of their prize money if they end up winning, in exchange for cold hard cash right now to get started building. If the investors think that the team is likely to build a successful malaria vaccine and win the billion dollar prize, they should invest. If not, not.
The prize part of this is how a lot of philanthropy is already done: an altruistic billionaire notices a problem and puts up a prize for the solution. But the investing part is pretty unusual, and doesn't happen too often.
Why is this whole setup good? Why would you want the investing thing on the side? Mostly, because it resolves the problem that some teams will be wonderfully capable but horribly underfunded. In exchange for a chunk of their (possible) future winnings, they get to be both wonderfully capable and wonderfully funded. This is how it already works for AI or quantum computing or any other potentially lucrative tech that has high barriers to entry; we can solve the same problem in the same way for the things that altruistic billionaires care about, too.
But backing up a bit, why would an altruistic billionaire want to do this as a prize in the first place? Why not use grants, like how most philanthropy works?
Prizes reward results, not promises. With a prize, you know for a fact that you're getting what you paid for; when you hand out grants, you get a boatload of promises and sometimes results.
The investors care a lot about not losing their money. They're also very good at picking which teams are going to win - after all, investors only get paid for picking a team if that team ends up winning.
The issue of figuring out which people are best to work on a problem is totally different from the issue of figuring out which problems to solve. Using a prize system means that you, as a lazy-but-altruistic billionaire, don't have to solve both issues - just the second one. Investors do the work of figuring out who the good teams are; you just need to figure out what problems they should solve.
If you do this often enough - set up prizes for solutions to problems you care about,...