The last era of human mistakes, published by owencb on July 25, 2024 on LessWrong.
Suppose we had to take moves in a high-stakes chess game, with thousands of lives at stake. We wouldn't just find a good chess player and ask them to play carefully. We would consult a computer. It would be deeply irresponsible to do otherwise. Computers are better than humans at chess, and more reliable.
We'd probably still keep some good chess players in the loop, to try to catch possible computer error. (Similarly, we still have pilots on planes, even though the autopilot is often safer.) But by consulting the computer we'd remove the opportunity for humans to make a certain type of high-stakes mistake.
A lot of the high-stakes decisions people make today don't look like chess, or flying a plane. They happen in domains where computers are much worse than humans.
But that's a contingent fact about our technology level. If we had sufficiently good AI systems, they could catch and prevent significant human errors in whichever domains we wanted them to.
In such a world, I think that they would come to be employed for just about all suitable and important decisions. If some actors didn't take advice from AI systems, I would expect them to lose power over time to actors who did. And if public institutions were making consequential decisions, I expect that it would (eventually) be seen as deeply irresponsible not to consult computers.
In this world, humans could still be responsible for taking decisions (with advice). And humans might keep closer to sole responsibility for some decisions. Perhaps deciding what, ultimately, is valued. And many decisions that are less consequential, but still potentially large at the scale of an individual's life (such as who to marry, where to live, or whether to have children), might be deliberately kept under human control[1].
Such a world might still collapse. It might face external challenges which were just too difficult. But it would not fail because of anything we would parse as foolish errors.
In many ways I'm not so interested in that era. It feels out of reach. Not that we won't get there, but that there's no prospect for us to help the people of that era to navigate it better.
My attention is drawn, instead, to the period before it. This is a time when AI will (I expect) be advancing rapidly. Important decisions may be made in a hurry. And while automation-of-advice will be on the up, it seems like wildly unprecedented situations will be among the hardest things to automate good advice for. We might think of it as the last era of consequential human mistakes[2].
Can we do anything to help people navigate those situations? I honestly don't know. It feels very difficult (given the difficulty, at our remove, of even identifying the challenges properly). But it doesn't feel obviously impossible.
What will this era look like?
Perhaps AI progress is blisteringly fast and we move from something like the world of today straight to a world where human mistakes don't matter. But I doubt it.
On my mainline picture of things, this era - the final one in which human incompetence (and hence human competence) really matters - might look something like this:
- Cognitive labour approaching the level of human thinking in many domains is widespread, and cheap
- People are starting to build elaborate ecosystems leveraging its cheapness …
  - … since if one of the basic inputs to the economy is changed, the optimal arrangement of things is probably quite different (cf. the ecosystem of things built on the internet);
  - … but that process hasn't reached maturity.
- There is widespread access to standard advice, which helps to avoid some foolish errors, though it applies only to "standard" situations, and not everyone seeks that advice
- In some domains, AI performance is significantly better than human performance