Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Principles For Product Liability (With Application To AI), published by johnswentworth on December 10, 2023 on LessWrong.
There were several responses to What I Would Do If I Were Working On AI Governance which focused on the liability section, and had similar criticisms. In particular, I'll focus on this snippet as a good representative:
Making cars (or ladders or knives or printing presses or...) "robust to misuse", as you put it, is not the manufacturer's job.
The commenter calls manufacturer liability for misuse "an absurd overreach which ignores people's agency in using the products they purchase". Years ago I would have agreed with that; it's an intuitive and natural view, especially for those of us with libertarian tendencies. But today I disagree, and claim that that's basically not the right way to think about product liability, in general.
With that motivation in mind: this post lays out some general principles for thinking about product liability, followed by their application to AI.
Principle 1: "User Errors" Are Often Design Problems
There's this story about an airplane (I think the B-52 originally?) where the levers for the flaps and landing gear were identical and right next to each other. Pilots kept coming in to land, and accidentally retracting the landing gear. Then everyone would be pissed at the pilot for wrecking the bottom of the plane, as it dragged along the runway at speed.
The usual Aesop of the story is that this was a design problem with the plane more than a mistake on the pilots' part; the problem was fixed by putting a little rubber wheel on the landing gear lever. If we put two identical levers right next to each other, it's basically inevitable that mistakes will be made; that's bad interface design.
More generally: whenever a product will be used by lots of people under lots of conditions, there is an approximately-100% chance that the product will frequently be used by people who are not paying attention, not at their best, and (in many cases) just not very smart to begin with. The only way to prevent foolish mistakes from sometimes causing problems is to design the product to be robust to those mistakes - e.g. adding a little rubber wheel to the lever which retracts the landing gear, so it's robust to pilots who aren't paying attention to that specific thing while landing a plane. Putting the responsibility on users to avoid errors will always, predictably, result in errors.
The same also applies to intentional misuse: if a product is widely available, there is an approximately-100% chance that it will be intentionally misused sometimes. Putting the responsibility on users will always, predictably, result in users sometimes doing Bad Things with the product.
However, that does not mean that it's always worthwhile to prevent problems. Which brings us to the next principle.
Principle 2: Liability Is Not A Ban
A toy example: a railroad runs past a farmer's field. Our toy example is in ye olden days of steam trains, so the train tends to belch out smoke and sparks on the way by. That creates a big problem for everyone in the area if and when the farmer's crops catch fire. Nobody wants a giant fire. (I think I got this example from David Friedman's book Law's Order, which I definitely recommend.)
Now, one way a legal system could handle the situation would be to ban the trains. One big problem with that approach is: maybe it's actually worth the trade-off to have crop fires sometimes. Trains sure do generate a crapton of economic value. If the rate of fires isn't too high, it may just be worth it to eat the cost, and a ban would prevent that.
Liability sidesteps that failure-mode. If the railroad is held liable for the fires, it may still choose to eat that cost. Probably the railroad will end up passing (at least some of) that cost through...