The driverless-car liability question gets ahead of itself

Who will pay damages when a driverless car gets into an accident?

Megan McArdle has taken on the question of how liability might work in the bold new world of driverless cars. Here’s her framing scenario:

Imagine a not-implausible situation: you are driving down a brisk road at 30 mph with a car heading towards you in the other lane at approximately the same speed. A large ball rolls out into the street, too close for you to brake. You, the human, know that the ball is likely to be followed, in seconds, by a small child; you slam on the brakes (perhaps giving yourself whiplash) or swerve, at considerable risk of hitting the other car.

What should a self-driving car do? More to the point, if you hit the kid, or the other car, who gets sued?

The lawyer could go after you, with your piddling $250,000 liability policy and approximately 83 cents worth of equity in your home. Or he could go after the automaker, which has billions in cash, and the ultimate responsibility for whatever decision the car made. What do you think is going to happen?

The implication is that the problem of concentrated liability might make automakers reluctant to take the risk of introducing driverless cars.

I think McArdle is taking a bit too much of a leap here. Automakers are accustomed to having the deepest pockets within view of any accident scene. The liability questions raised by this new kind of machine intelligence will have to be worked out, perhaps by having drivers take on liability for their cars’ performance through their insurance companies, with insurers in turn certifying the types of technology they’re willing to cover. By the time driverless cars become a reality, they’ll probably be substantially safer than human drivers, so insurance companies might well accept that tradeoff, and everyone will benefit.

(Incidentally, I’m told by people who have taken rides in Google’s car that the most unnerving part of it is that it drives like your driver’s ed teacher told you to — at exactly the speed limit, with full stops at stop signs and conservative behavior at yellow lights.)

But we’ll probably get the basic liability testing out of the way before cars like Google’s hit the road in large numbers. First will come a wave of machine-vision-based driver-assist technologies like adaptive cruise control on highways (radar-based versions of which have been available for years). These features present liability issues similar to those of a fully driverless car (can an automaker’s driving judgment be faulted in an accident?) but in a somewhat less fraught context.

The interesting question to me is how the legal system might handle liability for software that drives a car better than any human possibly could. In the kind of scenario McArdle outlines, a human driver would take intuitive action to avoid an accident, action that will almost certainly be at least somewhat sub-optimal. Sophisticated driving software could do a much better job of taking the entire context of the situation into account, evaluating several maneuvers, and choosing the one that maximizes survival rates through a coldly rational model.
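To make that idea concrete, here’s a minimal sketch of such a decision loop. Everything in it is hypothetical: the Scene fields, the maneuver names, and the hand-tuned survival_probability model are illustrative stand-ins, not anything drawn from a real autonomous-driving stack.

    from dataclasses import dataclass

    @dataclass
    class Scene:
        obstacle_distance_m: float   # distance to the ball in the road
        oncoming_gap_m: float        # distance to the oncoming car
        child_likelihood: float      # chance the ball is followed by a child

    def survival_probability(maneuver: str, scene: Scene) -> float:
        """Toy risk model: estimated probability that everyone involved
        survives the chosen maneuver. Numbers are hand-tuned for illustration."""
        if maneuver == "brake_hard":
            # Hard braking helps most when the obstacle is still far away.
            return min(1.0, 0.5 + scene.obstacle_distance_m / 40.0)
        if maneuver == "swerve":
            # Swerving avoids the child but risks the oncoming car.
            return max(0.0, 1.0 - 30.0 / scene.oncoming_gap_m)
        if maneuver == "maintain_course":
            # Holding course is only safe if no child appears.
            return 1.0 - scene.child_likelihood
        raise ValueError(f"unknown maneuver: {maneuver}")

    def choose_maneuver(scene: Scene) -> str:
        """Evaluate every candidate maneuver and pick the highest-scoring one."""
        candidates = ["brake_hard", "swerve", "maintain_course"]
        return max(candidates, key=lambda m: survival_probability(m, scene))

    # McArdle's scenario, roughly: ball 12 m ahead, oncoming car 25 m away.
    scene = Scene(obstacle_distance_m=12.0, oncoming_gap_m=25.0, child_likelihood=0.8)
    print(choose_maneuver(scene))  # -> brake_hard

The toy numbers don’t matter; what matters is that every candidate maneuver is scored against the same explicit model, which is exactly the kind of reasoning a court could later examine.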

That doesn’t solve the problem of liability chasing deep pockets, of course, but that’s a problem with the legal system, not with the premise of a driverless car. One benefit carmakers might enjoy is that driverless cars could store black-box-style recordings, with detailed data on the context in which every decision was made, to show in court that the car’s software acted as well as it possibly could have.
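A sketch of what one of those recordings might contain, assuming a simple append-only JSON log (the field names are hypothetical):

    import json
    import time

    def log_decision(log_file, maneuver: str, context: dict) -> None:
        """Append one decision, plus the inputs it was based on, to the log."""
        record = {
            "timestamp": time.time(),
            "maneuver": maneuver,
            "context": context,  # the sensor snapshot the software acted on
        }
        log_file.write(json.dumps(record) + "\n")

    # One record per decision, written as the car drives.
    with open("blackbox.jsonl", "a") as f:
        log_decision(f, "brake_hard",
                     {"obstacle_distance_m": 12.0, "child_likelihood": 0.8})

Writing one self-contained record per decision would make it straightforward to replay the moments leading up to a crash.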

In that case, driverless cars might present a liability problem for anyone who doesn’t own one — a human driver who crashes into a driverless car will find it nearly impossible to show he’s not at fault.

This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.
