The driverless-car liability question gets ahead of itself

Who will pay damages when a driverless car gets into an accident?

Megan McArdle has taken on the question of how liability might work in the bold new world of driverless cars. Here’s her framing scenario:

Imagine a not-implausible situation: you are driving down a brisk road at 30 mph with a car heading towards you in the other lane at approximately the same speed. A large ball rolls out into the street, too close for you to brake. You, the human, know that the ball is likely to be followed, in seconds, by a small child; you slam on the brakes (perhaps giving yourself whiplash) or swerve, at considerable risk of hitting the other car.

What should a self-driving car do? More to the point, if you hit the kid, or the other car, who gets sued?

The lawyer could go after you, with your piddling $250,000 liability policy and approximately 83 cents worth of equity in your home. Or he could go after the automaker, which has billions in cash, and the ultimate responsibility for whatever decision the car made. What do you think is going to happen?

The implication is that the problem of concentrated liability might make automakers reluctant to take the risk of introducing driverless cars.

I think McArdle is taking a bit too much of a leap here. Automakers are accustomed to having the deepest pockets within view of any accident scene. Liability questions raised by this new kind of intelligence will have to be worked out, perhaps by having drivers carry the liability for their cars' performance through their insurance companies, with insurers in turn certifying the types of technology they'll cover. By the time driverless cars become a reality they'll probably be substantially safer than human drivers, so insurers might be willing to accept that tradeoff, and everyone will benefit.

(Incidentally, I’m told by people who have taken rides in Google’s car that the most unnerving part of it is that it drives like your driver’s ed teacher told you to — at exactly the speed limit, with full stops at stop signs and conservative behavior at yellow lights.)

But we’ll probably get the basic liability testing out of the way before a car like Google’s hits the road in large numbers. First will come a wave of machine-vision-based driver-assist technologies, like adaptive cruise control on highways (simpler versions of which have been around for years). These features present liability issues similar to those in a fully driverless car — can an automaker’s driving judgment be faulted in an accident? — but in a somewhat less fraught context.

The interesting question to me is how the legal system might handle liability for software that effectively drives a car better than any human possibly could. In the kind of scenario that McArdle outlines, a human driver would take intuitive action to avoid an accident — action that will certainly be at least a little bit sub-optimal. Sophisticated driving software could do a much better job at taking the entire context of the situation into account, evaluating several maneuvers, and choosing the one that maximizes survival rates through a coldly rational model.
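The "coldly rational model" above can be sketched as a simple expected-harm minimization. This is purely an illustration of the idea, not any real vehicle's control software; the maneuvers, probabilities, and severity numbers are all invented for the example.

```python
# Hypothetical sketch: score candidate maneuvers by expected harm and
# pick the minimum. All names and numbers are invented illustrations.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    p_collision: float  # estimated probability of a collision
    severity: float     # estimated harm if a collision occurs (0..1)


def expected_harm(m: Maneuver) -> float:
    """The explicit model: expected harm = probability x severity."""
    return m.p_collision * m.severity


def choose(maneuvers: list[Maneuver]) -> Maneuver:
    """Pick the maneuver that minimizes expected harm."""
    return min(maneuvers, key=expected_harm)


options = [
    Maneuver("brake hard", p_collision=0.10, severity=0.4),
    Maneuver("swerve into oncoming lane", p_collision=0.30, severity=0.9),
    Maneuver("brake and steer toward shoulder", p_collision=0.05, severity=0.3),
]
print(choose(options).name)  # -> brake and steer toward shoulder
```

Where a human picks one intuitive action under pressure, software of this shape can weigh every candidate against the whole situation at once — which is exactly why its choice could beat any driver's reflex.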

That doesn’t solve the problem of liability chasing deep pockets, of course, but that’s a problem with the legal system, not the premise of a driverless car. One benefit that carmakers might enjoy is that driverless cars could store black box-type recordings, with detailed data on the context in which every decision was made, in order to show in court that the car’s software acted as well as it possibly could have.
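A black-box record of that kind might look something like the following. The field names and log format here are assumptions made up for illustration; the point is only that each decision can be stored alongside the full context in which it was made.

```python
# Hedged sketch of a "black box" decision log: one JSON line per
# decision, capturing the sensor context and the maneuver chosen.
# Field names and the file format are hypothetical.

import json
import time


def record_decision(sensor_snapshot: dict, candidates: list, chosen: str) -> dict:
    """Append one decision, with its context, to an audit log and return it."""
    entry = {
        "timestamp": time.time(),
        "sensors": sensor_snapshot,          # e.g. speeds, detections, distances
        "candidates_considered": candidates,
        "maneuver_chosen": chosen,
    }
    with open("decision_log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry


record_decision(
    {"ego_speed_mph": 30, "object_detected": "ball", "distance_m": 8},
    ["brake hard", "swerve into oncoming lane"],
    "brake hard",
)
```

An append-only log like this is what would let an automaker replay the exact inputs and alternatives in court, rather than arguing about what the car "must have seen."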

In that case, driverless cars might present a liability problem for anyone who doesn’t own one — a human driver who crashes into a driverless car will find it nearly impossible to show he’s not at fault.

This is a post in our industrial Internet series, an ongoing exploration of big machines and big data. The series is produced as part of a collaboration between O’Reilly and GE.



  • Ronald Pottol

    Well, given how Google works now, I’d expect that the car would have a good data set for when something like that is a ball followed by a kid, and when it is a plastic bag, blown by the wind, followed by nothing that matters. So it may well do better than the human even here.

  • r_castleman

You’re right – her scenario implies questions of technology, regulation/law, and insurance.

    The technology itself provides some comfort — no inattentiveness, 360 degree field of view, *much* quicker reaction times. And the system will get smarter over time as more scenarios are encountered and best practices percolate up. And as inter-vehicle communication becomes a thing (though it’s *not* required for autonomous vehicles to work), this particular scenario might be better handled (i.e. both cars react optimally).

But that doesn’t mean that bad things won’t happen, and someone or something must be held responsible. Logically, that means the car manufacturer and/or the provider of the software that’s making the decisions. So the most likely end state for the liability and related insurance question is that liability will shift to the manufacturers (and/or the autonomous vehicle software provider). That shift won’t happen quickly or easily because no one yet knows how to price the risk. It’s notable that both states that have approved testing of driverless cars on their roads (NV and CA) have deliberately sidestepped the liability issue.

    For a really awesome (and readable) primer on the legal questions, check out Bryant Walker Smith’s “Automated Vehicles Are Probably Legal in the United States”. [I’d post link but it seems to disable the comment.]

  • Very interesting read! However, I’d like to point out one assumption that is (in my opinion) not correct, at least not for the near future. You assume that the software is bug-free. Given the track record of software projects, that is an unlikely situation. Or at least a very expensive one…

• That is where you are wrong, Niobos. They have made lots of improvements with bugs, and the software is 100 percent bug free. Just because of the track record you are talking about doesn’t mean that every piece of software will be the same.

  • Eric Mariacher

We’ve been told that Mercedes is going to sell a driverless car in 2013. But the car will be driverless for only 10 seconds; i.e., drivers will be allowed to have their hands off the steering wheel for a maximum of 10 seconds. As far as I’ve read, the driver will still be liable if anything bad happens during the time his hands are off the wheel.

For the following years, we can reasonably anticipate that these 10 seconds will become 20, then 30 seconds, and even a minute. From here, let’s ask a series of questions, the biggest one being: at what point can a driver who has an accident while his car is in automated mode go after the car manufacturer?

    * Who will decide what the limit is — car manufacturers, government, or insurers?

    * How do we prove that the accident happened while the car was in driverless mode? Do driverless cars need airplane-style black boxes?

  • The liability ceases to be an operational cost for the consumer and instead becomes a capital cost borne by the manufacturer, reflected in the sales price.
