Life, death, and autonomous vehicles

Self-driving cars will make decisions — and act — faster than humans facing the same dangerous situations.

Plymouth Fury III. Photo by Infrogmation, on Wikimedia Commons.

There’s a steadily increasing drumbeat of articles and Tweets about the ethics of autonomous vehicles: if an autonomous vehicle is going to crash, should it kill the passenger in the left seat or the right seat? (I won’t say “driver’s seat,” though these sorts of articles usually do; there isn’t a driver.) Should the car crash into a school bus or run over an old lady on the side of the road?

Frankly, I’m already tired of the discussion. It’s not as if humans don’t already get into situations like this, and make (or not make) decisions. At least, I have.

When I was a grad student, I bought my first (very used) car: a monstrous 1966 Plymouth Fury III. I was working a part-time job about five miles away from where I lived. I normally commuted by bike, but the day after buying the car, and before I had registered or insured it, it rained. So, I decided: “what could go wrong? I’ll drive.”

Of course, something went wrong. On the drive home, the brakes felt soft. And as I was driving toward Stanford, approaching the underpass on Embarcadero, going downhill, they failed completely while I was following a Mercedes in heavy traffic. I thought, quite literally: should I plow into the Merc, or drive into a bridge abutment? So debates about whether the car should take out the school children or the elderly lady aren't entirely academic to me.

Morally, I just froze. And whatever I might have thought I should do, it was clear that the moral logic was totally irrelevant to what I was actually going to do, which was nothing at all. It certainly didn’t occur to me to think through all the ramifications of my situation: if I drove into the bridge abutment, sparing the Mercedes and its driver, would I have created a greater danger for the cars behind me? If I took out the Mercedes and its driver, would I be killing the venture capitalist who was going to fund Google a few years later? Was the Mercedes possibly even more massive and collision-resistant than my monstrous Plymouth (man, that was a big car)? You can spin this all sorts of ways, and most of them are pretty silly.

At the last possible instant, I realized why emergency brakes were invented. I managed to get the car under control, drive under the underpass, and park it in a parking lot. I walked home and retrieved the car sometime after midnight, when there wasn’t anyone on the road.

But here’s the real moral of the story: a self-driving car wouldn’t have had to figure out why the emergency brakes were there.

  • MacCruiskeen

    You might be tired of the discussion, but it is not pointless. Unlike the dumb kid in the Fury, some programmer is going to have to plan ahead for the situation and give the autonomous car some means of making a choice, of trying to find the least-damaging solution in the time it has, to the extent that the car is able to know what’s going on around it. Some intelligence about what ‘least-damaging’ means is going to have to be provided. After all, the whole sales pitch is that it’s smarter than the dumb kid, right? The car, and the car’s manufacturer, can’t just say fuck it, it doesn’t matter. History has shown that that is just a lawsuit waiting to happen.

    • Eric K.

      I think this is exactly the point of the discussion, and I agree it is not pointless.

      When a driver is making a split-second decision during a crisis situation, it is expected that the driver won’t necessarily make the ‘best’ decision, whatever that may be. It’s fruitless to debate the moral consequences of a decision made in such a situation.

      The programmer(s) of an automated car, however, do not have that excuse. They aren’t making their code decisions in a crisis situation, and they don’t have the luxury of ‘doing nothing’. It is guaranteed that whatever decision is made will be debated in a courtroom some day.

      There are immense consequences to be taken into consideration, and I, for one, would rather some of the discussion happen up-front.

      • Mike Loukides

        There are a couple of issues here. First, when I was bearing down on that Mercedes, I was well aware that there was a whole lot of liability involved. And most of that liability was pointed at me. Maybe some of it was pointed at the person who sold me the car, though I do not believe she was aware that the brakes were failing; from a liability standpoint, that’s probably beside the point. Insurance companies wouldn’t have hesitated to drag her into the mess if they thought they could gain from it.

        But the bigger issue is that humans are just not that good at driving safely, for any number of reasons, of which poor judgement (mea culpa) is only one. An autonomous vehicle might do any number of things to avoid the situation entirely, including detecting a loss of pressure in the brake system long before it is apparent to the driver, driving to the side of the road, parking, and refusing to do anything else until it is fixed.
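
        As a rough illustration of what that fail-safe behavior could look like, here is a minimal sketch; the sensor reading, the pressure threshold, and the state names below are all hypothetical, not drawn from any real vehicle's API.

```python
# Hypothetical sketch of a brake fail-safe: monitor brake-line pressure,
# pull over when it drops below a safe threshold, and refuse to drive
# again until the fault has been cleared. Names and numbers are illustrative.
from enum import Enum, auto


class VehicleState(Enum):
    DRIVING = auto()
    PULLING_OVER = auto()
    PARKED_FAULT = auto()  # parked at the roadside, refusing to move until serviced


MIN_SAFE_BRAKE_PRESSURE_KPA = 550.0  # illustrative threshold, not a real spec


def next_state(state: VehicleState, brake_pressure_kpa: float,
               fault_cleared: bool = False) -> VehicleState:
    """Advance the fail-safe state machine for one brake-pressure reading."""
    if state is VehicleState.DRIVING:
        if brake_pressure_kpa < MIN_SAFE_BRAKE_PRESSURE_KPA:
            # Degrade early, long before a human driver would notice soft brakes.
            return VehicleState.PULLING_OVER
        return VehicleState.DRIVING
    if state is VehicleState.PULLING_OVER:
        # Once stopped at the roadside, park and stay parked.
        return VehicleState.PARKED_FAULT
    # PARKED_FAULT: only a confirmed repair releases the vehicle.
    return VehicleState.DRIVING if fault_cleared else VehicleState.PARKED_FAULT


if __name__ == "__main__":
    state = VehicleState.DRIVING
    for reading in (700.0, 680.0, 545.0, 540.0, 540.0):  # simulated, dropping pressure
        state = next_state(state, reading)
        print(f"{reading} kPa -> {state.name}")
```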

        And I think it’s very odd to obsess over the ethics of a particular situation, when globally, the consequence of autonomous vehicles is safer driving for both passengers and potential victims. If you need an algorithm for who to kill when everything fails: I could live with a random choice. That’s effectively what humans do when they freeze. Autonomous vehicles will have to make that choice far less often than human drivers.

        • MacCruiskeen

          Sure, but you can’t just _assume_ autonomous vehicles are _inherently_ safer. They have to be designed and programmed to be safer. They will have to make decisions, and those decisions will be according to rules given to them by their human creators. Autonomous cars aren’t just going to appear fully formed like Athena from the head of Zeus.

          • Eric K.

            Right… And to be more pedantic, autonomous vehicles will not be ‘making decisions’. The *programmers* of those autonomous vehicles will be making the decisions ahead of time, by writing the code those machines are slaves to. And that’s why it’s of the utmost importance to “obsess over the ethics of a particular situation”, as @mike_loukides:disqus put it.

  • floatingbones

    This discussion reminded me of John McPhee’s wonderful book “The Control of Nature” and his extended discussion of the Atchafalaya River (available online at http://www.newyorker.com/archive/1987/02/23/1987_02_23_039_TNY_CARDS_000347146?currentPage=all ). McPhee notes that the amount of water diverted from the Mississippi to the Atchafalaya has been controllable for a while, and no fewer than seven different political entities lobby for differing amounts of diversion. Those choices percolated through our legal system, and the controllers of the dials are now empowered to make the decision. Just as with the choices those autonomous cars will make, thinking there is a perfect choice for every contingency on the river is utter nonsense.

  • Gary G

    Would an autonomous vehicle be guided by the three robot rules and first choose the option that would not harm a human?

  • stepel

    The computer in the car will be programmed to avoid accidents. It will not be programmed to “make decisions” on who to kill or who to save. When your computer crashes, does it decide which files to save? Neither will your driverless car decide which person to save.

    The reason the question is tiring is because it is not important.

  • Dilshad Ali

    I would like to say that an autonomous vehicle might do any number of things to avoid such situations entirely, including detecting a loss of pressure in the brake system long before it is apparent to the driver, driving to the side of the road, parking, and refusing to do anything else until it is fixed.