Robot cars and “algorithmic morality”

I hadn’t really thought about this. Okay: I don’t really think about self-driving cars much at all, since they seem the least of my possible problems. But still: the car has to be programmed with what action to take to avoid obstacles, right? Especially obstacles that are still wiggling. So what do you tell it to do when there are no good choices? And are you really comfortable with that decision being made by a bureaucrat, or a guy who wears a pocket protector? Or by some random people in a focus group?

Here’s a scenario: a crowd of people appears ahead of the vehicle, too close to stop. Swerve left and hit a single person? Swerve right and hit a wall? Does “swerve right” mean it’s okay to sacrifice the vehicle’s occupants?

Not with me in the car, and I’m not the only one…

In general, people are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.

This utilitarian approach is certainly laudable, but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.

And therein lies the paradox. People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves.
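The dilemma can be made concrete with a toy sketch. This is purely hypothetical — no real autonomous-vehicle stack works anything like this — but it shows what a naive “minimize the death toll” rule actually decides, and where it breaks down:

```python
# Hypothetical utilitarian maneuver chooser: pick whichever option
# minimizes expected deaths. The names and numbers are made up for
# illustration only.

def choose_maneuver(options):
    """options: dict mapping maneuver name -> expected deaths.
    Returns the maneuver with the fewest expected deaths; ties go
    to whichever option was listed first."""
    return min(options, key=options.get)

scenario = {
    "brake_straight": 5,  # plow into the crowd
    "swerve_left": 1,     # hit the single pedestrian
    "swerve_right": 1,    # hit the wall, killing the occupant
}
print(choose_maneuver(scenario))  # → swerve_left
```

Note that “swerve left” and “swerve right” tie at one death each: a pure body count can’t distinguish a stranger from the car’s own occupant, which is exactly where the paradox the study describes kicks in.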

About Joel

You shouldn't ask these questions of a paranoid recluse, you know.

6 Responses to Robot cars and “algorithmic morality”

  1. Paul Bonneau says:

    Don’t the current crop merely hit the brake? Swerving would be pretty damn ambitious, if not foolhardy.

    I like things to be idiot-proof – up to a point. A couple days ago I was trying to put the magazine for my nephew’s Remington 770 (?) together and there was no clue how it went (I don’t much care for that gun). A good design only lets things go together one way. But driving a car? No thanks…

  2. Judy says:

    Good ol’ NIMBY is at work again.

  3. ben says:

    OK, so we will have sleep-deprived computer wonks writing the code that will drive real cars on real roads. Given the quality of the decisions made by some drivers out there, could that really be any worse than what we have today?

  4. Joel says:

    Yeah, the article makes that point. But politically of course that doesn’t matter.

  5. M J R says:

On the plus side it may weed out the stupid who can’t be bothered to stop texting, take the earplugs out and look both ways. Kinda like Darwin on the roads…

It will also be interesting to see the police running around like chickens with no heads. Who will they target when a fatal accident happens? Will they charge the car, the driver, or the manufacturer/programmer?

    Interesting times these are.

  6. Mark Matis says:

    Can you imagine the fun one could have hacking these things? Since even cars with a REAL driver can be made to do “special” things? And it ain’t just Jeeps. Those hackers said there were several other vehicles that would have also been relatively easy to hack. Including the Cadillac Escalade. But I suppose they were smart enough to understand what the FedPig swill would have done to them had they dared to touch a Government Motors product…
