I hadn’t really thought about this. Okay: I don’t really think about self-driving cars much at all, since they seem the least of my possible problems. But still: The car has to be programmed with what action to take to avoid obstacles, right? And especially obstacles that are still wiggling. So what do you tell it to do when there are no good choices? And are you really comfortable with that decision being made by a bureaucrat, or a guy who wears a pocket protector? Or by some random people in a focus group?
Here’s a scenario: A crowd of people appears ahead of the vehicle, too close for the car to stop. Swerve left and hit a single person? Swerve right and hit a wall? Does “swerve right” mean it’s okay to sacrifice the vehicle’s occupants?
In general, people surveyed on this question are comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll.
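To make that concrete, here’s a toy sketch (in Python, with made-up class names and casualty numbers, not anything from a real vehicle stack) of what a pure “minimize the death toll” rule looks like. Note that it is completely indifferent to whether the person killed is a pedestrian or the car’s own occupant:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: float  # hypothetical estimate of deaths if taken
    occupants_at_risk: bool     # does this maneuver endanger the car's occupants?

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pure utilitarian policy: pick the lowest death toll, with no
    special weight given to the vehicle's own occupants."""
    return min(options, key=lambda m: m.expected_casualties)

if __name__ == "__main__":
    # The scenario from above: plow into the crowd, swerve left into one
    # pedestrian, or swerve right into a wall (sacrificing the occupant).
    options = [
        Maneuver("continue into crowd", expected_casualties=10, occupants_at_risk=False),
        Maneuver("swerve left into pedestrian", expected_casualties=1, occupants_at_risk=False),
        Maneuver("swerve right into wall", expected_casualties=1, occupants_at_risk=True),
    ]
    # min() keeps the first option on a tie, so between "one pedestrian" and
    # "one occupant" the rule shrugs; the choice falls to list order.
    print(choose_maneuver(options).name)
```

That shrug on the tie is exactly where the argument gets uncomfortable, because someone still has to decide whether the occupant counts the same as everyone else.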
This utilitarian approach is certainly laudable, but the participants were willing to go only so far. “[Participants] were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” conclude Bonnefon and co.
And therein lies the paradox. People are in favor of cars that sacrifice the occupant to save other lives, as long as they don’t have to drive one themselves.