
People want self-driving cars to value passenger safety over pedestrians, study says

Motor vehicle accidents caused nearly 40,000 traffic fatalities and 4.5 million serious injuries in the United States in 2015, and 90 percent of those accidents were due to human error. Remove the human component with self-driving vehicles, and many of those accidents could be prevented. But computer-driven cars will still face moral dilemmas in which they must choose between two bad outcomes: place a passenger in danger to save a pedestrian, or vice versa.

A new study argues that how these vehicles respond to ethical dilemmas could shape both public safety and how widely consumers adopt them. In particular, a majority of the nearly 2,000 subjects in the study said they would choose autonomous cars that protect their own lives as passengers over the lives of pedestrians.

In the study, a trio of researchers based in France and the U.S. surveyed nearly 2,000 people about how they would want autonomous vehicles to behave. The team presented participants with vignettes in which a deadly accident is unavoidable. Each scenario varied the number of passenger or pedestrian lives that could be saved depending on whether the car swerved or stayed on course.

The experiment puts a spin on the “trolley problem” — a classic scenario used in ethical debates and psychological studies. The problem asks an individual to decide whether to let a runaway trolley kill five people tied to a track, or steer the railcar onto an alternate track where a single person will be killed.

Seventy-six percent of participants in the autonomous car study said they would prefer that vehicles respond to impending crashes in a ‘utilitarian’ manner, choosing the action that would save the most lives. These responses mirror what is often seen with the trolley problem.
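
To make the ‘utilitarian’ programming concrete, here is a minimal, purely illustrative sketch of such a decision rule. The function name, its inputs and the two candidate maneuvers are hypothetical assumptions for illustration, not the researchers’ model or any manufacturer’s software.

```python
# Hypothetical sketch of a "utilitarian" crash response: pick whichever
# maneuver is expected to cost the fewest lives, counting passengers and
# pedestrians equally. (Illustrative only; not taken from the study.)

def utilitarian_choice(swerve_deaths: int, stay_deaths: int) -> str:
    """Return the maneuver expected to kill fewer people.

    swerve_deaths: expected fatalities if the car swerves
    stay_deaths:   expected fatalities if the car stays on course
    """
    if swerve_deaths < stay_deaths:
        return "swerve"
    if stay_deaths < swerve_deaths:
        return "stay"
    return "stay"  # a tie needs a tie-breaking rule, itself a moral choice


# Example vignette: swerving kills the one passenger, staying kills five pedestrians.
print(utilitarian_choice(swerve_deaths=1, stay_deaths=5))  # prints "swerve"
```

A self-protective car, by contrast, would weight passenger lives more heavily; as the results below show, that is the kind of car most respondents said they would actually buy.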

“Even when people imagine being in a car with a family member or even with their own child, they still said the car should kill them for the greater good,” lead author and psychological scientist Jean-François Bonnefon of the Toulouse School of Economics said during a press conference. “Most people agree that, from a moral standpoint, cars should save the greater number [of people], even if they must be programmed to kill their passengers to do so.”

But ironically, respondents said they would ultimately buy a car programmed to preserve their own lives as passengers rather than a utilitarian vehicle. On a willingness-to-buy scale of one to 100, cars that preserve the passenger and their family rated a 50, while the self-sacrificing option had a median rating of 19. Participants also indicated they would be less likely to buy a self-driving car at all if the government mandated utilitarian programming.

“You can recognize the feeling,” Bonnefon said, “…I want other people to do something, but it would be great not to do it myself. It’s great if everyone paid their taxes, but it’s not bad either if everybody but myself paid the taxes.”

The research team suggests that this moral inconsistency could present roadblocks in the transition to self-driving vehicles.

To gauge your own moral decision-making preferences and participate in the ongoing research on the social dilemmas of autonomous vehicles, check out this interactive website. It was created by MIT’s Media Lab in conjunction with this study, which was published Thursday in the journal Science.

“If we try to use regulation to solve the public good problem of driverless car programming, we would be discouraging people from buying those cars,” said computational social scientist and co-author Iyad Rahwan of MIT, “and that would delay the adoption of the new technology that would eliminate the majority of accidents.”
