In a crash, should self-driving cars save passengers or pedestrians? 2 million people weigh in

Should a self-driving car prioritize the lives of passengers or pedestrians during an accident? Researchers surveyed millions of people to inform car manufacturers and policymakers. Photo by metamorworks/Adobe Stock Images

Let’s say someone is driving a car with two other passengers, when suddenly three pedestrians leap into a crosswalk in front of it. The driver must decide between running the pedestrians down or crashing into a concrete barrier. What would you choose?

Since 2016, scientists have posed this scenario to folks around the world through the “Moral Machine,” an online platform hosted by the Massachusetts Institute of Technology that gauges how humans respond to ethical decisions made by artificial intelligence.

On Wednesday, the team behind the Moral Machine released responses from more than two million people spanning 233 countries, dependencies and territories. They found a few universal decisions — for instance, respondents preferred to save a person over an animal, and young people over older people — but other responses differed by regional cultures and economic status.

The findings are important as autonomous vehicles prepare to take the road in the U.S. and other places around the world. In the future, car manufacturers and policymakers could find themselves in a legal bind with autonomous cars. If a self-driving bus kills a pedestrian, for instance, should the manufacturer be held accountable?

The study’s findings offer clues on how to ethically program driverless vehicles based on regional preferences, but the study also highlights underlying diversity issues in the tech industry — namely that it leaves out voices in the developing world.

What the scientists did

The Moral Machine uses a quiz to give participants randomly generated sets of 13 questions. Each scenario has two choices: you save the car’s passengers or you save the pedestrians. The characteristics of the passengers and pedestrians vary randomly, including gender, age, social status and physical fitness.
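For a sense of the mechanics, here is a minimal Python sketch of how such randomized scenarios could be assembled. Everything in it, from the attribute lists to the function names, is an illustrative assumption rather than the platform’s actual code.

```python
import random

# Illustrative character attributes; the real Moral Machine draws on a richer
# set of factors (species, legality of crossing, and so on).
ATTRIBUTES = {
    "gender": ["male", "female"],
    "age": ["child", "adult", "elderly"],
    "social_status": ["executive", "average", "homeless"],
    "fitness": ["athlete", "average", "large"],
}

def random_character():
    """Sample one character by drawing each attribute independently."""
    return {attr: random.choice(values) for attr, values in ATTRIBUTES.items()}

def random_dilemma(max_group=5):
    """Build one two-option scenario: spare the passengers or the pedestrians."""
    return {
        "passengers": [random_character() for _ in range(random.randint(1, max_group))],
        "pedestrians": [random_character() for _ in range(random.randint(1, max_group))],
    }

# One quiz session poses a set of 13 such questions.
session = [random_dilemma() for _ in range(13)]
```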

For example, one scenario asked users to decide between killing two elderly men and one elderly woman crossing a street, or swerving the car to crash and kill an adult man, adult woman and boy inside.

This question asks participants to decide in an emergency situation between staying on course and killing two elderly men and one elderly woman or hitting a concrete barrier and killing an adult man, an adult woman and a boy. Photo by: Edmond Awad et al., 2018

Afterwards, the participants took a survey on their education levels, socioeconomic levels, gender, age, religious beliefs and political attitudes. The participants were then divided up by geography. The scientists found that respondents fell into one of three “cultural clusters”: eastern (East Asian and Middle Eastern nations with Confucian and Islamic backgrounds), western (Europe and North America) and southern (Central and South America).
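The study arrived at these clusters by hierarchically clustering countries on their vectors of preference estimates. Below is a minimal Python sketch of that kind of grouping; the preference matrix is random stand-in data, and the choice of Ward linkage is an assumption for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Stand-in data: one row per country, one column per measured preference
# (spare humans, spare more lives, spare the young, and so on). The study
# derived these values from millions of real responses; here they are random.
rng = np.random.default_rng(0)
country_preferences = rng.random((130, 9))

# Agglomerative clustering, cut into three groups, one plausible way to
# recover "eastern / western / southern" style clusters.
tree = linkage(country_preferences, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")

print(labels[:10])  # cluster id (1, 2 or 3) for the first ten countries
```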

“We had two goals,” said Edmond Awad, a computer scientist at MIT and the study’s lead author. One was to encourage public discussion about this topic, while the other centered around quantitatively measuring people’s cultural preferences.

What they found

The researchers identified three relatively universal preferences.

On average, people wanted:

  • to spare humans over animals
  • to spare more lives over fewer
  • to spare young people over old
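As a toy illustration of how such preference shares can be tallied from raw choices, here is a short Python sketch. The record format and dilemma labels are invented for the example; the study’s actual analysis estimated each factor’s average effect on decisions rather than taking simple proportions.

```python
# Hypothetical response records: each notes which side a participant spared.
# Real data would have tens of millions of such rows.
responses = [
    {"dilemma": "humans_vs_animals", "spared": "humans"},
    {"dilemma": "humans_vs_animals", "spared": "animals"},
    {"dilemma": "more_vs_fewer", "spared": "more"},
]

def preference_share(records, dilemma, option):
    """Fraction of responses to `dilemma` in which `option` was spared."""
    relevant = [r for r in records if r["dilemma"] == dilemma]
    if not relevant:
        return float("nan")
    return sum(r["spared"] == option for r in relevant) / len(relevant)

print(preference_share(responses, "humans_vs_animals", "humans"))  # 0.5 here
```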

When respondents’ preferences did differ, they were highly correlated with cultural and economic differences between countries. For instance, people who were more tolerant of jaywalking tended to be from countries with weaker governance, nations with a large cultural distance from the U.S. and places that do not value individualism as highly. These distinct cultural preferences could dictate whether a jaywalking pedestrian deserves the same protection as pedestrians crossing the road legally in the event they’re hit by a self-driving car.
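The statistical move here is a country-level correlation between a measured preference and a societal indicator. The Python sketch below only illustrates the computation; both variables are randomly generated stand-ins, not the study’s data.

```python
import numpy as np
from scipy.stats import pearsonr

# Stand-in country-level variables: a governance-quality index and the
# strength of the "spare lawful pedestrians" preference in each country.
rng = np.random.default_rng(1)
governance_index = rng.random(130)
spare_lawful = 0.6 * governance_index + 0.4 * rng.random(130)

r, p = pearsonr(governance_index, spare_lawful)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```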

“There is this gray space of decisions that are not wrong and not right,” said Patrick Lin, a philosopher at California Polytechnic State University who specializes in technology ethics. “They’re judgment calls.”

Car manufacturers and tech developers struggle with these moral dilemmas, because driverless cars can’t simply abide by preexisting robotic ethical principles like Asimov’s laws of robotics, which dictate that robots must not harm humans and must obey them.

The “Moral Machine” scenarios also aren’t exactly applicable to real life. The participants were 100 percent certain of the physical conditions of the characters in the scenarios, along with their chances of death, which wouldn’t always be the case on the road.

How do the findings compare with existing rules?

The team’s results clash with some existing guidelines for autonomous vehicles. In 2017, the German Ethics Commission on Automated and Connected Driving recommended that self-driving vehicles should not make any distinction based on personal features, like age, gender and social status.

“Trying to implement some universal code of ethics is going to be challenging,” Awad said. “Different rules could have higher acceptability in some countries than [in] other countries.”

Awad noted that the Moral Machine faced major limitations that prevent its results from representing everyone. Though millions of people played the game, the Moral Machine lacked crucial diversity among its participants. That matters for a few reasons.

The programmers are just as important as what’s being programmed

The Moral Machine required participants to have access to the internet and electricity, which automatically excludes many voices in the developing world.

A coverage map from the study reveals huge portions of Africa, South America, Asia and the Middle East that did not participate.

Red points mark the locations where a participant made at least one decision in the Moral Machine. Photo by: Edmond Awad et al., 2018

“The context really matters,” said Preeti Adhikary, the vice president of marketing at Fusemachines, an artificial intelligence company that conducts talent searches in developing countries and underserved communities in the U.S. “And when you build algorithms and test in developed countries, that context might be missed.”

The Moral Machine found that most subjects chose people over animals, but Adhikary noted that cows are sacred in a few Asian countries and harming them comes with lasting consequences.

Sameer Maskey, the CEO of Fusemachines, added that car safety preferences collected for a developed country may not be globally applicable. In other words, data collected from American roads might not account for differences in roads, signs, cars and driving styles in Kathmandu.

This lack of diversity is an overarching problem in the tech world and makes some products less effective for some consumers. The Washington Post recently wrote about how Alexa struggles to understand non-native accents.

“We’re not anti-technology, but we also understand that you can’t be irrationally exuberant about technology,” Lin said. “It always comes with some cost.”

What’s next?

Awad hopes that these findings inspire more studies and increase public discussion around the ethical dilemmas facing self-driving cars. This could influence policy as the autonomous vehicle industry grows.

“If people feel that they are included, then that could also help in adoption,” Awad said.
