Ethics and Self-Driving Cars

  • By Annie Kim
  • Posted 05.16.18
  • NOVA

Self-driving cars will be confronted with ethical dilemmas. How do we code those into the machines?

Running Time: 2:57

Transcript

Onscreen: Can driverless vehicles make ethical choices? The stakes are high…


News video: An Uber autonomous vehicle was driving about 40 mph when it struck a pedestrian.


Onscreen: Consider the “Trolley Problem.”


Matthew Botvinick: There’s a trolley coming down the tracks.

Onscreen: Towards 5 workers.

Botvinick: And you have to decide whether you’re gonna pull the switch and move it onto other tracks.

Onscreen: Towards just 1 worker. What do you do? Nothing? And let 5 people die? Or take action? And kill 1? Now imagine coding this decision-making directly into driverless cars.

Edmond Awad: This is a big challenge.

Onscreen: The dilemma of life and death remains, but now a computer is at the helm.

Awad: So, for one example: we want to always give preference to humans over animals, and you try to apply this to every situation you come across. The car can identify one object as a human and the other as an animal, and avoid the object that is human, even if that means paying the cost of sacrificing the animal.

Onscreen: This is a “top-down” approach. There’s also the “bottom-up” approach.
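A minimal sketch, in Python, of what a fixed-priority "top-down" rule like the one Awad describes might look like. The object classes, priority values, and function names here are illustrative assumptions, not taken from any actual vehicle software.

```python
# Illustrative "top-down" rule: a fixed priority ordering the planner consults
# when a collision with one of two objects cannot be avoided.
# Classes and priority values are assumptions for illustration only.

PRIORITY = {"human": 2, "animal": 1, "object": 0}  # higher = protect first

def choose_path(left_obstacle: str, right_obstacle: str) -> str:
    """Return which side to steer toward when a collision is unavoidable.

    The car steers toward the obstacle with the LOWER protection priority,
    e.g. it sacrifices an animal rather than a human.
    """
    if PRIORITY[left_obstacle] < PRIORITY[right_obstacle]:
        return "steer left"
    if PRIORITY[right_obstacle] < PRIORITY[left_obstacle]:
        return "steer right"
    return "brake only"  # no rule breaks the tie

# Example: a human on the left, an animal on the right -> steer right.
print(choose_path("human", "animal"))  # "steer right"
```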

Awad: You can imagine on one side you have a big truck and on the other side you have a cyclist. The car could choose to move a little bit closer to the cyclist, or it could choose to move closer to the truck. Now in each of these two situations, we are introducing more risk to one side.

Onscreen: In a “bottom-up” approach, a car might make case-by-case decisions by optimizing things like passenger comfort or manufacturer liability, or by observing the preferences of human drivers. Most likely, car designers will opt for a top-down/bottom-up combo.

Awad: By training the cars on different examples with some feedback about how the car should resolve this kind of scenario.
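One way to picture this "bottom-up" idea is a tiny model fit to examples of how human drivers split the risk between the truck and the cyclist in Awad's example. Everything below, the training data, feature, and weights, is made up purely to illustrate learning a behavior from feedback rather than writing a rule by hand.

```python
# Illustrative "bottom-up" sketch: fit a tiny model to examples of how human
# drivers positioned the car between a truck on one side and a cyclist on the
# other. All data and weights are invented for illustration.

# Each example: (clearance_to_truck_m, clearance_to_cyclist_m) -> lateral
# offset a human driver chose (positive = shift toward the truck side).
examples = [
    ((1.2, 1.0), 0.30),
    ((1.5, 0.8), 0.45),
    ((0.9, 1.4), -0.10),
    ((1.0, 1.0), 0.20),
]

# Fit offset ~ w0 + w1 * (truck_clearance - cyclist_clearance)
# by plain stochastic gradient descent on squared error.
w0, w1 = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    for (truck, cyclist), target in examples:
        x = truck - cyclist
        err = (w0 + w1 * x) - target
        w0 -= lr * err
        w1 -= lr * err * x

def lateral_offset(truck_clearance: float, cyclist_clearance: float) -> float:
    """Predict how far to shift toward the truck, mimicking the training data."""
    return w0 + w1 * (truck_clearance - cyclist_clearance)

# A tighter squeeze on the cyclist's side -> shift a bit toward the truck.
print(round(lateral_offset(1.4, 0.9), 2))
```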

Onscreen: But it’s not easy. What if a trained car determines that to minimize loss of life the driver must be sacrificed?

Awad: While most people approve of self-driving cars, they’re not willing to buy self-sacrificing cars.

Onscreen: And what about that Trolley Problem? Most people claim they would pull the switch to kill one person if they could save five. But most of the time, driverless cars won’t have to make these kinds of decisions.

Awad: Self-driving cars will be minimizing accidents. You’re actually minimizing risk to your life.

Onscreen: Today, 94% of accidents are caused by human error. A programmed car can’t get drunk, angry, tired, or distracted.

Awad: So, every day that this adoption is being postponed, that means loss of more lives.

Botvinick: The future of AI technology is not just an engineering story, it’s also a story about how we decide to design our societies. We have to continually decide what kind of world we want to live in.

Credits

PRODUCTION CREDITS

Digital Production
Annie Kim
Production Assistance
Ari Daniel
Editorial Review
Julia Cort
© WGBH Educational Foundation 2018

MEDIA CREDITS

Visuals
flickr | Marco Nürnberger
the Noun Project | Nick Abrams
the Noun Project | Gregor Cresnar
the Noun Project | Jasmine Rae Friedrich
the Noun Project | Gerard Higgins
the Noun Project | Jeevan Kumar
the Noun Project | Yaroslav Samoilov
Shutterstock
Music
APM
Sound Effects
freesound.org

POSTER IMAGE

(main image: aerial view of car)
Shutterstock
