In a future when cars no longer need humans to drive, choices about who might live or die in a crash are already being made — by the so-called “moral codes” that are preprogrammed into a car’s neurology.
Like humans, autonomous cars make countless tiny decisions while navigating the complexities of street traffic. But instead of a brain, driverless cars rely on a preprogrammed set of parameters to decide whether to brake, turn or accelerate.
“Suppose we have some [trouble-making] teenagers, and they see an autonomous vehicle, they drive right at it. They know the autonomous vehicle will swerve off the road and go off a cliff,” said Keith Abney, an associate professor of philosophy at California Polytechnic State University. “But should it?”
While fully driverless cars are still in the research stage, several companies, including Bosch, Tesla, Nissan, Mercedes-Benz, Uber and Audi, are already testing partially or completely self-driving cars on the roads.
The cars of the future will face dilemmas that require split-second responses. Since they will only be able to react according to preset codes, some say experts from cognitive and behavioral sciences should be helping programmers who are outsourcing potentially murky decisions to rigid algorithms.
To minimize the potential for harm, “what you want to do is think through these situations beforehand,” Abney said. “You shouldn’t be overoptimistic.”
But some coders say that while these hypothetical situations are interesting, they are misleading, because autonomous cars do not make judgments based on values; they make them based on protocol. Moral decisions come into play when programmers decide which algorithms to use, but Zico Kolter, an assistant professor of computer science at Carnegie Mellon University, said the car itself has no moral agency.
“The actual split-second decisions, those are not about morality. They’re following prescribed behavior,” Kolter said.
That prescribed behavior is expected to significantly decrease the number of car accidents across the nation. Last year, about 38,000 people were killed in such incidents, according to the National Safety Council. Automakers use these statistics as a talking point in favor of the automated technology. Instead of figuring out reactions to potentially lethal situations, they focus on programming cars to recognize when they should stop, swerve or slow down.
Like neurotransmitters, algorithms enable cars to make calculations. Using lasers, cameras and radar, the cars build a 360-degree digital map of their environment, locating themselves within it and categorizing which objects might move. Through algorithms, driverless cars combine these inputs with their speed and proximity to other objects to compute the easiest, safest trajectory.
So instead of preprogramming reactions to specific, dire situations, Google’s driverless car prototype, for example, is preset to recognize unfamiliar objects or situations, and most often reacts by stopping or slowing down.
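The behavior described above, classify what the sensors see and default to stopping or slowing when anything is unfamiliar or too close, can be sketched in a few lines. This is an illustrative toy only; every name and threshold below is hypothetical, not any automaker’s actual code.

```python
# Toy sketch of the conservative decision loop described above.
# All names, distances and categories are hypothetical.

STOP_DISTANCE_M = 10.0   # closer than this: stop
SLOW_DISTANCE_M = 30.0   # closer than this: slow down


def plan_action(objects):
    """Pick a conservative action from classified sensor objects.

    objects: list of dicts with 'kind' (e.g. 'car', 'pedestrian',
    'unknown') and 'distance_m' from the vehicle.
    """
    nearest = min((o["distance_m"] for o in objects), default=float("inf"))
    unfamiliar = any(o["kind"] == "unknown" for o in objects)

    # When in doubt, stop or slow rather than attempt a clever maneuver.
    if unfamiliar or nearest < STOP_DISTANCE_M:
        return "stop"
    if nearest < SLOW_DISTANCE_M:
        return "slow"
    return "proceed"


print(plan_action([{"kind": "unknown", "distance_m": 50.0}]))  # stop
print(plan_action([{"kind": "car", "distance_m": 25.0}]))      # slow
```

The point of the sketch is that no value judgment appears anywhere: the output is determined entirely by prescribed thresholds, which is what Kolter means by “following prescribed behavior.”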
The life-or-death hypotheticals “are not accurate portrayals of what the system needs to think about,” Kolter said.
Regardless, crashes will happen and someone will have to be held accountable. In February, Google’s autonomous car sideswiped a public bus while trying to merge into the bus’s lane during a test drive in Mountain View, California. No one was injured in the incident.
“This is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements,” Google wrote in its monthly statement.
Google also said it refined the software to understand that bigger vehicles are less likely to yield. By that time, its cars had driven more than one million miles, half of them on open streets, without causing a crash.
Google took the blame, but Stephen Wu, a lawyer in Silicon Valley who represents companies in the field of semi-driverless technology, said that because crashes are inevitable, liability will be an issue in the future.
Wu said he has heard discussions about special legislation for moral algorithms to help clarify liability. If a car steers away from a large group of people, for example, and instead hits a smaller group in an attempt to minimize damage, the smaller group could still blame the manufacturer, because that code would have been preprogrammed, he said.
To better understand unspoken rules of the road, such as the likelihood of a bus yielding, Nissan last year hired an anthropologist to study what humans expect cars to do, and how this changes from one neighborhood to the next.
Melissa Cefkin watches for body language and analyzes patterns of interaction between cars and people at intersections in different settings, such as a college campus or an intimate downtown area. She said each setting has a unique personality that she dissects with her colleague, a sociologist.
“So if we go into a field setting, we might take video from three different angles,” she said. “If we take a 10-minute video we can spend hours and hours reviewing it.”
Cefkin said some of her favorite interactions are when people wave or make eye contact to imply what they are going to do next.
“People send signals through their body motion about what their intentions are, for example, turning the wheel is seen as a sign that ‘I’m going to move in another direction,’” she said.
Once they identify quantifiable patterns, they sit down with the programmers who decide what to integrate into the algorithms.
Right now, she said, people are much better at instantly interpreting the world. But her job is to help ensure driverless cars have manners, an area where human drivers can sometimes fall short and cause crashes.
Beyond algorithms, she wonders about how autonomous cars will affect future generations in a country where teenagers look forward to turning 16 for one very specific reason.
“You have to ask yourself,” she said. “What if there’s no such thing as becoming that authorized driver? It does reconfigure so many things about our social lives.”