Look Who's Driving
As self-driving cars take to the streets, investigate how they work and whether they are safe.
After years of anticipation, autonomous vehicles are now being tested on public roads around the world. As ambitious innovators race to develop what they see as the next high-tech pot of gold, some experts warn there are still daunting challenges ahead, including how to train artificial intelligence to be better than humans at making life-and-death decisions. How do self-driving cars work? How close are we to large-scale deployment of them? And will we ever be able to trust AI with our lives? (Premiered October 23, 2019)
Look Who’s Driving
PBS Airdate: October 23, 2019
NARRATOR: A lethal technology we can’t live without—but what if cars got smarter?
CHRIS GERDES (Stanford University): Automated vehicles offer the promise of dramatically reducing collisions and fatalities on our roads and highways.
DANIELA RUS (Massachusetts Institute of Technology): The big dream is to make a car that will never be responsible for a collision.
NARRATOR: The potential payoff is huge.
RAJ RAJKUMAR (Carnegie Mellon University): In the mid-2030s the market could be worth a few trillion U.S. dollars, with a “T” in there.
NARRATOR: Pursuing that pot of gold, companies are already testing their cars on our streets.
MICHAEL FLEMING (Chief Executive Officer, Torc Robotics/Virginia Polytechnic Institute and State University Grand Challenge Team): This is a bold and ambitious mission.
NARRATOR: Is this the beginning of a mobility revolution…
AMNON SHASHUA (President and Chief Executive Officer, Mobileye): We are venturing into domains that, 10 years ago, would be deemed science fiction.
NARRATOR: …or are we simply moving too fast?
MISSY CUMMINGS (Duke University): The technologies are deeply flawed. It’s just simply not ready for public consumption.
NARRATOR: Will we ever be safe with a robot behind the wheel? Look Who’s Driving, right now on NOVA.
TEMPE, AZ
MARCH 18, 2018
9:58 P.M.
TEMPE ARIZONA 911 OPERATOR: 911, what is your emergency?
RAFAELA VASQUEZ (Uber Test Driver): Um, yes, I was, um, I hit a bicyclist.
TEMPE ARIZONA 911 OPERATOR: Do you need paramedics?
RAFAELA VASQUEZ: I don’t, but they do.
NARRATOR: Every day in the United States, there are about a hundred fatal car crashes, but on March 18, 2018, one attracts particular attention, because the woman behind the wheel, Rafaela Vasquez, is not driving; a computer is.
The crash puts a spotlight on a controversial new technology: the self-driving car.
TEMPE POLICE OFFICER #1: So what exactly happened?
RAFAELA VASQUEZ: Well, the car was in auto-drive, and all of a sudden, the car didn’t see it. I didn’t see it. All of a sudden, she’s just there; she just shot out in front. And then I think I, I, I know I hit her.
NARRATOR: The victim is 49-year-old Elaine Herzberg. The car that kills her is being tested by Uber, the ride-hailing company. It’s a modified Volvo, equipped with Uber’s self-driving technology. Test drives are permitted in Arizona, as long as there’s a safety driver to take over in case of trouble.
But on this night, Uber’s experiment badly fails. Elaine Herzberg is the first person in history killed by a self-driving car.
MISSY CUMMINGS: A woman was pushing a bike across a road. This is the sweet spot, in theory, where autonomy would be at its best, particularly compared to humans who have terrible vision at night. And I think that’s a, sadly, it’s a very good example of just how far away from safe these cars really are.
TEMPE POLICE OFFICER #2: Now, the question I have for you guys is do you guys have remote access to the cameras and stuff like that that are in there?
UBER REPRESENTATIVE: We technically should, yeah.
NARRATOR: Herzberg’s death quickly becomes an international story.
TV REPORTER #1: Killed on the street by a self-driving Uber car….
TV REPORTER #2: Uber is now suspending all of its self-driving testing.
TV REPORTER #3: Self-driving cars under intense scrutiny after...
TV REPORTER #4: Uber’s C.E.O. tweeting, “Incredibly sad news. We’re thinking of the victim’s family...”
TV REPORTER #5: So, the question here is, “What went wrong?”
NARRATOR: The Tempe crash stokes public fears about self-driving cars. Nearly three out of four Americans say they’d be afraid to ride in one, so why is anyone even trying to make a car that drives itself?
DMITRI DOLGOV (Chief Technology Officer and Vice President of Engineering, Waymo/2007 Urban Challenge Stanford Racing Team): Self-driving vehicles don’t get distracted. They don’t get fatigued; they don’t fall asleep. And, you know, they don’t drive drunk.
NARRATOR: In other words, they promise safety. Each year in the U.S., some 35,000 people die in car crashes, nearly all caused by human error.
MARK ROSEKIND (Chief Safety Innovation Officer, Zoox/Former Administrator, United States National Highway Traffic Safety Administration): They’re because of a choice or an error that we make: a choice to pick up the phone, drive drunk, drive drugged, drive distracted, drive drowsy. You’re looking one direction when you should have been watching in the other direction, 94 percent of crashes.
NARRATOR: The hope is that computers will be able to do a better job.
DANIELA RUS: I really believe that with self-driving cars we will eliminate road accidents. If we get the technology right, those cars will know everything about the road situation, the road condition, well before a human would, so that if there is somebody running around the corner, about to jump in front of the car, the car will know that and there will be no accidents. This is the dream.
NARRATOR: And that dream is about much more than safety. Proponents say self-driving cars could bring about the biggest changes in transportation since horses gave way to the automobile. Instead of having to own a car, people could simply summon one from a circulating fleet of robo-taxis, reducing the number of cars on the road, cutting pollution and eliminating the need to have so many private cars sitting idle all day long.
For people who can’t drive, self-driving cars could also provide greater mobility.
SHAI SHALEV-SHWARTZ (Chief Technology Officer, Mobileye): You can put your kid on an autonomous car and it will take him to school and problem solved.
NARRATOR: Some see a future where cars talk to each other, reducing traffic jams. Others are less sure.
CHRIS GERDES: It could also lead to the nightmare scenario, as well, where inexpensive mobility leads to dramatic consumption of mobility, congested freeways, unsustainable use of energy, and an acceleration of climate change and other issues that we’re facing now.
AMNON SHASHUA: This is a disruption. I’ll call this “Mobility 2.0.” If, if cars today is 1.0 and horse carriages was 0.0, this is a new era of mobility.
NARRATOR: An era which, in some places, seems very close at hand, like here, on the streets of San Francisco.
JESSE LEVINSON (Co-founder and Chief Technology Officer, Zoox): All right, let me go ahead and slide to “Go.” So, we’re off on our autonomous way.
NARRATOR: Jesse Levinson demonstrates how his company’s self-driving car navigates the city’s streets.
JESSE LEVINSON: On the screen here, you can see what the vehicle is planning to do. That’s the green corridor.
NARRATOR: The display shows how the route is constantly adjusted in response to data from cameras and scanners that use radar and lasers. The system is designed to handle anything that might happen, but on test drives the company always has a safety driver.
JESSE LEVINSON: Here’s a really interesting situation we’re about to encounter is a six-way, unprotected intersection. This intersection is so complicated that I’m not sure I know how to drive it. But what we’re doing here is we’re going to make a left turn. We’re checking all the oncoming traffic.
NARRATOR: The car scans its surroundings to determine if its path is clear.
JESSE LEVINSON: We’re also yielding for all these pedestrians in the crosswalk…
NARRATOR: The computer won’t allow the car to proceed until it’s safe.
JESSE LEVINSON: …and literally tracking hundreds of dynamic agents at the same time. Now, we’ve just made our way back. That was a 100 percent autonomous drive, with absolutely no manual interventions. Pretty cool, huh?
NARRATOR: Test drives like this are impressive, but some warn it will be a long time before computers can consistently drive more safely than humans do, because driving, though it may seem easy, is actually a very difficult task.
RAJ RAJKUMAR: Let me make the following bold assertion: driving is the most complex activity that most adults on the planet engage in on a regular basis. When we drive, the vehicle is moving, the environment is changing on a continuing basis. All these pieces of information are actually coming towards our senses, goes to the brain. The brain basically applies the rules of the road and, for the most part, we drive safely.
NARRATOR: Each of the 35,000 annual crash deaths in the U.S. is tragic, but they’re statistically rare. On average, there’s only one for every hundred-million miles of driving.
STEVEN SHLADOVER (University of California, Berkeley): That translates into 3.4-million hours of driving. Three-point-four-million hours is 390 years of continuous 24 hours a day, seven days a week driving in between fatal crashes.
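The arithmetic behind those numbers is easy to check. A minimal sketch in Python, assuming an average speed of roughly 30 miles per hour (a figure not given in the program) to convert one fatality per hundred-million miles into hours and years:

```python
# Rough check of the crash statistics quoted above. The ~30 mph average
# speed is an assumption, not a figure from the program.
FATAL_CRASH_INTERVAL_MILES = 100_000_000    # about one fatal crash per 100 million miles
AVERAGE_SPEED_MPH = 30                      # assumed mixed city/highway average

hours_between_fatal_crashes = FATAL_CRASH_INTERVAL_MILES / AVERAGE_SPEED_MPH
years_of_continuous_driving = hours_between_fatal_crashes / (24 * 365)

print(f"{hours_between_fatal_crashes / 1e6:.1f} million hours")    # ~3.3 million
print(f"{years_of_continuous_driving:.0f} years of 24/7 driving")  # ~380 years
```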
NARRATOR: That’s a very high bar for technology to clear.
STEVEN SHLADOVER: Think about our modern electronic devices that are powered by software that we use every day. And try to imagine that those devices could run without ever giving you the spinning blue donut that said it’s not ready to give you the answer you wanted, because if that computer was driving your vehicle, you crashed. So, getting to the point where we have a software intensive device that can operate without a fault is a huge, huge challenge.
MICHAEL FLEMING: This is a bold and ambitious mission, a mission that really isn’t going to be accomplished overnight.
NARRATOR: Michael Fleming speaks from hard experience. He’s been working on self-driving cars for more than a decade. And like many in the field today, he got his start thanks to a push from a surprising place: the Pentagon.
In 2000, hoping to reduce battlefield casualties, Congress orders the military to develop combat vehicles that can drive themselves. Two years later, the Pentagon’s research agency, DARPA, announces what it calls the “Grand Challenge,” a driverless car race, 142 miles through the California desert. Whoever finishes first will win $1,000,000.
March 13, 2004: Thirteen vehicles, rigged with sensors to detect what’s ahead and software to control speed and steering, set out.
MICHAEL FLEMING: And we and a lot of other teams went out to compete in the DARPA, you know, Grand Challenge, with high hopes of bringing home a million-dollar prize.
NARRATOR: Right away, the vehicles run into trouble. The course is littered with rocks, cliffs, cattle grates and river beds, obstacles that the sensors sometimes miss.
MICHAEL FLEMING: We got a hundred yards out of the gate, and the vehicle just stopped working. And we failed miserably with everyone else.
NARRATOR: Of the full 142 miles, no vehicle goes farther than eight. But a year later, DARPA provides a second chance: a new challenge that doubles the payoff to $2,000,000.
Among the 2005 competitors: a team from Stanford, confident that their car, named Stanley, can make it all the way to the finish line. Their ace in the hole: cutting-edge software that uses AI: artificial intelligence.
SEBASTIAN THRUN (Stanford Racing Team): We built a computer system using artificial intelligence that’s able to actually find the road very, very reliably. What we see here is a map of the environment that’s being built as Stanley drives. The red stuff is stuff that it doesn’t want to drive over, it’s dangerous; the white stuff is the road as found by Stanley; and the grey stuff that you see here is stuff it just doesn’t know anything about, so it’s not going to drive there.
NARRATOR: Stanford’s strategy pays off. Stanley is the first to cross the finish line.
DARPA URBAN CHALLENGE ANNOUNCER #1: We have the green flag.
NARRATOR: Two years later...
DARPA URBAN CHALLENGE ANNOUNCER #1: Launch the bots!
NARRATOR: …the Pentagon stages a final contest, adding a new complexity: traffic. DARPA calls it the “Urban Challenge.”
JESSE LEVINSON: In the 2007 Urban Challenge, they let us drive with other cars, both autonomous cars as well as human-driven cars. And so that was really exciting because all of a sudden now you have to track dynamic objects and predict what they’re going to do in the future, and that’s a much harder problem.
NARRATOR: To help solve it, some half a dozen teams bank on a detection technology called LIDAR, which dramatically improves a car’s ability to see. They rig devices that spin 360 degrees. LIDAR works using laser beams, pulses of invisible light that bounce off everything in their path. A sensor collects the reflections, which provide a precise picture of the environment.
RYAN CHILTON (Torc Robotics): And as those lasers are sweeping around in a circle, you get what’s called a “point cloud.” So, you’ll get thousands, even millions of points coming back to the sensor. And you can build up a point cloud that looks similar to this. And it gives you a very accurate geometric detail.
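How those laser returns become a point cloud comes down to simple trigonometry: each beam's range, azimuth and elevation define one point in space. A minimal sketch, using a toy single-ring sweep rather than any real sensor's data format:

```python
import numpy as np

def scan_to_point_cloud(ranges_m, azimuths_rad, elevations_rad):
    """Convert one LIDAR sweep (range, azimuth, elevation per beam) into
    x, y, z points. Real sensors also report intensity, timestamps and
    per-beam calibration, which are omitted here."""
    x = ranges_m * np.cos(elevations_rad) * np.cos(azimuths_rad)
    y = ranges_m * np.cos(elevations_rad) * np.sin(azimuths_rad)
    z = ranges_m * np.sin(elevations_rad)
    return np.column_stack((x, y, z))

# Toy sweep: 360 beams at one elevation, every return 10 meters away.
azimuths = np.deg2rad(np.arange(360))
cloud = scan_to_point_cloud(np.full(360, 10.0), azimuths, np.zeros(360))
print(cloud.shape)   # (360, 3): a very small point cloud
```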
NARRATOR: Using all that data to pilot a car demands new kinds of software.
DMITRI DOLGOV: I vividly remember the first time when some of my software that I had just written hours ago ran on the car. That was pretty incredible. There was nobody behind the wheel, and it was just doing everything on its own.
NARRATOR: But not quite perfectly.
DARPA URBAN CHALLENGE ANNOUNCER #1: Okay folks, we have got our first autonomous traffic jam.
DARPA URBAN CHALLENGE ANNOUNCER #2: Another historic event, right here.
NARRATOR: This time, six of 11 cars complete the course successfully. It’s a major turning point, but there’s also a growing appreciation for the immense challenge ahead.
JESSE LEVINSON: The reality was the Urban Challenge was a very small step compared to what had to be done to actually get commercial vehicles on the road. It was only a six-hour race. So, basically, if your car could last for six hours without hitting something, you were like, “Yep, we did it,” right? Now, getting into a commercial service with thousands of vehicles and setting a safety bar that’s significantly higher than human-level performance, that’s very difficult.
NARRATOR: Still, the potential of this new technology proves irresistible to engineers and to business.
The DARPA challenges may have seemed like just a geeky science project, but they trigger a race to build a whole new industry. One of the first out of the gate? Tech giant, Google.
DANIELA RUS: Should we do a simple test first?
NARRATOR: In university robotics labs and the startups that spin out of them, a growing army of engineers keeps improving both sensors and software. The big car manufacturers take notice, and they, too, enter the fray.
CONSUMER ELECTRONICS SHOW,
LAS VEGAS, 2019
SPOKESPERSON AT 2019 CONSUMER ELECTRONICS SHOW: Well, hello ladies and gentlemen, and welcome to C.E.S.
NARRATOR: They’re betting that driverless cars are about to become a huge global business.
RAJ RAJKUMAR: The autonomous vehicle market is supposed to be an enormous market in the future. Estimates say that in the mid-2030s the market could be worth a few trillion U.S. dollars, with a “T” in there.
NARRATOR: That potential payday brings Uber to Arizona, whose flat landscape and sunny weather are ideal for testing self-driving cars. Uber sees eliminating its paid drivers as a key to future profitability.
But the March 2018 crash in Tempe casts a dark shadow on the future of self-driving cars. Uber suspends all testing on public roads.
The National Transportation Safety Board launches an investigation: why did neither the computer system nor the safety driver stop the car?
PHILIP KOOPMAN (Carnegie Mellon University/Chief Technology Officer, Edge Case Research): The Uber crash in Tempe wasn’t really about the maturity of the technology. We all knew the self-driving car technology wasn’t ready to deploy. That’s why they were doing testing. The significance of the Uber crash was that the human safety driver failed to prevent the crash.
NARRATOR: In the months after the crash, the N.T.S.B. and Tempe police release new details about what happened that night.
The interior camera reveals that Rafaela Vasquez is not watching the road for nearly seven of the 22 minutes before the crash, including about five of the last six seconds.
TISH HAYNES KEYES (Contestant, The Voice/Film Clip): (Singing) Oooohh, chain, chain, chain…
NARRATOR: The police discover that the whole time the car is moving, she is streaming an episode of a singing competition on her phone.
TISH HAYNES KEYES (Film Clip): (Singing) …chain of fools.
NARRATOR: Vasquez denies watching the video or even looking at her phone. She says she was just checking the control panel. But she does not step on the brake until after the car strikes Herzberg.
The N.T.S.B. findings also point to flaws in the self-driving system. The sensors actually detect Herzberg six seconds before impact, but the system doesn’t alert the safety driver. It’s not designed to.
STEVEN SHLADOVER: So, even though their sensors had detected this target, there was a potentially hazardous situation, they made the decision to not give any of that information to the test driver. I can’t imagine why.
NARRATOR: And the computer doesn’t decide emergency braking is needed until 1.3 seconds before impact.
But even then, the car doesn’t brake, because Uber has disabled the Volvo’s factory-installed emergency braking system. Uber wants to avoid jerky rides caused by unnecessary braking for harmless objects.
STEVEN SHLADOVER: So, the net result of that is they took a safe production vehicle and turned it into an unsafe prototype that caused the pedestrian to be killed.
NARRATOR: Uber tells the N.T.S.B. that it’s the safety driver’s job to correct any mistakes made by the self-driving system.
BRYAN REIMER (Massachusetts Institute of Technology): Now, while many people look at the safety driver in that vehicle as, as perhaps, the villain, the real issue is the system that that safety driver was put into. They were put into a system ripe for failure, where the expectations of failure were far lower than the actual probabilities, perhaps because the computer scientists and the engineers building these systems don’t all appreciate the complexities of, that human behavior brings to the system.
NARRATOR: Uber declined to participate in this film.
But even before Tempe, there were signs that drivers and automation don’t always work together very well. Automotive engineers classify automation into levels from 0 to 5. Fully automated cars, the ones driven by computers, are levels 3, 4 and 5. Uber was testing its car as a level 4 prototype. At the lower levels, humans drive. In level 0 cars they handle everything; level 1 and 2 cars assist drivers by regulating speed or keeping the car in its lane, or even both. They’re partially automated.
BOBBIE SEPPELT (Massachusetts Institute of Technology): Partial automation is the foundation for full automation, and how we respond and adapt to automation at the lower levels gives us our window into the future.
NARRATOR: Millions of level 1 and 2 cars are on the road today. Insurance data suggest that they are less likely to crash because most of them automatically brake the car to avoid collisions.
STEVEN SHLADOVER: And indeed it appears these systems are making people better drivers, because think of it as an extra set of eyes and ears beyond your own eyes and ears. So, you get that additional vigilance from the sensor systems that may detect things that you missed.
NARRATOR: But the added confidence provided by partial automation also poses an unforeseen risk.
BRYAN REIMER: In the Volvo, she’s clearly using pilot assist a lot, which is great, keeping her hands on the wheel, paying attention to what’s going on in front of her.
NARRATOR: A research team at M.I.T. is gathering data on how people use partial automation.
BRYAN REIMER: …seem to use auto pilot a lot?
ANTHONY PETTINAT: Yeah, he’s been using it a little bit more than everyone else, actually.
BRYAN REIMER: The cars we are studying today can automatically steer, accelerate and brake in ways that were not available in production vehicles 20 years ago. So, I think the ultimate question is, you know, how does attention change over time, as we use these cars over months and years? And that’s at the very heart of the question we are hoping to understand more.
NARRATOR: Cathy Urquhart was given a Volvo S90 to test drive. It’s equipped with several automated features, including one for steering that keeps the car in its lane.
CATHY URQUHART (M.I.T. Volvo S90 Test Driver): The first day I was nervous, of course. It’s new technology. And I think it’s the same when you’re driving any new car, you have to get used to it. But I liked the lane monitor to keep you in the, in the lane. It wasn’t too obtrusive, with the car just kind of pulling you over, the wheel pulling you to one side. It was good. I liked that.
NARRATOR: When the Volvo’s steering assistance is on, Cathy can briefly take her hands off the wheel, but she doesn’t.
CATHY URQUHART: I’m not somebody that takes my hands off the steering wheel. I like to have control. I like to hold onto the steering wheel.
NARRATOR: But there are others in the M.I.T. study, like Taylor Ogan, who are much more trusting of automation. He owns a Tesla equipped with software called “Autopilot.”
TAYLOR OGAN (M.I.T. Tesla Test Driver): I actually traded my first car in to get Autopilot, so that the car could drive itself. It helps you stay in your lane and follow the car a distance of the car in front of you. And you really don’t have to touch anything. It’s awesome. I love it. I swear by it.
BETTY LIU (Film Clip): All right, here we go.
ELON MUSK (Entrepreneur, Film Clip): All right.
BETTY LIU (Film Clip): All right, oh!
NARRATOR: Tesla’s founder and C.E.O., Elon Musk, has long touted Autopilot as a step along the road to his goal of full autonomy.
ELON MUSK (Film Clip): No hands, no feet, nothing. It’s a combination of radar, camera with image recognition and ultrasonic sensors, that’s integrated with maps and real-time traffic.
NARRATOR: But for now, the Tesla system is still level 2, which means it requires just as much driver attention as a regular car, so the company has been criticized for choosing the name “Autopilot.”
PETER NORTON (University of Virginia): It’s a nice, catchy sounding name, but “Autopilot” certainly has a misleading connotation. It suggests that the pilot of the car is in the program, in the computer and not in the driver of the vehicle.
NARRATOR: But Musk claims that the name is not misleading.
ELON MUSK: Autopilot is what they have in airplanes, and it’s where there’s still an expectation that there’ll be a pilot. So the, so if, if, the onus is on the pilot to make sure the autopilot is doing the right thing, we’re not asserting that the car is capable of driving in the absence of driver oversight.
NARRATOR: Yet when he uses Autopilot, Taylor Ogan feels free to look away from the road from time to time.
TAYLOR OGAN: I’m not a distracted driver, but I definitely do things that I think older people who don’t trust technology wouldn’t do. I’m definitely on my phone a lot more, texting, reading articles, when the car is driving itself.
UFUOMA ELVIS-OBUKOWHO (Tesla Test Driver): I’m actually going to test this Autopilot thing. Oh, I can eat food! This is the life, look at this.
NARRATOR: Some early Autopilot users push things a lot further and show off their exploits on YouTube.
UFUOMA ELVIS-OBUKOWHO (YouTube Video Clip): Oh, my. That’s a sharp turn. Oh, hoo-hoo, I’m alive. Okay, good. What is technology? What? What?
NARRATOR: Drivers like these may seem hopelessly reckless, but they’re just extreme examples of people placing too much trust in partially automated cars, a problem that in 2016 leads to tragedy.
JOSH BROWN (YouTube Video Clip): Ah, geez, car’s doing it all itself. Don’t know what I’m going to do with my hands down here.
NARRATOR: Thirty-nine-year-old Joshua Brown is a particularly enthusiastic early adopter of Tesla’s Autopilot. He loves to make YouTube videos showing what Autopilot can do.
JOSH BROWN (YouTube Video Clip): So, the camera up here is what’s always watching the road. And it’s looking at your lane markings and it is looking at speed limit signs. And right now we’re actually about to pass…
NARRATOR: Tesla says that Autopilot should only be used on highways with entrance and exit ramps, highways that don’t have cross traffic, like this one.
JOSH BROWN (YouTube Video Clip): Once you’re over, notice the sign is saying…
NARRATOR: But it’s only a recommendation, and Tesla doesn’t block drivers from using it on other roads, too.
JOSH BROWN (YouTube Video Clip): …at a stoplight. There’s a car in front of me, so I’m not going to have to do anything. So it’s determined stuff…
NARRATOR: Brown stresses how important it is for the driver to stay constantly alert.
JOSH BROWN (YouTube Video Clip): …so, just like that. I have my hands on the wheel, but I can still get my foot down to a brake and just keep it like this, because you can react so quickly that if anything goes wrong, you’re going to want to be able to take control very, very quickly, while we’re driving like this.
NARRATOR: On May 7, 2016, near the town of Williston, Florida, Joshua Brown is cruising east on Route 27A, a four-lane divided highway that has numerous intersections. He is driving at 74 miles per hour with Autopilot switched on.
As Brown nears an intersection, a truck driver prepares to make a left turn across his path. The truck is supposed to yield, but it doesn’t. It begins its turn with Brown’s Tesla about a thousand feet away.
Brown now has about 10 seconds to avoid a collision. Records show that he is not using his phone, but he does nothing and neither does Autopilot. The Tesla slams into the trailer, killing Brown.
In the immediate aftermath, a critically important question: who or what is to blame?
Autopilot is an obvious suspect, but the N.T.S.B. clears it, because Autopilot isn’t designed to detect obstacles, like that truck, that cross the car’s path. It only looks for objects moving in the same direction as the car.
So, the safety board splits the blame between the truck driver, for failing to yield, and Brown, for not paying attention. Had he noticed the truck and stepped on the brake, he could have easily stopped with plenty of room to spare.
And Brown is not the only person killed while using Autopilot.
In 2016, in China, a driver named Gao Yaning crashes into a road sweeper.
REPORTER #6: A deadly Tesla crash in California….
NARRATOR: In 2018, in California, Walter Huang dies when his Tesla crashes into a barrier.
REPORTER #6: …was set on Autopilot.
REPORTER #7: This deadly Tesla crash raising new questions tonight….
REPORTER #8: The roof of the car was ripped off as it passed under the trailer…
NARRATOR: And in 2019, Jeremy Beren Banner is killed in Florida when his Tesla hits a truck crossing its path.
REPORTER #9: ...the circumstances of which are similar to a crash in May of 2016, near Gainesville.
NARRATOR: Despite these fatalities, Tesla says that it continually improves Autopilot. And it asserts that drivers who use Autopilot have a lower crash rate than U.S. drivers as a whole.
Tesla declined to participate in this film.
Some scientists say that Autopilot and other systems like it remain inherently risky.
MISSY CUMMINGS: When cars are fully manual and you must do everything yourself, you bring all your cognitive resources to bear to do that task, but if the automation is doing a good enough job, people will check out in their heads very quickly. Your brain is wired to stop paying attention when the automation starts doing well enough.
CHRIS GERDES: It’s very easy to be lulled into a false sense of security if you’re supervising an automated vehicle that you have never seen have any sort of issue.
NARRATOR: That false sense of security makes it difficult for people to react quickly in emergencies, so many engineers are skeptical of what’s called level 3 autonomy, where a car can fully drive itself, except in an emergency, when the system alerts the driver to take over.
JESSE LEVINSON: If the vehicle all of a sudden can say, “Oh, wait. I don’t actually know what I’m doing. You better take over,” I’m not at all convinced that that can be done in a way that’s actually safe. That’s one of the reasons why we’re making the leap all the way to level 4 and 5 driving, which is to say that as a passenger in our vehicle you have no legal or other responsibility for driving or keeping the vehicle safe.
NARRATOR: Level 4 and 5 cars, which are fully-autonomous, are designed to safely handle any driving situation, even emergencies, with no human intervention at all. The only difference: level 4s could operate only on some roads at certain times; level 5s, anytime, anywhere.
AMNON SHASHUA: You can go to sleep or sit in the back seat, because there is no “take over” request. The car can manage all the situations. And in case something goes wrong, it still has enough redundancies to safely stop, so it doesn’t need the driver to engage.
NARRATOR: But achieving that goal of fully driverless cars will require the people developing them to overcome all kinds of obstacles.
MARK ROSEKIND: We’ve never done this before. We’ve done some things like it. We’ve increased safety in aviation tremendously, we’ve automated different kinds of transportation modes and activities beautifully, but we’ve never done this.
CHRIS GERDES: Trying to develop an automated vehicle that can do everything that a human driver can do is a huge problem, and it requires an awful lot of interlocking pieces.
NARRATOR: To even match human drivers, a self-driving car needs to learn to handle not just one, but three distinct tasks nearly flawlessly. The first: seeing everything that’s around the car; second, understanding what it’s seeing; and third, planning the car’s path and controlling its acceleration, braking, and steering along the way.
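That three-part breakdown is often drawn as a pipeline: sense, perceive, plan. The skeleton below is purely illustrative; the class and method names are hypothetical, not any company's actual software.

```python
from dataclasses import dataclass

@dataclass
class DrivingCommand:
    steering_angle_deg: float
    acceleration_mps2: float   # negative values mean braking

class SelfDrivingPipeline:
    """Illustrative skeleton of the three tasks described above; every method
    here is a stub standing in for a large subsystem."""

    def sense(self, cameras, radar, lidar):
        # Task 1, seeing: collect raw data from every sensor.
        return {"camera": cameras, "radar": radar, "lidar": lidar}

    def perceive(self, raw_sensor_data):
        # Task 2, understanding: turn raw data into labeled objects
        # (cars, pedestrians, lanes) with positions and velocities.
        return []   # stub: no objects detected

    def plan(self, objects):
        # Task 3, planning and control: pick a path and speed, then issue
        # steering, acceleration and braking commands along it.
        return DrivingCommand(steering_angle_deg=0.0, acceleration_mps2=0.0)

    def step(self, cameras, radar, lidar) -> DrivingCommand:
        return self.plan(self.perceive(self.sense(cameras, radar, lidar)))
```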
First, seeing: most self-driving cars rely on cameras, radar and LIDAR. Each has weaknesses. Cameras work poorly at night, radar doesn’t distinguish very well between different types of objects, and LIDAR can fail completely in rain, snow or fog. But even when sensors are working nearly perfectly, there’s still the second big challenge: understanding what they are seeing.
For us, taking in sensory information and making best guesses about the real world are second nature. For a computer, making meaning out of visual data is fiendishly difficult. In both cases, the task is called “perception.”
AMNON SHASHUA: The human brain is an expert in perception, but for machines, it’s not natural. So this is one big challenge, that if you want to drive autonomously, you need to perceive the world just like humans do.
NARRATOR: Until recently, no computer could come close, but since 2010, a big leap forward in a branch of A.I. called “machine learning” has gone a long way toward closing the gap.
DANIELA RUS: Machine learning, it’s about, uh, giving machines the ability to look at data, identify patterns and make predictions.
NARRATOR: Some common examples of machine learning: an app that can recognize human speech or an airport security system that can recognize faces. But before a computer can do these things, it has to be trained, shown countless examples of the things we want it to recognize…
MARTIAL HEBERT (Carnegie Mellon University): At the core of machine learning is the idea of using training data so that one can train the system to do something.
DANIELA RUS: Usually the training stage consists of millions of examples.
NARRATOR: …in this case, a set of images of cats. The computer will find the similarities among them that will help it generalize and be able to recognize any cat, like we do.
DANIELA RUS: It also requires a great breadth of examples, because the performance of the network will be only as good as the data used to train it.
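A stripped-down version of that training process, using a tiny logistic-regression classifier on made-up feature vectors instead of millions of labeled images, shows the basic pattern of learning from examples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up feature vectors for two classes (say, "cat" vs. "not cat"), standing
# in for the millions of labeled images a real training set would contain.
X = np.vstack([rng.normal(1.0, 0.5, (200, 2)), rng.normal(-1.0, 0.5, (200, 2))])
y = np.array([1] * 200 + [0] * 200)

w, b = np.zeros(2), 0.0
for _ in range(500):                        # gradient-descent training loop
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probability of "cat"
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # nudge weights to reduce errors
    b -= 0.5 * np.mean(p - y)

p = 1 / (1 + np.exp(-(X @ w + b)))
print(f"training accuracy: {np.mean((p > 0.5) == y):.1%}")
```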
NARRATOR: In Jerusalem, Mobileye, a specialist in autonomous technology, trains its perception software with a huge volume of data. A person has to carefully label each image so that the computer can learn what different things look like.
RONI VISTUCH (Mobileye): She is making sure that all the vehicle are correctly annotated. We are looking for, cars, pedestrian, traffic sign, traffic light.
NARRATOR: Once the software masters the set of training images, it’s ready to tackle the data from a real drive, a stream of pixels coming in from the car’s cameras.
PHILIP KOOPMAN: And it’ll take each image from the video, say one frame out of the video, and say, “Okay, I have a bunch of pixels, a bunch of colored dots.” And it tries to find out what’s in there. At a high level, what it’s doing is it’s going around, sniffing through the image, looking for things like, “Oh, there’s some vertical edges, and there’s some horizontal edges, and there’s something round.” And so it pulls out a bunch of features. And so it’s pulling these video features out and associating them with whether or not it’s a person or a car.
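The "looking for vertical edges" step Koopman describes is exactly what the early layers of these networks compute. A tiny, self-contained example applies one hand-written vertical-edge filter to a patch of pixels; real networks learn their filters from data rather than using this one:

```python
import numpy as np

# A 6x6 grayscale patch with a bright region on the right half, so there is
# one vertical edge down the middle.
patch = np.zeros((6, 6))
patch[:, 3:] = 1.0

# A hand-written vertical-edge filter; learned filters in a real network look
# messier, but they play the same role.
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

print(convolve2d(patch, kernel))   # strong responses only where the edge sits
```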
NARRATOR: Today’s software can interpret millions of pixels every second. And thanks to the recent breakthroughs in machine learning, its accuracy has shot up substantially, to as high as 98 percent.
But is that good enough?
PHILIP KOOPMAN: Whenever I hear a good high number like “98 percent accuracy” in the context of self-driving cars, my reaction is that’s not near good enough. One of the problems with the really high accuracy is that’s only about how you do on the training data. If the real world is even slightly different than the training data, which it always is, that accuracy might not really turn out to be the case.
NARRATOR: In Pittsburgh, Phil Koopman’s company helps clients by pushing perception software to its limits, hoping to reveal the unexpected ways it can fail.
PHILIP KOOPMAN: It picks him up for just a little bit, and then it goes away.
The driverless cars have gotten really good at ordinary situations. Going down the highway on a sunny day should be no problem. Even navigating city streets, if nothing crazy’s going on, should be okay. They have a lot of trouble with the things they haven’t seen before, so we call them “edge cases.” Just something you’ve never seen before.
JEN GALLINGANE: And you’ll notice it was raining a little bit that night, so we have a lot of adults carrying umbrellas.
PHILIP KOOPMAN: If they don’t have a lot of umbrellas in the training set, maybe it’s not going to see the people with the umbrellas.
JEN GALLINGANE: It’s not going to see the people.
PHILIP KOOPMAN: So, when it sees something that it’s never seen before, it’s never seen an example, it doesn’t know what to do.
NARRATOR: Today, Koopman and his colleague Jen Gallingane are driving around Pittsburgh in his car, looking for edge cases.
PHILIP KOOPMAN: We have a crossing guard directing traffic in addition to a traffic light.
NARRATOR: They use what they learn to improve the training of the software.
PHILIP KOOPMAN: The red boxes you see are called bounding boxes and what those are, they’re just a rectangle around where the person is in the image. And so the idea is this is where the computer thinks a pedestrian is. When you see those bounding boxes disappear, it means that the computer has lost track of where the person is. In other words, if there’s no red box, it doesn’t see the person.
So, I don’t think it saw him hardly at all, until we were almost on top of him.
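The flickering Koopman points to is easy to flag automatically once you have per-frame detections. A minimal sketch, using a made-up detection format rather than any real tool's output:

```python
# Hypothetical per-frame output of a perception system: each frame holds a
# list of labeled bounding boxes (x, y, width, height) in pixels.
frames = [
    [{"label": "pedestrian", "box": (410, 220, 40, 90)}],
    [],                                                    # detection lost
    [],                                                    # still lost
    [{"label": "pedestrian", "box": (405, 225, 42, 92)}],
]

def frames_missing_pedestrian(frames):
    """Return the indices of frames with no pedestrian bounding box at all."""
    return [i for i, detections in enumerate(frames)
            if not any(d["label"] == "pedestrian" for d in detections)]

print(frames_missing_pedestrian(frames))   # [1, 2]: the box "disappeared"
```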
NARRATOR: And there are many common variations in the ways that people and things look that can prevent a computer from making the correct I.D.
A delivery man wearing a turban is holding a food tray next to his head, forming shapes that differ from the typical training images of a person’s head. Those unusual shapes make the delivery man an edge case.
PHILIP KOOPMAN: So, when the red box is there, it sees him. And there he is looking right at me. He’s right in front of my car; it doesn’t see him. You can look at the picture and you say, “There’s a person there.” And the perception system says, “Nope, there’s no person there.” And that’s a problem.
NARRATOR: But many engineers say that a botched identification can be overcome, as long as some of the car’s sensors detect that something is there.
JESSE LEVINSON: So, we use sensors like radar and LIDAR to directly measure where everything is around us. And that’s really important, because even if a machine learning system can’t classify exactly what type of object something is, we still know there’s something there, we know how fast it’s moving, and so we can make sure that we don’t hit it.
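Levinson's point is essentially geometric: you do not need to know what an object is to avoid it, only whether anything occupies the planned path. A toy version of that check, with assumed coordinates and thresholds:

```python
import numpy as np

def obstacle_in_corridor(points_xy, corridor_half_width_m=1.5, lookahead_m=30.0):
    """Return True if any measured point, classified or not, lies in the lane
    ahead. Points are in vehicle coordinates: x forward, y to the left."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    in_path = (x > 0) & (x < lookahead_m) & (np.abs(y) < corridor_half_width_m)
    return bool(np.any(in_path))

# Toy LIDAR/radar returns: something 12 m ahead and near the lane center,
# plus two returns safely off to the sides.
points = np.array([[12.0, 0.4], [25.0, 6.0], [8.0, -5.0]])
if obstacle_in_corridor(points):
    print("unclassified object in path: slow down or stop")
```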
NARRATOR: Even if the car can see and make sense of its surroundings with unmatched perfection, it could still be a lethal hazard if it fails to handle its third crucial task: planning. That’s yet another daunting challenge, because the planning software has to anticipate what’s likely to happen, then plot the car’s pathway and speed, ready to change both in a split second if the sensors spot trouble.
Just like the car’s perception software, its planning software also needs to be trained.
To do that without putting lives at risk, most companies do a lot of their training in environments they can fully control. One of them is computer simulation.
KATE TARALOVA (Zoox): We are driving in San Francisco, so we have to create a world in simulation that is just as complex and varied as San Francisco. And this is not a small feat. When we run tests in simulation, we take a model of the real car, along with all the sensors that are on the real car, including the A.I. that runs on the real car, and this is what we place in our simulation.
NARRATOR: Here, the A.I. software can practice new skills without putting anyone in danger.
KATE TARALOVA: Suppose that we drive in the real world, and there is a double-parked car situation we don’t know how to deal with, right? So, what we do is we ensure that the vehicle can deal with these situations in simulation first, so that once we actually see that situation in real life, we already know that we’ll be able to deal with it.
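The workflow Taralova describes looks a lot like an ordinary regression test: define a scenario, run the planner against it in simulation, and assert that the outcome is safe. A hypothetical sketch; the simulate() function here is a stand-in, not a real simulator API:

```python
# The simulate() call below is a stand-in for a real simulator, not an API.
def simulate(scenario: dict) -> dict:
    """Pretend simulator: returns the planner's outcome for one scenario."""
    return {"collision": False, "min_clearance_m": 1.8, "completed_route": True}

def test_double_parked_car():
    scenario = {
        "map": "san_francisco_block_7",      # hypothetical map name
        "hazard": "double_parked_car",
        "oncoming_traffic": True,
    }
    result = simulate(scenario)
    assert not result["collision"]
    assert result["min_clearance_m"] >= 1.0  # keep at least a meter of clearance
    assert result["completed_route"]

test_double_parked_car()
print("double-parked-car scenario handled in simulation")
```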
NARRATOR: Another kind of safe training environment is a private test facility, like this one in Northern California, known as Castle. It’s run by Waymo, the company that Google spun off to develop self-driving cars.
STEPHANIE VILLEGAS (Waymo): And so, because the site has so many different types of roads, from residential to expressways to arterial roads going between the two, cul-de-sacs and things like that, we’re able to stage, basically, an infinite number of scenarios that you would encounter on those types of roads in the real world.
TONY KUM (Waymo): Great! Let’s go for run, three, two, one…
NARRATOR: Today’s test: on a street where the view is blocked, a car suddenly backs out into the Waymo car’s path.
TONY KUM: Great job, everyone.
STEPHANIE VILLEGAS: We can go a little spicier. We can change the speed at which the auxiliary vehicle exits the driveway. Does it rip out of its driveway like a bat out of hell, really coming out of the driveway ahead of the Waymo vehicle? Or is it slowly kind of meandering down the driveway and taking its time?
TONY KUM: Great job, guys. Let’s restage and stand by.
NARRATOR: Next, the Waymo car has to handle what’s called a “pinch point,” a two-way street made narrow by parked cars.
GUNJAN YAGNIK (Waymo): Waymo rolling.
STEPHANIE VILLEGAS: You encounter this really often on public roads, two oncoming vehicles have to negotiate who will assume the right of way and who will have to yield.
NARRATOR: The other car arrives at the pinch point first, so the software should tell the autonomous car to yield.
GUNJAN YAGNIK: Yielding, yielding right here. Butter going around. [sings] Butter…
NARRATOR: These are just two of the thousands of scenarios the company uses to train the software, and they introduce new ones all the time.
But ultimately, the only way to know for sure how a car will perform on public roads is to test it in the real world, where the stakes for wrong decisions are much higher.
SHAI SHALEV-SHWARTZ: We don’t just, “Let’s go for a test drive just for the, for the fun of it,” because a test drive by itself may be dangerous, okay? But there are things that are very difficult to check without actually doing a test drive. And these are things that involve negotiation with other drivers, because in order to check if you are negotiating normally and properly with other drivers, you need other drivers.
NARRATOR: Today, Mobileye engineers are testing a part of their planning software that handles merging in traffic.
AMNON SHASHUA: So, so we merged fine, but at the last point we’re a bit slow.
Merging into traffic, this multi-agent game that we are playing, requires sophisticated negotiation. You are negotiating. Your motion signals to other road users your intent, so you are negotiating. And this negotiation requires skills, and those skills don’t come naturally. Those skills need to be trained.
You need to now change two lanes, because the exit is a few hundred meters from us.
NARRATOR: The software is trained to be assertive when it has to be. Here, the car signals its intention to exit by speeding up so it can merge.
AMNON SHASHUA: And you saw that we changed two lanes without obstructing the flow of traffic and there are many vehicles here. You need to provide agility that is as good as humans, if you want to be safe.
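One way planners encode that kind of negotiation is a gap-acceptance rule: merge only when the gaps ahead and behind in the target lane are large enough, otherwise adjust speed and wait. A toy example with assumed thresholds, not Mobileye's actual logic:

```python
def can_merge(gap_ahead_m, gap_behind_m, closing_speed_mps,
              min_gap_m=8.0, min_time_gap_s=1.5):
    """Toy gap-acceptance rule for a lane change (illustrative thresholds,
    not any production planner's values)."""
    if gap_ahead_m < min_gap_m or gap_behind_m < min_gap_m:
        return False                       # not enough physical space
    if closing_speed_mps > 0 and gap_behind_m / closing_speed_mps < min_time_gap_s:
        return False                       # car behind is closing too quickly
    return True

# Too tight: the planner would wait, or speed up to open a bigger gap...
print(can_merge(gap_ahead_m=15.0, gap_behind_m=6.0, closing_speed_mps=2.0))   # False
# ...and commit once the gap is acceptable.
print(can_merge(gap_ahead_m=15.0, gap_behind_m=12.0, closing_speed_mps=1.0))  # True
```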
NARRATOR: The skill gap between autonomous systems and human drivers is narrowing.
CHRIS GERDES: Okay, hold on.
NARRATOR: And to teach self-driving cars to be even better than humans, some engineers are exploiting the things that computers can already do extremely well…
CHRIS GERDES: Okay, you may begin.
NARRATOR: …like control machinery with extraordinary precision.
JONATHAN GOH: Autonomous mode in three, two, one…
NARRATOR: This car is drifting, a kind of controlled skid. And what makes this feat possible is a computer programmed to exploit the laws of physics.
Here, at Thunderhill Raceway in California, a team from Stanford University is developing self-driving software that can take evasive action to escape danger.
CHRIS GERDES: In an emergency situation, you want to be able to use all of the capabilities of the tires to do anything that’s required to avoid that collision. Our automated vehicles are able to put the vehicle into a very heavy swerve when that is the best choice for how to avoid the collision.
NARRATOR: The Stanford team applies what they’ve learned from the undisputed masters of pushing car performance to the limit, race car drivers.
CHRIS GERDES: Race car drivers are always pushing up to the limits, but they’re trying to avoid accidents when they do that. So, for instance, race car drivers are trying to use all the friction between the tire and the road to be fast. We want to use all the friction between the tire and the road to be safe.
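The physics behind "using all the friction" is compact: the tires can deliver at most roughly the friction coefficient times gravity in total horizontal acceleration, and that budget is shared between braking and swerving. A back-of-the-envelope sketch with an assumed dry-road friction value:

```python
import math

G = 9.81    # gravitational acceleration, m/s^2
MU = 0.9    # assumed tire-road friction coefficient for dry asphalt

# Total horizontal acceleration the tires can deliver, shared between braking
# and cornering (the "friction circle" race drivers operate within).
a_max = MU * G
print(f"max combined acceleration: {a_max:.1f} m/s^2")     # about 8.8 m/s^2

# If the planner spends part of that budget on braking, the rest is what
# remains for a swerve.
a_brake = 6.0                                               # m/s^2 of braking
a_lateral = math.sqrt(max(a_max**2 - a_brake**2, 0.0))
print(f"lateral acceleration left for swerving: {a_lateral:.1f} m/s^2")  # ~6.5
```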
NARRATOR: While it might take a race car driver years to perfect skills like these, once the software masters them, a download could pass them on to a whole fleet of cars in just minutes.
CHRIS GERDES: We feel this is a fundamental building block of any type of automated vehicle that you would want to develop. The vehicle should be able to use all of its capabilities to move out of harm’s way.
You guys rock.
NARRATOR: So despite all the obstacles, many engineers think they’re closing in on the prize: self-driving cars safe enough to trust. Cars that could proliferate very rapidly. If and when that happens, they will share the road with human drivers. How will that work? Will we all get along?
In Michael Fleming’s test drives around the country, he sees a clash brewing. His company’s autonomous car is named Asimov, after Isaac Asimov, famed for his science fiction books on robots.
MICHAEL FLEMING: Asimov has a clear-cut rule book and is very consistent, but oftentimes, when we drive safely and follow the letter of the law, we get honked at. And do you know who we get honked at by? The aggressive drivers, the rule breakers.
NARRATOR: Fleming and his team analyze the data from the car’s strange encounters and use it to improve their software. Whether it’s a careless pedestrian…
MICHAEL FLEMING: This lady just steps out in the middle of the road. And if you notice, she doesn’t even turn her head.
NARRATOR: ...or a wrong-way car.
MICHAEL FLEMING: We were driving down a one-way street with multiple lanes, and we see this white truck driving the wrong way down a one-way road. So, clearly we have a rule breaker here, doing something that they shouldn’t do.
You know, the development of self-driving technology would be pretty simple, if everyone just followed the rules of the road, if everyone came to a stop at a stop sign, if everyone used crosswalks. But the reality is, they don’t. They break rules all the time.
NARRATOR: Conflicts between rule-bound automated cars and impatient humans are just one potential problem. If robo-taxi services become very cheap, might traffic actually increase and pollution grow worse? How many people who now earn their living from driving might lose their jobs? If millions of cars are electronically connected, what risks might that pose to our privacy and our security? Finally, an ethical question: are we willing to accept self-driving cars that kill some people, as long as they kill fewer people than human drivers do?
Despite these looming questions, proponents say self-driving cars could make transportation both easier and safer.
MARK ROSEKIND: I think new technology offers the biggest safety tool that we’ve had in a hundred years. That’s the cultural transformation that’s coming.
NARRATOR: But it is far from certain that driverless cars will ever deliver on that promise.
MISSY CUMMINGS: I think it is hubris to believe that driving is such a simple task that since there’s so much more automation in the world how hard could this be? I’m a big fan of where we’re going with this technology, but I also work on a day-to-day basis with this technology, and it’s just simply not ready for public consumption to any verifiable degree of safety.
JESSE LEVINSON: I think that we, as developers in the industry, need to earn the public’s trust, and not the other way around. I think we need to be able to demonstrate why our system is, in fact, safer than human drivers.
NARRATOR: If self-driving cars eventually do win public trust, their adoption may be less of a revolution than a slow evolution.
DANIELA RUS: In my opinion, the safe solutions today work at low speeds in low-complexity environments. So, this includes driving on private roads, on campuses, retirement communities, airports, but we do not have solutions that work, in general, at high speeds, in congestion and in really difficult road conditions.
MARTIAL HEBERT: Having a fully autonomous vehicle being able to take you anywhere, anytime is very, very far in the future. In fact, I don’t have even a guess as to how far in the future that would be.
STEVEN SHLADOVER: It’s not like a mobile phone app where, you know, if the mobile phone app doesn’t work 10 percent of the time, big deal. This has got to work all the time. There’s a pot of gold out there at the end of the rainbow for those who can actually get this to work. Now the challenge is how to get it to work safely.
PRODUCED BY
Kiki Kapany
WRITTEN AND PRODUCED BY
Edward Gray
PRODUCED AND DIRECTED BY
Michael Schwarz
EDITED BY
Jennifer Brooks
CO-PRODUCED BY
Alyn Divine
CAMERA
Vicente Franco
Eddie Marritz
Dror Lebendiger
Stephen McCarthy
Eric Coughlin
NARRATED BY
Craig Sechler
ORIGINAL MUSIC BY
Christopher Hedge
ADDITIONAL ORIGINAL MUSIC BY
John Paul Labno
ANIMATION
Todd Ruff
SOUND
Ray Day
Ryan Barrett
Mark Mandler
Samson Yanai
Jack Morris
Glenn Syska
COORDINATING PRODUCER
Stacey Toal
PRODUCTION ASSISTANT
Wes Richardson
PRODUCTION ACCOUNTANT
Rebecca Haseleu
DRONE OPERATORS
Alyn Divine
Shiran Granot
ONLINE EDITOR
Alyn Divine
COLORIST
Gary Coates
AUDIO MIX
Phoenix Sound Design
POST PRODUCTION SUPERVISOR
Alyn Divine
POST PRODUCTION ASSISTANT
Wes Richardson
ARCHIVAL RESEARCH
Stacey Toal
ARCHIVAL MATERIAL
The Associated Press
Brian Gerkey
Carnegie Mellon University
DARPA
Edge Case Research
Erick Iglesias Umana
Florida Highway Patrol
Getty Images
Jukin Media
Matthew Sostrom
Mobileye
National Transportation Safety Board
Oddball Films
Pond5
Qing (Catherine) Guo
Robert Trudell
Stanford University
Tempe Police Department
Torc Robotics
Ufuoma Elvis-Obukowho
Waymo LLC
Zoox, Inc.
LEGAL SERVICES PROVIDED BY
Hillary Brill, Esquire
Glushko-Samuelson Intellectual Property Clinic American University Washington College of Law
COMMUNITY & EDUCATIONAL OUTREACH PARTNER
Arizona State University School for the Future of Innovation in Society
ADVISORS
Srikanth Saripalli
Brandon Schoettle
Chuck Thorpe
Jameson Wetmore
SPECIAL THANKS
Ben Hastings
Daniel Moodie
Adam Shoemaker
James Stout
Sarah Tariq
NOVA SERIES GRAPHICS
yU + co.
NOVA THEME MUSIC
Walter Werzowa
John Luker
Musikvergnuegen, Inc.
ADDITIONAL NOVA THEME MUSIC
Ray Loring
Rob Morsberger
CLOSED CAPTIONING
The Caption Center
POST PRODUCTION ONLINE EDITOR
Lindsey Rundell Denault
-----
DIGITAL PRODUCTION ASSISTANT
Lorena Lyon
DIGITAL EDITOR
Katherine Wu
DIGITAL ASSOCIATE PRODUCERS
Arlo Perez
Ana Aceves
DIGITAL PRODUCER
Emily Zendt
DIGITAL MANAGING PRODUCER
Kristine Allington
SENIOR DIGITAL PRODUCER
Ari Daniel
-----
PROGRAM MANAGER, NOVA SCIENCE STUDIO
Tenijah Hamilton
OUTREACH COORDINATOR
Gina Varamo
DIRECTOR OF EDUCATION AND OUTREACH
Ralph Bouquet
-------
DIRECTOR OF NATIONAL AUDIENCE RESEARCH
Cory Allen
PUBLICITY
Eileen Campion
AUDIENCE ENGAGEMENT EDITOR
Sukee Bennett
DIRECTOR OF AUDIENCE DEVELOPMENT
Dante Graves
DIRECTOR OF PUBLIC RELATIONS
Jennifer Welsh
-------
PRODUCTION ASSISTANT
Angelica Coleman
PRODUCTION COORDINATOR
Linda Callahan
ASSOCIATE RESEARCHER
Christina Monnen
-------
TALENT RELATIONS
Janice Flood
PARALEGAL
Sarah Erlandson
RIGHTS MANAGER
Hannah Gotwals
BUSINESS MANAGER
Elisabeth Frele
LEGAL AND BUSINESS AFFAIRS
Susan Rosen
-------
POST PRODUCTION ASSOCIATE PRODUCER
Jay Colamaria
SENIOR PROMOTIONS PRODUCER AND EDITOR
Michael H. Amundson
SUPERVISING PRODUCER
Kevin Young
BROADCAST MANAGER
Nathan Gunner
----
SENIOR EXECUTIVE PRODUCER, EMERITA
Paula S. Apsell
----
SCIENCE EDITOR
Robin Kazmier
NOVA PRODUCER
Caitlin Saks
DEVELOPMENT PRODUCER
David Condon
PROJECT DIRECTOR
Pamela Rosenstein
COORDINATING PRODUCER
Elizabeth Benjes
SENIOR SCIENCE EDITOR
Evan Hadingham
SENIOR SERIES PRODUCER
Melanie Wallace
DIRECTOR, BUSINESS OPERATIONS & FINANCE
Laurie Cahalane
EXECUTIVE PRODUCERS
Julia Cort
Chris Schmidt
A NOVA Production by Kikim Media for WGBH Boston.
© 2019 Kikim Media and WGBH Educational Foundation
All Rights Reserved
This program was produced by WGBH, which is solely responsible for its content. Some funders of NOVA also fund basic science research. Experts featured in this film may have received support from funders of this program.
Original funding for this program was provided by Draper, the David H. Koch Fund for Science, the Alfred P. Sloan Foundation, Margaret and Will Hearst, the Montgomery Family Foundation and the Corporation for Public Broadcasting.
IMAGE CREDIT: (empty cockpit of vehicle) © metamorworks/Shutterstock
PARTICIPANTS
Ryan Chilton, Missy Cummings, Dmitri Dolgov, Michael Fleming, Chris Gerdes, Martial Hebert, Philip Koopman, Jesse Levinson, Peter Norton, Taylor Ogan, Raj Rajkumar, Bryan Reimer, Mark Rosekind, Daniela Rus, Bobbie Seppelt, Shai Shalev-Shwartz, Amnon Shashua, Steven Shladover, Kate Taralova, Sebastian Thrun, Cathy Urquhart, Stephanie Villegas, Roni Vistuch