How close are we to fully automated weaponry? Are there actual killer robots out there right now, ready to fight wars?
Last week a group of high-profile robotics scientists, researchers and entrepreneurs warned in an open letter that without a blanket international treaty outlawing autonomous weapons systems (military machines capable of selecting and engaging targets without human intervention), an artificial intelligence arms race could be upon us within years, not decades. The letter calls for an immediate ban on such weapons; to date, more than 18,000 people have co-signed.
No government or military has yet deployed a fully autonomous weapon system. But many of the world's defense systems are already partially automated, using a mixture of human and computerized calculations to identify and fire at objects. Many weapons are considered semi-autonomous and could function by themselves with just the flick of a switch.
We talked to four experts in the field of global politics, defense and artificial intelligence: Matthew Bolton of Pace University, David Akerson of the University of Denver, Todd Harrison of the Center for Strategic and Budgetary Assessments and Noel Sharkey of the University of Sheffield.
They highlighted a few semi-autonomous weapons to be aware of:
Landmines

Perhaps you wouldn’t consider landmines artificial intelligence, but Bolton, of Pace University’s political science department, refers to them as a type of “analogue” autonomous weapon. “In many ways an autonomous armed robot is just a high-tech landmine that can fly and possibly follow you around.” Bolton argues that the Anti-Personnel Landmine Ban Convention — a 1997 treaty signed by 162 countries that prohibits the use of landmines and aims to ultimately eliminate them — is an indication of inherent international opposition to “automated, victim-activated killing.”
The Phalanx System
Essentially a large machine gun on U.S. naval ships, the Phalanx switches on like a household alarm system once a missile is detected and automatically destroys anything coming its way. Often nicknamed R2-D2 for its resemblance to the famous Star Wars droid, it’s effective because humans often aren’t quick enough to respond to incoming rockets. But it also requires that a human flip a switch before it can choose what to attack and when. That’s what makes these guns semi-autonomous. According to David Akerson of the University of Denver, fully autonomous systems actually have to go find someone, but “these defense systems only target an incoming rocket, which has already established hostility.”
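The split Akerson describes can be reduced to a toy decision table: the human contributes only the arming step, and everything after that is automatic. This is an illustrative sketch, not any real fire-control logic; the function name and its states are invented for the example.

```python
# Illustrative sketch only: the human decision is compressed into a single
# "armed" flag, after which detection and engagement need no further input.
def engage_decision(armed: bool, detected_inbound: bool) -> str:
    """Return the action a semi-autonomous point-defense loop would take."""
    if not armed:
        return "hold"   # human has not enabled the system
    if detected_inbound:
        return "fire"   # machine selects and engages on its own
    return "track"      # armed, but no threat detected yet
```

The point of the sketch is how little the human does once the switch is flipped: the only path to "fire" runs entirely through the machine's own detection.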
The Samsung SGR-A1

Yes, the same Samsung that makes TVs and smartphones also makes robot military sentries. Samsung Techwin, a subsidiary of the multinational conglomerate Samsung Group, manufactures the Samsung SGR-A1, a surveillance system that substitutes for human security guards at the Demilitarized Zone, or DMZ, between North and South Korea. According to the South Korean government, the SGR-A1 can detect targets up to two miles away with heat and motion sensors and has the ability to shoot at objects that do not respond to a verbal warning. The robot is manned by humans but can be switched to automatic mode. Akerson points out that the DMZ also has landmines, which indiscriminately target whoever steps on them. “One argument in favor of autonomous weapons is that if a robot occupies the same function as landmines … [the robot] would at least be able to be fully removed, unlike landmines.”
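The sentry's reported behavior, warn first, then fire either on human approval or on its own depending on mode, can be sketched as a small state check. This is a hypothetical model built only from the description above; the mode names, parameters and outcomes are invented for illustration.

```python
from enum import Enum, auto

class Mode(Enum):
    SUPERVISED = auto()  # a human must approve any fire order
    AUTOMATIC = auto()   # the robot may fire on its own after a warning

def sentry_action(mode: Mode, responded_to_warning: bool,
                  human_approved: bool = False) -> str:
    """Toy model of a warn-then-fire sentry; all names are illustrative."""
    if responded_to_warning:
        return "stand down"           # target complied with the warning
    if mode is Mode.AUTOMATIC:
        return "fire"                 # no human in the loop
    return "fire" if human_approved else "await operator"
```

Flipping one enum value is all that separates a supervised system from an autonomous one here, which is roughly the "switched to automatic mode" concern the experts raise.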
Iron Dome

Iron Dome is a missile defense system deployed by Israel (with financial support from the United States) to create a type of “protective bubble” that destroys rockets before they hit populated areas. Like the Phalanx system, Iron Dome makes up for the loss of time that humans require to analyze data and decide whether to shoot an incoming object. Instead, the weapon’s computers make that decision based on an algorithm programmed into it by humans. Todd Harrison, Senior Fellow for Defense Budget Studies at the Center for Strategic and Budgetary Assessments, says there’s at least one good thing about a computer making these decisions. “They always follow the criteria set for them, unlike a human which may not due to fear, biases, lack of sleep, etc.” While Iron Dome doesn’t require human intervention, manual operation can be activated if needed. Iron Dome drew significant attention in spring 2014 when photos circulating online of the latest fighting between Israel and Hamas showed disproportionate destruction on the Palestinian side as compared to Israel. According to Israeli officials, Iron Dome had a 90 percent deflection rate of missiles fired from Gaza.
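The publicly described criterion, intercepting only rockets predicted to land in populated areas, amounts to a point-in-zone test on a predicted impact point. The sketch below is a toy stand-in for whatever criteria the real system encodes: the rectangular zones, coordinates and function name are all invented for illustration.

```python
# Hypothetical sketch: fire an interceptor only if the predicted impact
# point falls inside a protected (populated) zone; rockets headed for
# open ground are ignored. Zones are axis-aligned boxes
# (x_min, y_min, x_max, y_max) in some arbitrary ground coordinates.
def should_intercept(predicted_impact, protected_zones):
    x, y = predicted_impact
    return any(x0 <= x <= x1 and y0 <= y <= y1
               for (x0, y0, x1, y1) in protected_zones)
```

For example, with a single protected box `[(0, 0, 10, 10)]`, an impact predicted at `(5, 5)` triggers an intercept while one at `(20, 20)` does not. Harrison's point lands here: a rule like this is applied identically every time, for better or worse.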
Taranis and X-47B
While some experts debate whether the Taranis and X-47B are actually autonomous (a human is “in the loop” at some point or another), these two are far more automated than your average drone. Noel Sharkey, Professor of Artificial Intelligence and Robotics at the University of Sheffield, says that with a drone, “You have a pilot and sensor operator with large screens in front of them. [The drone] flies to the coordinates, [the pilots] select the targets.” But the U.K. Ministry of Defence’s Taranis basically flies itself. Designed as a test model, Taranis can find enemies but must be triggered by a human to fire. The U.S. Navy’s X-47B, which Sharkey calls “something Batman would fly,” is built to do several things without a human, including carrying out entire missions, refueling in midair and landing. It’s important to note that both are still prototypes. In fact, once the X-47B is done with demonstrations, it’s headed to a museum. But considering that the U.K. has opposed an international ban on autonomous weapons, and the U.S. Department of Defense’s policy on military A.I. isn’t exactly bulletproof in its call for “appropriate levels of human judgment in the use of force,” lethal autonomous machines could be here sooner than we expect.