Joshua Foust

The false fear of autonomous weapons

Last month, Human Rights Watch raised eyebrows with a provocatively titled report about autonomous weaponry that can select targets and fire at them without human input. “Losing Humanity: The Case Against Killer Robots,” the headline blares, and the report argues that autonomous weapons would increase the danger to civilians in conflict.

In this report, HRW urges the international community to “prohibit the development, production, and use of fully autonomous weapons” because these machines “inherently lack human qualities that provide legal and non-legal checks on the killing of civilians.”

While such concern is understandable, it is misplaced. For starters, as HRW concedes in its report, no country, including the U.S., has decided to develop or deploy fully autonomous armed robots. Shortly after the report was published, the Pentagon released a directive on the development of autonomy that called for “commanders and operators to exercise appropriate levels of human judgment over the use of force.”

So if the Pentagon doesn’t want fully autonomous weapons, why is there such concern about them?

Part of the reason, arguably, is cultural. American science fiction, in particular, has made clear that autonomous robots are deadly. From the Terminator franchise and both versions of Battlestar Galactica to the Matrix trilogy, the clear thrust of popular science fiction is that machines that function without human input will be the downfall of humanity.

It is this sci-fi “understanding” of technology that underlies some objections to autonomous weaponry. But the Pentagon directive shows that the military does not want fully autonomous weapons, and a deeper look at this type of weapon reveals that the perceived threat may not be valid. In fact, re-examination suggests more plausible responses to this technology than full-bore prohibition.

Many of the processes that go into making lethal decisions are already automated. The intelligence community (IC) generates around 50,000 pages of analysis each year, culled from hundreds of thousands of messages. Every day, analysts reviewing targeting intelligence populate lists for the military and the CIA from hundreds of pages of documents selected by computer filters and automated databases that flag certain keywords.

In war zones, too, many decisions to kill are at least partly automated. Software programs such as Palantir collect massive amounts of information about IEDs, analyze it without human input, and spit out lists of likely targets. No human could possibly read, understand, analyze, and output so much information in such a short period of time.

Automated systems already decide to fire at targets without human input, as well. The U.S. Army fields advanced counter-mortar systems that track incoming mortar rounds, swat them out of the sky, and fire a return volley of mortars in response without any direct human input. In fact, the U.S. has employed similar (though less advanced) automated defensive systems for decades aboard its Navy vessels. Additionally, heat-seeking missiles don’t require human input once they’re fired: on their own, they seek out and destroy the nearest intense heat source, regardless of its identity.

It’s hard to see how, in that context, a drone (or rather the computer system operating the drone) that automatically selects a target for possible strike is morally or legally any different than weapons the U.S. already employs.

The real debate should center on the philosophy behind these systems: what are they designed to do, and can they be made to do it more effectively? Humans are imperfect: targets may be misidentified, vital intelligence can be discounted because of cognitive biases, and the information needed to make a decision may simply not be available. Autonomous systems can dramatically improve that process, so that civilians are actually much better protected than they would be by human judgment alone.

Additionally, automated analysis systems reflect the attitudes and assumptions of the people who program them; American values shape the way these systems analyze data and why certain pieces of it are highlighted or ignored. In other words, automated systems already reflect us and our priorities, and there is no reason to think more automation would change that. The fear that autonomous drones would use less discretion before firing a weapon, for example, describes a deliberate design choice, not something inherent to automation.

Alternatively, these programs could be changed to better reflect our values and priorities. The possibility of full autonomy poses a number of questions about the use of force and how to maintain accountability when people take a less active role. But it could also make warfare less deadly, more accountable, and ultimately more humane; knee-jerk reactions against such a future don’t further the debate any more than uncritically embracing such technology does.

 

Comments

  • kcbill13

    R U Nuts?

  • Anonymous

    Love the logic: ‘autonomous killing machines are fine, because our killing is already partially automated’

  • Foust Screed

    It’s called Palantir.

  • Anonymous

    What a total load of intellectual claptrap. I don’t even know where to begin… “American values?”

    – Any male in a designated zone is a militant, whether they are or not.
    – Bug splat.
    – Assassination of US citizens w/o due process.
    – Indefinite detention.
    – Unconstitutional spying on Americans’ communications.
    – The US gov’t has killed more human beings than any other country in the history of the world.

    Who is paying your salary, Raytheon?

  • http://twitter.com/catfitz CatherineFitzpatrick

    The reason a lot of science fiction is scary is not because it’s about machines, but because it’s about people — people against other people, and not under a shared sense of the rule of law.

    The computer code that runs the killing machines is made by humans and is a concretization of their will, not something uncontrollable or entirely automatic and escaped from their control.

    More importantly, the decisions about where to deploy the machines that involve targets drawn from Panatir data-dredging, or heat-seeking missiles or counter-mortar systems are made by humans. Before drones are deployed, humans sometimes have to do things like call up leaders of countries and seek their intelligence and their clearance. So, in the first place, nothing that is portrayed here as automatic is in fact as automatic as Foust strangely makes it seem, because it’s in a context and a system where humans do make decisions about the very theaters of war in the first place.

    Yet precisely because in our time, the weapons are far more automated, and in the case of drones, there is a greater acceleration and precision — and therefore ease and seeming moral comfort — in their use, we have to look at the moral dimension. Foust seems content if drones just don’t miss very often or don’t have much collateral damage. But if they get so easy to use, won’t the temptation be to do more killing with them and make them more automatic? Where will it stop and who will be authorized to make the judgement call?

    There’s also the question of whether it’s really the case that drones *are* so precise, given how many reports there are from human rights groups and local lawyers about non-combatants, including children, who are hit. These victims can’t seek compensation, as their counterparts killed by regular US or NATO actions with more traditional weapons can, because drones are in a secret program run by the CIA, and not the military. This is apparently because of the need to keep them secret, apparently particularly from the governments of Pakistan and Afghanistan.

    So this raises questions of governance, as to whether we can morally retain these weapons as secret and unaccountable, and whether we should put them under the regular armed forces’ leadership.

    More automation can in fact decouple the moral imperative from the results of the action of weapons particularly because of the acceleration and capacity for devastation.

    Foust has a curious coda to yet another unconscionable piece in defense of drones as efficient war-machines — he posits the idea that a less active role by people — i.e. less compunction about use and nature of targets and consequences — could somehow be a goal, and that more automation need not diminish our values. How?

    In fact, if these programs reflect our values, they would have to become less secret, and attacks less common. Foust has already stripped away the moral context by pretending to find all kinds of “good” uses of “automation” that in fact a) aren’t automation as he claims because of the prior choices about war in the first place, and theaters of war, and targets and b) have more unintended consequences than he prepares to admit.

    As Foust notes, the Pentagon released a directive on “appropriate levels of human judgement,” but Foust seems to think the radiant future can contain more automated processes if we can just all agree on our priorities.

    There’s nothing wrong with a cultural heritage that sees autonomous robots as deadly; they are. Pentagon planners and the CIA don’t wish to kill civilians who are not combatants. Yet they do. They do because the targets often tend to have their families around them and the military can’t wait until they get into the clear. That’s the crux of the problem.

    There’s a strange notion that raising any moral questions about killing machines, as Human Rights Watch has done, is motivated by “fear”. It seems simply to be more motivated by morality, and also the practical sense that machines don’t have consciences, and code never renders human interaction as perfectly as real life.

  • deichmans

    How can someone claim to speak about morality or legality without a single mention of Just War ethics? If you had even a modicum of awareness of Jus in Bello, you would never make so specious an equivalence as suggesting autonomous offensive systems are “…any different than weapons the U.S. already employs.” The moment we choose to equate an opponent’s lives with not our own lives but rather our technology, we surrender the moral high ground.

  • http://www.facebook.com/profile.php?id=579651675 Clint LeClair

    Let’s return to this discussion of accepting autonomous lethal technology after the first time hacked hardware is used against civilians, friendlies, or our own troops. Any civilian can already capture a drone by cloning and then trapping it with a stronger (falsified) GPS satellite ping, giving it a false location. Maybe this discussion will happen sooner than most of us anticipate if the software used to hack and redirect a Predator drone (after it is overridden from its human joystick jockeys) is itself automated. Not just food for thought… hints of what is already possible.