The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield.
The new principles call for people to “exercise appropriate levels of judgment and care” when deploying and using AI systems, such as those that scan aerial imagery to look for targets.
They also say decisions made by automated systems should be “traceable” and “governable,” which means “there has to be a way to disengage or deactivate” them if they are demonstrating unintended behavior, said Air Force Lt. Gen. Jack Shanahan, director of the Pentagon’s Joint Artificial Intelligence Center.
An existing 2012 military directive requires humans to be in control of automated weapons but doesn’t address broader uses of AI.
The new principles outlined Monday follow recommendations made last year by the Defense Innovation Board, a group led by former Google CEO Eric Schmidt.
While the Pentagon acknowledged that AI “raises new ethical ambiguities and risks,” the new principles fall short of stronger restrictions favored by arms control advocates.
“I worry that the principles are a bit of an ethics-washing project,” said Lucy Suchman, an anthropologist who studies the role of AI in warfare. “The word ‘appropriate’ is open to a lot of interpretations.”
Shanahan said the principles are intentionally broad to avoid handcuffing soldiers with specific restrictions that could become outdated.
“Tech adapts. Tech evolves,” he said.
The Pentagon hit a roadblock in its AI efforts in 2018 after internal protests at Google led the tech company to drop out of the military’s Project Maven, which uses algorithms to interpret aerial images from conflict zones. Other companies have since filled the vacuum, and Shanahan said the new principles are helping to regain support from the tech industry.
“There was a thirst for having this discussion,” he said, though he added, “sometimes I think the angst is a little hyped.”