In a report commissioned by the Pentagon’s Defense Science Board, top defense and technical professionals argue that the United States needs to prepare for artificial intelligence in warfare – because other countries might prepare for it first.
“While evident that the DoD is moving forward in the employment of autonomous functionality, it is equally evident that the pull from diverse global markets is accelerating the underlying tech base and delivering high-value capabilities at a much more rapid pace,” the introduction to the 121-page report reads.
Along with this comes a caveat: the study isn’t recommending major new projects, “given the current budget environment.” Instead, it suggests a wide variety of experimental projects and recommends that the DoD reach out to private “non-traditional R&D communities.”
The report operates on the premise that autonomous weapons capabilities will be developed eventually, and that the United States needs to avoid an AI Cold War. Doing that involves beefing up the U.S.’s autonomous armory, most of which, the authors say, can be employed in non-lethal military applications. The writers are aware of public fear of “the use of autonomous weapons systems with potential for lethality,” and note that development of technology that reeks of killer robots “may meet with resistance unless DoD makes clear its policies and actions across the spectrum of applications.”
So most autonomous systems might be focused on piloting vehicles or relaying communications between human warfighters; at least, those are the applications the report focused on. The exception to this rule is a recommendation for a “minefield of autonomous lethal UAVs,” which could prevent unwanted incursions into American-held zones either on land or underwater. These would be designed to be “cascaded,” or to deploy smaller automated weapons in order to control specific areas.
This also brings us back to the idea of an autonomous war designed to prevent autonomous war: “large UA (unmanned aircraft) could be designed to dispense small UA.”
One major potential problem with autonomous military systems is trust. People working with a truly autonomous system must be able to trust that it will do what it is expected to do. They must also have a way to prove that it is “operating reliably and within its envelope of competence.” It’s one thing to tell whether a drone has hit a target, but another to let a completely autonomous system hunt on its own. And what if that system’s autonomy also allows it to lie to its allies? What about cyber defense issues? An enemy hacker could take control of an autonomous system without significantly altering its observable behavior, so the compromise might go unnoticed.
The key, the study said, is to develop technology faster than adversaries can.
Meanwhile, we’ll be stepping into a field in which the AI we control could reason in ways that are very alien to us. “For some specific algorithm choices—such as neuromorphic pattern recognition for image processing, optimization algorithms for decision-making, deep neural networks for learning, and so on—the ‘reasoning’ employed by the machine may take a strikingly different path than that of a human decision-maker,” the report reads.
As of 2014, the U.S. military employed almost 10,000 unmanned aerial systems.