If we create autonomous killing machines, expect shoddy code and hacking

What if humans miss one tiny error in the code of killer robots or autonomous weapons? What if enemy nations hack those killing machines?

[Image: Terminator T-800. Credit: Dick Thomas Johnson]

It’s one thing if a botched software update causes a Nest or Hive “smart” thermostat to either freeze or swelter people in their homes, but what if humans miss one tiny error in the code of killer robots or autonomous weapons? What if enemy nation states hack those killing machines?

Paul Scharre, who previously worked on autonomous weapon policy for the Office of the Secretary of Defense, is the Project Director for the 20YY Warfare Initiative at the Center for a New American Security. In addition to his interesting posts on Just Security and Defense One about “killer robots,” his new report, “Autonomous Weapons and Operational Risk” (pdf), examines the dangers of deploying fully autonomous weapons.

One instinctive fear regarding autonomous systems, Scharre writes, “is one of robots run amok, autonomous systems that slip out of human control and result in disastrous outcomes.” While he believes dystopian science fiction feeds such fears, he adds, “these concerns also are rooted in our everyday experience with automated systems.”

Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to “p.m.” instead of “a.m.,” or any of the countless frustrations that come with interacting with computers, has experienced the problem of “brittleness” that plagues automated systems. Autonomous systems will do precisely what they are programmed to do, and it is this quality that makes them both reliable and maddening, depending on whether what they were programmed to do was the right thing at that point in time. Unlike humans, autonomous systems lack the ability to step outside their instructions and employ “common sense,” adapting to the situation at hand.
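To see what that brittleness looks like in miniature, consider a toy thermostat controller. This is a hypothetical sketch, not code from any real product or from Scharre's report:

```python
# Hypothetical sketch: an automated thermostat that follows its schedule
# exactly, even when the schedule itself was entered wrong (the
# "p.m. instead of a.m." problem described above).

def heat_on(hour, schedule):
    """Return True if the heat should run at this hour, per the schedule."""
    return schedule.get(hour, False)

# The operator meant to warm the house at 7 a.m. but typed 19 (7 p.m.).
schedule = {19: True}

for hour in (7, 19):
    print(f"{hour:02d}:00 -> heat on? {heat_on(hour, schedule)}")

# Output:
# 07:00 -> heat on? False   (the house stays cold all morning)
# 19:00 -> heat on? True    (it heats an empty house instead)
# The system did exactly what it was told; it has no "common sense" to
# notice that the operator's intent was morning warmth.
```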

Deputy Secretary of Defense Bob Work previously said, “Our adversaries, quite frankly, are pursuing enhanced human operations. And it scares the crap out of us, really.” In other words, Russia and China are reportedly “enhancing humans” to make super soldiers. Work didn’t say the DoD will take that route, but he did say DoD scientists are working on autonomous weapons.

It’s a bit unnerving when Scharre, who worked at the Pentagon developing unmanned and autonomous weapon policy, warns of the ways autonomous weapons (which could target and kill people without any human in the loop to intercede) could go horribly wrong. He is not, however, necessarily opposed to “centaur warfighting,” the melding of man and machine.

Scharre defines an autonomous system as “one that, once activated, performs a task on its own. Everyday examples range from simple systems like toasters and thermostats to more sophisticated systems like automobile intelligent cruise control or airplane autopilots. The risk in employing an autonomous system is that the system might not perform the task in a manner that the human operator intended.”

An autonomous weapon will use its programming, but will “select and engage targets on its own.” If that goes off the rails from what humans intended, it could result in “mass fratricide, with large numbers of weapons turning on friendly forces,” as well as “civilian casualties, or unintended escalation in a crisis.”

There are several reasons why an autonomous weapon might flip out: the systems are extremely complex, so a component could simply fail, or the cause could be “hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors,” Scharre explained.

The more complex an autonomous system is, the harder it is for a human to predict what it will do in every situation. Even complex rule-based automated systems can fail because of a single error buried in extremely long code. Scharre cited a study which found that the software industry averages between 15 and 50 errors per 1,000 lines of code; he also mentioned an Air Force chief scientist calling for new techniques to verify and validate autonomous software, because “there are simply too many possible states and combination of states to be able to exhaustively test each one.”
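A rough back-of-the-envelope calculation shows why those two numbers are so worrying together. Only the 15-to-50-errors-per-1,000-lines figure comes from the report; the code size and testing throughput below are assumptions made up for this sketch:

```python
# Back-of-the-envelope illustration. The 15-50 errors per 1,000 lines of
# code is the industry average Scharre cites; the code size and testing
# throughput below are assumptions made up for this sketch.

lines_of_code = 500_000                       # assumed size of the software
low, high = 15, 50                            # errors per 1,000 lines (cited)
print(f"Expected latent errors: {low * lines_of_code // 1000:,} "
      f"to {high * lines_of_code // 1000:,}")

# Why exhaustive testing fails: with just 64 independent on/off sensor and
# mode flags, the number of distinct input states already exceeds 10**19.
binary_inputs = 64
states = 2 ** binary_inputs
tests_per_second = 1_000_000                  # assumed test-harness speed
years = states / tests_per_second / (3600 * 24 * 365)
print(f"{states:,} possible states; exhaustive testing at one million "
      f"tests per second would take roughly {years:,.0f} years")
```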

If an autonomous system has a flaw that is susceptible to hacking, then every identically replicated copy of that system shares the same flaw. What would the aggregate damage be if all autonomous weapon systems were hacked and behaved in unintended ways at the same time? Yes, we should expect adversarial hacking. Scharre wrote:

In an adversarial environment, such as in war, enemies will likely attempt to exploit vulnerabilities of the system, whether through hacking, spoofing (sending false data), or behavioral hacking (taking advantage of predictable behaviors to “trick” the system into performing a certain way). While any computer system is, in principle, susceptible to hacking, greater complexity can make it harder to identify and correct any vulnerabilities.

The complexity predicament gets even more migraine-inducing with cutting-edge AI systems built on neural networks. Some visual-classification AIs can tell the difference between a human and an object, but Scharre pointed out that such AIs have reported 99.6% confidence in an identification that turned out to be completely incorrect. Hopefully that autonomous AI isn’t in charge of picking targets and launching missiles.
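A toy example, using a simple logistic-regression classifier rather than the neural networks in the study Scharre refers to, shows how a model's reported confidence can be nearly 100 percent even for an input that resembles nothing it was trained on:

```python
# Toy sketch (not the study Scharre cites): a linear classifier trained to
# separate two clusters will report extreme softmax-style confidence for any
# point far from its decision boundary, even nonsense inputs.
import numpy as np

rng = np.random.default_rng(0)
# Two training classes: 2-D clusters centered at (-2, 0) and (+2, 0).
X = np.vstack([rng.normal([-2, 0], 0.5, (100, 2)),
               rng.normal([+2, 0], 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Fit logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted P(class 1)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def confidence(point):
    """Return the model's confidence in its most likely class."""
    p1 = 1.0 / (1.0 + np.exp(-(np.asarray(point) @ w + b)))
    return max(p1, 1.0 - p1)

print(f"In-distribution point (2, 0): {confidence([2, 0]):.3%} confident")
print(f"Nonsense input (500, 500):    {confidence([500, 500]):.3%} confident")
# The model is essentially certain about (500, 500) even though that point
# looks nothing like anything it was trained on.
```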

Basically, Scharre makes a case for keeping humans in the loop and doing everything possible to mitigate risk, but even then, complex systems “can be made safer but never 100% safe.”

