U.S. Air Force envisions drone that makes attack decisions by itself

A key question: Who's responsible for mistakes?

By Michael Cooney
July 27, 2009 04:04 PM ET

Network World - By 2047, the Air Force says, unmanned aircraft with advanced artificial intelligence systems could fly over a target and determine whether to unleash lethal weapons – without human intervention.

Such intelligent unmanned aircraft were described in the Air Force’s wide-ranging “Unmanned Aircraft Systems Flight Plan 2009-2047” report, which outlines the service’s future use of drones. The report details major new responsibilities for unmanned aircraft, from refueling other aircraft to swarming multiple drones on a single target to attacking enemy targets on their own.

By 2047, technology onboard an unmanned aircraft will be able to observe, evaluate and act on a situation in microseconds or nanoseconds. According to the Air Force: “Increasingly humans will no longer be ‘in the loop’ but rather ‘on the loop’ – monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.” The loop in this case is the observe-orient-decide-act cycle, or OODA, which describes the process a person or computer goes through before taking action.
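
To make the distinction concrete, the OODA cycle can be pictured as a simple control loop. The Python sketch below is purely illustrative – every function name, threshold and data structure is invented for this article, not drawn from the report or any Air Force system – but it shows how a human “on the loop” merely retains a veto, where a human “in the loop” would approve each action in advance.

    # Illustrative only: a toy OODA (observe-orient-decide-act) cycle with a
    # human "on the loop." All names and thresholds here are hypothetical.

    def observe(sensor_feed):
        # Observe: pull the next raw reading from a simulated sensor feed.
        return sensor_feed.pop(0) if sensor_feed else None

    def orient(reading):
        # Orient: interpret the raw reading against known context.
        return {"target_confidence": reading["confidence"]} if reading else None

    def decide(assessment, threshold=0.95):
        # Decide: recommend engagement only above a confidence threshold.
        return assessment is not None and assessment["target_confidence"] >= threshold

    def act(engage, human_veto):
        # Act: a human "on the loop" does not approve each decision in
        # advance, but can override at any moment before the system acts.
        print("engaging" if engage and not human_veto() else "holding fire")

    feed = [{"confidence": 0.80}, {"confidence": 0.97}]
    no_veto = lambda: False          # the monitoring human never intervenes
    for _ in range(len(feed)):
        act(decide(orient(observe(feed))), no_veto)

In the report’s terms, swapping no_veto for a function that asks a human operator before every act would move the human back “in the loop.”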

There would, of course, be safeguards. The Air Force went on to say that, assuming the decision is reached to allow some degree of aircraft autonomy, commanders must retain the ability to refine the level of autonomy the systems will be granted by mission type, and in some cases by mission phase, just as they set rules of engagement for the personnel under their command today.
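
What refining autonomy by mission type and phase might look like in practice is easiest to imagine as configuration. The sketch below is a hypothetical illustration in the same vein as the one above – the autonomy levels, mission names and phases are invented here, not taken from the flight plan:

    from enum import Enum

    class Autonomy(Enum):
        HUMAN_IN_LOOP = 1   # human must approve every engagement
        HUMAN_ON_LOOP = 2   # system acts; human monitors and can override
        FULL = 3            # system acts within preset legal/policy limits

    # Commanders set a default per mission type and may loosen or tighten it
    # per phase, much as rules of engagement are set for personnel today.
    mission_rules = {
        "reconnaissance": {"default": Autonomy.FULL},
        "strike": {
            "default": Autonomy.HUMAN_IN_LOOP,
            "egress": Autonomy.HUMAN_ON_LOOP,   # looser once weapons are off
        },
    }

    def allowed_autonomy(mission, phase):
        rules = mission_rules[mission]
        return rules.get(phase, rules["default"])

    print(allowed_autonomy("strike", "attack"))   # Autonomy.HUMAN_IN_LOOP
    print(allowed_autonomy("strike", "egress"))   # Autonomy.HUMAN_ON_LOOP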

The trust required for increased autonomy of systems will be developed incrementally. The systems’ programming will be based on human intent, with humans monitoring the execution of operations and retaining the ability to override the system or change the level of autonomy instantaneously during the mission. Such unmanned aircraft must achieve a level of trust approaching that of humans charged with executing missions, the Air Force stated.

Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions. These include whether it is appropriate for machines to have this ability, under what circumstances it should be employed, where responsibility for mistakes lies, and what limits should be placed on the autonomy of such systems, the Air Force stated.

The super-intelligent drone was just one of myriad plans the Air Force floated in its report. Other intriguing unmanned aircraft possibilities included pairing conventional, manned jet fighters with a drone – a “loyal wingman,” as the Air Force called it – that could help protect a pilot on a critical mission or drop additional ordnance on a target.

Reprinted with permission from NetworkWorld.com. Story copyright 2012 Network World, Inc. All rights reserved.