U.S. Air Force envisions drone that makes attack decisions by itself

A key question: Who's responsible for mistakes?

By 2047, the Air Force says, unmanned aircraft with advanced artificial intelligence systems could fly over a target and determine on their own whether to unleash lethal weapons – without human intervention.

Such intelligent unmanned aircraft are described in the Air Force’s wide-ranging “Unmanned Aircraft Systems Flight Plan 2009-2047” report, which outlines the service’s future use of drones. The report details major new responsibilities for unmanned aircraft, from refueling other aircraft to swarming multiple drones on a single target to attacking enemy targets on their own.

By 2047, technology onboard an unmanned aircraft will be able to observe, evaluate and act on a situation in microseconds or nanoseconds. According to the Air Force: “Increasingly humans will no longer be ‘in the loop’ but rather ‘on the loop’ – monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.” The loop in this case refers to a concept known as observe-orient-decide-act, or OODA, which describes the process a person or computer goes through before taking action.
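The Air Force report describes no implementation, but the OODA cycle it references can be pictured as a simple control loop. The sketch below is purely illustrative – every function, sensor name and rule-of-engagement flag is an assumption for demonstration, not anything from the flight plan:

```python
def observe(sensors):
    """Gather raw readings from the aircraft's sensors (mocked here)."""
    return {name: read() for name, read in sensors.items()}

def orient(readings, rules_of_engagement):
    """Fuse readings into a situation picture, filtered by policy constraints."""
    threats = [t for t in readings.get("radar", []) if t["hostile"]]
    return {"threats": threats, "roe": rules_of_engagement}

def decide(situation):
    """Choose an action; hold fire unless autonomous engagement is authorized."""
    if situation["threats"] and situation["roe"]["autonomous_engagement"]:
        return ("engage", situation["threats"][0])
    return ("hold", None)

def act(decision):
    """Carry out the chosen action (here, just report it)."""
    return decision[0]

# One pass through the loop with a mocked radar contact.
sensors = {"radar": lambda: [{"id": 7, "hostile": True}]}
roe = {"autonomous_engagement": False}  # the human stays "on the loop"
action = act(decide(orient(observe(sensors), roe)))
```

With `autonomous_engagement` set to `False`, the loop detects the contact but holds – the kind of commander-adjustable autonomy dial the report envisions.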

There would, of course, be safeguards. The Air Force went on to say that if the decision is made to allow some degree of aircraft autonomy, commanders must retain the ability to refine the level of autonomy the systems will be granted by mission type, and in some cases by mission phase, just as they set rules of engagement for the personnel under their command today.

The trust required for increased autonomy of systems will be developed incrementally. The systems’ programming will be based on human intent, with humans monitoring the execution of operations and retaining the ability to override the system or change the level of autonomy instantaneously during the mission. Such unmanned aircraft must achieve a level of trust approaching that of humans charged with executing missions, the Air Force stated.

Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions. These include the appropriateness of machines having this ability, under what circumstances it should be employed, where responsibility for mistakes lies and what limitations should be placed upon the autonomy of such systems, the Air Force stated.

The super-intelligent drone was just one of myriad plans the Air Force floated in its report. Other intriguing possibilities include pairing a conventional, manned jet fighter with a drone – a “loyal wingman,” as the Air Force calls it – that could help protect a pilot on a critical mission or drop additional ordnance on a target.

The idea of swarming multiple unmanned aircraft on a target is also in the plan. Swarm technology will let a commander use a virtual world to monitor the drones and a wireless ad-hoc network will connect the drones to each other and the swarm commander. The aircraft within the swarm will fly autonomously to an area of interest while also avoiding collisions with other UAS in the swarm. These UAS will automatically process imagery requests from low level users and will "detect" threats and targets through the use of AI, sensory information and image processing.
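The plan's collision-avoidance idea – drones sharing positions over the ad-hoc network and steering clear of one another – can be sketched as a simple separation rule. This is a hypothetical illustration, not the Air Force's design; the shared dictionary below merely stands in for the wireless network, and all IDs and distances are invented:

```python
MIN_SEP = 5.0  # illustrative minimum separation, in arbitrary units

def separation_nudge(own_id, network):
    """Return an (x, y) velocity adjustment steering away from close neighbors."""
    ox, oy = network[own_id]
    dx = dy = 0.0
    for drone_id, (x, y) in network.items():
        if drone_id == own_id:
            continue
        dist = ((ox - x) ** 2 + (oy - y) ** 2) ** 0.5
        if 0 < dist < MIN_SEP:
            # Push directly away from the neighbor, weighted by how close it is.
            dx += (ox - x) / dist * (MIN_SEP - dist)
            dy += (oy - y) / dist * (MIN_SEP - dist)
    return dx, dy

# Shared position table standing in for the wireless ad-hoc network.
network = {"uav1": (0.0, 0.0), "uav2": (3.0, 0.0), "uav3": (20.0, 20.0)}
nudge = separation_nudge("uav1", network)  # uav2 is too close; uav3 is not
```

Here uav1 gets pushed away from uav2, which sits inside the separation radius, while the distant uav3 has no effect.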

The Air Force also said it will work to develop better communications technology to facilitate more unmanned aircraft flying in national airspace. Last year the Government Accountability Office said routine unmanned aircraft access to the national airspace system poses technological, regulatory, workload, and coordination challenges.

A key technological challenge is providing the capability for unmanned aircraft to meet the safety requirements of the national airspace system. For example, a person operating an aircraft must maintain vigilance so as to see and avoid other aircraft. Because unmanned aircraft have no person on board, however, on-board equipment, radar or direct human observation must substitute for this capability. No technology has been identified as a suitable substitute for a person on board the aircraft in seeing and avoiding other aircraft, the GAO report stated.

Additionally, the aircraft’s communications and control links are vulnerable to unintentional or intentional radio interference that can lead to loss of control of an aircraft and an accident. In the future, ground control stations – the unmanned equivalent of a manned aircraft’s cockpit – may need physical security protection to guard against hostile takeover, the GAO said.

This story, "U.S. Air Force envisions drone that makes attack decisions by itself," was originally published by NetworkWorld.
