AI researcher says amoral robots pose a danger to humanity

Rensselaer Polytechnic Institute professor wants robots to know good from evil

With robots becoming increasingly powerful, intelligent and autonomous, a scientist at Rensselaer Polytechnic Institute says it's time to start making sure they know the difference between good and evil.

"I'm worried about both whether it's people making machines do evil things or the machines doing evil things on their own," said Selmer Bringsjord, professor of cognitive science, computer science and logic and philosophy at RPI in Troy, N.Y. "The more powerful the robot is, the higher the stakes are. If robots in the future have autonomy..., that's a recipe for disaster.

"If we were to totally ignore this, we would cease to exist," he added.

Bringsjord has been studying artificial intelligence, or AI, since he was a grad student in 1985 and he's been working hand-in-hand with robots for the past 17 years. Now he's trying to figure out how he can code morality into a machine.

That effort, on many levels, is a daunting task.

Robots are only now beginning to act autonomously. A DARPA robotics challenge late last year showed just how much human control robots -- especially humanoid robots -- still need. The same is true of weaponized autonomous robots, which the U.S. military has said need human controllers for big, and potentially lethal, decisions.

But what happens in 10 or 20 years when robots have advanced exponentially and are working in homes as human aides and caregivers? What happens when robots are fully at work in the military or law enforcement, or have control of a nation's missile defense system?

It will be critical that these machines know the difference between a good action and one that is harmful or deadly.

Bringsjord said it may be impossible to give a robot the right answer on how to act in every situation it encounters because there are too many variables. Complicating matters is the question of who will ultimately decide what is right and wrong in a world with so many shades of gray.

Giving robots a sense of good and bad could come down to basic principles. As author, professor and visionary Isaac Asimov noted in writing The Three Laws of Robotics, a robot would have to be encoded with at least three basic rules, roughly sketched in the example after the list below.

  1. A robot must not hurt a human being or, through inaction, allow a human being to be hurt.
  2. A robot must obey the orders a human gives it unless those orders would result in a human being harmed.
  3. A robot must protect its own existence as long as it does not conflict with the first two laws.
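To make the idea of "encoding" such rules concrete, here is a minimal, hypothetical sketch of how a robot might screen a proposed action against Asimov-style laws before executing it. The `Action` fields, the rule ordering and the function names are illustrative assumptions for this article, not Bringsjord's system or any real robotics software.

```python
# Hypothetical sketch: checking a proposed robot action against
# Asimov-style rules before it is carried out. The Action fields and
# the rule ordering are illustrative assumptions, not a real API.
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    harms_human: bool           # would carrying this out hurt a person?
    inaction_harms_human: bool  # would *not* acting allow a person to be hurt?
    ordered_by_human: bool      # did a human issue this command?
    endangers_robot: bool       # does it put the robot itself at risk?


def permitted(action: Action) -> bool:
    # Law 1: never hurt a human being.
    if action.harms_human:
        return False
    # Law 1 (inaction clause): acting to prevent harm overrides the later laws.
    if action.inaction_harms_human:
        return True
    # Law 2: obey human orders unless they conflict with Law 1 (already checked).
    if action.ordered_by_human:
        return True
    # Law 3: protect the robot's own existence when Laws 1 and 2 are not at stake.
    if action.endangers_robot:
        return False
    return True


# Example: an order that would harm a person is refused;
# an action that prevents harm to a person is allowed.
print(permitted(Action("push bystander", True, False, True, False)))   # False
print(permitted(Action("fetch medicine", False, True, False, False)))  # True
```

Even this toy version shows the difficulty Bringsjord describes: reducing a real situation to a handful of yes-or-no flags is exactly the step that today's machines cannot perform reliably, because real situations have far too many variables and shades of gray.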

"We'd have to agree on the ethical theories that we'd base any rules on," said Bringsjord. "I'm concerned that we're not anticipating these simple ethical decisions that humans have to handle every day. My concern is that there's no work on anticipating these kinds of decisions. We're just going ahead with the technology without thinking about ethical reasoning."

Photo caption: These autonomous robots were part of a recent demonstration at Fort Benning, Ga. The U.S. Army is looking at how robots can help soldiers in the field.
