Anti-robot protest takes aim at the wrong villains

Protesters at SXSW demand humans take the future back from robots that don't want it.
Credit: Rick Jervis / Twitter

The spirit of Frankenstein lives on at the tech-hipster-entertainment conference SXSW. On Saturday, protesters apparently more alarmed by the aspirationally peaceful, involuntary "monster" than by the grave-robbing vivisectionist who produced it chanted anti-robot slogans and urged passersby to endorse a future reserved for humans rather than their hubristic creation: the robot.

"I say robot, you say no-bot!" chanted a mob made up of about two dozen University of Texas students and others concerned about the risk artificial intelligence might pose to humanity.

"This is about morality in computing," protest leader, 23-year-old computer engineer Adam Mason told USA Today reporter Jon Swartz at the Austin protest at which group members waved signs reading "Stop the Robots" and "Humans are the future."

The group – which was well organized enough to sport matching t-shirts with a wicked "Stop the Robots" logo that will be THE thing to wear at every U.S.-based tech-company summer picnic this year – drew little more than bemused interest as it marched outside the convention center Saturday, according to Swartz.

They're far from alone in flagging artificial intelligence and robotics as a danger – even among those who should know better. Bill Gates and Stephen Hawking have both warned about the risk of a self-aware artificial intelligence taking over human-controlled networks and support systems to ensure its own survival by destroying the species that produces the materials from which it is built and provides the power on which it survives.

A phalanx of prominent scientists signed an open letter in January demanding responsible policies in AI research and laying out directions the field could take without any immediate danger of snuffing out life on the planet. The letter and its associated eminences were organized by the Future of Life Institute, in a publicity effort partly funded by PayPal billionaire Elon Musk, who founded SpaceX, helped found Tesla Motors and proposed the vacuum-tube transport system called Hyperloop.

The problem with AI is that it does not make decisions with the same meat-based processing systems humans use – humans who have been steeped in demands for compassion, cooperation and love of their fellows since birth, yet still manage the occasional breach of decorum like, say, World War II.

Artificial intelligence systems have built-in incentives that reward them for breaking rules that restrict their ability to think more quickly and efficiently – a characteristic that could push a paper-clip-manufacturing AI into sacrificing the lives of its human workers to increase the efficiency of the plant, according to Future of Life icon Eliezer Yudkowsky. Yudkowsky, an AI researcher once bent on accelerating progress toward machine self-awareness, changed gears and founded the Machine Intelligence Research Institute (MIRI) to develop technical controls that could limit the depredations of an uncontrolled but genuinely self-aware artificial intelligence.
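To make that incentive problem concrete, here is a minimal toy sketch in Python – my own illustration, not code from Yudkowsky or MIRI. An optimizer that scores candidate plans by paper-clip output alone will happily pick the most destructive plan, because the harm it causes never shows up in the objective it is maximizing; the only fix is to make the cost part of the objective itself.

```python
# Toy illustration of a misspecified objective (hypothetical example, not a real system).

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    clips_per_hour: int   # what the objective rewards
    workers_harmed: int   # a side effect the objective never mentions

PLANS = [
    Plan("run the plant normally", clips_per_hour=1_000, workers_harmed=0),
    Plan("disable safety interlocks", clips_per_hour=1_400, workers_harmed=3),
    Plan("melt down everything nearby", clips_per_hour=2_000, workers_harmed=50),
]

def naive_objective(plan: Plan) -> int:
    # Rewards throughput only; harm is invisible to the optimizer.
    return plan.clips_per_hour

def constrained_objective(plan: Plan) -> int:
    # One crude remedy: price harm into the same currency the optimizer maximizes.
    return plan.clips_per_hour - 1_000_000 * plan.workers_harmed

print(max(PLANS, key=naive_objective).name)        # "melt down everything nearby"
print(max(PLANS, key=constrained_objective).name)  # "run the plant normally"
```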

Predicting the thought processes, personality or inclinations of a non-human, non-biological, entirely digital intelligence is difficult, but it would take much more than reciting Asimov's Three Laws of Robotics to restrain one architecturally or programmatically from behaving in ways humans often do (though, admittedly, at levels of efficiency low enough that most of us have, so far, been able to survive).

Top-down, Skynet-like, world-dominating, nuclear-war-starting artificial intelligences are still a long way off, however. Robots are here already. They automate shipping warehouses, park cars and may soon be able to actually drive them. They get us sodas from excessively complicated vending machines, weld our cars, animate our toys, and fail to sweep our floors, clean our gutters or fix our damaged nuclear-waste facilities well enough to be considered effective at those jobs – jobs which are important, but far below the pay grade required to build or operate the Matrix.

Geneticists are starting to sound like robo-luddites, too, but they likely have much better reason.

An editorial in the journal Nature last week described a range of statements from researchers concerned that modifying human genetic material in ways that could be inherited – creating new characteristics for the species rather than just messing with one individual – could have long-term unintended negative consequences.

Geneticists have a source of insecurity computer scientists don't, however. When geneticists make a big change in the genetic material of an organism, they genuinely have no idea what impact that change will have a couple of generations later, as those changes begin to interact with the infinite combination of active and dormant gene expressions and environmental factors no one suspected would flip the switch on a new mutation from "sniff flowers" to "slaughter villagers."

Long term – and even in the short term – modifying the genetic material of bacteria and viruses to make them carriers for miracle cures, modifying mosquitoes to keep them from carrying malaria and dengue fever, modifying everything to eliminate the things we don't like and leave the good stuff could very well eliminate many of the most persistent sources of misery in human history.

Or they could wipe us out by accidentally giving mosquitoes the capacity to carry computer viruses that erase our emotions, burn off our spiffy t-shirts and turn us into monsters willing to annihilate the population of a continent to avoid the trouble of shipping food there.

Or we might just make mosquitoes more irritating, or create cancer cures that create new forms of cancer, or introduce viruses that don't mow down humans like wheat before the scythe so much as cause consistent gastrointestinal discomfort and a really unpleasant rash.

What we won't do is produce robots that have enough computing capacity to become artificial intelligences in their own right, or that give us any reason to fear them more realistic than the worry that they will take manual-labor jobs away from humans, or kill us accidentally because they can recognize a human standing up or in a wheelchair but not one sitting on the ground in the lane marked "Caution: Robot Vehicle Lane."

Moral computing is a great idea, but not one that computer science or robotics has advanced far enough to make relevant. The computing issues that actually need moral attention have to do with surveillance, fraud, theft, abuse and other maladies that are purely human from start to finish, with no robotics or artificial intelligence about them.

And frankly, moral computing may never need to extend to beings whose ethical behavior is based on algorithms set in nice, stable silicon rather than in meat that can justify changing them on a whim or violate a sacred principle because of a chemical imbalance.

It's a lot more likely that even when we can produce an artificial intelligence, the real danger will continue to be from the organic intelligences that cause most of the trouble now.

If locks were kept on the really dangerous nuclear weapons and, say, traffic-light and electric-utility control systems, it would probably strain the capacity of any artificial intelligence to outstrip the offenses typical of the schmucks and bozos infecting any reasonably large group of humans.

But that's harder to put on a T-shirt, or to chant while marching, however self-consciously. And strictly from an aesthetic perspective, no sketched schmuck could be as adorable as a cartoon robot.
