When artificial intelligence truly becomes super smart, will we be pets to our robot overlords or ants to be squashed? Neither scenario is very encouraging, but both are possibilities Apple co-founder Steve Wozniak raised as he joined the ranks of prominent people warning that AI will eventually take over and get rid of humans.
"Computers are going to take over from humans, no question," Wozniak said during an interview with Australian Financial Review. His opinion on artificial intelligence now aligns with worrisome predictions by Tesla’s chief executive Elon Musk, Professor Stephen Hawking and even Microsoft co-founder Bill Gates.
Wozniak stated:
Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently.
Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know about that … But when I got that thinking in my head about if I'm going to be treated in the future as a pet to these smart machines … well I'm going to treat my own pet dog really nice.
Before Woz, Gates had been the latest to join the AI-will-end-humanity club. During a Reddit "Ask Me Anything" session, Gates was asked, "How much of an existential threat do you think machine superintelligence will be?"
“I am in the camp that is concerned about super intelligence,” Gates replied. “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.”
Back in December, Professor Stephen Hawking warned, "The development of full artificial intelligence could spell the end of the human race. … It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
Musk has been very vocal about his belief that AI could potentially be "more dangerous than nukes" and "our biggest existential threat." He added, "With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like yeah he's sure he can control the demon. Didn't work out."
Both Hawking and Musk signed an open letter about research priorities for robust and beneficial artificial intelligence. The research priorities paper (pdf) cited a Stanford study on "Loss of Control of AI systems." The study voiced concerns that "we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity."
Yet Wozniak suggested the day super-smart machines take over the world might never come if Moore's Law fails. Since about 2005, various experts have suggested that chips cannot keep doubling in transistor count – and, roughly, in processing power – every two years, so Moore's Law may only hold until somewhere between 2015 and 2020. If our processors don't keep getting faster every couple of years, the reasoning goes, then quantum computing will never really take off and robot overlords can't take over.
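To see what that two-year doubling assumption actually implies, here's a minimal back-of-the-envelope sketch in Python. The 2005 baseline figure is an illustrative assumption (roughly a mid-2000s desktop CPU), not a number from the article:

```python
# Back-of-the-envelope Moore's Law projection: transistor counts
# doubling every two years from an assumed 2005 baseline.
# The ~230 million baseline is an illustrative assumption only.

BASELINE_YEAR = 2005
BASELINE_TRANSISTORS = 230e6   # assumed 2005-era starting point
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Project the transistor count for `year` under strict two-year doubling."""
    elapsed = year - BASELINE_YEAR
    return BASELINE_TRANSISTORS * 2 ** (elapsed / DOUBLING_PERIOD_YEARS)

for year in (2005, 2015, 2020):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Run it and the counts grow by roughly 32x by 2015 and about 180x by 2020 – exponential growth that, the skeptics argue, physics stops underwriting somewhere in that window.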
However, Wozniak is hopeful about quantum computers, which use "subatomic particles to process complex calculations almost instantaneously." He said, "I hope it does come, and we should pursue it because it is about scientific exploring. But in the end we just may have created the species that is above us."
It might be worth noting that earlier this month, University of California, Santa Barbara researchers developed the "first-ever quantum device that detects and corrects its own errors." The UC Santa Barbara news release explained, "When scientists develop a full quantum computer, the world of computing will undergo a revolution of sophistication, speed and energy efficiency that will make even our beefiest conventional machines seem like Stone Age clunkers by comparison."
Yeah, but those "Stone Age clunkers" aren't smart enough to evolve into Skynet and end humanity either.