Worried that one day we'll have robot overlords? You're in good company.
Stephen Hawking, the renowned physicist, cosmologist and author of A Brief History of Time, said this week that robots powered by artificial intelligence (A.I.) could overtake humans in the next 100 years.
Speaking at the Zeitgeist conference in London, Hawking said: "Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours," according to a report in Geek.
This isn't the first time Hawking has spoken about the threat that comes along with machine learning, A.I. and robotics.
In December, Hawking said, "the development of full artificial intelligence could spell the end of the human race."
In an interview with the BBC, Hawking said A.I. poses no threat to the human race today but could in the future, as machines -- specifically robots -- become smarter, bigger and stronger than their human developers.
"It would take off on its own, and re-design itself at an ever-increasing rate," Hawking said at the time. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
Hawking also has some well-known company talking about the dangers of A.I.
Last fall, Elon Musk, CEO of electric car maker Tesla Motors and CEO and co-founder of SpaceX, said while speaking at MIT that A.I., and all of the research going into it, poses a definite threat to humanity.
"I think we should be very careful about artificial intelligence," Musk said, answering a question about the state of A.I. during the MIT event. "If I were to guess at what our biggest existential threat is, it's probably that.... With artificial intelligence, we are summoning the demon."
Not all technologists and scientists are as concerned about A.I. as Hawking and Musk seem to be, though. Many people tend to think of A.I. as the brains behind robotics, but it also powers smartphones, email spam filters and apps that make restaurant recommendations.
A.I. is a long way from producing a robot that can learn easily and is self-aware enough to cast its human operators aside and take over the world. And some have worried that talk of these fears risks slowing down A.I. research.