Tech leaders' warnings about artificial intelligence taking over the world

As artificial intelligence grows ever more capable, the technology can now do many things that machines and computers could not manage just a few years ago.

From fears of robots taking jobs to AI technology being put to malicious use, some of the biggest names in tech do not seem entirely pleased about the future of AI - and in fact are concerned it poses a threat to humanity.

Giants of the tech industry, from academic heavyweights Tim Berners-Lee and Stephen Hawking to industrial titans Elon Musk and Bill Gates, have all weighed in with their concerns about AI beyond human control.

Elon Musk warns about superintelligent AI dictators

SpaceX founder and Tesla CEO Elon Musk once again shared his concerns about the emerging technology of artificial intelligence in 2018.

In the documentary ‘Do You Trust This Computer?’, Musk said that robots smarter than humans have the potential to become the ultimate tyrannical leaders.

He also warned of an AI “superintelligence” that will be able to achieve more advanced brainpower than its human creators.

“At least when there’s an evil dictator, that human is going to die. But for an AI, there would be no death," he said. "It would live forever and then you’d have an immortal dictator from which we can never escape.”

Elon Musk (again)

A group of figures from the robotics and AI industry, including Tesla CEO Elon Musk, penned an open letter to the United Nations urging it to enact formal protections against autonomous weapons.

The letter commended the UN's efforts to establish a Group of Governmental Experts (GGE) to examine the use of “Lethal Autonomous Weapon Systems”.

It goes on to say that these autonomous weapons “threaten to become the third revolution in warfare”.

The letter finishes: “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close."

The signatories include academics and industry figures from across the world, including DeepMind co-founder Mustafa Suleyman.

Sir Tim Berners-Lee

Sir Tim Berners-Lee, the architect of the world wide web, spoke at a conference in London in April 2017 about a nightmarish scenario in which artificial intelligence (AI) could become the new 'masters of the universe' by creating and running its own companies.

He laid out the scenario where AI could decide which companies to acquire and took this to its logical conclusion: "So when AI starts to make decisions such as who gets a mortgage, that's a big one. Or which companies to acquire and when AI starts creating its own companies, creating holding companies, generating new versions of itself to run these companies.

"So you have survival of the fittest going on between these AI companies until you reach the point where you wonder if it becomes possible to understand how to ensure they are being fair, and how do you describe to a computer what that means anyway?"

Professor Stephen Hawking

Speaking at the opening of the Leverhulme Centre for the Future of Intelligence at Cambridge University, Hawking outlined the potentially devastating pitfalls of artificial intelligence.

"I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence, and exceed it."

Hawking went on to describe two possible paths: one of disease and poverty eradication, the other of autonomous weapons and machines beyond human control.

"In short, the rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which.

Stephen Hawking (take 2)

Professor Stephen Hawking first warned of the impact of unregulated artificial intelligence growth during a BBC interview back in 2014, claiming that artificial intelligence could end mankind.

"The development of full artificial intelligence could spell the end of the human race.

"It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded," he added.

Just a year later, speaking at the Zeitgeist conference, Hawking provided a timeline for potentially rapid AI growth.

“Computers will overtake humans with AI at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.”

Elon Musk (again)

In an extensive piece in the April 2017 edition of Vanity Fair, Elon Musk laid out his many concerns regarding AI, including a strawberry-picking scenario.

"Let's say you create a self-improving AI to pick strawberries and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries. So then it would have all the world be strawberry fields. Strawberry fields forever." No room for human beings.

Earlier, during an interview at MIT's AeroAstro Centennial Symposium in 2014, Musk said:

“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

Google chairman Eric Schmidt

As reported in the Washington Post, Schmidt said: “I think that this technology will ultimately be one of the greatest forces for good in mankind’s history simply because it makes people smarter.”

“I’m certainly not worried in the next 10 to 20 years about that. We’re still in the baby step of understanding things. We’ve made tremendous progress in respect to [artificial intelligence].”

Microsoft co-founder Bill Gates

In an interview with the BBC in 2015, Bill Gates revealed his thoughts on the potential threats posed by artificial intelligence.

"I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.

"A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Apple co-founder Steve Wozniak

Steve Wozniak spoke out about the impact AI will have on the human race in 2015, most notably claiming that "humans will be robots' pets."

Speaking to the Australian Financial Review, Wozniak said: "I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently."

However, Wozniak has since made a U-turn, claiming at the Freescale Technology Forum in Austin in June 2015 that “they’re going to be smarter than us and if they’re smarter than us then they’ll realise they need us.”

Skype co-founder Jaan Tallinn

Speaking to Techworld, Tallinn said:

“There’s something very different about this AI summer. There are quick ways to make money from marginal advances in AI. Once you make a ranking algorithm one percent better, that immediately means a few hundred million dollars for Google.

"I think it’s too early to think about very concrete [AI] monitoring mechanisms. It’s more important right now to build consensus in the industry and academia around what are the things that would have a chilling effect."

Open letter from the Future of Life Institute (FLI)

“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” the FLI’s letter says. “Our AI systems must do what we want them to do.”

“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” the letter reads. “The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.”

Copyright © 2018 IDG Communications, Inc.