Q&A: Google’s Geoffrey Hinton — humanity just a 'passing phase' in the evolution of intelligence

The Google engineering fellow who recently resigned was key to the development of generative AI and chatbots; he now believes he underestimated the existential threat they pose and that, once AI can create its own goals, humans won't be needed.


You wanted to speak out about this and felt more comfortable doing that without it causing any sort of blowback on Google. But in some sense, talk is cheap if we don’t act. What do we do? "I wish it was like climate change, where you could say, ‘If you have half a brain, you’d stop burning carbon.’ It’s clear what you should do about it. It’s painful, but it has to be done.

"I don’t know of any solution like that to stop these things taking over for us. And I don’t think we’ll stop developing them because they’re so useful. They’ll be incredibly useful in medicine and everything else. So, I don’t think there’s any chance of stopping development. What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are for the benefit of us. That’s called the alignment problem. But we need to do that in a world where there are bad actors who want to build robots that kill people. And it seems very hard to me.

"So, I’m sounding the alarms that we have to worry about this. And if I had a nice, simple solution I could push [I would], but I don’t. But I think it’s very important that people get together and think hard about it and see if there is a solution. It’s not clear there is a solution."

You spent your career on the technicalities of this technology. Is there no technical fix? Can we not build in guardrails? Can you make them worse at learning, or restrict the ways they can communicate, if those are the two strands of your argument? "Suppose it did get really smart. At least in programming, they can write programs. And suppose you gave them the ability to execute those programs, which we’ll certainly do. Smart things can outsmart us. It’s like your two-year-old saying, ‘My dad does things I don’t like, so I’m going to make rules for what my dad can do.’ You can probably figure out how to live with those rules and still get what you want."

But there still seems to be a step where these machines would have motivations of their own. "Yes. That’s a very good point. So, we evolved, and because we evolved, we have certain built-in goals that we find hard to turn off. Like, we try not to damage our bodies; that’s what pain is about. We try to get enough to eat. We try to make as many copies of ourselves as possible — maybe not with that intention, but we’ve been wired up so that there’s pleasure in making as many copies of ourselves as possible. And that all came from evolution, and it’s important that we can’t turn it off. If you could turn it off, you wouldn’t do so well. There was a wonderful group called the Shakers, who were related to the Quakers and who made beautiful furniture but didn’t believe in sex. And there aren’t any of them around anymore.

"So, these digital intelligences didn’t evolve. We made them. So they don’t’ have these built-in goals. So the issue is, if we can put the goals in, maybe it’ll be OK. But my big worry is sooner or later someone will wire into them the abilty to create their own sub-goals, because we almost have that already. There are versions of GPT called ChatGPT, and if you give something the ability to create its own subgoals in order to achieve other goals, I think you’ll quickly realize that getting more control is a very good subgoal because it helps you to achieve other goals.

"And if these things get caried away with getting more control, we’re in trouble."

So, what’s the worst-case scenario that’s conceivable? "I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence. You couldn’t directly evolve digital intelligence; it would require too much energy and too much careful fabrication. You need biological intelligence to evolve so that it can create digital intelligence. But digital intelligence can then absorb everything people ever wrote, in a fairly slow way, which is what ChatGPT is doing, and then it can get direct experience of the world and run much faster. It may keep us around for a while to keep the power stations running, but after that, maybe not.

"So the good news is we figured out how to build beings that are immortal. When a piece of hardware dies, they don’t die. If you’ve got the weights stored in some medium and you can find another piece of hardware that can run the same instructions, then you can bring it to life again.

"So, we’ve got immortality but it’s not for us."

When I hear you say this, I want to run into the streets and start unplugging computers. "You can’t do that."

It was suggested a few months ago that there should be a moratorium on AI advancement, and I don’t think you believe that’s a very good idea. Why? Should we just not stop? You’ve also spoken of the fact that you’re a personal investor...in some companies like [LLM and chatbot vendor] Cohere that are building these large language models. I’m just curious how you feel about your personal responsibility and what our responsibility is. What should we be doing? Should we try to stop this? "I think if you take the existential risk seriously, as I now do — I used to think it was way off, but I now think it’s very serious and fairly close — it might be quite sensible to just stop developing these things any further. But I think it’s completely [unrealistic] to think that would happen. There’s no way to make that happen. The US won’t stop developing them and the Chinese won’t. They’re going to be used in weapons, and for that reason alone, governments aren’t going to stop developing them. So, yes, I think stopping development would be a rational thing to do, but there’s no way it’s going to happen. So I think it’s silly to sign petitions saying, ‘Please stop now.’

"We did have a holiday from about 2017 for several years. Google developed the technology first. It developed the transformers. And it didn’t put them out there for people to use and abuse. It was very careful with them because it didn’t want to damage its reputation and it knew there could be bad consequences. But that can only happen if there’s a single leader. Once OpenAI had built similar things, using transformers and money from Microsoft and Microsoft decided to put it out there; Google didn’t have much of a choice. If you’re going to live in a capitalist system, you can’t stop Google competing with Microsoft.

"So, I don’t’ think Google did anything wrong. I think it was very responsible, to begin with. But it’s inevitable in a capitalist system or a system where there’s competition, like there is between the US and China, that this stuff will be developed.

"My one hope is — because if we allow it to take over, it could be bad for all of us — we could get the US and China to agree like we could with nuclear weapons, which we all agree is bad for all of us. We’re all in the same boat with respect to the existential threat so we all ought to be able to cooperate on trying to stop it."

[Joe Castaldo, a reporter for The Globe and Mail] Do you intend to hold onto your investments in Cohere and other companies, and if so, why? "Well, I could take the money and put it in the bank and let them profit from it. Yes, I’m going to hold onto my investments in Cohere, partly because the people at Cohere are friends of mine. I still believe their large language models are going to be helpful. The technology should be good and it should make things work better; it’s the politics we need to fix for things like employment. But when it comes to the existential threat, we have to think how we can keep control of the technology. The good news is we’re all in the same boat so we might get…cooperation.

"One of the things that made me leave Google and go public with this is [a professor], he used to be a junior professor, but now he’s a now a middle-ranked professor who I think very highly of. He encouraged me to do this. He said, 'Geoffrey, you need to speak out about this. They’ll listen to you. People are just blind to this danger.'"

Copyright © 2023 IDG Communications, Inc.
