Pausing AI development is a foolish idea

The recent call by tech leaders for a slowdown in the development of generative AI tools won't work now — the AI horse is already out of the barn.


A group of influential and informed tech types recently put forward a formal request that AI rollouts be paused for six months. I certainly understand concerns that artificial intelligence is advancing too fast, but trying to stop it in its tracks is a recurring mistake made by people who should know better.

Once a technology takes off, it’s impossible to hold back, largely because there’s no strong central authority with the power to institute a global pause — and no enforcement entity to ensure the pause directive is followed. 

The right approach would be to create such an authority beforehand, so there’s some way to assure the intended outcome. I tend to agree with former Microsoft CEO Bill Gates that the focus should be on assuring AI reliability, not trying to pause everything.

Why a pause won’t work

I’ll step around arguing that a pause is a bad idea and instead focus on why it won’t work. Take the example of a yellow flag during a car race. This is roughly how a pause should work: everyone holds their position until the danger passes — or in the case of AI, until we better understand how to mitigate its potential dangers.

But just as in a car race, there are countries and companies that are ahead and others at various distances behind. Under a yellow flag in a car race, the cars that are behind can catch up to the leading cars, but they aren’t allowed to pass. The rules are enforced by track referees who have no analogue in the real world of companies and countries. Even organizations like the UN have little to no visibility into AI development labs, nor could they ensure those labs stand down.

As a result, those leading the way in AI technology are unlikely to slow their efforts because they know those following won’t — and those playing catch-up would use any pause to, well, catch up. (And remember, the people working on these projects are unlikely to take a six-month, paid vacation; they’d continue to work on related technology regardless.)

There simply is no global mechanism to enforce a pause in any technological advance that has already reached the market. Even human cloning research, which is broadly outlawed, continues around the world. What has stopped is almost all transparency about what is being done and where — and cloning efforts have never reached the level of use that generative AI has achieved in a few short months.

The request was premature; regulation matters more

Fortunately, generative AI isn’t yet general-purpose AI. That is the form of AI that should bring with it the greatest concern, because it would have the ability to do most anything a machine or person can do. And even then, a six-month pause would do little beyond perhaps shuffling the competitive rankings, with those adhering to any pause falling behind those who don’t.

General-purpose AI is believed to be more than a decade in the future, giving us time to devise a solution that’s likely closer to a regulatory and oversight body than a pause. In fact, what should have been proposed in that open letter was the creation of just such a body. Regardless of any pause, the need is to ensure that AI won’t be harmful, making oversight and enforcement paramount.

Given that AI is being used in weapons, what countries would allow adequate third-party oversight? The answer is likely none — at least until the related threat rivals that of nuclear weapons. Unfortunately, we haven’t done a great job of regulating those either. 

Since there’s no way to get global consensus (let alone enforce a six-month pause on AI), what’s needed now is global oversight and enforcement coupled with backing for initiatives like the Lifeboat Foundation’s AIShield or some other effort to create an AI defense against hostile AI. 

One irony associated with the recent letter is that its signatories include Elon Musk, who has a reputation for being unethical (and tends to rebel against government direction) — suggesting such a mechanism wouldn’t even work with him. That’s not to say the effort lacks merit. But the correct path, as Gates lays out in his post, is setting up guardrails ahead of time, not after the AI horse has left the barn.

Copyright © 2023 IDG Communications, Inc.
