What's Next for the x86?

Can the CPU stalwart continue to crush the competition?

It's impossible to look at the x86 family of microprocessors without wondering if, after a three-decade run, the x86 might be running out of steam. Intel Corp., naturally, says it still has legs, while hastening to add that its battles with competing architectures are far from over.

Justin Rattner, Intel's chief technology officer, cites the architecture's flexibility as a key to both its past and future success. Although people often refer to the x86 instruction set as if it were some kind of immutable specification, he says, both the instruction set itself and the architecture that implements it have gone through tremendous evolution over the years.

For example, the x86 beat back an assault in the 1990s from a raft of specialized media processors with its built-in MMX and SSE instruction-set extensions, which sped up the number-crunching needed for multimedia and communications applications. Rattner also cites advancements such as hardware support for memory management and virtualization that have been added to the chip and refined over the years.

Equally important, Rattner notes, is that Intel has maintained backward compatibility across the x86 family at each step of the evolution. Advances in the instruction set plus intrafamily compatibility have enabled the x86 to span a very wide range of single-user and enterprise computers, from portables to supercomputers.

"It's important to understand that the x86 is not a frozen design," says David Patterson, a computer science professor at the University of California, Berkeley. "They have added about one instruction per month for 30 years. So they have something like 500 instructions in the x86 instruction set, and every generation, they add 20 to 100 more. Backward compatibility is sacrosanct, but adding new things for the future happens all the time."

A Shift in Strategy

"There have been tremendous technical challenges in continuing to shrink the size of transistors and other things, and Intel has invested tremendously in that," says Todd Mowry, a computer science professor at Carnegie Mellon University and an Intel research consultant. One of those challenges led to what Intel calls a "right-hand turn" at the company. Heat became such a problem as circuits shrank that now performance advancements can come only from adding more processor cores to the chip, not by increasing the clock speed of the processor.

And that has shifted the quest for performance from hardware to software, says Mowry. "In the research community now, the focus is not so much on how do we build one good core as much as on how do we harness lots of cores."

One of the most promising approaches today to exploiting the parallelism in multicore chips is the use of something called "software transactional memory," Mowry says. That's a way to keep parallel threads from corrupting shared data without having to resort to locking, or blocking, access to that data. It's an algorithmic approach, but support for the technique can be built into the x86 hardware, he notes.
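To make the idea concrete, here is a minimal sketch of the software transactional memory programming model in Haskell, whose GHC runtime ships an STM library. This illustrates the technique purely in software, not the x86 hardware support Mowry alludes to, and the `transfer` helper and account balances are invented for the example. Two threads update shared balances concurrently; each `atomically` block either commits as a whole or is transparently retried, with no explicit locks.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forM_, replicateM_)

-- Move an amount between two shared accounts as one transaction.
-- If another thread's commit invalidates our reads, the runtime
-- transparently retries the whole block -- no locks are taken.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a    <- newTVarIO 1000
  b    <- newTVarIO 1000
  done <- newTVarIO (0 :: Int)
  -- Two threads hammer the same accounts in opposite directions.
  forM_ [(a, b), (b, a)] $ \(from, to) -> forkIO $ do
    replicateM_ 10000 (atomically (transfer from to 1))
    atomically (modifyTVar' done (+ 1))
  -- Wait for both workers, then check that no update was lost.
  atomically (readTVar done >>= \n -> if n < 2 then retry else return ())
  total <- (+) <$> readTVarIO a <*> readTVarIO b
  print total  -- always 2000: the transactions never corrupt shared state
```

The appeal Mowry points to is visible here: the programmer declares which operations must appear indivisible, and the runtime -- or, potentially, supporting hardware -- detects conflicts and retries, rather than the programmer managing locks by hand.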

Mowry says the only limit to the continued addition of more cores to processor chips is the ability of software developers to put them to good use. "The biggest hurdle is to go from thinking sequentially to thinking in parallel," he says.

Rattner predicts that we'll see core counts in "the low hundreds" per chip in the next five to seven years. Since each core will have multithreading capabilities, the number of parallel threads of execution supported by those chips might be around 1,000, he says. But Rattner acknowledges that "there aren't too many people walking around the planet today who know how to make use of 1,000 threads."

What Else Is Coming?

Rattner mentions some other "pretty interesting" things being developed in Intel's labs. For example, he says the x86 will include new hardware support for security -- "making it more robust in the face of belligerent attacks" -- but he declines to elaborate.

He also points to the coming x86-based Larrabee chip, a graphics processing unit to compete with the dedicated GPUs from Nvidia Corp. and the ATI unit of Advanced Micro Devices Inc. Larrabee will contain "an entirely new class of instructions aimed at visual computing," Rattner says.

Larrabee is significant, he says, because unlike the highly specialized GPUs of its competitors, it is an extension of the general-purpose x86 architecture.

"Here we are making a strong assertion about the robustness and durability of the architecture: that we can take it into domains that most people felt were beyond its capabilities," he says.

AMD has a similar plan. In January, the company said it would introduce a hybrid CPU-GPU chip called Fusion as an extension of its existing Phenom line of processors. It will first ship in 2009 as a two-core unit for notebook computers, AMD said.

VIA Technologies Inc., which just announced its VIA Nano Processors (formerly Isaiah) for the mini-notebook market, says it will continue to target the mobile market with its power-efficient line of x86 processors but will also edge toward the desktop market.

"There is an inherent limit for the x86 at the low end, for something like your toaster or the fuel injector in your car," says Glenn Henry, president of the Centaur unit of VIA. "And there probably is a limit at the very high end if you are going to do something like simulate atomic bombs. In between, the x86 has proven over and over that it can adapt."

Might some brand-new microprocessor architecture come along and blow the x86 out of the water? Rattner says Intel is still partly protected by the existing inventory of Wintel software.

"Unless you can come in and say that if you use this different instruction set, you'll get five times better performance, there just isn't a big enough incentive to switch," he says.

But that's not to say the x86 instruction set won't be implemented in new ways as silicon transistors increasingly bump up against the laws of physics. For 40 years, transistors have been under the surface of the silicon wafer. Now technology is emerging to allow them to be placed on top of that surface.

That would make it possible to build the transistors out of materials other than silicon -- materials like gallium arsenide that have better energy and performance characteristics. "We won't be at the surface for another generation or two," Rattner says (each generation is about two years). "But the decade ahead will see a lot of innovation in materials."

Although Intel is working to develop new transistor-based electronics, Rattner says the company is "not more than dabbling" in more far-out possibilities such as processors for quantum and DNA computing. "Those really change the mathematical foundation of computing and are much more risky," he explains. Moreover, he says, they are likely to be restricted to narrow application domains, not to general-purpose computing.

Mowry predicts that a move to those esoteric technologies is 20 years out. "My guess is it won't be until we really start reaching the end of what we can do with conventional technologies that people will get serious about these things," he says. "When you are trying to build wires out of single strands of atoms, things get very strange and you don't know what to do exactly."
