Happy birthday, x86! An industry standard turns 30

Intel's x86 microprocessor architecture has dominated large swaths of computing for three decades. Here's why.

The floating-point fiasco

Perhaps as gut-wrenching as the RISC threat was a crisis that began in the summer of 1994, when Intel test engineers discovered a tiny flaw in the floating-point division circuitry of its new Pentium chip. The flaw occurred so rarely and was so minor in its impact that Intel elected to just fix it and put the chip back into production without recalling the flawed chips.

But a few months later, Thomas Nicely, a math professor at Lynchburg College in Virginia, discovered the flaw on his own PC. As Intel would later admit, he was unable to find anyone at the company who would even listen to his complaint. So he posted his findings on the Internet, and before long Intel was engulfed in a firestorm of criticism that culminated in a public relations disaster and a $475 million recall of the chip.
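
The flaw was simple to demonstrate once you knew where to look. Here is a minimal sketch in C of the widely circulated check (the operand pair comes from published accounts of Nicely's test; on a correct FPU the result is 0, while flawed Pentiums famously returned 256):

    #include <stdio.h>

    int main(void)
    {
        /* The widely circulated FDIV check. Exact arithmetic gives 0;
           flawed Pentiums returned 256. 'volatile' keeps the compiler
           from folding the math away at compile time. */
        volatile double x = 4195835.0;
        volatile double y = 3145727.0;
        double r = x - (x / y) * y;
        printf("FDIV check: %g (0 means no bug)\n", r);
        return 0;
    }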

"It was a painful rite of passage, but we finally learned to behave like a consumer company," recalls Albert Yu, a former Intel senior vice president, in his book, Creating the Digital Future.

Mixing and matching

Another defining moment in x86 history occurred in 1995, says Todd Mowry, a computer science professor at Carnegie Mellon University and an Intel research consultant. That's when Intel introduced the Pentium Pro, a microprocessor with some radical new features, such as the ability to look ahead in a stream of instructions, guess which ones would be needed and execute them out of order. That kept the processor busy a larger percentage of the time and, combined with a new, extremely fast on-chip cache, offered huge performance gains in some applications.

"The thing that was radically different," Mowry says, "was that they used the benefits of RISC without changing the instruction set. They did that by translating the x86 instructions into micro-operations that are more like RISC instructions. So what you had was a RISC machine inside an x86 machine, and overnight, that eliminated the performance gap."

Mowry says the Pentium Pro resulted from a top-down design process. "They started out with the design of a fast machine and then figured out how to make the x86 run on it," he says.

That approach -- finding good ideas in non-x86 architectures and working backward from them -- was just how it worked, Gelsinger says. "The Pentium was a dramatic architectural leap. We took the best ideas from minis and mainframes and just implemented them better, because we had a superior canvas to paint them on, called silicon."

A mainframe spreads its processing components over a wide area inside the box; putting everything on a single, tiny, tightly integrated chip gives microprocessor designers more flexibility and their designs more power, he says. Indeed, over the years, the performance of silicon chips has marched smartly along according to Moore's Law, while systems of interconnected components have not improved as quickly.
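
A back-of-the-envelope illustration of that claim (the figures are rough approximations, not Intel data): the 8086 of 1978 had roughly 29,000 transistors, and doubling every two years over 30 years means 15 doublings:

    #include <stdio.h>

    int main(void)
    {
        /* Rough Moore's Law arithmetic: 29,000 transistors in 1978,
           doubled every two years for 30 years (15 doublings). */
        double transistors_1978 = 29000.0;
        double factor = (double)(1L << 15);      /* 2^15 = 32768 */
        printf("predicted 2008-era count: ~%.0f million\n",
               transistors_1978 * factor / 1e6); /* ~950 million */
        return 0;
    }

That prediction, just under a billion transistors, is in the neighborhood of where high-end desktop processors actually landed by 2008.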

The competition heats up

Intel has not enjoyed immunity from competition even on its x86 home turf. For example, Taiwan-based VIA Technologies was founded in Silicon Valley in 1987 to sell core logic chip sets, some using x86 technology, for use in motherboards and other electronic components. VIA now makes a wide variety of products and aims its x86 processors at low-power mobile and embedded markets.

Advanced Micro Devices Inc., the world's No. 2 maker of microprocessors, has been a competitive thorn in Intel's side since about 2000. Throughout most of the 1980s and 1990s, AMD was a me-too maker of x86 chips and of little concern to Intel. (It still holds only about 15% of the x86-compatible desktop and mobile market, according to Mercury Research.)

But AMD scored a technical and public relations coup in 2003 with its introduction of x86-64, a 64-bit superset of the x86 instruction set. Because it was a superset, owners of new x86-64 machines could still run their old 32-bit software natively.
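
To see what "superset" means in practice: the same C source can be compiled as 32-bit or 64-bit x86 code (for instance with gcc's -m32 and -m64 flags, assuming a toolchain with 32-bit libraries installed), and an x86-64 processor executes either binary natively:

    #include <stdio.h>

    int main(void)
    {
        /* Prints 4 when built as a 32-bit x86 binary and 8 when built
           as x86-64; both binaries run natively on an x86-64 chip. */
        printf("pointer size: %zu bytes\n", sizeof(void *));
        return 0;
    }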

At the time, Intel's 64-bit offering was Itanium, an architecture developed by Intel and Hewlett-Packard Co. around explicitly parallel instruction computing (EPIC) for big iron, and it was not directly compatible with 32-bit x86-based software. Intel responded to the AMD threat with its own 64-bit x86 instruction superset, EM64T, in 2004. AMD, and the press, made much of the fact that the company had beaten Intel to the 64-bit market that mattered most.

"It's an example of where the flexibility of the x86 instruction set was used against Intel," says Patterson. "So even though Intel dominates the market, another company can change directions for the x86."

Going to extremes

Today, Intel's x86 is chipping away at the extremes of computing. On April 28, the company announced it would team with Cray Inc. to develop new supercomputers based on Intel x86 processors. (Cray already uses AMD's x86-based 64-bit Opteron processors.)

And at a Shanghai developer conference on April 2, Intel announced the x86-based Atom processor, the company's smallest. It draws less than 2.5 watts of power, compared with about 35 watts for a typical laptop processor. The company shipped two new Atom chips for small laptops and desktops just this week.

So can the x86 thrive, or even survive, another 30 years? There are forces in play that will fundamentally transform microprocessor designs, even in the near term (see "What's next for the x86?"). But few are predicting the demise of the venerable x86. Says Carnegie Mellon's Mowry, "It's difficult to see any reason why another instruction set would take over, because there is so much valuable software that runs on [the x86]."

Copyright © 2008 IDG Communications, Inc.
