Microprocessors march on

For more than three decades, microprocessors have doubled in power every 18 to 24 months. That progress will continue for another 10 years or so, chip makers say; then some new technology may have to be found to replace the silicon semiconductor.

Unfortunately, the companies that make microprocessors and use them to build computer systems can't just catch a free ride on the back of Moore's Law. As silicon transistors grow smaller -- there will be a billion on a single chip in five years -- chips become exponentially more expensive to design, manufacture and test. And the laws of physics intrude: In the mysterious realm called "deep submicron," for example, power dissipation gets nearly impossible to control, and cosmic rays cause random processing errors.

"The power-dissipation problem will prevent the further scaling after 10 years. Improvements will come about from system-level integration rather than transistor-level enhancements," says Bijan Davari, technology vice president at IBM's microelectronics division.

About 60% of the total performance gains in microprocessors have come from higher clock frequencies made possible by smaller, faster transistors. The balance has come from processing architectures that execute more than one instruction per clock tick. A microprocessor can do that by predicting the flow of a program through several branches of program logic or by executing instructions "speculatively" -- before it is known whether their results will be needed. But pushing those tricks further is becoming difficult and expensive.
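Branch prediction and speculative execution live in hardware, but their effect shows up in ordinary code. The sketch below is an illustration assembled for this point, not something from the article: it times the same conditional sum over data in random order and again after sorting. Once the data is sorted, the branch becomes predictable and the processor's speculation mostly pays off. How large the gap is depends on the compiler and the chip; an aggressive optimizer may remove the branch entirely.

```
// Rough illustration of why branch prediction matters (names are invented here).
// The same loop usually runs much faster on sorted data because the test
// "value >= 128" becomes predictable, so the pipeline stays full of
// speculatively executed instructions.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

static std::int64_t conditional_sum(const std::vector<int>& data) {
    std::int64_t sum = 0;
    for (int value : data) {
        if (value >= 128)   // hard to predict when the data is in random order
            sum += value;
    }
    return sum;
}

static double time_ms(const std::vector<int>& data) {
    auto start = std::chrono::steady_clock::now();
    volatile std::int64_t sink = conditional_sum(data);  // keep the work from being optimized away
    (void)sink;
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main() {
    std::vector<int> data(10000000);
    std::mt19937 gen(42);
    std::uniform_int_distribution<int> dist(0, 255);
    for (int& v : data) v = dist(gen);

    double random_order = time_ms(data);   // unpredictable branch
    std::sort(data.begin(), data.end());
    double sorted_order = time_ms(data);   // predictable branch

    std::cout << "random: " << random_order << " ms, sorted: "
              << sorted_order << " ms\n";
}
```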

Dies like this one for an Intel Pentium 4 are used to fabricate microprocessors.
"We've gone from being able to execute two instructions at a time to eight or more," says James Hoe, a professor of electrical and computer engineering at Carnegie Mellon University in Pittsburgh. "But we are at the limit. The architecture is not scalable." Hoe says microprocessor developers will increasingly rely on the following ambitious schemes to find "parallelism" in programs and job streams:

Multithreading: Breaking a single program into multiple instruction streams, or threads, to be processed simultaneously. Each thread could handle a data packet or transaction, for example; a minimal code sketch of the idea follows this list.

Simultaneous multithreading: A technique that makes a single physical processor appear to software as two processors, so two programs can execute simultaneously, boosting total throughput. Intel Corp. calls it "hyperthreading."

Chip multiprocessing: The placement of two or more physical processor "cores" on one chip. The cores can run independently but share some resources. IBM is shipping a dual-core Power4 processor, and Sun Microsystems Inc. is expected to unveil one later this year in its UltraSPARC IV. Intel will introduce a dual-core Itanium chip in 2005.

Runtime optimization: Using a combination of special processor circuits and a dynamic runtime compiler to continuously analyze program behavior and reorder instructions for better performance. While this doesn't make the processor run faster, it does improve what the user cares about: throughput.
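To make the multithreading idea above concrete, here is a minimal sketch under invented assumptions: a small pool of worker threads, sized to however many logical processors the machine reports (more than one on a chip with simultaneous multithreading or multiple cores), drains a shared queue of "transactions." The names and the work itself are made up for illustration.

```
// Minimal multithreading sketch: worker threads drain a shared queue of jobs,
// the kind of thread-level parallelism the techniques above are built to exploit.
#include <algorithm>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

int main() {
    std::queue<std::string> transactions;
    for (int i = 0; i < 20; ++i)
        transactions.push("transaction-" + std::to_string(i));

    std::mutex queue_mutex;

    // Size the pool to the logical processors the OS reports; with
    // simultaneous multithreading or multiple cores this is more than one.
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    auto worker = [&]() {
        for (;;) {
            std::string job;
            {
                std::lock_guard<std::mutex> lock(queue_mutex);
                if (transactions.empty()) return;  // nothing left to do
                job = transactions.front();
                transactions.pop();
            }
            // Stand-in for real work: parsing a packet, running a query, etc.
            std::lock_guard<std::mutex> io_lock(queue_mutex);  // reused here just to serialize output
            std::cout << job << " handled by thread "
                      << std::this_thread::get_id() << '\n';
        }
    };

    std::vector<std::thread> pool;
    for (unsigned i = 0; i < workers; ++i)
        pool.emplace_back(worker);
    for (auto& t : pool)
        t.join();
}
```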

"It's becoming exponentially expensive to find more parallelism in a single instruction stream," says Justin Rattner, an Intel senior fellow and director of microprocessor research. "So there will be increasing emphasis on thread-level parallelism, the number of threads per processor and the number of processors per chip."

Rattner says Intel is also doing research on processors and compilers that together optimize program performance in real time. "We are looking at program-visible instrumentation so the compiler has access to [runtime conditions]," he says. "This is on the fly; this is the compiler in the loop."
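The circuits and dynamic compiler Rattner describes are far beyond a short listing, but the underlying idea -- watch how a program actually behaves at run time, then switch to a path tuned for that behavior -- can be sketched in plain software. Every name and threshold below is an illustrative assumption, not Intel's design.

```
// Toy software analogue of runtime optimization: profile a routine as it runs,
// then swap in a version specialized for the common case. Real systems do this
// with hardware counters and a dynamic compiler; the threshold and functions
// here are made-up stand-ins.
#include <cstddef>
#include <iostream>
#include <vector>

// Generic path: handles any stride.
static long sum_generic(const std::vector<long>& v, std::size_t stride) {
    long total = 0;
    for (std::size_t i = 0; i < v.size(); i += stride) total += v[i];
    return total;
}

// Specialized path: stride of 1, easy for a compiler to unroll or vectorize.
static long sum_stride1(const std::vector<long>& v, std::size_t /*stride*/) {
    long total = 0;
    for (long x : v) total += x;
    return total;
}

int main() {
    using SumFn = long (*)(const std::vector<long>&, std::size_t);
    SumFn sum = sum_generic;              // start with the generic version

    std::vector<long> data(1000, 1);
    std::size_t stride1_calls = 0;

    for (int call = 0; call < 100; ++call) {
        std::size_t stride = 1;           // in this toy run, every call uses stride 1
        if (stride == 1) ++stride1_calls; // "profiling": count the common case

        if (sum == sum_generic && stride1_calls >= 10) {
            sum = sum_stride1;            // "recompile": switch to the specialized path
            std::cout << "switched to specialized routine after "
                      << stride1_calls << " calls\n";
        }

        long result = sum(data, stride);
        (void)result;
    }
}
```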

That technique has improved performance by a factor of two to four, Rattner says. Improvements in basic semiconductor technology will triple microprocessor clock speeds in five years, he predicts, and those clock gains, combined with better exploitation of parallelism by various means, will boost total throughput by a factor of six to seven -- in effect, roughly another doubling on top of the faster clocks.

Multithreading and chip multiprocessing will be especially important in servers, because they routinely handle workloads -- transaction-processing, Web and database applications -- that are inherently threaded.

Desktop PCs are more likely to run single-user, single-threaded applications. As a result, the relentless race for higher processing speeds on the desktop may soon be meaningless, says Kevin Krewell, a senior analyst and editor of MicroDesign Resources' "Microprocessor Report" newsletter.

"In servers, more power and scalability are always welcome," he says. "But on the desktop, what do you do with 3 GHz, 4 GHz, 5 GHz? There could be a plateau, when we get the 'good enough' processor." Krewell says designs for desktop processors, and especially notebooks, will increasingly go after other things, such as low power consumption, low mass and quiet operation.

In the Silicon Trenches

While the microprocessor vendors work to boost throughput, at another level they toil to find ways to dodge the laws of physics. Current silicon processors have circuit features that are 130 nanometers (nm) wide. Future generations, coming at two-year intervals, will shrink that to 15 nm or so -- about as low as you can go in silicon. Getting there won't be easy.

"As we go from 130 nm to 90 nm to 65 nm and then to 42 nm, the standby power dissipation is the single most important problem at the silicon and circuit design level," says IBM's Davari. The leakage of power, which is wasteful and generates heat, increases "dramatically, exponentially," he says.

IBM and other companies are turning to "strained silicon," a technique that boosts performance and lowers power consumption by stretching the silicon atoms slightly farther apart, allowing electrons to flow through the transistors up to 70% faster. Chip makers are also experimenting with new materials and methods for making "gates" -- which control the electrical flow through a transistor -- smaller, faster and more efficient. "These things all started as performance solutions, but now they are solving power problems," Davari says.

Davari says IBM may eventually extend its existing dual-core architecture to hundreds of processors on a chip. It will also integrate dynamic RAM with logic on a single chip, greatly reducing CPU-memory communication delays, increasing throughput and lowering power consumption. And it will move application-specific functions, such as encryption, video compression or speech processing, from software or off-chip hardware to the processor chip, he says.

Dual-core processor chips will bring performance gains, but there may be cost drawbacks, Krewell says. The question is whether software vendors will view a dual-core processor as one or two processors for licensing purposes. "Intel convinced Microsoft that hyperthreading is one processor, although it looks to the software like two processors," he says. "But as you put two cores on there, then four cores, will vendors still consider it one processor?"

Copyright © 2003 IDG Communications, Inc.
