Shining a light on the chip-interconnect bottleneck

Silicon photonics is being harnessed to prevent data traffic jams within chips

Moore's Law could be rendered moot by the so-called interconnect bottleneck, as data traffic jams strangle further performance gains. Silicon photonics, a way of moving data around via optical components, may help.

Moore's Law, of course, states that the number of transistors that can be built into a competitively priced integrated circuit can be expected to double every other year. More components enable higher performance, and this has been the basis of five decades of constantly improving system speeds.

But that could change.

"At some point, perhaps in five or ten years, we will reach a point where the processors will not be able to deliver better performance no matter how fast they are -- even infinite speed won't matter," says Linley Gwennap, principal analyst at The Linley Group, a semiconductor market research firm in Mountain View, Calif. "No one has solved the problem, and everyone is using Band-Aids to move it forward."

"We are always racing toward the wall," adds Tom Halfhill, senior analyst with the Microprocessor Report. "True, they keep moving the wall back, but on the other hand the train keeps speeding up."

Marek Tlalka, marketing vice president at photonics vendor Luxtera Inc., figures it this way: On a high-performance server blade, you could have 500 gigabits per second of bandwidth that needs to be moved. With signals that run at 10 gigabits per second, you would need 50 data paths to move that bandwidth in one direction, or 100 paths to move it in both directions. And if you're using differential signaling to achieve noise immunity in a low-voltage setting, each data path needs two lines, and each line requires a connector pin.

Thus, implementing the connection in both directions would require 200 pins on the device. And because all of today's blades run at low voltage, they need differential signaling, and therefore all 200 of those pins.
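Tlalka's arithmetic can be sketched in a few lines of Python. The function below is purely illustrative (it is not from Luxtera or any vendor tool); it just multiplies out lanes, directions, and lines per lane the way the example above does.

```python
import math

def pins_required(total_gbps, lane_gbps, bidirectional=True, differential=True):
    """Back-of-the-envelope pin count for moving total_gbps over lanes
    that each carry lane_gbps (hypothetical helper for illustration)."""
    lanes = math.ceil(total_gbps / lane_gbps)  # data paths in one direction
    if bidirectional:
        lanes *= 2                             # separate paths each way
    lines_per_lane = 2 if differential else 1  # a differential pair uses two lines
    return lanes * lines_per_lane              # one connector pin per line

# Tlalka's example: 500 Gbps over 10 Gbps differential lanes, both directions
print(pins_required(500, 10))  # → 200
```

Running the same sketch with 40 Gbps lanes drops the count to 52 pins, which is why faster signaling is attractive despite the range problems discussed below.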

So, in other words, higher bandwidth requires adding more pins to the chip and more through-holes, traces, and layers to the board. "It can be done, but every time you add a pin you are adding cost to the package and to the board," Gwennap says. Processors already often have about 1,000 to 1,200 pins, and some are approaching 2,000 pins, but there is a finite amount of real estate on each circuit board, he notes.

[Photo: A 12-inch traditional chip wafer at Taiwan Semiconductor Manufacturing Co., June 2010. Credit: REUTERS/Pichi Chuang]

Meanwhile, adding more wires means the component draws more current, compounding heat dissipation problems, notes Jay Owen, AMD's program manager for external research.

Speeding up the data lines so that fewer are needed is unlikely to be the answer, either. Even at 40 gigabits per second, the top speed of today's copper wires, the usable range of the signal shrinks to a matter of inches, Tlalka points out.

Even without range considerations, "The speed limit for copper has been debated," adds Jim McGregor, chief technology strategist at In-Stat, a market research firm in Scottsdale, Ariz. "We used to think it was a little over 10 gigabits per second, and while it is now over 40 gigabits per second, it's doubtful that we can reach 100 gigabits per second. Adding more and more pins eventually reaches the point of diminishing returns -- which is why we are looking at optical interconnects."
