Shining a light on the chip-interconnect bottleneck

Silicon photonics is being harnessed to prevent data traffic jams within chips

Moore's Law could be rendered moot by the so-called interconnect bottleneck, in which data traffic jams strangle further performance gains. Silicon photonics, a way of moving data around via optical components, may help.

Moore's Law, of course, states that the number of transistors that can be built into a competitively priced integrated circuit can be expected to double every other year. More components enable higher performance, and this has been the basis of five decades of constantly improving system speeds.

But that could change.

"At some point, perhaps in five or ten years, we will reach a point where the processors will not be able to deliver better performance no matter how fast they are -- even infinite speed won't matter," says Linley Gwennap, principal analyst at The Linley Group, a semiconductor market research firm in Mountain View, Calif. "No one has solved the problem, and everyone is using Band-Aids to move it forward."

"We are always racing toward the wall," adds Tom Halfhill, senior analyst with the Microprocessor Report. "True, they keep moving the wall back, but on the other hand the train keeps speeding up."

Marek Tlalka, marketing vice president at photonics vendor Luxtera Inc., figures it this way: Using high-performance server blades, you could have 500 gigabits per second of bandwidth that needs to be moved. With signals that run at 10 gigabits per second, you would need 50 data paths to move that bandwidth in one direction, or 100 paths to move it in both directions. If you're using differential signaling to achieve noise immunity in a low-voltage setting, two lines are needed for each data path, and each line requires a connector pin -- 200 pins, just for data.
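Tlalka's pin-count arithmetic can be sketched in a few lines of Python. The function name and parameters below are illustrative, not from any vendor tool; the logic simply follows the steps in the example above.

```python
import math

def data_pins(total_gbps: float, lane_gbps: float,
              bidirectional: bool = True,
              differential: bool = True) -> int:
    """Estimate connector pins needed to carry total_gbps of
    bandwidth over electrical lanes running at lane_gbps each."""
    paths = math.ceil(total_gbps / lane_gbps)  # paths for one direction
    if bidirectional:
        paths *= 2                             # separate paths each way
    lines_per_path = 2 if differential else 1  # differential pairs use 2 lines
    return paths * lines_per_path              # one pin per line

# The article's example: 500 Gb/s moved over 10 Gb/s signals
print(data_pins(500, 10))  # 200
```

The count grows linearly with bandwidth, which is exactly why electrical pin budgets become the bottleneck as per-blade bandwidth climbs.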
