The Grand Challenges of IT

Researchers are inventing new ways to tackle old problems.

By Thomas Hoffman
September 27, 2004 12:00 PM ET

Computerworld - Fundamental research on how to make computer hardware more powerful and software smarter goes back 50 years or more, but many of the traditional methods have nearly reached their limits. Now, researchers moving in bold new directions may be setting the course of IT for decades to come.


There are dozens of grand challenges that scientists and economists are attacking, ranging from societal issues to technical advances. Here, we take a look at the challenges in three key areas of IT research: processor performance, chip miniaturization and artificial intelligence.


Processor Performance


In 1965, Gordon Moore, a co-founder of Intel Corp., observed that the transistor density of semiconductor chips doubles roughly every 18 to 24 months, which makes each new generation of PCs roughly twice as powerful as the one before. Most researchers expect Moore's Law to hold true for at least another 10 years.
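The compounding effect of that doubling is easy to underestimate. As a rough sketch (the starting count and doubling period here are illustrative assumptions, not figures from the article), the projection works out like this:

```python
# Sketch of Moore's Law compounding: transistor density doubles
# every `period_months`. The starting count and 24-month period
# are illustrative assumptions, not figures from the article.

def projected_transistors(start_count: float, months: float,
                          period_months: float = 24.0) -> float:
    """Project transistor count after `months`, doubling every `period_months`."""
    return start_count * 2 ** (months / period_months)

# With a 24-month doubling period, a decade (120 months) means
# five doublings, i.e. 2**5 = 32 times the starting density.
print(projected_transistors(1, 120))  # 32.0
```

Five doublings in a decade is why even a "modest" two-year cadence transforms what chips can do.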


But processor performance is reaching its limits, especially in terms of the amount of power that's dissipated by transistors. "As Moore's Law has gone along, it becomes harder and harder to get devices to work at very small scales using CMOS," says Phil Kuekes, senior computer architect in quantum sciences research at HP Labs in Palo Alto, Calif.


It's becoming increasingly difficult to cool chips economically, says Bijan Davari, an IBM fellow and vice president of next-generation computing at IBM's Thomas J. Watson Research Center in Yorktown Heights, N.Y. The power-density limitation for air-cooled processors is on the order of 100 watts per square centimeter of chip area, says Davari, "and it gets a lot more expensive to inject liquid into the backside of a chip, which could permit cooling to about twice that power density."


[Photo: The Blue Mountain supercomputer at Los Alamos, built by the Cray unit of Silicon Graphics, was one of the fastest in the world when it was installed in 1998. It is now giving way to an even faster machine.]


Depending on voltage ranges and other factors, Itanium and Pentium 4 chips burn 100 to 150 watts of power, Davari adds. "For future processors, we'd like to reduce the power dissipation into tens of watts per processor core, and more importantly, we need to contain the power density, or power dissipation per unit area," he says.
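A back-of-the-envelope calculation shows why power density, not total wattage, is the binding constraint. The die area below is an illustrative assumption; the roughly 100 watts per square centimeter air-cooled limit is the figure Davari cites:

```python
# Back-of-the-envelope power-density check. The die area is an
# illustrative assumption; the ~100 W/cm^2 air-cooled ceiling is
# the figure Davari cites.

AIR_COOLED_LIMIT_W_PER_CM2 = 100.0

def power_density(total_watts: float, die_area_cm2: float) -> float:
    """Power dissipated per unit of chip area, in W/cm^2."""
    return total_watts / die_area_cm2

# A hypothetical 150 W processor on a 1.2 cm^2 die:
density = power_density(150.0, 1.2)
print(f"{density:.0f} W/cm^2, air-coolable: {density <= AIR_COOLED_LIMIT_W_PER_CM2}")
# Exceeding the ceiling means either total power must drop
# or die area must grow -- hence the push toward multiple
# lower-power cores spread across the chip.
```

This is why the goal Davari describes is framed per unit area: shaving total watts helps, but only containing watts per square centimeter keeps a chip coolable with air.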
Given those restrictions, researchers are now focusing on new ways to improve processor performance, such as placing multiple processor cores on a single chip, along with rethinking chip architecture and design.


For example, IBM is evaluating—and in some cases already implementing—the use of new materials on a chip, such as copper, silicon-on-insulator and silicon germanium, to improve the performance of devices, lower the power density or both. In addition, the materials allow researchers to fabricate smaller chips that consume less power, Davari says.


