
Supercomputing Goes Global

By Mark Willoughby
January 10, 2005 12:00 PM ET

Computerworld - Size matters in supercomputers because size translates into speed. And supercomputers are all about speed. The quest for the fastest computer to discover new drugs, crack ciphertext or model global weather and nuclear reactions has set a lot of records in a short time.
Supercomputers are defined loosely by IDC as systems that cost more than $1 million and are used in very-large-scale numerical and data-intensive applications. Today, their power is measured in trillions of floating-point operations per second, or TFLOPS.
The current world record for computing speed is 70.72 TFLOPS, posted in November 2004 by IBM's BlueGene/L system, which is destined for the U.S. Department of Energy's Lawrence Livermore National Laboratory. But supercomputers run as much on the testosterone of competition as on DC power, so the latest performance benchmark isn't likely to last very long.
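To put the record in perspective, here is a quick back-of-the-envelope conversion (a sketch added here, not part of the original reporting). The 1-GFLOPS desktop used for comparison is an assumption for scale only.

```python
# Scale of the November 2004 record, using the 70.72 TFLOPS figure above.
# 1 TFLOPS = 10**12 floating-point operations per second.
RECORD_TFLOPS = 70.72

ops_per_second = RECORD_TFLOPS * 1e12      # about 7.07e13 operations per second
ops_per_day = ops_per_second * 86_400      # about 6.1e18 operations per day

# A hypothetical 1-GFLOPS desktop of the era (an assumption, for scale only)
# would need roughly 70,720 seconds -- close to 20 hours -- to do the work
# BlueGene/L's benchmark run finished every second.
desktop_gflops = 1.0
seconds_on_desktop = ops_per_second / (desktop_gflops * 1e9)

print(f"{ops_per_second:.2e} ops/s, {ops_per_day:.2e} ops/day")
print(f"Desktop time to match one second: {seconds_on_desktop:,.0f} s")
```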
Claiming bragging rights as the world's fastest computer has been a 20-year game of technical leapfrog, involving almost as many companies as have been delisted by Nasdaq this year. The contest spans the globe. There's considerable national pride invested in the quest to build a faster machine to discover that next subatomic particle lurking just beyond the bandwidth of today's champ.
An architectural shift took place in supercomputing in the 1990s, and that shift was the background for a legendary wager. Gordon Bell, principal designer at the venerable and defunct Digital Equipment Corp., bet Danny Hillis that the world's fastest machine at the end of 1995 would be a supercomputer with fewer than 100 processors. Bell was betting against the inexorable march of technology, saying that the bugs could not be worked out of massively parallel machines before the deadline. Hillis, a professor in MIT's artificial intelligence lab and a founder of gone-but-not-forgotten Thinking Machines Corp., was an early proponent of massively parallel computing. Smart money backed Hillis.
Hillis lost the bet. He was only slightly ahead of his time: massive parallelism turned out to be more of a software problem than a hardware problem, and software development rarely keeps pace with hardware breakthroughs.
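One standard way to see why the software side dominates (not spelled out in the article, but the usual framing) is Amdahl's law: whatever fraction of a program cannot be parallelized caps the speedup that thousands of processors can deliver. A minimal sketch:

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Amdahl's law: the upper bound on speedup when a fixed fraction
    of a program cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# With just 1% of the code stuck running serially, piling on processors
# tops out near a 100x speedup -- the limit is the software, not the
# hardware, which is the wall Hillis's side of the bet ran into.
for p in (100, 1_000, 16_384):
    print(f"{p:>6} processors -> {amdahl_speedup(0.01, p):5.1f}x")
```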
Back then, supercomputers were measured in millions of FLOPS. Since then, even supercomputers with performance in the billions of FLOPS have been relegated to the dustbin of computing history, alongside Digital and Thinking Machines. The new IBM BlueGene/L world champ has 16,384 dual-core compute nodes (32,768 processors in all) grouped in 16 racks, with each node connected to five internal communications networks.
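A rough peak-performance estimate shows how that processor count relates to the 70.72 TFLOPS benchmark. The clock rate and flops-per-cycle figures below are assumptions drawn from BlueGene/L's published specifications (700 MHz PowerPC 440 cores with dual floating-point units), not from this article:

```python
# Back-of-the-envelope peak for the Nov. 2004 BlueGene/L configuration.
# Assumed, not stated in the article: 700 MHz PowerPC 440 cores, each
# sustaining 4 flops per cycle via its two fused multiply-add units.
nodes = 16_384                 # dual-core compute nodes cited above
cores = nodes * 2
clock_hz = 700e6               # assumed core clock
flops_per_cycle = 4            # assumed flops per core per cycle

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12   # about 91.8 TFLOPS
measured_tflops = 70.72                                   # benchmark figure above

print(f"Theoretical peak: {peak_tflops:.2f} TFLOPS")
print(f"Benchmark efficiency: {measured_tflops / peak_tflops:.0%}")   # about 77%
```

The gap between theoretical peak and the measured benchmark number is normal; sustained performance always falls short of what the raw hardware arithmetic suggests.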
The evolution of supercomputers is like that of factory power in the Industrial Age. The first large factories were served by big, expensive, centralized power plants driving overhead belts and pulleys that carried power to each machine on the floor.


