IBM to build massive supercomputer for U.S. government
Remember Roadrunner's 1 petaflop? The new system will reach 20 petaflops
It's an ambitious claim by IBM in a business where jumbo-size claims are the norm. The planned Sequoia system, capable of 20 petaflops, will be used by the U.S. Department of Energy in its nuclear stockpile research. The fastest systems today reach only 1 petaflop, itself a remarkable achievement that was first met only last year.
It "is the biggest leap of computing capability ever delivered to the lab," said Mark Seager, assistant department head for advanced technology at the Lawrence Livermore National Laboratory in Livermore, Calif., where the system will be housed. It's expected to be up and running in 2012.
IBM is actually building two supercomputers under this contract. The first one, to be delivered by midyear, is called Dawn and will operate at around 500 teraflops. Researchers will use Dawn to help prepare for the larger system.
Sequoia will use approximately 1.6 million processing cores, all IBM Power chips, running Linux, which dominates high-performance computing at this scale. IBM is still developing the 45-nanometer chip for the system and may produce a version with eight, 16 or more cores. Although the final chip configuration has yet to be determined, the system will have 1.6PB of memory and be housed in 96 "refrigerator-size" racks.
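Since the chip choice is still open, a quick bit of arithmetic (ours, using only the figures in the article) shows what the options would mean for the machine's chip count:

```python
# Back-of-the-envelope arithmetic using only figures quoted in the article.
# The per-chip core counts (8 and 16) are the undecided options IBM mentions;
# nothing here is an official Sequoia specification.

TOTAL_CORES = 1_600_000  # ~1.6 million cores
RACKS = 96               # "refrigerator-size" racks

for cores_per_chip in (8, 16):
    chips = TOTAL_CORES // cores_per_chip
    print(f"{cores_per_chip:2d} cores/chip -> {chips:,} chips, "
          f"~{chips // RACKS:,} chips per rack")

# 8 cores/chip  -> 200,000 chips, ~2,083 chips per rack
# 16 cores/chip -> 100,000 chips, ~1,041 chips per rack
```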
The cost of the system wasn't disclosed.
The supercomputer is also helping to drive a massive power upgrade at Lawrence Livermore, which is increasing the electricity available to all of its computing systems from 12.5 megawatts to 30 megawatts. To achieve the upgrade, the lab will run additional power lines to its facility. Sequoia alone is expected to use about 6 megawatts, according to Seager.
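Taken together, those two figures imply a striking level of power efficiency. A quick calculation (ours, not a figure from IBM or the lab):

```python
# Rough power-efficiency estimate from the article's own numbers; this is
# our arithmetic, not a figure quoted by IBM or Lawrence Livermore.

peak_flops = 20e15  # 20 petaflops
power_watts = 6e6   # ~6 megawatts expected for Sequoia

print(f"~{peak_flops / power_watts / 1e9:.1f} gigaflops per watt")  # ~3.3
```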
The world's first computer to break the teraflop barrier was built at Sandia National Laboratories in 1996. A teraflop equals a trillion floating-point operations per second; a petaflop is 1,000 trillion (one quadrillion) sustained floating-point operations per second.
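For readers keeping track of the units, the same ladder can be expressed as a tiny illustrative calculation (ours, built only from the definitions above):

```python
# Unit ladder built from the definitions in the article; purely illustrative.

TERAFLOP = 1e12  # one trillion floating-point operations per second
PETAFLOP = 1e15  # one quadrillion floating-point operations per second

sequoia = 20 * PETAFLOP     # planned peak performance
sandia_1996 = 1 * TERAFLOP  # the first machine to break the teraflop barrier

print(f"Sequoia would be {sequoia / sandia_1996:,.0f}x that 1996 machine")
# -> Sequoia would be 20,000x that 1996 machine
```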
It takes government funding to build systems of this scale, but that also means the U.S. is paying for much of the problem-solving it takes to make software scale across more than a million cores. "This is what's so good about it," said Herb Schultz, manager of deep computing at IBM. "They [the national lab] end up proving that you can get codes to scale that high."
In effect, by solving those problems, the national lab will pave the way for broader adoption of massive systems in weather research, forecasting, tornado tracking and a variety of other fields. Large systems such as Sequoia help researchers reduce uncertainty and improve precision in simulations that can, for instance, predict tornado paths. The more compute power available, the more finely tuned and accurate the simulation.
The major problem in running a system of this scale is "the applications -- porting the applications and scaling them up is a critical problem we are facing," said Seager.
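To give a sense of what "porting and scaling" means in practice, here is a minimal sketch of the message-passing pattern such codes are built on, written with the open-source mpi4py library. It is our illustration, not code from the lab; real simulation codes are vastly larger but scale the same way.

```python
# Minimal message-passing sketch using the open-source mpi4py library.
# Illustrative only: each process works on its own slice of the problem
# and results are combined with a collective operation.

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's ID (0 .. size-1)
size = comm.Get_size()  # total number of processes launched

# Each rank sums its own slice of 0..999999 (assumes size divides evenly).
chunk = 1_000_000 // size
local = sum(range(rank * chunk, (rank + 1) * chunk))

# One collective call combines the partial results from every process.
total = comm.allreduce(local, op=MPI.SUM)

if rank == 0:
    print(f"sum computed across {size} processes: {total}")

# Run with, e.g.:  mpiexec -n 4 python sum.py
```

The challenge Seager describes is keeping roughly 1.6 million such workers busy and communicating without the coordination overhead swamping the computation.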
There are two petaflop systems in the U.S.: IBM's Roadrunner at Los Alamos National Laboratory, which passed the petaflop barrier last May, and Cray Inc.'s XT Jaguar at Oak Ridge National Laboratory.
IBM plans to build Sequoia at its Rochester, Minn., plant.