IBM to build massive supercomputer for U.S. government
Remember Roadrunner's 1 petaflop? The new system will reach 20 petaflops
It's an ambitious claim by IBM in a business where jumbo-size claims are the norm. The planned Sequoia system, capable of 20 petaflops, will be used by the U.S. Department of Energy in its nuclear stockpile research. The fastest systems today reach only 1 petaflop, a remarkable achievement in its own right and a milestone reached only last year.
It "is the biggest leap of computing capability ever delivered to the lab," said Mark Seager, assistant department head for advanced technology at the Lawrence Livermore National Laboratory in Livermore, Calif., where the system will be housed. It's expected to be up and running in 2012.
IBM is actually building two supercomputers under this contract. The first one, to be delivered by midyear, is called Dawn and will operate at around 500 teraflops. Researchers will use Dawn to help prepare for the larger system.
Sequoia will use approximately 1.6 million processing cores, all IBM Power chips, running Linux, which dominates high-performance computing at this scale. IBM is still developing the 45-nanometer chip for the system and may produce a part with eight, 16 or more cores. Although the final chip configuration has yet to be determined, the system will have 1.6PB of memory and be housed in 96 "refrigerator-size" racks.
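Dividing those reported totals gives a sense of the density involved. A quick back-of-envelope sketch in Python, using only the figures in this article (peak rather than sustained performance assumed; the variable names are mine, not IBM's):

```python
# Back-of-envelope arithmetic from the reported Sequoia figures.
# These are implied averages, not official specifications.
TOTAL_CORES = 1_600_000   # ~1.6 million Power cores
PEAK_FLOPS = 20e15        # 20 petaflops
RACKS = 96

print(f"cores per rack: {TOTAL_CORES / RACKS:,.0f}")                  # ~16,667
print(f"peak per core:  {PEAK_FLOPS / TOTAL_CORES / 1e9:.1f} GFLOPS") # ~12.5
```

That works out to roughly 16,700 cores per rack and about 12.5 gigaflops of peak performance per core.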
The cost of the system wasn't disclosed.
The supercomputer is also helping to drive a massive power upgrade at Lawrence Livermore, which is increasing the amount of electricity available for all its computing systems from 12.5 megawatts to 30 megawatts. To achieve the upgrade, it will run more power lines to its facility. Sequoia alone is expected to use about 6 megawatts, according to Seager.
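Taken together with the 20-petaflop target, Seager's power estimate implies an efficiency figure. A minimal sketch, with the same caveats as above (peak performance assumed, numbers from this article only):

```python
# Efficiency implied by the article's numbers: 20 petaflops at ~6 MW.
PEAK_FLOPS = 20e15  # flop/s
POWER_W = 6e6       # watts

print(f"{PEAK_FLOPS / POWER_W / 1e9:.1f} GFLOPS per watt")  # ~3.3
```

At roughly 3.3 gigaflops per watt, power efficiency, not just raw speed, becomes a design constraint at this scale.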
The world's first computer to break the teraflop barrier was built at Sandia National Laboratories in 1996. A teraflop equals a trillion floating-point operations per second; a petaflop is 1,000 trillion (one quadrillion) sustained floating-point operations per second.
It takes government funding to build systems of this scale, but that also means the U.S. is paying for much of the problem-solving it takes to scale across more than a million cores. "This is what's so good about it," said Herb Schultz, manager of deep computing at IBM. "They [the national lab] end up proving that you can get codes to scale that high."
In effect, by solving those problems, the national lab's work will pave the way for broader adoption of massive systems that could improve weather research and forecasting, tornado tracking, and a variety of other research problems. Large systems such as Sequoia help researchers reduce uncertainty and improve precision in simulations that can, for instance, predict tornado paths. The more compute power available, the more fine-tuned and accurate the simulation.
The major problem in running a system of this scale is "the applications -- porting the applications and scaling them up is a critical problem we are facing," said Seager.
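To give a flavor of what porting and scaling mean at this size, here is a minimal, hypothetical sketch of the standard approach, domain decomposition over MPI ranks. This is not Lawrence Livermore's code (the article doesn't describe it); it assumes the mpi4py library, and the domain size and names are invented for illustration:

```python
# Hypothetical sketch of the scaling pattern HPC codes must follow:
# split the problem across every core (MPI rank), compute locally,
# and keep global communication to a minimum.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's index, 0 .. size-1
size = comm.Get_size()  # total processes; ~1.6 million on Sequoia

# Divide a 1D domain of cells evenly among ranks (illustrative size).
N_CELLS = 1_600_000_000
lo = rank * N_CELLS // size
hi = (rank + 1) * N_CELLS // size

local_result = float(hi - lo)  # stand-in for real per-rank physics work

# A global reduction combines per-rank results. At a million-plus
# ranks, collectives like this dominate runtime, which is why scaling
# existing applications up is the hard part Seager describes.
total = comm.allreduce(local_result, op=MPI.SUM)
if rank == 0:
    print(f"cells accounted for: {total:,.0f}")
```

Run with, for example, `mpiexec -n 4 python sketch.py`. The same code addresses four cores or a million, which is exactly why getting real applications to behave well at the high end is the open research problem.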
There are two petaflop systems in the U.S.: IBM's Roadrunner at Los Alamos National Laboratory, which broke the petaflop barrier last May, and Cray Inc.'s XT5 system, Jaguar, at Oak Ridge National Laboratory.
IBM plans to build Sequoia at its Rochester, Minn., plant.