Supercomputers with 100 million cores coming by 2018
The push is on to build exascale systems that can solve the planet's biggest problems
Computerworld - There is a race to make supercomputers as powerful as possible to solve some of the world's most important problems: climate change, ultra-long-life batteries for cars, fusion reactors whose plasma reaches 150 million degrees Celsius, and biofuels made from weeds rather than corn.
Supercomputers allow researchers to create three-dimensional visualizations, not unlike a video game, to run endless "what-if" scenarios with increasingly finer detail. But as big as they are today, supercomputers aren't big enough -- and a key topic for some of the estimated 11,000 people now gathering in Portland, Ore. for the 22nd annual supercomputing conference, SC09, will be the next performance goal: an exascale system.
Today, supercomputers are well short of an exascale. The world's fastest system, according to the just-released Top500 list, is Jaguar, a Cray XT5 at Oak Ridge National Laboratory with 224,256 processing cores built from six-core Opteron chips made by Advanced Micro Devices Inc. (AMD). Jaguar is capable of a peak performance of 2.3 petaflops.
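The article's own numbers give a rough sense of per-core performance (a back-of-envelope sketch, not an official Cray figure):

```python
# Rough per-core peak for Jaguar, using only the figures quoted above.
peak_flops = 2.3e15   # 2.3 petaflops peak performance
cores = 224_256       # total six-core Opteron processing cores

per_core_gflops = peak_flops / cores / 1e9
print(round(per_core_gflops, 1))  # ~10.3 GFLOPS per core
```

Roughly 10 gigaflops per core, which shows why an exaflop machine of similar parts would need on the order of 100 million cores, as the headline suggests.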
But Jaguar's record is just a blip, a fleeting benchmark. The U.S. Department of Energy has already begun holding workshops on building a system 1,000 times more powerful -- an exascale system, said Buddy Bland, project director at the Oak Ridge Leadership Computing Facility, which houses Jaguar. Exascale systems will be needed for high-resolution climate models, bioenergy production and smart grid development, as well as fusion energy design. The latter effort is now under way in France: the International Thermonuclear Experimental Reactor, which the U.S. is co-developing.
"There are serious exascale-class problems that just cannot be solved in any reasonable amount of time with the computers that we have today," said Bland.
As amazing as supercomputing systems are, they remain primitive, and current designs soak up too much power, space and money. It wasn't until 1997 that ASCI Red at Sandia National Laboratories broke the teraflop barrier, reaching one trillion calculations per second. In 2008, IBM's Roadrunner at Los Alamos National Laboratory achieved petaflop speed: one thousand trillion (one quadrillion) sustained floating-point operations per second.
The Energy Department, which is responsible for funding many of the world's largest systems, wants two machines somewhere in the 2011-13 timeframe that will reach approximately 10 petaflops, said Bland.
But the next milestone now getting attention from planners is a system that can reach an exaflop -- a million trillion (one quintillion) calculations per second, or 1,000 times faster than a petaflop.
The exaflop will likely arrive around 2018; such big performance leaps are expected to happen every decade or so. Moore's Law, which says the number of transistors on a chip doubles roughly every 18 months, helps explain the roughly 10-year development period. But the problems involved in reaching exaflop scale go well beyond Moore's Law.
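A little arithmetic shows why the jump strains Moore's Law (a sketch using the article's 18-month doubling figure):

```python
import math

speedup = 1_000         # petaflop (2008) to exaflop: a 1,000x jump
doubling_years = 1.5    # Moore's Law doubling period, per the article

# Doublings needed for a 1,000x speedup, and the years they would take.
doublings = math.log2(speedup)      # ~10 doublings
years = doublings * doubling_years  # ~15 years

print(round(doublings, 2), round(years, 1))  # 9.97 14.9
```

Transistor doubling alone would take about 15 years to deliver a 1,000x speedup, so reaching an exaflop by 2018 -- one decade after Roadrunner -- requires gains beyond faster chips, such as greater parallelism and better power efficiency.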