U.S., EU, Russia set aside $13.6M for exascale software work
Look to upgrade open source model that can't produce next generation on its own
Computerworld - A coalition of countries, including the United States, has agreed to fund projects to develop software for the next generation of supercomputers, which are expected to arrive in 2019 and be 1,000 times more powerful than today's fastest machines.
Most of the software that runs today's supercomputers was built through open source processes such as discussion lists and code repositories, an approach that has left some development gaps.
By setting aside funds for supercomputer software development projects, the U.S., Canada, France, Germany, Japan, Russia and the United Kingdom are heeding the arguments of top researchers who believe that the open source development model alone cannot address all the issues posed by exascale technology, or even by the just-arrived petascale systems.
The G8 Research Councils in the nations backing the effort this month quietly began a program offering 10 million euros ($13.6 million) for projects that support exascale software development. Developers have until May to submit preliminary proposals for the money.
Supercomputers were creating complex 3-D simulations of natural disasters, climate change and other phenomena long before the movie Avatar. Simulation and modeling "has become the third pillar of science," the G8 said in announcing the availability of the development funds. The G8 specifically singled out climate change, energy, water and the environment as key areas of study for the next generation of computing systems.
The challenge of developing software for these new systems "is really daunting," said Jack Dongarra, a professor of computer science at the University of Tennessee and a distinguished research staff member at Oak Ridge National Laboratory. Machines that have a quarter of a million compute cores today are expected, within the decade, to have as many as 100 million cores.
"We're interested at looking at what is needed in terms of standards, in terms of a real software stack for exascale, and we have to start planning now," said Dongarra.
These exascale systems, capable of a million trillion, or a quintillion, calculations per second, are an order of magnitude beyond what today's software can handle, said Dongarra. Programming languages that can express parallelism at exascale scale are lacking, he said, and the software will also face fault-tolerance problems in handling component failures. Communication delays will be an issue as well.
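To make the fault-tolerance concern concrete, here is a minimal, hypothetical sketch in Python of the checkpoint/restart pattern that long-running simulations commonly rely on; the file name, step counts and failure rate are invented for illustration and are not part of any exascale software stack or the G8 program.

import pickle
import random

# Toy checkpoint/restart loop. A long-running job periodically saves its state so
# that a simulated component failure costs only the work done since the last
# checkpoint, not the whole run.
CHECKPOINT_FILE = "state.pkl"    # hypothetical path, for illustration only
TOTAL_STEPS = 1000
CHECKPOINT_EVERY = 50
FAILURE_PROBABILITY = 0.001      # chance of a simulated failure at each step

def load_checkpoint():
    """Resume from the last saved state, or start fresh if none exists."""
    try:
        with open(CHECKPOINT_FILE, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return {"step": 0, "result": 0.0}

def save_checkpoint(state):
    with open(CHECKPOINT_FILE, "wb") as f:
        pickle.dump(state, f)

def run():
    state = load_checkpoint()
    while state["step"] < TOTAL_STEPS:
        if random.random() < FAILURE_PROBABILITY:
            # Simulated hardware fault: discard in-memory work and roll back.
            print(f"failure at step {state['step']}; restarting from checkpoint")
            state = load_checkpoint()
            continue
        state["result"] += 1.0       # stand-in for one unit of real computation
        state["step"] += 1
        if state["step"] % CHECKPOINT_EVERY == 0:
            save_checkpoint(state)
    print("finished:", state["result"])

if __name__ == "__main__":
    run()

The sketch is about scale rather than technique: on a handful of nodes, failures are rare and this recovery pattern is cheap, but on machines with tens of millions of components, failures become routine events that the software stack itself has to absorb, which is the kind of problem Dongarra says exascale software will have to address.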
A year ago, Dongarra and Pete Beckman, director of the Argonne Leadership Computing Facility, helped form the International Exascale Software Project to develop roadmaps and coordinate research for exascale systems.
The international agreement to spend on software development comes even as nations have been cutting spending on HPC projects focused on climate and weather systems. Worldwide spending on high-performance computing climate and weather projects was $353 million in 2009, down from $392 million in 2008, according to market research firm IDC.
HPC spending on weather and climate projects is expected to increase to $470 million worldwide in 2013, said IDC.
Climate change is getting more government attention. On Feb. 8, the National Oceanic and Atmospheric Administration announced a reorganization and the creation of the NOAA Climate Service to focus on climate change issues.
While funding for high-performance computing may be uncertain, the path of supercomputing development is not. Even though exascale architecture and technology are still a work in progress, advances in computing power have arrived at predictable points. The first petascale system, sustaining one thousand trillion (one quadrillion) floating-point operations per second, was produced by IBM in 2008.
The G8's forecast for the near future: 10 petaflops by 2013, 100 petaflops by 2016 and one exaflop by 2019.
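For a sense of the units involved (a simple arithmetic aside, not part of the G8 announcement):

$$
1\ \text{petaflop} = 10^{15}\ \text{operations per second},\qquad
1\ \text{exaflop} = 10^{18}\ \text{operations per second} = 1{,}000\ \text{petaflops}.
$$

That factor of 1,000 over today's roughly petascale machines is where the "1,000 times more powerful" figure comes from.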
Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov.