Welcome to the world's largest supercomputing grid

With 20 petabytes of storage and more than 280 teraflops of computing power, TeraGrid combines the processing power of supercomputers across the continent.

A unique, federally funded computing effort is making it easier for corporations to access the largest-scale computers on the planet. Dubbed TeraGrid, the effort spans nine academic and government institutions and has reached critical mass this year.

The notion is to combine the largest supercomputers into a global processing and storage grid to tackle the thorniest computing problems. "We want to make available high-end resources to the broadest community," says Dane Skow, director of the Grid Infrastructure Group, who coordinates TeraGrid operations from the University of Chicago's Argonne National Laboratory. "We want to leverage our top-of-the-line equipment for people who don't have the skills to do it themselves."

TeraGrid began with grants in 2000 to the Pittsburgh Supercomputing Center. It has since grown to include other supercomputing centers around the country, and it held its second user conference at the University of Wisconsin earlier this month.

Part of TeraGrid is a simple user interface to the world's largest distributed computing environment: the ultimate GUI on steroids. "The point of TeraGrid is to pull together the capabilities and intellectual resources for problems that can't be handled at a single site," says Rob Pennington, deputy director of the National Center for Supercomputing Applications (NCSA). "We make it easier for researchers to use these multiple computing sites with a very small increment in training and technical help."

Possible reaction pathway for the oxygen reduction reaction on a catalytic surface. Such calculations are helping to make low-temperature fuel cells more viable commercially. This visualization is based on simulations completed at the National Center for Supercomputing Applications and the San Diego Supercomputer Center by the University of Wisconsin's Manos Mavrikakis, Anand Nilekar, and other collaborators.

Big numbers

The numbers are staggering, even for IT managers who are used to big projects. The TeraGrid network currently spans more than 20 petabytes of storage -- enough to hold a billion encyclopedias -- and more than 280 teraflops of computing power.
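The encyclopedia comparison checks out as a back-of-envelope estimate. The sketch below assumes roughly 20 MB of text per encyclopedia, a common rough figure that is not stated in the article:

```python
# Back-of-envelope check of the "billion encyclopedias" comparison.
STORAGE_PETABYTES = 20
BYTES_PER_PETABYTE = 10**15          # decimal petabyte
ENCYCLOPEDIA_BYTES = 20 * 10**6      # assumption: ~20 MB of plain text per encyclopedia

total_bytes = STORAGE_PETABYTES * BYTES_PER_PETABYTE
encyclopedias = total_bytes // ENCYCLOPEDIA_BYTES
print(f"{encyclopedias:,} encyclopedias")  # prints "1,000,000,000 encyclopedias"
```

At 20 MB apiece, 20 petabytes works out to exactly one billion encyclopedias, so the article's comparison holds under that assumption.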

Impressive as those numbers are, "we want to be more than just a source of computing cycles," says Pennington. TeraGrid's aim is to provide a common means of accessing processing power and storage at the largest scale, freeing researchers from custom programming jobs. "We are trying to make it better on the front end," says Skow.

"This isn't just about providing some time on a big machine but being able to solve all the plumbing problems so that we can have a uniform, end-to-end and integrated experience for all kinds of research," says Bill Bell, deputy director of the NCSA.
