U.S. Army Laboratory Makes Major Linux Computing Cluster Move

Purchase will more than double the MSRC's computing capability

February 27, 2006 12:00 PM ET

Computerworld - A U.S. Army supercomputing center with a legacy that dates back to the Electronic Numerical Integrator and Computer (ENIAC) launched in 1946 is moving to Linux-based clusters that will more than double its computing capability.

The Army Research Laboratory Major Shared Resource Center (MSRC) in Aberdeen, Md., is buying four Linux Networx Inc. Advanced Technology Clusters, including a system with 4,488 processing cores, or 1,122 nodes, with each node made up of two dual-core Intel Xeon chips. A second system has 842 nodes. In total, the purchase will increase the MSRC's computing capability from 36 trillion floating-point operations per second (TFLOPS) to more than 80 TFLOPS, Army officials said.

The decision to move to commodity clusters was not made quickly, said Charles J. Nietubicz, director of the MSRC.

The lab held a symposium in 2003 to explore the issue and began running a small, 256-processor cluster system. "We saw that cluster computing was this new kid on the block and was interesting," said Nietubicz. But the center wasn't about to start scrapping its other systems, made by Silicon Graphics Inc., Sun Microsystems Inc. and IBM, he said.

Cost Advantage

The MSRC isn't disclosing the purchase price, but Earl Joseph, an analyst at IDC in Framingham, Mass., said the average cost for a cluster works out to about $2,000 per processor, compared with $12,000 per processor for a RISC-based system.

Nietubicz said other vendors will need to improve their systems' performance or "reduce the price to provide equivalent performance."

Bluffdale, Utah-based Linux Networx builds systems using Advanced Micro Devices Inc. and Intel Corp. chips. The MSRC sale is the vendor's largest supercomputing order ever.

Nietubicz said he became convinced that clusters could work after the MSRC got certain computational codes used in fluid dynamics, structural mechanics and other disciplines to scale across multiple processors, mostly by using code based on the Message Passing Interface (MPI), a standard for writing parallel applications. Clusters accounted for about half of the total $9.1 billion in sales in the high-performance computing market last year, according to IDC.
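For readers unfamiliar with MPI, the hypothetical C sketch below illustrates the programming model in question; it is not the lab's actual code. Each process, or rank, sums a disjoint slice of a range, and a single collective call combines the partial results. It assumes a standard MPI installation, compiled with mpicc and launched with a runner such as mpirun.

```c
/* Minimal MPI sketch: each rank sums a disjoint slice of 1..1,000,000,
   then MPI_Reduce combines the partial sums on rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    /* Each rank handles every size-th number, so the work
       divides evenly across however many processors are launched. */
    long long n = 1000000, local = 0;
    for (long long i = rank + 1; i <= n; i += size)
        local += i;

    long long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %lld across %d ranks\n", total, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```

The same program runs unchanged on 4 processors or 4,488: launching more ranks simply shrinks each one's slice, which is why codes written this way can follow a center from small clusters to large ones.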

A major consideration in moving to clusters is whether high-performance software can scale across multiple processors. Applications written with MPI can do so, but Joseph said companies that use off-the-shelf software usually find the switch difficult because commercial applications generally don't use MPI. Government labs and universities, which own their own code, can usually invest the time to convert it to MPI, he said. Nietubicz doesn't see any major limitations to clusters; while not all code can scale on them, he said the same problems arose when the center moved from vector systems to shared-memory systems.
