New SGI supercomputer to scale Linux to 1,024 CPUs
The National Center for Supercomputing Applications will use it for research
Computerworld - Silicon Graphics Inc. is building an Altix supercomputer for the National Center for Supercomputing Applications (NCSA) that will run a single Linux operating system image across 1,024 Intel Corp. Itanium 2 processors and 3TB of shared memory.
Rob Pennington, interim director of the NCSA, which is based at the University of Illinois at Urbana-Champaign, said the new machine will be very different from the existing machines at the facility, which include several Linux cluster supercomputers. Until now, the largest shared-memory supercomputer available to scientists there was an IBM p690 system made up of 12 nodes with 32 processors each.
With the new Altix machine, researchers will have far more computing power for their work, which includes weather-data analysis, simulations of black-hole collisions and other large-scale events in the evolution of the universe.
Earlier cluster supercomputers at the NCSA used multiple images of the Linux operating system -- one for each node -- along with dedicated memory allocations for each CPU. What makes this system more powerful for researchers is that all of the memory will be available for the applications and calculations, helping to speed and refine the work being done, Pennington said.
"The users get one memory image they have to deal with," he said. "This makes programming much easier, and we expect it to give better performance as well."
Initially, Pennington said, the system will use two images of Linux -- one per 512 processors -- while it's being tested and configured. Later, all 1,024 processors will address a single image of the SGI Advanced Linux operating system, which is based on Red Hat Enterprise Linux.
Dan Kusnetzky, an analyst at market research company IDC in Framingham, Mass., said the Altix system follows a path of innovation that SGI has offered for years in the supercomputing market. "SGI has often led the field in how many processors they could run on one operating system," he said.
The system, which is being called Cobalt, is a symmetric multiprocessor machine that will be connected to a 370TB SGI InfiniteStorage shared-file system, according to Mountain View, Calif.-based SGI. The storage will also be accessible to the other supercomputers at the NCSA.
The construction of Cobalt began with the delivery of the storage equipment last month, and the machine is expected to be fully online by March 1. It has a potential peak performance of more than 6 trillion floating-point operations per second (TFLOPS), which will bring the total computing power at NCSA to more than 35 TFLOPS and disk storage to three-quarters of a petabyte.
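The quoted peak is consistent with the processor count: assuming Itanium 2 chips clocked at roughly 1.5 GHz, each able to complete four floating-point operations per cycle, 1,024 processors x 1.5 billion cycles per second x 4 operations per cycle works out to about 6.1 trillion operations per second.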
SGI uses a proprietary NUMAflex shared-memory architecture that allows memory to be shared across multiple commodity processors, along with SGI ProPack for Linux, a software package that allows Linux to scale to larger system configurations, according to a statement from the company.
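In any such non-uniform memory access (NUMA) design, memory is physically attached to groups of processors even though it is all addressable as one pool, so where data lands matters for performance. The sketch below shows the general "first touch" idiom used on Linux NUMA systems; it is not SGI- or NUMAflex-specific, and the names and sizes are illustrative.

```c
#include <stdlib.h>
#include <omp.h>

#define N 1000000

int main(void) {
    double *a = malloc(N * sizeof *a);

    /* First touch: Linux typically places each memory page on the node
       whose thread first writes it, so initializing in parallel spreads
       the array across the nodes that will later compute on it. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = 0.0;

    /* Subsequent parallel work then mostly hits node-local memory. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] += 1.0;

    free(a);
    return 0;
}
```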
No price tag was announced for the deal.