Computerworld - VINCENT F. SCARAFINO
Title: Manager of numerically intensive computing, Ford Motor Co.
Observation: Japan recently grabbed the supercomputer lead from the U.S. with its Earth Simulator for climate modeling. Operating at 36 trillion operations per second, it's the fastest supercomputer in the world.
Prediction: Without a resumption in federal support for supercomputer research and development, the U.S. will fall behind in many areas of science and engineering.
Several years ago, the federal government shifted its funding for high-performance computing from exotic architectures to clusters of commodity processors. The clusters are fine for some jobs, but not for the most demanding ones, says Ford supercomputer user Vincent F. Scarafino. He explained to Computerworld's Gary H. Anthes the potential consequences of the U.S. losing the supercomputer race to Japan.
Why worry about U.S. leadership in supercomputing? Why can't Ford just buy supercomputers from Japan if that country makes the best machines?

Advanced supercomputers enable breakthroughs in leading-edge science, and access to them has, through the years, given Ford a competitive advantage. If the U.S. loses leadership in this area, U.S. science and industry will lose early access to the fastest, most capable machines. The Japanese Earth Simulator has already shown this effect: Japanese interests are the ones it primarily serves. American scientists have some access to the machine, but not at the level they would have if it were an American resource available here.
The Earth Simulator is made up of NEC supercomputers that are a refinement of the last vector supercomputer we made here in the mid-1990s, the Cray T-90. Japanese auto companies are formidable competitors. We don't need to hand them yet another advantage.
What should the federal government do to boost U.S. supercomputing technology?

Fund high-end processor design and supporting system components. The goal would be ultrafast processors with memory and I/O systems well matched to their computational speeds.
The government used to do just that, sponsoring development of high-end supercomputer architectures like the Cray vector machines. But now it seems to favor huge clusters of commodity microprocessors.

Yes, in the mid-1990s they said that microprocessors were getting faster and faster, and we just needed to put a whole bunch of them together and we'd have a supercomputer. Well, it doesn't work quite that way. Microprocessors are fast at computing, but to run really difficult problems they need really fast access to memory and the ability to do I/O quickly. And memory subsystems are extremely expensive.
If you look at the very large machines made up of off-the-shelf components, they get about 5%
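Scarafino's memory-bandwidth argument can be sketched as a simple roofline-style calculation: when a code moves many bytes per floating-point operation, sustained speed is capped by memory bandwidth, not by the processor's peak rate. The hardware numbers and the `sustained_fraction` helper below are illustrative assumptions for the sake of the arithmetic, not measurements of any particular machine.

```python
# Roofline-style sketch of why commodity processors sustain only a small
# fraction of peak on memory-bound scientific codes. All hardware numbers
# are assumed for illustration.

PEAK_GFLOPS = 6.0   # assumed peak compute rate of one processor, GFLOP/s
MEM_BW_GBS = 1.0    # assumed sustained memory bandwidth, GB/s

def sustained_fraction(flops_per_byte: float) -> float:
    """Fraction of peak achievable when memory bandwidth is the limit."""
    bandwidth_limited_gflops = MEM_BW_GBS * flops_per_byte
    return min(1.0, bandwidth_limited_gflops / PEAK_GFLOPS)

# A vector update like y[i] += a * x[i] on 8-byte doubles does 2 flops
# while moving 16 bytes, i.e. 0.125 flops per byte -- typical of the
# "really difficult" problems described above.
frac = sustained_fraction(0.125)
print(f"sustained: {frac:.1%} of peak")
```

With these assumed numbers the kernel is bandwidth-bound at roughly 2% of peak, which is why raising the peak compute rate alone, without a matched memory subsystem, buys so little on such problems.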