The supercomputer on your desktop

While there's still a need for ginormous hardware, some traditional high-performance applications are already on the desktop and others are heading that way

High-performance computing (HPC) has almost always required a supercomputer -- one of those room-size monoliths you find at government research labs and universities. And while those systems aren't going away, some of the applications traditionally handled by the biggest of Big Iron are heading to the desktop.

One reason is that processing that took an hour on a standard PC about eight years ago now takes six seconds, according to Ed Martin, a manager in the automotive unit at computer-aided design software maker Autodesk Inc. Monumental improvements in desktop processing power, graphics processing unit (GPU) performance, network bandwidth and solid-state drive speed, combined with 64-bit throughput, have made the desktop increasingly viable for large-scale computing projects.

Thanks to those developments, a transition to "a supercomputer on your desk" is in full force.

Earthquake simulations, nuclear-stockpile simulations and DNA research are staying put on traditional supercomputers for now. But as processors gain more cores over the next 10 years, even those workloads, or portions of them, could conceivably make their way to the desktop.

In the meantime, here are some examples of high-performance applications that are already running on smaller computers.

Building better drugs for anesthesia

Today, doctors know how to administer anesthesia-inducing drugs, and they know the effects, but they do not actually know what the drugs' molecules are doing when the patient drifts off to sleep. That analysis requires intense computational power to track not only when the anesthetic enters the respiratory system, but also how it starts making changes at the molecular level.

[Image: Anesthesia modeling. Researchers are modeling the effects of applying anesthetics on molecules within nerve cells.]

At Temple University, researchers have developed models that measure the effects of applying anesthesia on molecules within nerve cells. The models currently run on a supercomputer, but plans are underway to move the calculations to a four-node Nvidia GPU cluster. That change will both save money and give researchers the flexibility to run tests when they're ready, rather than waiting for their scheduled time on a supercomputer.

In that scenario, each GPU has the computational power of a small HPC cluster, performing mathematical operations on the scale of those normally used to, say, render pixels in a video game.

Dr. Axel Kohlmeyer, a researcher on the project, says the best way to understand the simulation is to imagine a box filled with rubber balls, each a slightly different size and moving at a slightly different rate, all interconnected by springs. Some springs are stronger or weaker than others, and some of the balls move faster or react differently. In the simulation, Kohlmeyer can follow the movements of all the molecules to see the effects of anesthetics in the human body.

"Groups of particles will form and go where they like to be as determined by the magnitude of their interactions," says Kohlmeyer, explaining how the simulation evolves to the point where the interactions become balanced. Temperature variations produce vibrations and introduce new molecular activity. "The computational model is actually simple, but the challenge is you need so many millions of interactions. We do not want to just know the interactions at one point, but rather how they change over time."
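The ball-and-spring picture Kohlmeyer describes is, at heart, a mass-spring model integrated forward in time. The sketch below is illustrative only, not the Temple group's code: two particles of made-up masses joined by a Hooke's-law spring, stepped with the velocity-Verlet scheme commonly used in molecular dynamics. A real simulation does the same bookkeeping for millions of interacting particles, which is why the workload maps so well to GPUs.

```python
import numpy as np

def spring_forces(pos, springs, k, rest):
    """Hooke's-law forces for each (i, j) spring pair."""
    f = np.zeros_like(pos)
    for (i, j), kij, r0 in zip(springs, k, rest):
        d = pos[j] - pos[i]
        dist = np.linalg.norm(d)
        # Force magnitude is proportional to the stretch beyond rest length
        fij = kij * (dist - r0) * d / dist
        f[i] += fij
        f[j] -= fij
    return f

def step(pos, vel, mass, springs, k, rest, dt):
    """One velocity-Verlet time step for all particles."""
    f = spring_forces(pos, springs, k, rest)
    vel_half = vel + 0.5 * dt * f / mass[:, None]
    pos = pos + dt * vel_half
    f_new = spring_forces(pos, springs, k, rest)
    vel = vel_half + 0.5 * dt * f_new / mass[:, None]
    return pos, vel

# Two balls of different mass, joined by one spring stretched past rest length
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
mass = np.array([1.0, 2.0])
springs, k, rest = [(0, 1)], [10.0], [1.0]

for _ in range(1000):
    pos, vel = step(pos, vel, mass, springs, k, rest, dt=0.01)
```

Following the trajectory over many such steps, rather than the state at one instant, is exactly the "how they change over time" that Kohlmeyer describes.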
