
Nvidia chief scientist: CPUs slowed by legacy design

By Joab Jackson
November 17, 2010 03:27 PM ET

IDG News Service - When it comes to power-efficient computing, CPUs are weighed down by too many legacy features to outperform GPUs (graphics processing units) in executing common tasks in parallel, said the chief scientist for the GPU vendor Nvidia.

CPUs "burn a lot of power" executing tasks that may be unnecessary in today's computing environment, noted Bill Dally, chief scientist and senior vice president of research for Nvidia, during his keynote Wednesday at the Supercomputer 2010 conference in New Orleans..

The GPU "is optimized for throughput," while "the CPU is optimized for low latency, for getting really good thread performance," he said.

Dally pointed to several features of modern CPUs that waste energy in the pursuit of low latency.

"They have branch predictors that predict a branch every cycle whether the program branches or not -- that burns gobs of power. They reorder instructions to hide memory latency. That burns a lot of power. They carry along a [set of] legacy instructions that requires lots of interpretation. That burns a lot of power. They do speculative execution and execute code that they may not need and throw it away. All these things burn a lot of power," he said.

Although the GPU was originally designed for rendering graphics on the screen, vendors such as Nvidia and Advanced Micro Devices are now positioning their GPU cards as general computation engines, at least for workloads that can be broken into multiple parts and run in tandem.
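As a rough illustration of such a workload, here is a self-contained CUDA C sketch (a hypothetical example; names such as vecAdd are invented) that adds two large vectors by giving each GPU thread one independent element.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// One thread per element: the job decomposes into roughly a million
// independent pieces that run in tandem on the GPU.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) {
        ha[i] = 1.0f;
        hb[i] = 2.0f;
    }

    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]); /* expect 3.0 */

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}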

At least some industries are taking note of this idea, notably the world of high performance computing (HPC). Earlier this week, China's newly built Tianhe-1A system topped the latest iteration of the Top 500 List of the world's most powerful supercomputers. That system includes 7,168 Nvidia Tesla M2050 GPUs in addition to its 14,000 CPUs. Nvidia claims that without the GPUs, the system would need almost four times as many CPUs, twice as much floor space and three times as much electricity to operate.

And although Dally focused his remarks on HPC, he said the general idea will permeate the computing world as a whole.

"HPC is, in many ways, an early adopter, because they run into problems sooner because they operate at a larger scale. But this applies completely to consumer applications as well as to server applications," he said, in an interview following the keynote.

Dally said that while not many current applications are written to run in parallel environments, eventually programmers will move to this model. "I think over time, people will convert applications to parallel, and those parallel segments will be well-suited for GPUs," he said. He even predicted that systems will one day be able to boot off the GPU as well as the CPU, though he said he knows of no work in particular to build a GPU-based operating system.
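As one concrete example of the kind of conversion Dally predicts, consider a serial accumulation loop recast as a data-parallel reduction. The sketch below is hypothetical CUDA C (names such as sumKernel are invented), assuming a 256-thread block and a GPU that supports atomicAdd on floats.

// Serial original, shown for reference:
//   float sum = 0.0f;
//   for (int i = 0; i < n; i++)
//       sum += x[i];

// Hypothetical parallel rewrite. Assumes blockDim.x == 256 (a power of
// two) and that *result is zero-initialized before launch.
__global__ void sumKernel(const float *x, float *result, int n)
{
    __shared__ float partial[256];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    partial[tid] = (i < n) ? x[i] : 0.0f;
    __syncthreads();

    // Tree reduction: each step halves the number of active threads,
    // turning the loop's serial dependence chain into independent work.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            partial[tid] += partial[tid + s];
        __syncthreads();
    }

    // One atomic add per block combines the per-block partial sums.
    if (tid == 0)
        atomicAdd(result, partial[0]);
}

A launch such as sumKernel<<<(n + 255) / 256, 256>>>(x, result, n) completes the rewrite; the parallel segment maps naturally onto the GPU, as Dally suggests.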
