Until recently, you could reasonably expect this year's software to run faster on next year's machines, but that's not necessarily true going forward. For the foreseeable future, significant performance improvements are likely to be achieved only through arduous reprogramming.
Some time ago, computer vendors passed the point of diminishing returns on processor clock speeds and could no longer keep raising frequencies. To maintain continued performance improvements, suppliers turned to putting multiple processing engines -- multiple cores -- on a single chip, and as a result, multicore processors are now mainstream for desktops. But to realize any performance improvement, the software has to be able to use those multiple cores.
And to do that, most software will need to be rewritten.
"We have to reinvent computing, and get away from the fundamental premises we inherited from von Neumann," says Burton Smith, technical fellow at Microsoft Corp., referring to the theories of computer science pioneer John von Neumann (1903 - 1957). "He assumed one instruction would be executed at a time, and we are no longer even maintaining the appearance of one instruction at a time."
But software cannot always keep up with the advances in hardware, says Tom Halfhill, senior analyst for the Microprocessor Report newsletter in Scottsdale, Ariz. "If you have a task that cannot be parallelized and you are currently on a plateau of performance in a single-processor environment, you will not see that task getting significantly faster in the future."
New law in town
For four decades, computer performance progress was defined by Moore's Law, which said that the number of devices that could economically be placed on a chip would double every other year. A side effect was that the smaller circuits allowed faster clock speeds, meaning software would run faster without any effort from programmers. But overheating problems on CPU chips have changed everything.
"The industry has hit the wall when it comes to increasing clock frequency and power consumption," says Halfhill. There are some chips edging above 4GHz, "but those are extreme cases," he says. The mainstream is still below 3GHz. "The main way forward is through multiple processors."
By adding more cores to the CPU, vendors offer the possibility of higher performance. But realizing higher performance through multiple cores assumes that the software knows about those cores, and will use them to run code segments in parallel.
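To make that concrete, here is a minimal Python sketch -- not drawn from any vendor's code, with an illustrative function and workload -- of how a CPU-bound job can be split across cores using the standard-library multiprocessing module:

    # Illustrative example: spread independent chunks of work across cores.
    from multiprocessing import Pool

    def heavy_computation(n):
        # Stand-in for a CPU-bound code segment that can run independently.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [10_000_000] * 8          # eight independent chunks of work
        with Pool() as pool:               # one worker process per core by default
            results = pool.map(heavy_computation, inputs)
        print(len(results), "chunks processed in parallel")

The point is not the particular library but the restructuring: the work has to be broken into pieces that can run at the same time before extra cores do any good.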
Even when the software does that, the results are gated by Amdahl's Law. Sometimes called Amdahl's Curse and named for computer pioneer Gene Amdahl, it lacks the upbeat outlook of Moore's Law. It says that the overall speedup from parallelization equals 1 divided by the sum of the fraction of the task that cannot be parallelized and the remaining fraction divided by the number of processors working on it.
In other words, "It says that the serial portion of a computation limits the total speedup you can get through parallelization," says Russell Williams, chief architect for Photoshop at Adobe Systems in San Jose, Calif. "If 10% of a computation is serial and can't be parallelized, then even if you have an infinite number of infinitely fast processors, you could only get the computation to run 10 times faster."
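The arithmetic behind Williams' example can be spelled out in a few lines of Python -- an illustrative sketch, not Adobe's code, with processor counts chosen only to show the trend:

    # Amdahl's Law: speedup = 1 / (serial_fraction + parallel_fraction / n_processors)
    def amdahl_speedup(serial_fraction, n_processors):
        parallel_fraction = 1.0 - serial_fraction
        return 1.0 / (serial_fraction + parallel_fraction / n_processors)

    # With 10% of the work serial, the speedup creeps toward 10x and never exceeds it,
    # no matter how many processors are thrown at the problem.
    for n in (2, 4, 16, 1_000_000):
        print(n, "processors ->", round(amdahl_speedup(0.10, n), 2), "x")

Running it shows the curve flattening quickly: 2 processors give about 1.8x, 16 give 6.4x, and a million still top out just under 10x.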