
The Power of Parallelism

By Gary Anthes
November 19, 2001 12:00 PM ET

Few technologies have a more interesting history than parallel computing, in which multiple processors in a single system combine to tackle a problem. A chronicle of events in parallel computing says that IBM researchers John Cocke and Daniel Slotnick discussed the use of "parallelism" in a 1958 memo. In 1962, Burroughs Corp. introduced the D825, a four-processor computer that accessed up to 16 memory modules via a crossbar switch. In 1976, Floating Point Systems Inc. shipped a 38-bit computer that could execute multiple instructions per clock cycle.

A pinnacle of parallelism was reached in 1986 when Thinking Machines Corp. shipped a futuristic-looking black cube with 65,536 blinking red lights, one for each of its processors. Companies piled onto the parallel bandwagon with all kinds of exotic architectures aimed at doing multiple tasks simultaneously.

But during the next decade, most of the parallel processing specialists -- including Convex, Alliant, MasPar, Kendall Square, Multiflow, ETA Systems, Encore and Thinking Machines -- closed their doors or moved into other lines of business.

What happened? The parallel processing machines were relatively expensive for what they could do. They were often marketed poorly. Some companies couldn't get the bugs out. There were too many players in the field. Perhaps most important, the machines were hard to program. The computers usually ran at a tiny fraction of their theoretical peak speeds because the software couldn't be easily broken into multiple, parallel streams of instructions.

To be sure, parallel computing lives on today in various forms in computers from IBM, Hewlett-Packard, Sun and others, often borrowing from the very concepts pioneered by those defunct companies. And if you define "parallel processing" a bit more broadly, a number of developments have proceeded in, well, parallel.

Perhaps the most important of these was Intel's introduction of a "superscalar" processor in 1993. The 60-MHz P5 chip -- the first Pentium -- could execute two instructions simultaneously. (Interestingly, Intel sped the development of the chip by employing what it called a "two-in-a-box" management structure, in which two people shared the same job. They worked in parallel.)
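
A rough sense of why that matters can be had from a small C sketch (a hypothetical illustration, not Intel code). Both loops below perform the same number of additions, but in the first every add depends on the one before it, while in the second the two chains are independent, so a dual-issue pipeline like the P5's can overlap them. Compiled with modest optimization (say, cc -O1, so the loops aren't folded away), the second loop typically runs noticeably faster; exact numbers vary by machine.

    #include <stdio.h>
    #include <time.h>

    #define N 200000000L

    int main(void) {
        clock_t t0;

        /* Dependent chain: the second add must wait for the first
         * one's result, so the adds execute one after another. */
        double a = 0.0;
        t0 = clock();
        for (long i = 0; i < N / 2; i++) {
            a += 1.0;
            a += 1.0;
        }
        printf("dependent:   %.2fs (a = %.0f)\n",
               (double)(clock() - t0) / CLOCKS_PER_SEC, a);

        /* Same number of adds, but the two chains are independent,
         * so a dual-issue pipeline can execute them side by side. */
        double x = 0.0, y = 0.0;
        t0 = clock();
        for (long i = 0; i < N / 2; i++) {
            x += 1.0;
            y += 1.0;
        }
        printf("independent: %.2fs (x + y = %.0f)\n",
               (double)(clock() - t0) / CLOCKS_PER_SEC, x + y);
        return 0;
    }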

Intel's Itanium processors predict the flow of a program through several branches by looking ahead in the instruction stream. They also execute instructions "speculatively" -- before it's certain they're needed -- and hold the results in suspense until the predicted branches are confirmed.
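
Branch prediction itself happens in hardware, but mainstream compilers expose related hints. As a small illustration -- GCC's __builtin_expect, a real compiler hint, though not specific to Itanium -- a programmer can mark a branch as rarely taken so the compiler lays out the common case on the path the processor speculates down:

    #include <stdio.h>
    #include <stdlib.h>

    /* GCC/Clang hint: the condition is expected to be false almost
     * always, so the compiler keeps the common case on the
     * fall-through path that the processor speculates along. */
    #define unlikely(x) __builtin_expect(!!(x), 0)

    static int checked_div(int n, int d) {
        if (unlikely(d == 0)) {      /* rare error path */
            fprintf(stderr, "divide by zero\n");
            exit(EXIT_FAILURE);
        }
        return n / d;                /* common path, speculated */
    }

    int main(void) {
        printf("%d\n", checked_div(42, 7));
        return 0;
    }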

Much of the work that a programmer once had to do to allow that to happen can now be done by ultrasmart compilers. Research in both chip and compiler design promises more and more parallelism -- and, hence, performance gains -- largely shielded from the programmer.
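
To give a flavor of what "shielded from the programmer" means in practice, consider a hedged sketch using OpenMP -- a real, directive-based standard dating from 1997, though one the article doesn't name. A single pragma asks the compiler to split a loop across processors; the compiler and its runtime create the threads, divide the iterations and combine the partial sums:

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N];
        double sum = 0.0;

        for (long i = 0; i < N; i++) {
            a[i] = i * 0.5;
            b[i] = i * 0.25;
        }

        /* One directive; the compiler generates all the parallel
         * code. Build with something like: cc -fopenmp dot.c
         * (the file name here is just for illustration). */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += a[i] * b[i];

        printf("dot product = %f\n", sum);
        return 0;
    }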
