Few technologies have a more interesting history than parallel computing, in which multiple processors in a single system combine to tackle a problem. A chronicle of events in parallel computing says that IBM researchers John Cocke and Daniel Slotnick discussed the use of "parallelism" in a 1958 memo. In 1962, Burroughs Corp. introduced the D825, a four-processor computer that accessed up to 16 memory modules via a crossbar switch. In 1976, Floating Point Systems Inc. shipped a 38-bit computer that could execute multiple instructions per clock cycle.
A pinnacle of parallelism was reached in 1986 when Thinking Machines Corp. shipped a futuristic-looking black cube with 65,536 blinking red lights, one for each of its processors. Companies piled on to the parallel bandwagon with all kinds of exotic architectures aimed at doing multiple tasks simultaneously.
But during the next decade, most of the parallel processing specialists -- including Convex, Alliant, MasPar, Kendall Square, Multiflow, ETA Systems, Encore and Thinking Machines -- closed their doors or moved into other lines of business.
What happened? The parallel processing machines were relatively expensive for what they could do. They were often marketed poorly. Some companies couldn't get the bugs out. There were too many players in the field. Perhaps most important, the machines were hard to program. The computers usually ran at a tiny fraction of their theoretical peak speeds because the software couldn't be easily broken into multiple, parallel streams of instructions.
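The core difficulty -- carving one computation into independent streams of work -- is easier to see in a small sketch. The following is a minimal illustration using Python's standard multiprocessing module, with an arbitrary chunking scheme chosen for clarity; it is not how any of those historical machines were programmed.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    # Each worker computes one independent "stream" of the total.
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Carve [0, n) into roughly equal chunks, one per worker.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as the serial sum(range(1_000_000)), computed in parallel.
    print(parallel_sum(1_000_000))
```

A summation decomposes this neatly because the partial results are independent; the programs of the era often did not, which is why those machines so rarely approached their peak speeds.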
To be sure, parallel computing lives on today in various forms in computers from IBM, Hewlett-Packard, Sun and others, often borrowing from the very concepts pioneered by those defunct companies. And if you define "parallel processing" a bit more broadly, a number of developments have been made in, well, parallel.
Perhaps the most important of these was the introduction of a "superscalar" processor by Intel in 1993. The 60-MHz P5 Pentium chip could execute two instructions simultaneously. (Interestingly, Intel sped the development of the chip by employing what it called a "two-in-a-box" management structure, in which two people shared the same job. They worked in parallel.)
Intel Itanium processors predict the flow of a program through several branches by looking ahead in the program. They also execute instructions "speculatively" -- before they're needed -- and hold the results in suspense until the predicted branches are confirmed.
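The idea behind that prediction hardware can be sketched in a few lines. Below is a toy 2-bit saturating-counter branch predictor -- a textbook scheme, simplified for illustration and not drawn from any actual Itanium design; the history of outcomes is a hypothetical loop that is taken nine times and then exits.

```python
class TwoBitPredictor:
    """Predict 'taken' when the counter is 2 or 3; learn from actual outcomes."""

    def __init__(self):
        self.counter = 2  # start weakly taken (range 0..3)

    def predict(self):
        return self.counter >= 2  # True means "predict taken"

    def update(self, taken):
        # Saturate the counter toward the outcome that actually occurred.
        if taken:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)

# A loop branch is taken many times, then falls through once -- the classic
# case where a 2-bit counter mispredicts only on the loop exit.
outcomes = [True] * 9 + [False]
predictor = TwoBitPredictor()
correct = 0
for taken in outcomes:
    if predictor.predict() == taken:
        correct += 1
    predictor.update(taken)
print(f"{correct}/{len(outcomes)} predictions correct")  # 9/10
```

In hardware, a correct prediction lets speculatively executed instructions be committed; the single misprediction here is the case where the chip must discard the held-back results.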
Much of the work that a programmer once had to do to allow that to happen can now be done by ultrasmart compilers. Research in both chip and compiler design promises more and more parallelism -- and, hence, performance gains -- largely shielded from the programmer.