Supercomputer on a Chip
New microprocessor architecture promises a trillion operations per second by 2012.
Computerworld - Computer scientists at the University of Texas at Austin are inventing a radical microprocessor architecture, one that aims to solve some of the most vexing problems facing chip designers today. If successful, the Defense Department-funded effort could lead to processors of unprecedented performance and flexibility.
The density of transistors on a chip has doubled at least every two years for decades, and microprocessor designers have put those transistors to good use. Advanced circuits use techniques such as branch prediction and speculative execution to build deep instruction "pipelines" that increase a processor's throughput by letting it execute multiple instructions simultaneously. But the growing complexity of such circuits, and the heat they produce, signal an end to that approach. Rather than trying to build faster processor cores, chip builders are beginning to put more of them on a chip.
The problem with that, says Doug Burger, a computer science professor at the University of Texas, is that for application software to take advantage of those multiple cores, programmers must structure their code for parallel processing, and that's difficult or impossible for some applications. "The industry is running into a programmability wall, passing the buck to software and hoping the programmer will be able to write codes for their systems," he says.
Burger and his colleagues hope to solve these problems with a new microprocessor and instruction set architecture called Trips, or the Tera-op Reliable Intelligently Adaptive Processing System. "Our goal is to exploit concurrency, whether it's given to you by the programmer or not," he says.
Trips uses several techniques to do just that. First, the Trips compiler sends executable code to the hardware in blocks of up to 128 instructions. The processor "sees" and executes a block all at once, as if it were a single instruction, greatly decreasing the overhead associated with instruction handling and scheduling.
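To see why block granularity matters, here is a minimal sketch (invented names, not the actual Trips compiler) of grouping a linear instruction stream into blocks of up to 128 instructions, so dispatch overhead is paid once per block instead of once per instruction:

```python
# Hypothetical sketch of block formation; the real Trips compiler builds
# blocks from program structure, not by simple slicing.

MAX_BLOCK = 128  # Trips blocks hold up to 128 instructions

def form_blocks(instructions, max_block=MAX_BLOCK):
    """Greedily partition an instruction stream into fixed-size blocks."""
    return [instructions[i:i + max_block]
            for i in range(0, len(instructions), max_block)]

program = [f"op{i}" for i in range(1000)]
blocks = form_blocks(program)

# Overhead paid once per dispatched unit:
per_instruction_dispatches = len(program)  # 1000 dispatch events
per_block_dispatches = len(blocks)         # 8 dispatch events
```

The point of the sketch is only the arithmetic: amortizing fetch and scheduling cost over a 128-instruction block cuts dispatch events by up to two orders of magnitude.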
Second, instructions inside a block execute in a "data flow" fashion, meaning that each instruction executes as soon as its inputs arrive, rather than in some sequence imposed by the compiler or the programmer. "As such, the data is flowing through the instructions," explains Steve Keckler, a computer science professor and a Trips project co-leader with Burger.
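The firing rule can be sketched as a tiny interpreter (an illustration of the data-flow idea, not the Trips microarchitecture): each instruction runs as soon as all of its operands are available, so independent instructions become ready in the same round regardless of their order in the block.

```python
import operator

def dataflow_execute(block, inputs):
    """block: name -> (op, operand_names); inputs: initial operand values.
    Fires every instruction whose operands have all arrived, repeatedly,
    until the block is drained."""
    values = dict(inputs)
    fired = []
    pending = dict(block)
    while pending:
        ready = [n for n, (op, srcs) in pending.items()
                 if all(s in values for s in srcs)]
        if not ready:
            raise RuntimeError("no instruction is ready: missing operand")
        for name in ready:
            op, srcs = pending.pop(name)
            values[name] = op(*(values[s] for s in srcs))
            fired.append(name)
    return values, fired

block = {
    "t1": (operator.add, ("a", "b")),   # ready immediately
    "t2": (operator.mul, ("t1", "c")),  # waits for t1 to produce a value
    "t3": (operator.sub, ("a", "c")),   # independent: fires alongside t1
}
values, order = dataflow_execute(block, {"a": 2, "b": 3, "c": 4})
# t1 = 5, t3 = -2 fire in the first round; t2 = 20 fires once t1 arrives.
```

Note that no program counter sequences t1, t2, t3; availability of data alone decides the order, which is exactly the concurrency the block exposes.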
Another trick: Within a block, the Trips compiler can merge two instructions that are on different paths into a single instruction if they have the same target and operation. Compared with earlier designs based on data flow concepts, "our aggressive data-flow model gives the compiler the opportunity to produce much tighter and more efficient code," says
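The merging idea can be sketched roughly as follows (an invented toy IR, not the Trips compiler's actual representation): when both arms of a branch perform the same operation into the same target, the two predicated instructions collapse into one, with the predicate selecting the one operand that differs.

```python
# Hedged sketch: merge two same-target, same-operation instructions from
# opposite branch paths. Instructions are (target, op, operands) tuples;
# the "select" pseudo-operand stands for predicate-based operand choice.

def merge_same_op(then_inst, else_inst):
    """Return a merged instruction, or None if the pair is not mergeable
    (different target/op, or more than one differing operand)."""
    t_tgt, t_op, t_args = then_inst
    e_tgt, e_op, e_args = else_inst
    if t_tgt != e_tgt or t_op != e_op or len(t_args) != len(e_args):
        return None
    diffs = [i for i, (a, b) in enumerate(zip(t_args, e_args)) if a != b]
    if len(diffs) > 1:
        return None
    if not diffs:
        return then_inst  # arms identical: predicate is irrelevant
    i = diffs[0]
    return (t_tgt, t_op,
            t_args[:i] + (("select", t_args[i], e_args[i]),) + t_args[i + 1:])

# if p: r = add(a, b)  else: r = add(c, b)   -->   r = add(select(a, c), b)
merged = merge_same_op(("r", "add", ("a", "b")), ("r", "add", ("c", "b")))
```

Two instructions on mutually exclusive paths become one, which is the kind of tightening the passage above credits to the aggressive data-flow model.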