Intel Looks to Software

Parallel-processing prognostications.

By Gary Anthes
February 10, 2003 12:00 PM ET

Computerworld - Computer hardware giant Intel Corp. is also one of the largest software developers in the world, employing more than 6,000 software professionals. In December, the company formed a new position -- that of Intel senior fellow -- at the top of its research hierarchy and appointed four people to the role, including Justin R. Rattner, director of microprocessor research, and Richard Wirt, general manager of software and solutions.
Recently, Rattner and Wirt told Computerworld what's coming in the software realm.

What's new in compilers?
We see activity in traditional compilers that adapt programming languages better for multithreading and hyperthreading. OpenMP, an initiative to adapt programming languages to handle threading, is a good example.

Rattner: Today's instruction sets were really designed for static compilers, so the trade-offs they make are in favor of static compilers. When we move to dynamic compilers [like Java and .Net], we can continue to optimize even while the program is executing. The optimizing compiler is querying the hardware on a periodic basis and saying, "How's the program running?"

Justin R. Rattner, director of microprocessor research at Intel Corp.
But such performance monitors aren't new.
Today, performance monitors are really designed for debugging, and they are inaccessible to the compiler. What we are definitely looking at in the future is program-visible instrumentation so the compiler has access to [runtime conditions]. This is on the fly; this is the compiler in the loop. This is where our heads are at.

Will we see more parallel processing of various types?
We went through getting computers to parallelize the instructions on a single [processor]. Intel pushed that to get about as much as we can get, so now we are beginning to go threaded on single [processors]. Then you'll see us take multiple [threaded processors] and put them on a motherboard.
As we add more transistors, then, instead of multiple [processors] on the motherboard, we'll put them on the die, on the chip itself. We refer to that as dual-core. Then you want to string these things together in big clusters. Each node gets more powerful as driven by Moore's Law, but we will string more and more of these together to form a supercomputer.
Richard Wirt, general manager of software and solutions at Intel Corp.
How will you get more parallelism out of existing applications?
Rattner: We've discovered you can create "helper threads" when certain situations arise. A set of helper threads created by the compiler can run ahead of the main thread in order to bring normally missing data into the cache ahead of when it is needed.
