Hardly anything inside a computer would seem to be more basic, or more necessary, than the processor "clock"—the little crystal oscillator whose rhythmic ticks ultimately regulate everything the computer does. Indeed, we often define computers by their clocks, as in, "I just bought a 2-GHz PC."
Yet clocks aren't necessary for the workings of digital devices, and some researchers predict that clock-regulated circuits will increasingly give way to clockless, or asynchronous, circuits.
In the early days of computing, both asynchronous and synchronous circuits were used in computers, but the latter came to dominate because they were easier to design, test and debug. "But after decades during which clocked logic has imposed its discipline, the older and more anarchic approach seems poised to make a comeback," says Steve Furber, head of the computer science department at Manchester University in England.
It's becoming increasingly difficult to make processor clocks work correctly as chips get bigger and more complex. For operations to occur at the right time and in the right sequence, all parts of the chip must see the same "clock face." But clocks are now so fast that a given tick won't reach every component on the chip before the next tick occurs, so components at different distances from the clock source can drift out of sync.
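A rough back-of-the-envelope calculation shows the scale of the problem. All the numbers below are illustrative assumptions, not figures from any particular chip: a 2-GHz clock, an RC-limited on-chip wire speed of roughly 10 mm per nanosecond, and a large die about 15 mm on a side.

```python
# Back-of-the-envelope clock-skew arithmetic. Every number here is an
# assumed, illustrative value, not a measurement from a real processor.

CLOCK_HZ = 2e9                 # a 2-GHz clock, as in the "2-GHz PC" example
TICK_S = 1 / CLOCK_HZ          # one clock period: 0.5 nanoseconds
WIRE_SPEED_M_S = 1e7           # assumed RC-limited on-chip wire speed (~10 mm/ns)
DIE_EDGE_MM = 15               # assumed die size for a large processor

reach_mm = WIRE_SPEED_M_S * TICK_S * 1000
print(f"One tick lasts {TICK_S * 1e9:.1f} ns")
print(f"A clock edge covers only ~{reach_mm:.0f} mm per tick, "
      f"on a die roughly {DIE_EDGE_MM} mm across")
```

Under these assumptions a tick launched at the clock source covers only about a third of the die before the next tick begins, so components at opposite corners can never agree on which tick they are seeing.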
This has forced designers to resort to ever more complex and expensive solutions, such as elaborate hierarchies of buses and circuits that adjust clock readings at different points on the chip.
"That's a very expensive way to solve the problem," Furber says. "It's only companies like Intel that can afford the designer effort."
The elaborate clock circuits also draw more power and generate more heat with every new chip generation. Even worse, synchronous circuits perform only as fast as their slowest component. And sometimes the slowest component is the clock itself. Research at Sun Microsystems Inc. shows that logic transistors can spend up to 95% of their time just waiting for the next clock tick to tell them to act.
Manufacturers are experimenting with clockless microprocessors, including some that are completely asynchronous and some that have local components with clocks tied together by asynchronous networks.
Self-Timed Solutions, a Manchester-based start-up co-founded by Furber, has prototype chips of the latter type that it calls "self-timed interconnects." Furber describes his chips as asynchronous "network fabrics" into which it's easy to plug synchronous and asynchronous "clients"—such as processors or memory blocks that operate at different frequencies. That will let designers sidestep the difficult and expensive task of making processors globally synchronous, he says.
Asynchronous circuits can perform faster than synchronous ones, since their components aren't limited by the pace of clock ticks. And because they draw less power and generate less heat, they are likely to find applications in mobile devices. Philips Electronics NV has already put asynchronous microcontrollers in some of its pagers.
Sun will ship its new UltraSPARC IIIi processor with clockless circuits that pass data between memory modules and memory controllers. By making these data transfers independent of clock timing, the circuits are simpler, more reliable, easier to modify and potentially faster, says Jo Ebergen, a senior staff engineer at Sun.
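Clock-independent transfers like the ones Ebergen describes are built on request/acknowledge handshakes rather than on a shared tick. The sketch below simulates a generic four-phase (return-to-zero) handshake in plain Python; the class and function names are invented for illustration, and this is not Sun's actual circuit.

```python
# A four-phase request/acknowledge handshake, the basic building block of
# clockless data transfer. Plain-Python simulation with invented names.

class Channel:
    def __init__(self):
        self.req = False    # sender's request wire
        self.ack = False    # receiver's acknowledge wire
        self.data = None    # data wires, valid only while req is high

def send(ch, value):
    ch.data = value
    ch.req = True           # phase 1: sender raises request, data is valid

def receive(ch):
    assert ch.req           # phase 2: receiver sees the request, latches data
    value = ch.data
    ch.ack = True           # phase 3: receiver raises acknowledge
    return value

def reset(ch):
    ch.req = False          # phase 4: both wires return to zero,
    ch.ack = False          # leaving the channel ready for the next word

ch = Channel()
received = []
for word in (1, 2, 3):      # move three words across, no clock in sight
    send(ch, word)
    received.append(receive(ch))
    reset(ch)
print(received)             # → [1, 2, 3]
```

Because each phase simply waits for the previous one, the pair runs as fast as the slower side allows at that moment, rather than at a worst-case rate fixed by a clock.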
"Asynchronous techniques will be adopted more and more in mainstream chip designs," Ebergen says. "Bigger parts of the chip will become completely asynchronous."
But asynchronous computing presents design challenges. By definition, clockless circuits operate in a less coordinated way, producing sequences of events that aren't completely predictable: pedestrians on a sidewalk are harder to predict than soldiers marching in formation. Ways to cope with that unpredictability must be built into chip designs.
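One standard coping mechanism is an arbiter: when two requests can arrive in any order, the circuit must grant exactly one at a time, whichever comes first. The toy model below is a hypothetical sketch; real arbiters are analog mutual-exclusion circuits that resolve near-simultaneous arrivals in hardware.

```python
# A toy arbiter. Two clockless requesters ("cpu" and "dma" are made-up
# names) may raise requests in any order; the arbiter grants one at a
# time. Ties are broken nondeterministically here, which is precisely the
# unpredictability asynchronous designers must plan for.

import random

def arbiter(requests):
    pending = [name for name, raised in requests.items() if raised]
    if not pending:
        return None                 # nothing to grant
    winner = random.choice(pending) # stand-in for analog tie-breaking
    requests[winner] = False        # the winner's request is served
    return winner

requests = {"cpu": True, "dma": True}
order = [arbiter(requests), arbiter(requests)]
print(order)   # both are eventually granted, but the order isn't fixed
```

A correct design guarantees that every request is eventually served, without assuming anything about which one wins a close race.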
Moreover, asynchronous devices have nowhere near the infrastructure of design and test tools or the expertise that has been built up for synchronous devices at a cost of billions of dollars.
"Expertise is one of the biggest hurdles," says Chris Myers, an associate professor of electrical and computer engineering at the University of Utah in Salt Lake City. "Few universities in the U.S. teach asynchronous circuits."
Myers says the industry will move gradually toward chip designs that are "globally asynchronous, locally synchronous," in which little synchronous islands operating at different clock speeds communicate through some kind of asynchronous buffer or "fabric."
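The globally-asynchronous, locally-synchronous picture Myers describes can be modeled as two islands ticking at unrelated rates and exchanging data only through a buffer, so neither needs to know the other's clock. The Python sketch below uses made-up rates; real GALS interfaces also need hardware synchronizers to avoid metastability at the boundary.

```python
# A toy "globally asynchronous, locally synchronous" model: a fast island
# and a slow island share only a small FIFO. The rates and FIFO depth are
# arbitrary illustrative choices.

from collections import deque

fifo = deque(maxlen=4)      # the asynchronous buffer between the islands
to_send = list(range(8))    # words the fast island wants to transmit
received = []

for t in range(30):
    # Fast island: tries to push its next word every step; stalls if full.
    if to_send and len(fifo) < fifo.maxlen:
        fifo.append(to_send.pop(0))
    # Slow island: runs on an unrelated "clock" (every third step) and
    # consumes a word whenever one is available.
    if t % 3 == 0 and fifo:
        received.append(fifo.popleft())

print(received)   # → [0, 1, 2, 3, 4, 5, 6, 7]
```

All eight words arrive in order even though the two sides never share a tick: the fast island simply stalls when the buffer fills, and the slow island drains it at its own pace.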
"It's not going to be the revolution that some of us predict," Myers says. "It's going to be little by little."