In the early days of computing, hardware and software took little account of users. The user interface consisted of punch cards for input and folded paper for output.
I remember wishing I could get closer, literally and figuratively, to the big mainframe that ran my jobs in the early 1970s. But all I could do was watch the operator through a glass partition and hope that he didn't drop my cards on the floor as he loaded them into the card reader. And hours later, I'd learn that my job had failed because I had omitted a semicolon somewhere.
Things improved greatly in the 1980s with the introduction of the PC and "WIMP" -- windows, icons, mice and pull-down menus. Development of the WIMP graphical user interface was sponsored by the Pentagon's Defense Advanced Research Projects Agency, where computer scientist J.C.R. Licklider envisioned a "man-computer symbiosis."
But symbiosis, with its notion of a close, mutually beneficial relationship, is a bit of a stretch as a description of what has evolved since then. Sure, a novice can now surf the Web with a few intuitive mouse clicks, the visually impaired can command their PCs by voice, and you can know in seconds that your job bombed because you left out a semicolon. But symbiosis it ain't.
Now, however, work being done in several more-or-less unrelated IT fields suggests that we're entering a third era, one in which humans and computers take another big step toward each other. Several companies and universities are developing ways for computers to recognize human emotions by scanning facial features and comparing them with facial templates in a database (see story). One of the participants, NCR Corp., says customer relationship management systems will become much more powerful when they can react on the fly to the emotions of customers at kiosks and automated teller machines (ATMs).
(An elderly man squints at an ATM screen, and almost instantly, the font size doubles. A woman at a shopping center kiosk smiles at a travel ad, prompting the device to print out a travel discount coupon. Several users at another kiosk frown at a racy ad, prompting a store to pull it.)
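NCR doesn't spell out the matching step, but at its core this is a nearest-neighbor comparison: reduce a face to a handful of measurements and find the closest stored template. Here is a rough sketch in Python; the feature values and emotion labels are invented for illustration.

    import math

    # Hypothetical emotion templates: each emotion is a reference feature
    # vector, e.g. normalized measurements of facial landmarks (brow
    # position, eye opening, mouth curvature). All values are invented.
    TEMPLATES = {
        "neutral": [0.50, 0.30, 0.20],
        "smile":   [0.55, 0.28, 0.45],
        "frown":   [0.42, 0.35, 0.10],
        "squint":  [0.48, 0.15, 0.22],
    }

    def classify_emotion(features):
        """Return the emotion whose stored template is nearest (in
        Euclidean distance) to the observed feature vector."""
        return min(TEMPLATES, key=lambda name: math.dist(features, TEMPLATES[name]))

    # An ATM could then react on the fly, as in the scenarios above:
    if classify_emotion([0.47, 0.16, 0.21]) == "squint":
        print("increase font size")

The hard part in practice is producing reliable feature vectors from a camera image; the matching itself can be this simple.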
Meanwhile, researchers at Microsoft Corp. are developing ways for a computer, based on its knowledge of how a particular user works, to anticipate the user's needs and actions and perform them in advance (see story). The long delays that users sometimes experience while waiting to fetch something off the Net could be greatly reduced, at essentially no cost, because the work is done with otherwise idle computer cycles.
(A future version of Microsoft Project knows the habits of its user. It senses that the user will soon perform an optimization, or recalculate something in Excel, and it does this quietly in the background -- in advance.)
Eric Horvitz, a researcher at Microsoft, calls the process "continuous computing" and says it will be built into both operating systems and application software.
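Microsoft hasn't described the mechanism in detail, but the core idea, speculating on the user's next action while the machine is otherwise idle, can be sketched. Everything below (the class, its methods, the frequency-based prediction rule) is invented for illustration.

    import threading
    from collections import Counter

    class ContinualPrecomputer:
        """Sketch of idle-time speculation: predict the user's likely
        next request from past behavior and compute it in the
        background, so the answer is ready when actually asked for."""

        def __init__(self, actions):
            self.actions = actions      # name -> callable that does the work
            self.history = Counter()    # how often each action is requested
            self.cache = {}             # speculatively computed results

        def _precompute(self, name):
            self.cache[name] = self.actions[name]()

        def on_idle(self):
            """Spare cycles are available: quietly run the historically
            most frequent action, in advance."""
            if not self.history:
                return
            likely = self.history.most_common(1)[0][0]
            if likely not in self.cache:
                threading.Thread(target=self._precompute, args=(likely,)).start()

        def request(self, name):
            """The user actually asks: use the speculative result if the
            guess was right, otherwise do the work now."""
            self.history[name] += 1
            if name in self.cache:
                return self.cache.pop(name)
            return self.actions[name]()

A guess that turns out wrong costs nothing visible, because only idle cycles were spent on it.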
Clay Shirky, a writer, teacher and thinker about the future of IT, says networked systems have become so complicated that it's impossible for anyone to fully understand their "global state." That means they must be built like biological systems, where parts of an organism work within a local context. "Your kidneys only know what's going on in the kidneys, and yet the whole organism functions," he says.
Shirky says that in the future, systems will have to be made out of collections of applications that encapsulate and hide complexity while presenting simple, bulletproof interfaces to the outside world. Very simple, reliable, language-independent protocols will have to replace complex and fragile application programming interfaces, he says.
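Shirky doesn't point to a particular protocol, but the contrast he draws is easy to illustrate. Here is a toy line-oriented protocol, written in Python but speakable from any language that can send a line of text; the protocol itself is invented.

    # One text request in, one text reply out. A client needs no
    # language-specific bindings, only the ability to write a line.
    STORE = {"greeting": "hello"}

    def handle_line(line):
        """'GET <key>' -> 'OK <value>' or 'ERR not-found';
        anything else -> 'ERR bad-request'."""
        parts = line.strip().split()
        if len(parts) == 2 and parts[0] == "GET":
            value = STORE.get(parts[1])
            return f"OK {value}" if value is not None else "ERR not-found"
        return "ERR bad-request"

    assert handle_line("GET greeting") == "OK hello"
    assert handle_line("FROB x") == "ERR bad-request"

Every reply is one of three well-defined lines, which is what makes the interface bulletproof from the outside.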
And complex systems will need to mimic biology in other ways, Shirky continues. Just as thousands of cells in the body may die without harming the organism as a whole, software and hardware components will need to be built with RAID-style redundancy, so that any one of them can fail without affecting users.
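One way to read the RAID analogy in code: route each request through a list of redundant replicas, so that a single failure never reaches the user. The replica objects here are hypothetical.

    def call_with_failover(replicas, request):
        """Try each redundant replica in turn; the caller sees an
        error only if every replica has failed."""
        last_error = None
        for replica in replicas:
            try:
                return replica(request)
            except Exception as err:    # one dead replica, like one dead cell
                last_error = err
        raise RuntimeError("all replicas failed") from last_error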
IBM researchers are working on that concept. They're using the body's autonomic nervous system as a guide in their Autonomic Computing project, an attempt to tame the escalating complexity of systems. IBM's approach could lead to systems that configure, reconfigure, optimize and "heal" themselves, all unseen by users.
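The article doesn't describe IBM's architecture, but the monitor-and-heal idea at the heart of autonomic computing reduces to a simple supervision loop. A sketch, with an invented component interface:

    import time

    def autonomic_loop(components, interval=5.0):
        """components: objects exposing healthy() and restart().
        Runs forever, healing failures silently in the background."""
        while True:
            for component in components:
                if not component.healthy():
                    component.restart()   # "heal" the failed part, unseen
            time.sleep(interval)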
Stephen Younger, previously a nuclear physicist at Los Alamos National Laboratory and now director of the U.S. Defense Threat Reduction Agency, has suggested that within 20 years, we'll build supercomputers with artificial consciousness (see story). "We are, as a fully conscious species, alone," he says. "Creating a self-aware machine would give us a companion, something to talk to about major issues. It could help us to better understand ourselves." Real symbiosis at last.
(People and machines could become partners that will provide perspective and comment on our human condition. We might even be creating our evolutionary successors.)
During the past two decades or so, most efforts to improve the human/computer interface have been aimed at making computers more user-friendly -- essentially extending and enhancing WIMP. Improvements along those lines, although welcome, are incremental.
But some research is going much further, by enabling computers to respond to the needs and wishes of humans, or by imitating the workings of the human body. The big wins in computing will come from this kind of creative and fundamental rethinking of the human/machine interface.
Gary H. Anthes writes Computerworld's "Future Watch" print feature.