The Rise of Intelligent Agents: Automated Conversion of Data to Information

Business process virtualization (BPV) is fundamentally about using automation and intelligent networked technologies to increase efficiency, reduce costs and improve employee/customer interaction. Beyond that, though, it's about applying information to decision-making.

Information isn't the same as data. Paralysis by analysis, in which the quality of decision-making is assumed to be proportional to the amount of available data, is the logical consequence of failing to distinguish between the two. BPV provides tools for reducing data to information. These are largely management approaches to analysis, but there is also a growing number of automated tools for building and simulating decisions.

Of course, it would be ideal if there were technologies that could automatically generate decision models in response to rational questions. With such a system, one would ask a computer to independently collect and analyze data, draw conclusions and present the results. One could, for example, ask a computer terminal to search all internal company sources and externally reachable sources for information on a specified competitor, its products and its potential threat to the business in a particular market. Within minutes, a neat two-page summary would be delivered with appropriate footnoting. Better yet, one could ask the computer how it arrived at its synopsis and receive cogent replies.

Such technology has been of abiding interest for some time. The HAL 9000 computer of 2001 fame is perhaps the most famous fictional example; Star Trek's library computer is another. In fact, these fictional portrayals are so well known that it frequently comes as a surprise to people unfamiliar with the actual state of the technology that such machine-based intelligence isn't available.

A so-called intelligent agent is difficult to achieve because computers, unlike humans, have little ability to infer context from data. Where humans can usually be counted on to make sense of conversational dialogue in a language familiar to them, the same task is very difficult for a computer. Humans can bring a lifetime of experience to a pronouncement and arrive at a good approximation of its intent. Computers, at least so far, have had only rudimentary capabilities along these lines. They generally achieve some semblance of context assessment through a series of rather sophisticated if-then rules. This brute-force method is time-consuming and requires large quantities of storage.
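The flavor of that brute-force approach can be conveyed with a short sketch. The listing below is purely illustrative: the rule set, the match_context function and the sample sentences are invented for this example, not drawn from any real product. It shows how a chain of if-then keyword rules might assign a crude "context" label to a sentence, and why such rule lists grow unwieldy as coverage expands.

    # A minimal, hypothetical sketch of brute-force if-then context matching.
    # Each rule pairs trigger phrases with a context label; a real system of
    # this kind would need thousands of such rules and large lookup tables.

    RULES = [
        ({"competitor", "market share", "pricing"}, "competitive intelligence"),
        ({"invoice", "payment", "purchase order"},  "accounts payable"),
        ({"outage", "latency", "bandwidth"},        "network operations"),
    ]

    def match_context(sentence: str) -> str:
        text = sentence.lower()
        for keywords, label in RULES:
            # If any trigger phrase appears in the text, the rule fires.
            if any(k in text for k in keywords):
                return label
        return "unknown"  # No rule fired; the agent has no notion of context.

    if __name__ == "__main__":
        print(match_context("What is our competitor doing on pricing?"))
        # -> competitive intelligence
        print(match_context("The router outage increased latency overnight."))
        # -> network operations

Every sentence that falls outside the anticipated keywords lands in the "unknown" bucket, which is precisely the gap a lifetime of human experience fills effortlessly.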

In his article "A Fountain of Knowledge," appearing in "IEEE Spectrum Online," Stephen Cass discusses IBM's activities in developing just such a computing agent. The project, named WebFountain, currently comprises racks of processors, routers and more than 160TB of disk storage, with a footprint roughly half the size of a football field. Looking back at the evolution of computing, the earliest computers were themselves enormous, room-filling machines housed in climate-controlled environments, and they processed information very slowly relative to today's handheld computers. Over the past three decades, computer functionality has increased, the size of computers has decreased, and processing speed has skyrocketed. Given this trajectory, it's not unreasonable to assume that IBM and its peers will soon offer a reasonably sized computing device capable of taking all the data available on the Internet and reducing it to actionable information.

Undoubtedly, such approaches will work progressively better as Moore's Law inexorably reduces the cost of a processing cycle and of a bit of storage. At the current rate of progress, research and consulting group Nova Amber LLC is fairly confident that reasonably good context-aware agents will be generally available within the next two to five years. Such agents will be assigned only low-level analysis to begin with. But as the bugs are worked out, increasingly sophisticated tasks will be delegated to such automation. And if there are fundamental breakthroughs in software or machine intelligence, results could be dramatically better much sooner.
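The compounding behind that timeline is easy to work out. The figures in the sketch below are assumptions chosen only to illustrate the arithmetic: an 18-month price/performance doubling period and an arbitrary starting cost of 1.00 per unit of capacity.

    # Back-of-the-envelope Moore's Law arithmetic. The doubling period and
    # the starting cost per unit of capacity are illustrative assumptions.

    doubling_period_years = 1.5   # assumed price/performance doubling time
    start_cost = 1.00             # arbitrary cost of one unit of capacity today

    for years in (2, 5):
        factor = 2 ** (years / doubling_period_years)
        print(f"After {years} years, the same capacity costs about "
              f"{start_cost / factor:.2f} of today's cost "
              f"(roughly a {factor:.1f}x improvement).")

Under those assumptions, the same brute-force analysis that is marginal today becomes roughly 10 times cheaper within five years, which is why the two-to-five-year window is plausible even without any breakthrough in the software itself.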

The implications are staggering. When agents can collect, analyze and present data as a consolidated information package, complete with specified levels of uncertainty, decisions will be accelerated tremendously. Furthermore, when such intelligence can be embedded in the business process itself, the network becomes an intelligent fabric that manages not only the flow of data but also the flow of information within a company.

Currently, several institutions are working on context engines to drive such intelligent agents. Notably, MIT has been at this for a number of years and has made some impressive strides. Others, such as Ray Kurzweil of Kurzweil Technologies Inc., confidently predict the rise of such technology in the very near future and are actively exploring the implications of such approaches.

Enterprises should expect such technology to become available within five to 10 years and should be planning their network and automated infrastructures to take advantage of it. An example would be planning a network with sufficient bandwidth to transport the kinds of data loads such agents will generate. In addition, these intelligence engines will likely be based on a grid computing infrastructure so that complex analysis tasks can tap additional resources as required. Consequently, a cogent plan for deploying and managing such distributed computing infrastructures will be essential. And, of course, security for such intelligence is critical.
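The grid-style pattern described above can be sketched in a few lines. The listing below is a conceptual illustration only: Python's standard concurrent.futures module stands in for a real grid or cluster scheduler, and the analyze_source function and source names are hypothetical placeholders for the agent's actual analysis tasks.

    # Conceptual sketch: fan an analysis job out across whatever compute
    # resources are available, then consolidate the partial results.
    # ProcessPoolExecutor stands in for a real grid scheduler here;
    # analyze_source and the source names are hypothetical placeholders.

    from concurrent.futures import ProcessPoolExecutor

    def analyze_source(source: str) -> dict:
        # Placeholder for the heavy lifting an intelligent agent would do:
        # fetch the source, extract relevant facts, score their relevance.
        return {"source": source, "findings": f"summary of {source}"}

    def run_analysis(sources: list) -> list:
        # The pool grows or shrinks with the resources the grid makes available.
        with ProcessPoolExecutor() as pool:
            return list(pool.map(analyze_source, sources))

    if __name__ == "__main__":
        results = run_analysis(["internal sales db", "press releases", "patent filings"])
        for r in results:
            print(r["source"], "->", r["findings"])

The design point is that the analysis work is partitioned into independent tasks, so adding grid resources increases throughput without changing the application; it also makes clear why bandwidth planning and security for the data moving between nodes matter as much as the compute itself.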

Nova Amber believes that the rise of true machine-based intelligence is the next critical quantum leap for computer technology. It has been asserted that software hasn't kept up with hardware, so no one is buying new computers -- after all, the old ones still work acceptably. Nova Amber believes that, in fact, the problem is that hardware has yet to deliver the kinds of computing power that will enable brute-force machine intelligence. We have the software, but we need cheap computing power to make it go. Within a short period of time, we will have the computing power at the right price point. Then the Information Age will truly have begun.

Martha Young has more than 19 years of experience in the technology market and is a partner in Nova Amber LLC, a consulting firm in Golden, Colo. Young can be reached at info@novaamber.com. Michael Jude, Ph.D., is a well-known industry analyst with more than 20 years of experience in telecommunications and management automation. They are the co-authors of The Case for Virtual Business Processes: Reduce Costs, Improve Efficiencies and Focus on Your Core Business, to be published in March by Cisco Press.

Copyright © 2004 IDG Communications, Inc.
