The Rise of Intelligent Agents: Automated Conversion of Data to Information
Business process virtualization (BPV) is fundamentally about applying automation and intelligent networked technologies to improve business efficiency, reduce costs and enhance the dynamics of employee and customer interaction. It is also, however, about applying information to the decision-making processes of the enterprise.
As we have discussed in previous publications, information is not the same as data. Paralysis through analysis, in which the quality of decision making is assumed to be proportional to the amount of data fed into the decision-making process, is the logical consequence of failing to distinguish between the two. BPV, with its dependence on decision-modeling techniques, provides tools for reducing data to information. These are largely management approaches to analysis, but increasingly there are also automated tools for building and simulating decisions, as the sketch below suggests.
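To make the distinction concrete, here is a minimal sketch, in Python, of how a simple decision model might reduce a table of raw data points to a single piece of decision-ready information. The field names, threshold and data are hypothetical illustrations, not part of any particular BPV product.

```python
# A minimal sketch of a decision model that reduces raw data to information.
# The field names, threshold, and figures are hypothetical illustrations.

RAW_OBSERVATIONS = [
    {"region": "west", "late_shipments": 12, "total_shipments": 400},
    {"region": "east", "late_shipments": 48, "total_shipments": 520},
    {"region": "south", "late_shipments": 9, "total_shipments": 310},
]

LATE_RATE_THRESHOLD = 0.05  # assumed service-level target


def to_information(observations, threshold):
    """Collapse row-level data into a short, decision-ready summary."""
    flagged = []
    for row in observations:
        late_rate = row["late_shipments"] / row["total_shipments"]
        if late_rate > threshold:
            flagged.append((row["region"], late_rate))
    if not flagged:
        return "All regions are within the service-level target."
    return "Regions needing attention: " + ", ".join(
        f"{region} ({rate:.1%} late)" for region, rate in flagged
    )


if __name__ == "__main__":
    print(to_information(RAW_OBSERVATIONS, LATE_RATE_THRESHOLD))
```

The point of the sketch is that the decision maker receives one actionable sentence rather than the underlying rows of data.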
Of course, it would be ideal if there were technologies that could automatically generate decision models in response to rational questions. One would simply ask a computer to independently collect and analyze data, draw conclusions and present the results. One could, for example, ask a terminal to search all internal company sources and externally reachable sources for information on a specified competitor, its products and its potential threat to the business in a particular market. Within minutes, a neat two-page summary would be delivered, with appropriate footnoting. Better yet, one could ask the computer how it arrived at its synopsis and receive cogent replies.
Such technology has been of abiding interest for some time. The HAL 9000 computer of 2001: A Space Odyssey is perhaps the most famous fictional example; Star Trek's library computer is another that has received a great deal of attention over the years. In fact, these fictional depictions are so well known that it frequently comes as a surprise to people unfamiliar with the state of the real technology that such machine-based intelligence is not yet available.
It turns out that a so-called intelligent agent is difficult to achieve because computers, unlike humans, have no inherent ability to infer context from the data they are presented with. Where humans can usually be counted on to interpret conversational dialogue spoken in a language familiar to them, this task is very difficult for a computer. Humans, it appears, bring a lifetime of experience to a pronouncement and arrive at a good approximation of its intent. Computers, at least so far, have had only rudimentary capabilities along these lines; they generally achieve some semblance of context assessment through a series of rather sophisticated if-then rules. This brute-force method is time consuming and requires large quantities of storage.
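The brute-force, rule-based approach described above can be sketched in a few lines. The keyword rules and example queries below are hypothetical; a production rule base of this era would run to many thousands of such entries, which is exactly why the approach consumes so much time and storage.

```python
# A minimal sketch of brute-force context assessment via if-then rules.
# The keyword rules below are hypothetical; real systems rely on thousands
# of hand-built rules plus large lookup tables.

RULES = [
    (("competitor", "threat"), "competitive_intelligence_request"),
    (("price", "quote"), "pricing_request"),
    (("outage", "down"), "service_incident_report"),
]


def infer_context(utterance):
    """Guess the intent of an utterance by matching keyword rules in order."""
    text = utterance.lower()
    for keywords, intent in RULES:
        if all(word in text for word in keywords):
            return intent
    return "unknown_context"  # no rule fired; a human would still infer something


if __name__ == "__main__":
    print(infer_context("Summarize the threat our main competitor poses in Europe"))
    print(infer_context("The network link is down because of last night's outage"))
    print(infer_context("Can you help me plan next quarter?"))  # falls through the rules
```

The last example illustrates the brittleness: any phrasing the rule writer did not anticipate simply fails, whereas a human listener would still infer a reasonable intent.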
In his IEEE Spectrum Online article "A Fountain of Knowledge" (http://www.spectrum.ieee.org), Stephen Cass discusses IBM's work on developing just such a computing agent. The project, named WebFountain, currently consists of racks of processors, routers and more than 160 terabytes of disk storage, consuming a footprint roughly half the size of a football field. Looking back at the evolution of computing, the earliest computers filled entire rooms, were housed in climate-controlled environments and processed information very slowly relative to today's handheld devices. Over the past three decades, computer functionality has increased, size has shrunk and processing speed has skyrocketed. Given this trajectory, it is not unreasonable to assume that IBM and its peers will soon have a reasonably sized computing device capable of taking the data available on the Internet and reducing it to actionable information.
Undoubtedly such approaches will work progressively more effectively as Moore's law inexorably reduces the cost of a processing cycle and the cost of a bit of storage. At the current rate of progress, Nova Amber feels fairly confident in claiming that reasonably good, externally provided, context-independent agents will be generally available within the next two to five years. Such agents will be assigned only low-level analysis to begin with, but as the bugs are worked out, increasingly sophisticated tasks will be delegated to such automation. And if there are fundamental breakthroughs in software or machine intelligence, the results could be dramatically better, much faster.
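As a rough illustration of that cost curve, the back-of-the-envelope sketch below assumes that the price of a fixed amount of computing halves roughly every 18 months; both the halving period and the starting cost are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope sketch of a Moore's-law-style cost decline.
# Assumes the cost of a fixed amount of computing halves every 18 months;
# the starting cost of 1.00 is an arbitrary illustrative baseline.

START_COST = 1.00            # relative cost of a fixed workload today
HALVING_PERIOD_YEARS = 1.5   # assumed price/performance doubling period


def relative_cost(years_from_now):
    """Relative cost of the same workload after the given number of years."""
    return START_COST * 0.5 ** (years_from_now / HALVING_PERIOD_YEARS)


if __name__ == "__main__":
    for years in (2, 5, 10):
        print(f"In {years} years: {relative_cost(years):.2f}x today's cost")
```

Under these assumptions, the same brute-force analysis that is marginal today costs roughly a tenth as much in five years, which is the economic argument behind the two-to-five-year prediction.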
The implications are staggering. Currently, large portions of overhead are devoted to collecting, analyzing and acting on data. When agents can collect, analyze and present data as a consolidated information package, even within specified levels of uncertainty, decisions will be accelerated tremendously. Further, when such intelligence can be embedded in the business process itself, the network can become an intelligent fabric that manages not only the flow of data but also the flow of information within a company.
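A sketch of how such an agent pipeline might be structured follows: collect from several sources, analyze, and present a consolidated package with a stated confidence level. The source names, sample findings and confidence heuristic are all hypothetical.

```python
# A minimal sketch of an agent pipeline: collect, analyze, and present data as a
# consolidated information package with a stated level of uncertainty.
# Source names, sample findings, and the confidence heuristic are hypothetical.

from dataclasses import dataclass


@dataclass
class Finding:
    source: str
    claim: str
    reliability: float  # 0.0 - 1.0, assumed reliability of the source


def collect(topic):
    """Stand-in for querying internal and external sources about a topic."""
    return [
        Finding("internal_crm", f"{topic} won two of our accounts last quarter", 0.9),
        Finding("news_feed", f"{topic} announced a lower-priced product line", 0.7),
        Finding("web_forum", f"{topic} is rumored to be exiting the market", 0.3),
    ]


def analyze(findings, min_reliability=0.5):
    """Keep only findings from sources above a reliability floor."""
    return [f for f in findings if f.reliability >= min_reliability]


def present(topic, findings):
    """Consolidate findings into a short summary with an overall confidence."""
    if not findings:
        return f"No reliable information found on {topic}."
    confidence = sum(f.reliability for f in findings) / len(findings)
    lines = [f"Summary for {topic} (confidence {confidence:.0%}):"]
    lines += [f"  - {f.claim} [{f.source}]" for f in findings]
    return "\n".join(lines)


if __name__ == "__main__":
    topic = "Competitor X"
    print(present(topic, analyze(collect(topic))))
```

The essential point is the final step: the decision maker receives a short, sourced summary with an explicit confidence figure rather than the raw data itself.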
Currently, several institutions are working on context engines to drive such intelligent agents. Notably, MIT has been at this for a number of years and has made some impressive strides. Others, such as Ray Kurzweil of Kurzweil Technologies, confidently predict the rise of such technology in the very near future and are actively exploring the implications of such approaches.
Enterprises should expect such technology to become available within five to ten years and should be planning their network and automated infrastructure to take advantage of it. An example would be planning a network with sufficient bandwidth to transport the kind of data loads such agents will generate. Additionally, these intelligence engines will likely be based on a grid-computing infrastructure so that complex analysis tasks can tap additional resources as required. Consequently, a cogent plan for deploying and managing such distributed computing infrastructures will be essential. And, of course, security for such intelligence is critical.
Nova Amber believes that the rise of true machine-based intelligence is the next critical quantum leap for computer technology. It has been asserted that software has not kept up with hardware and that, consequently, no one is buying new computers; after all, the old ones still work acceptably. Nova Amber believes that, in fact, the problem is the reverse: hardware has yet to deliver the kind of computing power that will enable brute-force machine intelligence. We have the software; what we need is cheap computing power to make it go. Within a short period of time, we will have that computing power at the right price point. Then the information age will truly have begun.
Martha Young has more than nineteen years of experience in the technology market and is a partner in Nova Amber, LLC, a consulting firm. Martha is the co-author of The Case for Virtual Business Processes, published by Cisco Press. She can be reached at info@novaamber.com.
Michael Jude, Ph.D. is a well-known industry analyst with more than twenty years of experience in telecommunications and management automation. Michael is the co-author of The Case for Virtual Business Processes, published by Cisco Press.