Problems with Programs
The gap between analytics and operational or transactional systems explored in the previous section explains why organizations have such a hard time deriving insight from data and applying that insight to their operational systems. A second major flaw in current approaches lies in the programs themselves. Fundamentally, the programs used to run most organizations today are built without any embedded intelligence. The "peopleware" of an organization, not the software, is what makes decisions.
The inherent inefficiency of these programs has long been obvious to those working with information systems, and many attempts have been made to address it. All these attempts, from handwritten code to failed experiments with artificial intelligence, from enterprise applications to business process management and service-oriented architecture, have made little difference. Most organizations have software programs that just aren't smart enough.
The Weight of Legacy Code
As more custom code is written, organizations find that it lives longer than they intended. Code that's 20 or even 30 years old is still in use in many organizations—still running core business processes and processing vital transactions. Companies have discovered the hard way that they can't code their way into the future. This deadweight of legacy applications has created two problems.
Part of the problem comes from a mind-set that systems, like other enterprise assets, should be built to last. This focus results in detailed but largely static requirements and huge investments in system architecture and design. However, it also buries critical code in complex systems. To make applications robust and complete, a huge amount of business expertise must be embedded in the system, but there are problems with this approach:
- Embedding business expertise in the system is hard because those who understand the business can't code, and those who understand the code don't run the business. Business users can't explain to programmers what they need, and the result is systems that don't quite work as intended.
- Generally, custom code isn't well documented, or the documentation is allowed to get out of date rapidly. The promises of "self-documenting" code notwithstanding, new generations of programmers struggle to amend code to face new challenges.
- Showing a regulator or auditor how custom code works is nearly impossible, and demonstrating compliance with policies or regulations is extremely difficult, costly, and time-consuming.
- The world doesn't stop changing, and organizations can't afford to stand still, so new challenges and requirements emerge constantly. Yet changing and managing custom code to keep up with them is difficult.
All these factors contribute to another aspect of the problem: the maintenance backlog. The maintenance backlog is the list of projects not progressing because of a lack of time and resources or because other projects have higher priorities. Most projects don't even make it to the backlog unless they have positive potential, that is, unless the project's business value exceeds its cost. An organization that could magically complete all projects in its backlog would therefore add tremendous value to the business. For most organizations, the backlog represents a pool of untapped value: time and resources that could be put to work for the business but aren't.
Organizations have a maintenance backlog for many reasons, but one of the most persistent is that a huge percentage of IT resources is spent on systems maintenance—75 percent or more, as noted at the beginning of the chapter. So much old code is used to run businesses and must be constantly updated (to reflect new regulations, competitors, and products) that this work dominates the IT department's responsibilities. The systems were originally built to specification but no longer do what the business needs. Perhaps the specification was wrong, or perhaps the business has changed. Maintenance takes so long and uses so many resources that little or nothing else can be done.
Even if maintenance work isn't consuming a large percentage of your IT resources, traditional approaches to embedding logic in systems create rigid, unwieldy applications. This lack of agility causes problems if you need to respond quickly and cost-effectively to a competitive issue, new regulations, a new channel, or another major change. Coding business logic into legacy systems perpetuates the separation between those who know the business and those who run the systems, and it makes it hard to update systems as business needs change. Expert systems and 4GLs were two programming approaches intended to address these issues.
Artificial Intelligence and Expert Systems
In the 1980s, the IT industry and organizations made a major investment in various forms of artificial intelligence (AI), which was designed to bring the power of human intelligence to computers. Expert systems vendors promised that their software would perform tasks just like the company's most experienced employees. These vendors used intelligence and best practices painstakingly collected from industry experts to power their systems, but most expert systems didn't succeed in practical application.
Expert systems were typically built as "closed systems," intended to solve a problem on their own. They didn't support or integrate with the programming models prevalent in OLTP systems, and they required specialized hardware and software. At the time, most corporate computer systems were written in COBOL and ran on IBM mainframes, but many expert systems packages required high-end workstations running artificial intelligence languages, such as Lisp or PROLOG. Organizations couldn't easily integrate these systems into their production environments and had no personnel trained in maintenance or programming techniques for them, so they had to rely on special training and support from software vendors. Generally, expert systems came with predefined rules for accomplishing specific tasks. Specialist programmers at vendors, working from interpretations of interviews with industry experts, crafted these rules as compromises between the different methods their sources used.
Organizations purchasing expert systems software usually needed to go through laborious tuning sessions to understand how the rules functioned and to modify them for their business preferences. Organizations found it impossible to use expert systems to automate other tasks because they couldn't modify their underlying processing flow and structure. In the end, the organizations that experimented with expert systems became wary of computer software promising "intelligent processing."
In addition, AI software consumed huge amounts of computing resources at a time when these resources were still at a premium. Other factors prevailed as well, especially rampant paranoia that Japan's burgeoning economy would overwhelm the United States and that Japan's government-funded "fifth-generation computing" initiative, largely about developing AI, would pose a bigger threat to the U.S. economy than all the Toyotas it could produce. This mentality created a rush among small AI entrepreneurs to market immature and nonperforming products. Ultimately, the Japanese fifth-generation initiative failed to produce much more than factory (and toy) robots, and many AI vendors returned to the university labs and defense contracts from whence they came. To be fair, some firms survived and, having learned a painful lesson, transformed their products into more commercially useful offerings, such as business rules management systems, data-mining tools, logistics optimization software, and embedded intelligence, such as fraud detection.
Ultimately, even though the outcome was less than desirable, everyone learned from the experience. Lessons included the value of keeping knowledge (rules) in a repository where it could be managed and the power of a declarative approach, compared to procedural programming, to simplify some problems. Expectations for AI were rolled back to more reasonable levels, the buying community became more careful about the next big trend, and AI entrepreneurs learned that they have to embed their inventions in useful applications as well as cooperative architectures. In the meantime, the power and relative cost of processing have become dramatically more favorable, and open standards and ubiquitous communication have provided the basis for AI-like technologies to have a second chance.
4GLs and Other User-Friendly Tools
If business know-how is hard to embed in information systems, even with an expert system, can you at least make maintaining the code easier? Fourth-generation languages (4GLs) were developed to bring a higher level of abstraction to programming problems. In theory, abstraction makes it easier to see what's happening in code, engages businesspeople in the process more effectively, and makes IT development and modification of sophisticated processes easier and quicker. Many 4GLs came and went, but the problems of application maintenance changed little.
Although 4GLs do offer some productivity gains and many are easy for less technical staff to use, they fail to address the core "build to last" problems. Code representing core business logic is still procedural and embedded in code that's "plumbing" or otherwise highly technical. Few business users become fluent enough to write code themselves, and those who do often create programs and scripts that aren't managed or controlled well enough for a company to rely on.
Stretching the definition, you could consider a spreadsheet a 4GL (at least its scripting or macro capabilities). The problem of maintaining spreadsheet applications is well documented and pervasive; it has acquired names such as shadow IT, spreadmarts, and spreadsheet hell. Spreadsheets are helpful tools for composing a piece of analysis, but as shared applications they are a disaster: they lack version control, a repository, and collaboration features. Their biggest weakness, as with almost any piece of technology, is that they are consistently applied to problems for which they were never intended.
Business Rules
The use of business rules as a way to specify how an organization behaves began to gain ground in the mid-1990s and was popularized by Ron Ross [4] and Barbara von Halle [5], among others. Many early adopters regarded business rules primarily as a tool for describing and understanding how an organization wants to act; in one sense, business rules were a way to design an organization. As technology support for this approach grew, business rules began to separate into a business-design approach and a higher-level, more declarative way to develop code. Declarative approaches allow each piece of logic to be managed separately rather than as part of the procedural sequence typical of conventional code.
Another major impetus for the business rules approach was to bring the idea of business rule management to problems that didn't involve extremely complex decisions (such as diagnosing cancer) but instead to the everyday operational transactions that occur in high volume and involve decisions of low to medium complexity. Instead of using an expert system to handle very complex problems, organizations could use business rules to automate the 80 percent of cases that are less complex and so improve throughput. Additionally, this approach enabled nontechnical businesspeople to state business rules directly rather than provide fuzzy requirements that IT translates into the system's real logic, sometimes without enough business input and often with a lot of confusion.
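To make the contrast concrete, here is a minimal sketch of the idea in Python. It is not any particular business rules management system, and the rule names, fields, and thresholds are invented for illustration. The point is simply that each rule is a separately managed, declaratively stated item: the engine evaluates the rules as data, the routine cases are decided automatically, and anything the rules don't cover is referred to a person.

```python
# Illustrative sketch only: rules as separately managed, declarative entries
# evaluated by a tiny "engine." All names, fields, and thresholds are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                        # each rule is its own named, manageable asset
    applies: Callable[[dict], bool]  # condition over the transaction
    decision: str                    # outcome if the condition holds

# Rules are data, not a hard-coded procedural sequence, so each one can be
# added, changed, or retired independently of the others.
RULES = [
    Rule("auto_approve_small_claims",
         lambda c: c["amount"] < 1_000 and not c["prior_fraud_flag"],
         "approve"),
    Rule("auto_decline_lapsed_policy",
         lambda c: not c["policy_active"],
         "decline"),
]

def decide(claim: dict) -> str:
    """Return an automated decision, or refer the case to a person."""
    for rule in RULES:
        if rule.applies(claim):
            return f"{rule.decision} (rule: {rule.name})"
    # The minority of complex or unusual cases still goes to a human reviewer.
    return "refer_to_adjuster"

if __name__ == "__main__":
    print(decide({"amount": 250, "prior_fraud_flag": False, "policy_active": True}))
    print(decide({"amount": 25_000, "prior_fraud_flag": False, "policy_active": True}))
```

In a real business rules management system the rules would live in a repository with versioning and business-friendly editing tools; the sketch only shows why treating rules as individually managed statements is easier to change than burying the same logic in procedural code.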
Despite the early and repeated proof of the effectiveness of business rules, their use has been somewhat limited. Most organizations using this approach at a strategic level do so only to describe and understand their business. Organizations using it to make specific systems smarter often do so only in a localized way. The rules are extracted from limited sources and largely ignore the insight that can be gained from an organization's data. Without an overall approach, few organizations have become proactive at finding decisions and automating them with business rules, although this approach is clearly possible. Business rules, as you'll see, are a critical component of a solution to the problem of making systems smart enough, but their potential has been untapped so far.
Buying Solutions
In the past, IT departments had no alternative but to build their own software. As the industry matured, however, more packaged applications became available that offered quicker time to market and less need for specialist programmers. In theory, you could buy a package for inventory control, for example, and install it and be up and running quickly.
In fact, many packages were overly rigid, based on a single interpretation of how a certain business process might run. Configuring and installing these packages, especially as they grew into today's enterprise applications, was time-consuming and costly. In addition, enterprise applications still assumed that people were the motivating force behind systems. They had a data model, captured data through various generations of user interface technology, and stored it in an operational database. They provided reporting or integrated with business intelligence/data warehouse products so that data could be transferred to an analytic environment and used to help people make decisions. Attempts to customize or extend these applications resulted in custom code, with all the problems described previously. IT departments still spent a lot of money and time on maintenance, organizations still had backlogs, and systems still didn't do what businesses needed them to do.
Processes and Services
At the end of the twentieth century, a new class of software came to be seen as a way to solve many of IT's problems: workflow or business process management (BPM) software. These products generated high initial ROI because they integrated many disparate systems, linked people in different departments into a coherent process, and made management and reporting of an overall business process possible, often for the first time.
BPM systems, however, still assumed that intelligence and decision making in processes come from people or are embedded in systems. The use of worklists, alerts, and the other paraphernalia of integrating people into processes is widespread. Most BPM systems allow only limited replacement of human decision making with automated decision making, and those that do tend to focus on routing and other simple decisions rather than on decisions about operational transactions.
In parallel with the growth of business process management, service-oriented architecture (SOA) started making inroads in IT departments. SOA promised a new level of agility and flexibility and reduced maintenance backlogs. To some extent, it's delivering on these promises. However, an SOA approach doesn't change how business expertise is turned into computer code, nor does it address the issues in delivering analytic data discussed previously.
Although BPM systems and SOA don't offer a way to make systems (or even processes) smart enough for today's business environment, they do offer a framework for using the approaches and technologies discussed in this book. Integrating business rules engines and analytics with the process automation and orchestration capabilities they provide can bring smart enough systems within reach.
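As a rough illustration of how that integration might look, the sketch below separates a process step from the decision it needs: the step orchestrates the flow, while a distinct "decision service" combines rules with an analytic score and can be managed and changed without rewriting the process. The service, field names, score, and thresholds are all invented for the example, not drawn from any specific BPM or rules product.

```python
# Hedged sketch of the integration pattern: a process step that delegates its
# decision to a separate decision service. Names and thresholds are hypothetical.

def analytic_risk_score(application: dict) -> float:
    """Stand-in for a predictive model; here, a trivial placeholder calculation."""
    return 0.9 if application["missed_payments"] > 2 else 0.1

def credit_decision_service(application: dict) -> str:
    """Decision logic lives here, outside the process flow, so it can change independently."""
    if not application["id_verified"]:
        return "refer"
    if analytic_risk_score(application) > 0.5:
        return "decline"
    return "approve"

def handle_application(application: dict) -> str:
    """A simplified process step: orchestrate the work, but delegate the decision itself."""
    outcome = credit_decision_service(application)
    if outcome == "refer":
        return "queued for manual review"   # the worklist path BPM systems already handle well
    return f"application {outcome}d automatically"

if __name__ == "__main__":
    print(handle_application({"id_verified": True, "missed_payments": 0}))
    print(handle_application({"id_verified": True, "missed_payments": 5}))
```

The design choice being illustrated is the separation of concerns: the process layer handles sequencing and human worklists, while the decision itself is packaged as a callable service that rules and analytics can improve over time.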
One factor common to the problems with all these approaches is people. Relying on people to make decisions has a consequence: You can move only at the speed of people.