The Tried and True
Traditionally, the economy of technology was the economy of producers and consumers. Thirty years ago, when the current market was taking shape, two major players set the pace of technology for wide business use across the globe: IBM and Digital Equipment Corporation (DEC). IBM stood the test of time; DEC didn't. I won't go into detail on this topic; all we need to know at this point is that both started primarily as manufacturing companies, and only IBM managed the transition to today's economy. What these early pioneers in computing achieved was to make digital technology accessible to the international business community, universally redefining strategic advantage.
But it's important to note not just what these two corporations did, but how they accomplished similar business objectives in two completely different ways:
Since its tabulating days, IBM had always been in the business of "computing," long before digital technology was introduced. DEC was created in direct response to the way IBM was developing and deploying its idea of digital technology. By the mid-1950s, IBM was a billion-dollar company, but in terms of how it did business, it wasn't radically different from what it had been in 1924: make the machine, sell or lease it, make more machines. The directors, managers, and engineers at IBM considered themselves forward thinkers, but their business structure reflected the American business ethos of the 1920s, if not earlier.
DEC was a "visionary" company; it was selling technology, not just the box it came in. From its inception in 1957, DEC took great pride in being an "engineering company for engineers." But by the 1970s and 1980s, it was not only engineers who were buying into and relying on DEC's technology; it was banks, manufacturers, insurance companies, and publishers. Suddenly, businessmen with MBAs, whose businesses had nothing to do with engineering per se, were making strategic decisions about which computer manufacturer to invest millions of dollars in, decisions that would affect the future of their companies for decades to come. Many used the same strategies to buy technology as they used to price, market, and support their own products. This began to draw the border between the Manufacturing Age and the Information Age.
By the 1980s, merely having computer technology didn't separate a business from its competitors in the open marketplace. The real question was whether having a computer made doing business easier. The answer, many managers found, was yes and no. As computer technology took hold in business during the late 1960s and 1970s, productivity gains became noticeable. The question for managers then became where productivity could be improved further. An IBM System/360, for example, could collect, analyze, and report on data a hundred times faster than, say, a DEC PDP-11. But it might take two days to reprogram a report or database query on an IBM mainframe, and only two hours on a PDP-11. The needs of the business dictated the best solution: if the business didn't require writing or modifying programs frequently, an IBM System/360 was the way to go. If a good deal of programming was required, the user-friendly PDP-11 was the programmer's choice. This worked as long as the decisions were made primarily by the middle managers who actually used the equipment firsthand in the course of business.
But with the 1980s came the personal computer. Soon, everybody from the secretary to the CEO dealt with information technology. No longer was just having the technology enough for any organization. Since computers were used and reviewed by just about everyone in the company, each computer, large and small, had to have a direct "strategic" purpose. In other words, the following question could no longer be dodged: We all know we need computers to do business, but why?!
There are a variety of answers to this question, and none of them are as simple as they should be. Answering it meant not only purchasing hundreds or thousands of PCs, but also networking, supporting, maintaining, and, more importantly, keeping them up to date. The cost of IT in most corporations skyrocketed! Because of this increase, the authority to approve these costs was taken out of the hands of middle managers and department heads and sent straight to the boardroom.
One of the most common justifications for the increased spending on computing equipment follows the reasoning that since computers gather and store information so much more quickly, they provide that much more information upon which to base our decisions. This sounds reasonable, but the reality is that with the increase in information, the process of making a decision becomes longer and more complex. We frequently find ourselves paralyzed by the abundance of information, and so choose not to make a decision, or at best, to make the simplest choice. Unfortunately, the tides of business very rarely lend themselves to such simplicity.
Consider the following:
Manager A and Manager B are competing to convince their board of directors which technology their organization should buy. The size of the investment is considerable; the returns, therefore, should be too. Manager A argues for a technology that is less cutting-edge but reliable, strongly branded, and made by a blue-chip company with very deep pockets. His argument is that by investing in this technology, his company is in fact partnering with the blue-chip company, helping it generate the market momentum that will make the technology a de facto standard. If so, the organization will have invested in a strong, well-backed product it will be using for years to come.
Manager B, however, wants the company to invest in a newer, more cutting-edge technology created by a group of brilliant but inexperienced "techies." Her argument is that the technology itself is far superior to the one recommended by Manager A, and that with the company's support, they can enter the market in a dominant position, capture immediate market share, and reap short-term profits right away. By placing themselves in such a favorable position in the short term, they all but ensure a considerable return on their investment over the long term.
Which of these managers is right? They both are, depending of course on what the organization is trying to do. For instance, if the organization as a whole wants to invest in this market conservatively (that is, they know that if they don't invest in some fashion, they'll be left behind), then Manager A may have the most convincing argument. It's better to "piggyback" on a company that has a firm understanding of what technology can do and has already taken most of the risks in bringing that technology to market. For years (and perhaps even today) the most enduring motto among IT executives around the world was, "No one ever got fired going with IBM!" This was seen as acceptable even if the actual technology proved inferior, in performance or logistics, to more cutting-edge technology.
If the company is seeking to enter an unfamiliar market or to create a completely new one, clearly seeing the opportunity for a large economic advantage, it should move aggressively to enter the market ahead of any competition. In this case, Manager B's solution sounds best. In fact, this approach has been the impetus behind the phenomenal growth of the so-called e-businesses and other Internet companies. The current whirlwind over the Internet is a perfect example of businesses looking for new markets to define and exploit. Many companies have found that simply putting a "dot com" after their name makes their stock prices rise considerably. Even a company not interested in the risks of the e-business market is all but forced to enter this realm or, at the very least, to devise some type of strategic business plan that takes the dot-com market into consideration.
Most companies find themselves somewhere between the solutions of Manager A and Manager B. They want to develop new business or find a way to strengthen their current business, or both, which returns us to the strategic advantage discussion. But the idea of gaining a strategic advantage introduces additional questions. For example, even if we do make the right business decisions, how do we know we've carried them out? In other words, now that we've made our decision and accepted either Manager A's plan or Manager B's, we have to implement the plan. There is an almost infinite number of reasons why a business plan based on technology succeeds or fails. But more often than not, when one fails it's because the organization that created the plan has trouble carrying it off, not because unforeseen market forces or unexpected competition undermine it.
Of course, when we discuss business strategies, we're talking at a very high level. Sooner or later, we have to come down to implementing these strategies, either within or outside the organization. If the strategy is simply one of investment, businesses write the check and wait for the results. If the strategy entails implementation, it's a different ballgame. Let's assume that the strategy requires some kind of software application at some point in the process. That software has to be developed and run on some form of hardware, inside or outside the organization, for the organization's benefit. As soon as we start factoring in these issues, we start finding all the "slips between the cup and the lip."
Technology should suit one's purposes. Ironically, at least in the United States, the same pliability that allows different technologies to solve different problems is usually what causes so many headaches. This is one of the reasons why 10 different people will install the same software in 10 different ways, even when working from the same instructions. However, I would argue that problems arising from technology usually don't occur because of the technology itself. They occur because of people. As illustrated by non-high-tech examples (the chimney, the stirrup, etc.), the fundamental problem is not getting the technology to adapt to people, but getting people to adapt to unknown or new technology. After all, even if William the Conqueror had decided that the stirrup was the way to go, he wouldn't have made it very far if his knights hadn't taken the time to become proficient at using this new technology.
The most important thing about any strategy, technological or otherwise, is implementation. Although the decisions concerning where and how we use technology have moved up to boardrooms and executive suites, it's still the middle managers, the department heads, and the lowly programmers who must make it all work.