Signs and Portents
Programmers might laugh at this prediction. "Software isn't like manufacturing." "Patch in the field." "Release early and often." During the dot-com bubble, we heard repeatedly that the new economy was immune to the rules of the old economy. The most important of those rules was this one: It has to make money. As we've seen, the new, new economy looks a lot like the old economy. It also has to make money.
In the same way, software's technological wonders will eventually become products. Although wonders are bought for their novelty, products are bought because they work.
No one knew exactly when the new economy revolution would end, but those who were paying attention could say what it would basically look like when it did. Companies that made no profit and had no chance of ever earning a profit would see their stock prices crash. Companies that were making a profit would see their stock prices return to historic norms of price versus earnings. No one could say when this would start or how long it would take to complete (even now, no one knows), but at the end of the day, past performance is an indication of future return.
The end of the computer revolution is similar. I can't say when it will happen or how quickly it will change, but I can say something about what it will look like and how we will know it is coming.
Wisdom of Your Ancestors
For years, software producers have been able to make a disposable product. Every year or so, they released a new version of the software, and consumers dutifully upgraded. In some cases, users upgraded in search of specific new features, but often users upgraded because of compatibility issues. Although software is often backward compatible, it is forward compatible much less often. For example, if everyone else in my office upgrades from Microsoft Office 95 to Microsoft Office 2000, then they'll still be able to read my documents, but I might not be able to read theirs.
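To make the asymmetry concrete, here is a minimal sketch of the version check inside a typical document reader. The format, version numbers, and function names are hypothetical (this is not how Office actually works), but the shape of the problem is real: a reader knows every layout up to its own version and nothing beyond it.

    # A sketch of backward-but-not-forward compatibility.
    # Format, versions, and names are all hypothetical.

    READER_VERSION = 3  # the newest format this release understands

    def load_document(header: dict) -> str:
        file_version = header["format_version"]
        if file_version <= READER_VERSION:
            # Backward compatible: we know how to read every older layout.
            return f"loaded v{file_version} document"
        # Not forward compatible: a newer file may contain records this
        # reader has never heard of, so the only safe answer is a refusal.
        raise ValueError(f"file is format v{file_version}; this reader "
                         f"only understands up to v{READER_VERSION}")

    print(load_document({"format_version": 2}))      # an older file: fine
    try:
        print(load_document({"format_version": 4}))  # a newer file
    except ValueError as err:
        print(err)                                   # time to pay for an upgrade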
Before you laugh at the idea of forward-compatible software, remember that 50-year-old black-and-white televisions can receive today's color signals. Forty-year-old cars built for leaded gasoline will run on today's unleaded gasoline. Thirty-year-old pulse-dial phones still work on the latest touch-tone, fiber-optic telephone networks. Consumers expect forward compatibility in many industries, but they give software a pass, paying high fees to upgrade from working software to buggy software just so they can read documents written with the new version.
Consumers are already showing signs that they are tired of this situation. Businesses, in particular, are becoming more resistant to upgrading and are demanding that bugs in the old software be fixed. There is also growing emphasis on back-porting important features to older versions so that the consumer can get just the features she wants with the stability of tried-and-true software.
The software producers might be successful in pushing through their rental software scheme with application service providers (ASPs). I could write an entire article on the few advantages and many disadvantages of rental software, but few of the disadvantages are likely to stop it from happening. The vision of software producers rolling out new features whenever they want is unlikely, however. Consumers, particularly business consumers, will demand that even their rental software be generally left alone. A company doesn't want to come in the morning before a major milestone to find that a new bug in the e-mail system was introduced during last night's unrequested software update and won't be fixed until sometime next week. A student writing her thesis feels the same about her word processor.
As fewer people are willing to accept radical changes to their software, producers will have to handle supporting multiple versions more completely than they do today. This will mean more development on old versions and the capability for customers to purchase (or rent) older versions. The release of digital television sets did not mark the end of development or sales of traditional sets. In the future, this will be true of software as well.
Simplifying
We are approaching a saturation point for software features. Office suites passed this point a few years ago. Personal finance programs passed it recently. E-mail clients and Web browsers will probably reach saturation in another year or two. A similar trend exists in other software sectors, from graphics applications to programming tools.
Users generally use only a fraction of the features provided by software. Many features are almost universally ignored and often get in the way. Consider tear-off menus, the capability to click and drag a pull-down menu off the menu bar and make it its own window. This feature exists in at least Windows, KDE, and GNOME. Few users understand it. Fewer still use it intentionally. Many accidentally tear off their menus and then don't know how to undo what they've done. Even features that are more likely to be useful, such as modifying your toolbars, are so buried among the useless features that few users know about them, let alone use them.
It's actually lucky that users seldom significantly customize their environment. If they did so, training and documentation would be much more difficult, and it would be more difficult to move from one installation to another.
Consider the user interface for an automobile. Although the user is able to make minor adjustments to the seat and mirror positions, most of the interface is unchangeable. The user cannot decide to drive from the opposite side of the car or to replace the steering wheel with a control stick. Software might move in a similar direction. For example, you might be able to change the background and the colors, to determine whether help dialog boxes pop up, and even to create an extra toolbar that has your most-needed buttons. The majority of the interface, however, would be unchangeable, helping contribute to smaller, more stable software.
I say "might" here because, in the desktop computer industry, very open-ended customization is extremely entrenched. It might not be possible to remove these features after the fact. The nondesktop computer interfaces will move this direction, though.
Consumers have shown several times already that they want specialized devices instead of putting everything in one big box. For instance, the Palm Pilot has been so successful because it does basically one thing and does it well. It's extendable, but only in specific ways.
WinCE tried to bring a full desktop environment to the handheld space. This was unsuccessful because there is no need for so much complexity just to keep track of your addresses. KDE can also run on handheld computers, but for the same reason it generally won't be run there. PalmOS solves a particular problem very well. It doesn't try to solve all problems, and that's why it's successful.
I predict similar effects in other specialized applications such as "multimedia" (which mostly means "television") and games. The desktop and the television set will not merge because they solve different problems and need different interfaces. Eventually, console game machines will subsume the PC game industry as well because the specialized consoles will be capable of running better games for a much lower cost. As more software runs on these specialized machines, the software industry will be more tied to the "it has to work" rule.
Windows users might accept that they have to reboot once a week. The same people will not accept the same of their television set or refrigerator, even if they happen to be running Windows under the hood. This is one reason why smaller, more stable systems such as Linux will probably be more successful in that market.
Unix-based systems have tended to use the "small tool" approach. They're made up of many small tools that are easy to fit together. This is part of what has made them so stable: Each tool is fairly simple. This is also what keeps them from integrating their components more effectively: Each tool knows little about the other tools. The pendulum has swung a long way toward integration over stability and simplicity. As we move toward more specialized boxes that must interact with each other, the pendulum will start to swing back.
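The small-tool style is easy to illustrate. In the sketch below (the stages are invented for illustration), each stage does one narrow job, can be tested alone, and knows nothing about its neighbors; composition is just plugging outputs into inputs, exactly as a shell pipeline does.

    # A sketch of the Unix "small tool" approach: simple, ignorant
    # stages composed into a pipeline. The stages are invented.

    def read_lines(text: str) -> list[str]:                # roughly: cat
        return text.splitlines()

    def grep(lines: list[str], needle: str) -> list[str]:  # roughly: grep
        return [line for line in lines if needle in line]

    def count(lines: list[str]) -> int:                    # roughly: wc -l
        return len(lines)

    log = "ok\nerror: disk full\nok\nerror: timeout\n"

    # The shell equivalent: cat log | grep error | wc -l
    print(count(grep(read_lines(log), "error")))  # -> 2

Each stage's simplicity is what makes the whole stable, and each stage's ignorance of the others is what limits how tightly the pieces can integrate.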
Managing Risk
As companies rely ever more heavily on their computers, stability becomes ever more critical. More importantly, the costs of instability become more measurable. Computer downtime can mean production delays, missed opportunities, and, ultimately, lost sales. If the loss and the risk can be quantified, then they can be insured against.
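Once those numbers exist, pricing the risk is ordinary actuarial arithmetic. Here is a back-of-the-envelope sketch, with every figure invented for illustration:

    # Back-of-the-envelope pricing of downtime risk.
    # Every number here is invented for illustration.

    outage_prob_per_year = 0.10   # chance of a serious outage in a given year
    hours_per_outage = 8          # typical length of such an outage
    cost_per_hour = 25_000        # lost sales, idle staff, missed deadlines
    load_factor = 1.3             # insurer's overhead and profit margin

    expected_annual_loss = outage_prob_per_year * hours_per_outage * cost_per_hour
    premium = expected_annual_loss * load_factor

    print(f"expected annual loss: ${expected_annual_loss:,.0f}")  # $20,000
    print(f"annual premium:       ${premium:,.0f}")               # $26,000

A shop running software with half the outage probability would, all else being equal, pay half the premium. That pricing lever is what the rest of this section turns on.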
Bruce Schneier of Counterpane Internet Security has proposed that the future of computer security is in the insurance industry. I agree, but it goes further than that. Crackers are only one thing that can cause your computer system to fail. Another is bugs in the software. If software is mission-critical, then its operation should be insurable. If your entire mail system crashes due to a bug in sendmail, can you recover your losses?
Insurance rates are adjusted based on your risk profile. New drivers pay higher auto insurance rates because they have more accidents on average. The insurance industry is very good at quantifying and comparing risks. Software stability (or lack thereof) is also a quantifiable risk. Imagine a Windows 98 shop. This shop would likely receive a significant discount for moving to Windows NT 4.0, Solaris 2.8, or Red Hat 6.2. On the other hand, its rates might go up if it moved to an untried piece of software, such as Red Hat 7.0.
Suddenly you would have an industry interested in determining the quantifiable risk of particular versions of software. Because that risk translates directly into premiums, consumers would more easily be able to quantify the real costs of running unreliable software. This, in turn, should have a significant impact on the software producers. The insurance companies won't accept rubber stamps like ISO 9001. They're going to want to see that the software producers are actually using proper software-development practices. Companies that continue to use the "big ball of mud" development model that is common throughout the industry would see the effective cost of their products rise due to higher premiums. Companies that sacrifice stability for time to market would have similar problems. Software producers would actually have to compete based on the quality of code.
Don't assume that having the insurance companies get involved in software will make everything wonderful. It will completely turn the software industry upside down. It will take a lot of the fun out of programming. Much less time will be spent writing new, fancy features. Much more time will be spent doing rigorous testing, fixing obscure bugs, and maintaining older versions of your software. Old versions are likely to persist because their premiums would almost certainly be lower (due to their long history). The innovation that the software industry is so fond of will slow to the pace seen in other mature industries, such as the automotive and consumer electronics industries.
And don't forget that the insurance companies are going to want to recover their losses from somewhere.
Responsibility
The biggest indicator that the revolution is over will be software producers taking responsibility for their products. Once insurance companies get involved and start paying claims, expect them to start suing software companies for defective products.
The software industry has hidden behind its nonwarranty for years. This "contract" essentially says that even if the software was intentionally written to break out of the computer and eat your dog, the company is not responsible. There is nothing, in fact, that the company is responsible for in relation to its work. This situation is nearly unique. Most products carry at least a warranty against defects in construction, but the software industry makes no such pledge. The insurance companies are unlikely to look favorably on such a situation.
Don't assume that I'm singling out commercial software producers. Open source and other types of free (speech and beer) software are often even worse. Not only does the GPL (and most other open source licenses) completely disavow any responsibility for the software, but many pieces of open source software spend significant parts of their life cycle (and sometimes their entire life) in "beta" releases, often designated with 0.x version numbers. The implication is that the user should not actually expect this software to work and that any bugs are due to the fact that it's "just a beta." About a quarter of the software shipped on the two main Mandrake Linux 7.2 CDs is pre-1.0 software. The authors of these packages don't trust their software enough to even call it "1.0," but we accept it as part of a production distribution. A similar percentage of pre-1.0 packages is installed on your system if you accept the recommended install set. These include such pivotal packages as the following (a rough way to run this count yourself is sketched after the list):
- pwdb: Password database library (manages /etc/passwd)
- pam: Central authentication system
- lilo and grub: Bootloaders
- ORBit and bonobo: Interprocess communications for GNOME
- sawfish: GNOME's window manager
- openssl: Secure access to Web sites
- harddrake: Hard drive partitioner
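For the curious, here is a rough way to reproduce that count on an RPM-based system such as Mandrake. The rpm query flags are real; the heuristic (treating any version that starts with "0." as pre-1.0) is a simplification.

    # Count installed pre-1.0 packages on an RPM-based system.
    # The "version starts with 0." heuristic is a simplification.

    import subprocess

    out = subprocess.run(
        ["rpm", "-qa", "--qf", "%{NAME} %{VERSION}\n"],
        capture_output=True, text=True, check=True,
    ).stdout

    packages = [line.split(" ", 1) for line in out.splitlines() if line]
    pre_1_0 = [name for name, version in packages if version.startswith("0.")]

    print(f"{len(pre_1_0)} of {len(packages)} installed packages are pre-1.0")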
This does not imply that Linux is a particularly unstable operating system (it isn't). It also doesn't imply that beta packages receive poor support (they generally receive similar support to nonbeta packages). The point is that the authors are unwilling to even take the symbolic step of saying that their software is ready for production use. If they're unwilling to accept even that much responsibility, why do we trust it so much?
Because it's the best we have. There is no requirement that programmers take any responsibility for the code that they distribute. A beautician must be licensed to cut your hair, but Intuit can manage your bank accounts without oversight and without any responsibility for the results.
The programmers will likely be forced to take responsibility for their work in two ways: insurance and government. In the insurance model, the insurance companies would refuse to cover a piece of software that isn't written by (or at least signed off by) a registered, sue-able programmer. In the government model, distributing software without a license would be akin to practicing medicine without a license (or possibly being a beautician without a license). The insurance model is much more likely, but keep an eye out for Congress to become interested in regulating the software industry if cracking becomes a serious problem or if copyright holders convince them that it's the only way to protect their copyrights (see Richard Stallman's "The Right to Read").
Eventually software producers will have to take the same responsibilities for their products as other manufacturers. When this happens, the revolution will be over. Depending on exactly how it plays out, it could also radically change the nature of open source and other free software. If insurance companies handle it, free software might be largely unaffected. On the other hand, if the government gets involved and requires a programmer to always take responsibility for any of her published work, then it could eliminate the volunteer programmer who is the backbone of the open source movement.