Lingering Effects of the Millennium Bug
By 1999 there were probably few people on the planet who had not at least heard something about the Y2K computer problem. While many are by now at least somewhat aware of what the millennium bug involved, I will describe it here for readers who may not fully appreciate the source and significance of the problem.
As mentioned earlier, the scarcity and high cost of data storage, both in main memory and on disk devices, led application software developers in the mid-1960s to conserve storage space wherever they could. One of the most common methods was to shorten the length of a date field in a program. This was accomplished by designating the year with only its last two digits. The years 1957 and 1983, for example, would be coded as 57 and 83, respectively, reducing the 4-byte year field to 2 bytes.
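To make the convention concrete, here is a small sketch, written in Python purely for illustration (the programs of that era would have been written in languages such as COBOL or assembler). The function names are hypothetical, but the logic mirrors the practice described above: only the last two digits of the year are stored, and the century is assumed to be the 1900s when the value is read back.

    # Store only the last two digits of the year, e.g. 1957 -> "57".
    def to_two_digit_year(year: int) -> str:
        return f"{year % 100:02d}"

    # Re-expand the stored value, assuming the 1900s as legacy programs did.
    def expand_two_digit_year(yy: str) -> int:
        return 1900 + int(yy)

    print(to_two_digit_year(1957))                         # "57"
    print(expand_two_digit_year("57"))                     # 1957 -- correct
    print(expand_two_digit_year(to_two_digit_year(2000)))  # 1900 -- the millennium bug

The assumption baked into the expansion step is harmless for any year in the twentieth century and catastrophic for the year 2000, which is the essence of the problem described in the rest of this section.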
Taken by itself, a savings of 2 bytes of storage may seem relatively insignificant. But date fields were prevalent throughout the programs of large applications, sometimes occurring hundreds or even thousands of times. An employee personnel file, for example, might contain separate fields for an employee's date of birth, company start date, company anniversary date, company seniority date, and company retirement date, all embedded within each employee record. If the company employed 10,000 workers, it would be storing 50,000 date fields in its personnel file. The 2-byte savings from the shortened year field would total 100,000 bytes, or roughly 100 kilobytes of storage, for a single employee file.
Consider an automotive or aerospace company that could easily have millions of parts associated with a particular model of car or airplane. Each part record might have fields for the date of manufacture, the installation date, the estimated end-of-life date, and the actual replacement date. The 2-byte reduction of the year field in these inventory files could save millions of bytes of storage.
But storage savings was not the only reason that a 2-byte year field was used in a date record. Many developers needed to sort dates within their programs. A common scheme used to facilitate sorting was to compose the date in a yy-mm-dd format. The 2-digit year gave a consistent, easy-to-read, and easy-to-document form for date fields, not to mention slightly faster sort times because fewer bytes had to be compared.
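The following sketch, again in Python for illustration only, shows why this scheme worked so well within a single century and why it breaks down at the century boundary: dates stored as yymmdd strings sort correctly as long as every year falls in the 1900s, but a date in January 2000 sorts ahead of every date in the 1990s.

    # Dates in the common yymmdd layout sort correctly by plain string
    # comparison -- provided every year belongs to the same century.
    dates_1900s = ["570412", "830107", "991231"]
    print(sorted(dates_1900s))    # ['570412', '830107', '991231'] -- correct order

    # Add January 1, 2000 ("000101") and the ordering collapses:
    dates_mixed = ["991231", "000101"]
    print(sorted(dates_mixed))    # ['000101', '991231'] -- 2000 sorts before 1999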
Perhaps the most likely reason that programmers were not worried about truncating their year fields to 2 digits was that few, if any, thought their programs would still be running in production 20, 30, or even 40 years later. After all, the IT industry was still in its infancy in the 1960s when many of these programs were first developed. IT was changing rapidly on a year-to-year and even month-to-month basis. Newer and improved methods of software development, such as structured programming, relational databases, fourth-generation languages, and object-oriented programming, would surely make these early versions of production programs obsolete and replace them.
Of course, hindsight is most often 20/20. What we know now but did not fully realize then was that large corporate applications became mission-critical applications to many businesses during the 1970s and 1980s. These legacy systems grew so much in size, complexity, and importance over the years that it became difficult to justify the large cost of their replacement.
The result was that countless mission-critical programs developed in the 1960s and 1970s with 2-byte year fields were still running in production by the late 1990s. Here lay the core of the problem: in most of these legacy programs, a year of 00 would be interpreted as 1900 rather than 2000, causing programs to fail, lock up, or otherwise produce unpredictable results.
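A brief sketch of the failure mode, once more in Python and with made-up field values, shows how a simple business calculation goes wrong. A program that expands every 2-digit year by assuming the 1900s computes a nonsensical negative result the moment the current year reaches 2000.

    # Legacy expansion rule: every 2-digit year belongs to the 1900s.
    def expand(yy: int) -> int:
        return 1900 + yy

    hire_year = expand(85)              # 1985
    current_year = expand(0)            # the year 2000, read back as 1900
    years_of_service = current_year - hire_year
    print(years_of_service)             # -85 instead of the correct 15

Depending on how the surrounding program handled such a value, the symptom might be a crash, an endless loop, or simply a quietly wrong report, which is why the observed failures were so unpredictable.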
A massive remediation and replacement effort began in the late 1990s to address the Y2K computer problem. Some industries, such as banking and airlines, spent up to four years correcting their software to ensure it would run properly on January 1, 2000. These programming efforts were, by and large, successful in heading off any major adverse impacts of the Y2K problem.
There were also a few unexpected benefits from the efforts that addressed the millennium bug. One was that many companies were forced to conduct a long-overdue inventory of their production application profiles. Numerous stories surfaced about IT departments discovering from these inventories that they were supporting programs that were no longer needed or not even being used. Not only did application profiles become more accurate and up to date as a result of Y2K preparations, but the methods used to compile these inventories also improved.
A second major benefit of addressing the Y2K problem involved the refinements it brought about in two important functions of systems management. The first was change management. Remedial programming necessitated endless hours of regression testing to ensure that the output of the modified, Y2K-compliant programs matched the output of the original programs. Moving these upgraded systems into production often resulted in temporary back-outs followed by reupgrading. Effective change management procedures helped to ensure that these upgrades were done smoothly and permanently when the modifications were done properly; they also ensured that the back-outs were done effectively and immediately when necessary.
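As a rough sketch of the kind of regression check involved (the report contents and comparison logic here are illustrative only, not taken from any particular shop), the output of the remediated program is compared line by line against the output of the original program run on the same test input; any mismatch blocks the change from going forward.

    # Output of the original and the remediated program on identical test
    # input, represented here as lists of report lines (illustrative data).
    original_report   = ["ACME CORP PAYROLL  01/15/2000", "TOTAL HOURS   160"]
    remediated_report = ["ACME CORP PAYROLL  01/15/2000", "TOTAL HOURS   160"]

    mismatches = [
        (line_no, old, new)
        for line_no, (old, new) in enumerate(
            zip(original_report, remediated_report), start=1)
        if old != new
    ]
    print(f"{len(mismatches)} mismatched lines")   # 0 -- safe to promote the change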
The function of production acceptance was also refined as a result of Y2K. Many old mainframe legacy systems were due to be replaced because they were not Y2K compliant. Most of these replacements were designed to run on client-server platforms. Production acceptance procedures were modified in many IT shops to better accommodate client-server production systems. This was not the first time that mission-critical applications were implemented on client-server platforms, but the sheer increase in numbers due to Y2K compliance forced many IT shops to make their production acceptance processes more streamlined, effective, and enforceable.