Why Has The Year 2000 Problem Happened?

Michael Gerner
Unibol Ltd.
Version 4 (September 1996)

When any Board of Directors hears about the year 2000 problem, the first question on their lips is likely to be, "Who's to blame for this mess?"

There is no single easy answer to that question. In fact, it appears that several business and technological problems have combined in order to form what some people are calling, "The Millennium Time Bomb".

1. Lack Of Widely Accepted Date Standards

No single representation of dates has been internationally accepted and implemented. For example, to an American, 1/4/96 means January 4th, 1996. To an Englishman, 1/4/96 means the first of April, 1996 - more colloquially known as "April Fools' Day".

At least the year is common between these two. But even this has not been true around the world - China, for example, didn't standardize on the Gregorian calendar until 1949.

Attempts have been made since the mid-1970s to set such standards, and even as recently as the late 1980s the ISO 8601 standard advocated the yyyymmdd format for date storage. But the use of these standards was never internationally mandated and they never achieved critical mass in commercial acceptance. Many IT shops have never even heard of them.
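The ambiguity, and the way the ISO 8601 format resolves it, can be illustrated with a short modern sketch (Python is used here purely for illustration; the systems in question were of course written in older languages):

```python
from datetime import datetime

raw = "1/4/96"

# The same string yields two different dates depending on local convention.
us = datetime.strptime(raw, "%m/%d/%y")   # American: month first
uk = datetime.strptime(raw, "%d/%m/%y")   # British: day first

print(us.date())  # 1996-01-04
print(uk.date())  # 1996-04-01

# The ISO 8601 yyyymmdd form is unambiguous, carries the full
# four-digit year, and sorts correctly even as plain text.
print(us.strftime("%Y%m%d"))  # 19960104
```

Note that even this sketch relies on the language runtime guessing which century "96" belongs to - exactly the kind of silent assumption the article is describing.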

With no widely accepted standard to work to, management and IT staff were left to work to whatever suited them. Nine times out of ten, that meant whatever the users would find most convenient, i.e. whatever the local style was.

With no consistent date standard being followed, it is no wonder that around 90% of applications will fail to handle a date event that has never occurred before in Western electronic computing history i.e. the turn of a century.

2. Computing Resource Constraints

Main memory, disk space and even punched card space was at a premium in the early days of computing. Whatever could be squeezed to save space and money was squeezed to save space and money.

In the early days of computing, it made sense NOT to include any extra digits for years. In fact, for this very reason, some applications were written with only one digit for the year. These had to be hurriedly re-written to cope with their first decade turnover, as the US Air Force found out to their cost in 1979. Back then, computing was not as pervasive as it is now and the change could reasonably be performed by a few programmers burning the midnight oil. It's a much bigger task twenty years on.
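The failure mode of two-digit years is easy to demonstrate. The sketch below (a hypothetical calculation, in modern Python for clarity) mirrors the kind of arithmetic that countless storage-starved systems performed on yy values:

```python
# Two-digit years save storage, but simple arithmetic on them
# breaks the moment a century boundary is crossed.
def years_of_service(hired_yy: int, current_yy: int) -> int:
    """Naive calculation, as many early systems stored it: yy only."""
    return current_yy - hired_yy

# Works fine within one century...
print(years_of_service(65, 99))   # 34

# ...but goes badly wrong when "00" means 2000, not 1900:
print(years_of_service(65, 0))    # -65, not the intended 35
```

A negative length of service is the benign outcome; in practice such results fed into interest calculations, expiry checks and sort keys, where the error was far less visible.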

3. User Demand

Computers are designed to make life easier for users and to relate to users in a manner consistent with their known world. Nobody wants to key in the century when he or she is not used to having to do it in other circumstances (for example, when writing cheques).

In the "Customer is always right" service ethos promoted throughout the IT industry, there is little maneuvering room for the technician who thinks that he or she knows better than the user when specifying a new system. And historically, users have not thought to specify Year 2000 compliance for those new systems.

Even if users did think about the issue, they were generally unwilling to put up with the workload involved in testing, let alone using, such systems. Besides, with IT skills so expensive, and with corporate pressure to cut overheads, adding any "extra" features to a new system came at a significant cost. So why bother with a feature that nobody really wanted to implement anyway?

As Meskimen's Law says, "There's never time to do it right, but there's always time to do it over".

4. Applications Have Lasted Longer Than Expected

When designing a new application, a typical justification for using a date algorithm which would fail when processing dates outside the 20th Century was that the application wouldn't last outside that era anyway.

In some cases that theory just hasn't panned out in practice. With so many spectacular software project failures, the older-style applications have (in many cases) been required to soldier on while their much-vaunted younger replacements have fallen by the wayside.

Besides, once an application is in and working, there is a sizable reluctance to significantly change or replace it until forced to do so. "If it ain't broke, don't fix it" has long been the unofficial business ethos for many organizations. This makes short-term sense and keeps the budgets looking good, but it sacrifices the longer-term good of the company. With the year 2000 inexorably approaching, suddenly that long term is coming home to roost.

5. Backwards Compatibility

As the reservoir of established applications grows ever larger, each and every new application written is expected to maintain some form of compatibility with previous systems. If Windows had not been able to run DOS applications, Microsoft would have been dead in the water when their GUI was launched. Today, Windows 95 is faced with the burden of supporting not only DOS programs but also Windows 3.1, Windows for Workgroups and so on.

The market has demanded backwards compatibility, basically because users have been reluctant to junk working applications in order to replace them with the latest models. The "junk and replace" scenario has always involved significant costs, both in financial terms and in the work culture: why should I have to rewrite my nice spreadsheet just because a new release of Lotus 1-2-3 (for example) came out?

All this has meant that out of date date algorithms (read that phrase carefully!) have had to be supported by leading edge replacement systems. Nobody wanted the hassle of ripping out the old to make room for the new - it has always been easier to simply bend the new to accommodate the old. Now what was once the easy solution has landed most of us with an even harder problem.

...For wide is the gate and broad is the road that leads to destruction, and many enter through it. But small is the gate and narrow the road that leads to life, and only a few find it. (Matthew 7:13-14)

6. Code Re-Use

It has always made sound economic sense not to redevelop the wheel. Virtually all new applications have algorithms and even code incorporated from previous systems. This speeds up development and results (usually) in more reliable systems.

The re-use of algorithms which have a hidden date processing fault is one reason why the year 2000 problem is so huge and why some people have likened it to an immense virus. As the algorithms are used and reused, so their deadly payload is spread through more and more systems. Finding and dealing with each problem is rather like tracking down a particular strand in a bowl of spaghetti. Even worse, every strand touching this strand has to be examined for contamination. As most of us know to our cost, dealing with spaghetti can be a messy business. It would take a huge problem on a corporation-threatening scale to force us down that route!

7. Historical Data

The information built up painstakingly over an organization's history has been likened by some to the "corporate crown jewels". Companies make profits mining through this heap of data looking for the nuggets that yield competitive edge. Successive applications are built on this asset to further improve performance for the future.

Which means that successive applications are being built on the basis of what may be faulty data. Not that it was faulty at the time of writing. It may not even be faulty now. But it sure could be when the century rolls over.
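A concrete way to see historical data turning bad at the rollover: records keyed on two-digit years sort perfectly today, then silently misorder from the first record of 2000 onwards. The sketch below (modern Python, with a hypothetical `expand` helper and an assumed windowing pivot of 50) shows the failure and one common remediation:

```python
# Historical records keyed on yymmdd sort correctly within one century...
keys = ["970315", "991231", "000105"]  # last entry is 5 Jan 2000
print(sorted(keys))  # ['000105', '970315', '991231'] - year 2000 sorts first!

# Expanding the keys to yyyymmdd restores the intended order.
def expand(yymmdd: str, pivot: int = 50) -> str:
    """Window the two-digit year: below the pivot -> 20xx, else 19xx.
    The pivot value is an assumption each site had to choose for itself."""
    yy = int(yymmdd[:2])
    century = "20" if yy < pivot else "19"
    return century + yymmdd

print(sorted(expand(k) for k in keys))  # ['19970315', '19991231', '20000105']
```

Note that windowing only postpones the ambiguity - it buys decades, not a permanent fix - which is why expanding the stored data itself was the more thorough (and more disruptive) option.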

The problem is, changing the data means changing the applications accessing that data. And this has ramifications outside the remit of the MIS department. With the PC revolution, development and control of many applications has typically passed out of the hands of the IT professionals into the hands of the end users. So changing the core corporation data means immense inconvenience and cost to all those end users. Who wants to grasp that particular nettle?

Even the best and most modern code in the world can be hamstrung by historical data that is faulty. And nobody has relished the thought of sifting through the family jewels looking for paste. Especially when, so far, it has tasted, smelled and looked like the real thing.

8. Procrastination

Quite honestly, a significant proportion of MIS departments have been putting this problem off, usually in the hope that a solution will emerge out of the mists of time.

The thing is, although such a solution is looking less and less likely, it is still feasible in the minds of many that some form of techno-marvel will come to the rescue before the crunch comes. "Hope springs eternal in the human breast".

And while that hope is there, MIS are reluctant to face up to the present ugliness of the situation. There are always other, more urgent, things to busy their resources on. The tunnel may look dark, but there are tracks to be adjusted and sleepers to be fixed in the meantime.

The question arises, is the glimmer that they see in the distance the light at the end of the tunnel, or in reality the headlights of the oncoming year 2000 train?

9. Other Business Priorities

With the increasing pace of change in IT, MIS have been hard pushed enough to keep up with the pack, let alone look over their shoulder at what might be creeping up on them. The urgent has taken priority over the essential, in order to meet the user community's demands. As the old saying goes, "We were too busy fighting to worry about tactics".

MIS departments who took pride in their responsiveness to user demands are probably the worst hit in this case. When the department is measured by metrics built on its speed of response to user demands, there is little incentive to devote resources to proactively looking for new problems.

This doesn't say a lot for the long-term forecasting and management of MIS workload. That may be a bitter pill for MIS managers to swallow, but for many it is the harsh truth. Sometimes the truth hurts, but it must be faced up to and dealt with sooner or later. Otherwise, the accumulated small oversights of decades can band together into an almighty ambush by the end of the century.

10. Business Process Re-Engineering

No matter what the consultants say, the net result of BPR is reduced head count and less "fat" in the company. Which is fine until a crisis hits that falls outside the requirements foreseen by the BPR analysts. Like the Year 2000 software problem.

When MIS have been downsized to simply provide the required business functions of the company, that is precisely what MIS will do. No less and certainly no more. The "slack time" that could have been devoted to addressing the year 2000 issue before it became urgent has been deliberately cut out of the system, in the search for leaner and meaner business processes. Furthermore, because they now lack the internal resources to handle the year 2000 problem, companies who have been through BPR downsizing will be forced to outsource the Year 2000 fix project. Ironically, this contract could end up in the hands of the very consultancy who advocated their BPR process in the first place.

Where MIS infrastructure has been totally outsourced, the situation is possibly even worse. Typically an outsourcing contract does NOT include Year 2000 work. In fact, at least one outsourcing firm has publicly stated that if their clients try to get them to cover Year 2000 conversion within the maintenance contract, they will terminate the contract. So outsourced MIS will not handle the Year 2000 problem, unless they are asked to do it and are paid for doing it. And who is going to ask them, if the non-MIS management in the corporation are unaware of the problem? Shedding expensive MIS expertise from the corporation may have cut costs in the short term, but it exposes the corporation to any IT related problems which fall outside the domain of the outsourcing contract. Such as the Year 2000.

11. Accounting Conventions

Typically, accounting conventions have treated expenditure on software as an expense in the period incurred. The capitalization of software as an asset is still a thorny issue yet to be tackled to the satisfaction of the accountants.

This means that spending money on maintaining software has been treated like a telephone bill. It gets paid regularly for the use of the service, but at the end of the day does not increase a corporation's net worth.

In other words, if a medium-sized company spends US$5,000,000 to solve their year 2000 problem, that money comes straight off the bottom line, with no increase in assets on the balance sheet to reflect the fact that the corporation will now (probably) survive past 1/1/2000. It is the difficulty of convincing a CEO that a 5 million hit on the Profit & Loss Account is "A Good Thing To Do" that has largely contributed to the inertia on this issue. It would take an exceptionally brave MIS manager to spoil a CEO's day with THAT news.

12. Hindsight Is 20-20, Foresight Is Normally Less Effective!

Looking back on it all, of course we can see the faults and where decisions could have been made better. A four-year-old child can understand the 2-digit year problem. But four-year-old children don't usually write applications, let alone run corporations. It's adults who do that: adults subject to all the pressures of everyday working life, who have enough on their plate to get on with, without worrying about a problem that is around the corner and which will (they hope) be solved anyway by somebody else.

We're human. Nobody can claim to be fault-free on this one. "People who live in glass houses shouldn't throw stones."

We need to work together to fix this rather than spend time bickering over who caused it. Because, if the truth be told, we're all partly to blame.

With thanks to the many members of the Year 2000 mailing list, whose thoughts on this issue were invaluable while preparing this document. Michael Gerner.


Updated: 1 Oct 2023 http://www.is.ufl.edu/bawb080h.htm