Can You Trust Your Computer? - Part I
Michael Gemignani
Some 25 years ago, I had unusually high medical expenses. I kept careful records of those expenses and the insurance reimbursements using a CP/M program called SuperCalc. (Yes, that was even before Windows 3.1.) Despite the difficulty of deciphering the insurance statements, I was certain that I had been shorted some $1800. When I complained to Blue Cross/Blue Shield, they, of course, told me I was wrong. After all, their computers had done their calculations. I laid out my argument in careful mathematical terms to prove my point. They examined my case further and discovered a flaw in their computer program. I was right and they were wrong. I got my $1800.
I do not offer this story to show how smart I am. I offer it to show how dumb computers are. Admittedly, had I not kept careful records and had my own computer to help me, I would probably have just accepted BC/BS’s figures and let it go. I am sure the bulk of their customers who had been shorted by the same program error did just that. I was, after all, the first one who caught the problem, but I am sure I was not the first one stung by it. BC/BS was willing to trust their computer to do the right thing. I was not, nor should you be.
Of course, companies like to claim that it is their computers that make the mistakes. Generally, however, computers do exactly what they are programmed to do, no more and no less. If the computer runs a program to completion and provides an answer, we know the program is free of any error that would cause it to hang; but we have no assurance, except in the simplest of cases, that the answer we get is the correct answer to the question we think our program asks.
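To see how easily this can happen, here is a tiny illustration of my own, written in Python and having nothing to do with anyone's actual billing software. The program runs to completion and confidently prints an answer, yet the answer is not quite the answer to the question we thought we asked, because binary floating point cannot represent most decimal fractions exactly.

```python
# A program that "works": it terminates and prints an answer.
# Whether it answers the question we meant to ask is another matter.

reimbursement = 0.1 + 0.2      # ten cents plus twenty cents, in dollars

print(reimbursement)           # prints 0.30000000000000004, not 0.3
print(reimbursement == 0.3)    # prints False: the program ran to
                               # completion, but its notion of "thirty
                               # cents" is not ours
```

The flaw BC/BS eventually found in their own program was doubtless something different, but the lesson is the same: a program that finishes and prints a figure has not thereby proved the figure correct.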
In Douglas Adams’ classic The Hitchhiker’s Guide to the Galaxy, a powerful computer, Deep Thought, is put to work to come up with the answer to the Ultimate Question of Life, the Universe, and Everything. After a very long time, the computer produces its answer: 42. Deep Thought’s program came up with an answer, but the answer was either wrong or as incomprehensible as the question.
Thus, computers can give us not only a wrong answer (that is, an answer that is not the answer to the question we want answered) but also an answer that we cannot understand. We do not know what question, if any, has been answered, or how the answer we get, assuming it is correct, applies to the question we want answered.
Consider the following hypothetical. A broker is using a program that tells him that if he invests funds in a certain way, there is a 99% chance that he will make one million dollars. Suppose the prediction is accurate. The broker makes piles of money because there is only a slim chance he will lose. Question: How much money will the broker lose if his investment fails, that is, if it falls within the 1% of cases in which he does not make one million dollars?
I suspect that many people will say he will lose one million dollars, because that is what he stands to win. However, this is an unfounded assumption, as many brokers discovered to their regret. The program does not tell us how much could be lost. In fact, the loss could be so large as to dwarf all of the broker’s winnings. This is precisely what happened in the recent collapse of the market. If the odds of winning were so good, how could one possibly lose so much as to make all those winnings disappear and then some? Stuff happens, particularly when your rapacious greed leads you to trust your computer a bit too much.
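To put the arithmetic in concrete terms, here is a minimal sketch, again in Python, with figures I have invented purely for illustration; no real trade or bank is behind them. A 99% chance of winning one million dollars is a fine bet only if the remaining 1% cannot cost too much. Once the possible loss grows large enough, the expected profit of the "sure thing" turns negative.

```python
# Expected profit of a hypothetical bet: win $1,000,000 with probability
# 0.99, otherwise lose some amount we do not know in advance.
# All figures are invented for illustration.

def expected_profit(p_win, gain, loss):
    """Average outcome of a bet that wins `gain` with probability
    `p_win` and loses `loss` the rest of the time."""
    return p_win * gain - (1 - p_win) * loss

P_WIN = 0.99
GAIN = 1_000_000

for loss in (1_000_000, 50_000_000, 200_000_000):
    ev = expected_profit(P_WIN, GAIN, loss)
    print(f"possible loss ${loss:>11,}: expected profit ${ev:>13,.0f}")

# possible loss $  1,000,000: expected profit $      980,000
# possible loss $ 50,000,000: expected profit $      490,000
# possible loss $200,000,000: expected profit $   -1,010,000
```

The program the broker trusted reported only the 99%; it said nothing about the size of the loss hiding in the other 1%, which turned out to be exactly the number that mattered.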
To be fair, the computer specialists ("quants," as they are known) who designed and wrote the formulas and programs that the investment banks relied on did warn the banks about their pitfalls: the parameters used in the formulas, for example, were often little more than guesses and were, in any case, subject to rapid change. But there was so much money to be made that the banks relied on the formulas and forgot the warnings. Thus, the mess we are in now. It is not a question of my $1800 from BC/BS but of trillions of dollars down the proverbial rat hole. Was it greed, a foolish overreliance on computers, or some of both?
Suppose I am shown a cave filled with riches beyond my wildest dreams and am permitted to haul out as much treasure as I want, but I am warned that the cave could collapse at any minute and that I will be killed if I am inside when it does. Would I have enough sense to stop while I am ahead? One wonders.
There are those who wonder if someday computers will become so smart that they will dominate the human race. My concern is more that we will become so reliant on computers that we will trust them in instances we should not, once more to our immense regret.
The Rev. Dr. Michael Gemignani, an attorney and Episcopal priest, is also a former professor of computer science who has written extensively on legal issues related to computers. Although he is now retired, he enjoys writing and speaking about computer law and security. Contact him at mgmign2@hal-pc.org with any questions or comments about this topic.