In Texas, a man suspected of homicide has escaped from prison. How he did it tells us something about the inevitable failure of ID cards, and the importance of false positives. Via Bruce Schneier.
What happened? Well, the suspected killer was in a cell with another remand prisoner, a car thief named Garcia. He memorised Garcia’s prison number and other details, and when someone stood bail for Garcia, he answered the jailers with Garcia’s name and number. They took him instead of Garcia. When they took his fingerprints, the prints came out smudged and were judged useless (one wonders if this was deliberate), so they decided to check him against their spanking-new biometric database.
When his fingers were scanned, the database actually worked perfectly, which was precisely the worst thing that could have happened: up came the file, with a large photograph of the man standing before them, so they released him. The problem is that the system had taught its users that if nothing weird happened, they were right. This is a common problem in user interface design: if you depend on throwing an alert box to stop something weird from happening, you’d better not throw too many other alerts, or your users will be conditioned to hit Ctrl+W or Alt+F4 as a reflex.
Of course, the notion that if “it goes through”, everything is OK is deeply embedded in the computer experience. As a rule, if there is a problem, you experience it as the computer throwing an error message or crashing. When programming, you hack away and compile: either it compiles, in which case you run the thing, or there is a compiler error, in which case you go back to the drawing board. And if it runs but does something weird or throws an error message, back to the drawing board again. Silence is consent in computing.
What the Texan warders were really checking was the absence of an error message, not the fingerprint. Worse, the system design contained a major flaw: the failure condition looked exactly like success. You check the fingerprint, and up comes a photo of the guy standing in front of you, which is just what you would expect; the alternative, that someone else’s record would come up, seemed very unlikely. What the system should have done was ask for the prisoner’s name and number, check those against the fingerprint file, and throw a great big red-flashing alarm if they didn’t match. Its function here was authentication: is this man the same man who’s been bailed? But it was designed for identification: which database record matches this chap?
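To make the distinction concrete, here is a minimal sketch in Python. Everything in it is invented for illustration, not taken from the Texas system: the record fields and the toy string-equality “matcher” stand in for real fingerprint scoring. `identify()` answers the 1:N question the system was designed for; `authenticate()` checks the live print against the claimed identity, and fails loudly on a mismatch.

```python
from dataclasses import dataclass


@dataclass
class Record:
    name: str
    number: str
    template: str  # stand-in for a real fingerprint template
    photo: str


def matches(sample: str, template: str) -> bool:
    # Toy matcher: real systems score similarity between a live scan
    # and a stored template; exact equality stands in for that here.
    return sample == template


def identify(sample: str, database: list[Record]) -> Record | None:
    """Identification (1:N): which database record matches this chap?
    The mode the Texas system ran in -- any hit at all looks like success."""
    for record in database:
        if matches(sample, record.template):
            return record  # file comes up, photo matches the man: release him
    return None


def authenticate(sample: str, claimed_number: str, database: list[Record]) -> Record:
    """Authentication (1:1): is this man the same man who's been bailed?
    Look up the *claimed* identity first, then verify the print against it."""
    record = next((r for r in database if r.number == claimed_number), None)
    if record is None or not matches(sample, record.template):
        raise RuntimeError("ALARM: fingerprint does not match claimed identity")
    return record


if __name__ == "__main__":
    db = [
        Record("Garcia", "4321", "whorl-left-loop", "photo_of_garcia"),
        Record("Suspect", "8765", "double-arch", "photo_of_suspect"),
    ]
    live_scan = "double-arch"  # the murder suspect's own finger

    # Identification "succeeds": his own file comes up, photo and all.
    print(identify(live_scan, db))

    # Authentication against the identity he is claiming raises the alarm.
    try:
        authenticate(live_scan, claimed_number="4321", database=db)
    except RuntimeError as alarm:
        print(alarm)
```

The crucial difference is that `authenticate()` takes the claimed identity as an input, so it has something to contradict and can raise an alarm; `identify()` has no notion of a claim, so any match it finds looks like success.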
Sounds like managerialism in action.
I used to work at a VA hospital where, as in all hospitals, management resorted to all sorts of computerized gimmicks (computerized med carts, bar-code patient ID bands) to prevent errors. Of course, the real cause of error is understaffing. And guess what? A harried orderly, with double the patient load she should have had were it not for the criminal filth running the place, was admitting three patients at the same time and switched the ID bands on two of them. One of the patients almost died from an insulin overdose. Of course, management blamed the orderly and fired her. But the lesson is that no such gimmick is foolproof if the correct data isn’t entered in the first place.