three links about false positives

Via Bruce Schneier’s blog, an interesting paper in PNAS on false positives and screening for terrorists. Even if the assumptions behind profiling are valid, and the targeted group really is more likely to contain terrorists, profiling still isn’t a good policy. Because the difference between groups in the proportion of terrorists is small relative to the absolute scarcity of terrorists in the population, profiling means that you hugely oversample the people who match the profile. It raises the hit rate per search, but it also multiplies the false positives, and because a search carried out on someone who matches the profile is one not carried out elsewhere, it increases the chance of missing a terrorist who doesn’t match.

In fact, if you profile, you need to balance this by searching non-profiled people more often.
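
To make the arithmetic concrete, here is a minimal sketch in Python. All the numbers are made up for illustration and are not taken from the paper: even granting the profile a strong fivefold lift, almost everyone it flags is innocent, and most of the attackers are still outside the profiled group.

```python
# Base-rate arithmetic behind profiling, with made-up illustrative numbers
# (not taken from the PNAS paper).

population = 100_000_000   # travellers screened
terrorists = 100           # actual attackers hidden among them
profile_share = 0.10       # fraction of travellers matching the profile
lift = 5.0                 # profiled travellers are 5x likelier to be attackers

profiled_people = profile_share * population
other_people = population - profiled_people

# Split the attackers so the per-person rate in the profiled group is
# `lift` times the rate elsewhere:
#   rate_other * other_people + lift * rate_other * profiled_people = terrorists
rate_other = terrorists / (other_people + lift * profiled_people)
rate_profiled = lift * rate_other

print(f"P(attacker | matches profile)  = {rate_profiled:.1e}")   # ~3.6e-06
print(f"P(attacker | no profile match) = {rate_other:.1e}")      # ~7.1e-07

# Even with a 5x lift, the overwhelming majority of searches of profiled
# people are false positives, and a large share of attackers sit outside
# the profile, where the searches aren't happening.
outside = rate_other * other_people / terrorists
print(f"Share of attackers outside the profile: {outside:.0%}")  # ~64%
```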

The operators of Deepwater Horizon disabled a lot of alarms in order to stop false alarms waking everyone up at all hours. Shock! In some ways, though, that was better than this story about a US hospital, from comp.risks, where a patient died because an alarm was missed. Why was it missed? Too many alarms, beeps, and general noise, and people had turned off some devices’ alarms just to get rid of them.

Unlike Transocean, the hospital had a solution: remove the off switches, because that way they’ll damn well have to listen. At least the oil people didn’t think that would work. Of course, neither of them thought that if your warning system goes off so often that nobody can sleep even when nothing unusual is going on, there’s something wrong with the system itself.
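
A better fix, sketched below with an entirely made-up 30-second threshold, is to change the alarm logic rather than the off switch: only escalate a fault that persists, so transient glitches clear themselves without waking anyone. This is a generic persistence filter, not anything Transocean or the hospital actually used.

```python
# A minimal sketch of a persistence filter for alarms: escalate only
# if the fault condition holds continuously for PERSIST_SECONDS.
# The threshold is made up for illustration.

import time

PERSIST_SECONDS = 30.0

class PersistentAlarm:
    def __init__(self, persist=PERSIST_SECONDS):
        self.persist = persist
        self.first_seen = None   # when the current fault episode began

    def update(self, fault, now=None):
        """Feed in the current fault state; True means wake somebody up."""
        now = time.monotonic() if now is None else now
        if not fault:
            self.first_seen = None     # transient cleared itself: stay quiet
            return False
        if self.first_seen is None:
            self.first_seen = now      # start timing this episode
        return now - self.first_seen >= self.persist

# A 5-second blip stays silent; a fault still present after 30 seconds isn't.
alarm = PersistentAlarm()
assert not alarm.update(True, now=0.0)
assert not alarm.update(False, now=5.0)    # blip cleared on its own
assert not alarm.update(True, now=10.0)    # a new episode begins
assert alarm.update(True, now=40.0)        # held for 30s: escalate
```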

3 Comments on "three links about false positives"


  1. Nice paper, thanks for pointing it out. I love counter-intuitive results from (relatively) simple maths. It would be fun to run some simulations on this.

    Of course, if the sampling bias is obvious, then a smart adversary would choose their vector accordingly.

    The profiling issue is a cognitive bias. I think a better strategy is to randomize at least some of the sampling rather than do it at the sole discretion of the screeners. I have a buddy who travels a lot and very frequently gets screened. He’s friendly and open, smiles a lot, and looks like what he is: a compliant, white, middle-aged, middle-class, chatty businessman. Other airport security people have said he’s likely picked because he looks like he won’t give them a hard time, and they have a quota to meet and time limits.

    re: the alarm stuff:

    Alarm correlation in network monitoring was a hot topic when I was involved in that more than a decade ago. Not sure where it’s at now, but the gist of the problem we had was that a single event could trigger dozens or hundreds of alarms in the corporate NOC without giving a clear indication of the root cause. And a hundred alarms is tough to analyze properly; a rough sketch of the topology-based approach is below.

    Can’t find a reference, but I remember a talk about Three Mile Island where they said that hundreds of alarms went off there, but the key one was covered by a tag at the time.
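
    A minimal sketch of that kind of topology-based correlation (the dependency graph and alarm list here are hypothetical, for illustration only): suppress any alarm whose device sits behind another alarming device, leaving only the candidate root causes.

    ```python
    # Topology-based alarm correlation, minimal sketch with a made-up
    # dependency graph. Real NMS correlation is far richer than this.

    depends_on = {
        # device    -> the upstream device it relies on
        "switch-1": "router-1",
        "server-a": "switch-1",
        "server-b": "switch-1",
    }

    def root_causes(alarming):
        """Keep only alarms whose upstream device is NOT also alarming."""
        alarming = set(alarming)
        return {d for d in alarming if depends_on.get(d) not in alarming}

    # One dead router raises four alarms; correlation points at the root.
    print(root_causes(["server-a", "server-b", "switch-1", "router-1"]))
    # -> {'router-1'}
    ```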


  2. The alarm thing… offshore rig workers are typically on 12-hour shifts, 7 days on, 7 days off. Lots of alarms going off means they don’t get enough sleep and start making mistakes like not checking mud returns, deciding to circulate seawater down the well when prepping it for closure, trusting iffy cement bond logs, etc.


  3. One of the other issues is that there are massive political-economic incentives to design, market, install, service, and then upgrade in the out-years a threat-matrix system inherently biased towards oversampling and false positives. It’s a win for almost all portions of the usual Venn diagram.

    After all, there’s little downside. The surrounding culture is inured to the consequences.

