Have we talked about James Reason? I don’t think we have. So he’s a Manchester University cognitive psychologist, best known for the Swiss-cheese model of accidents, which he developed in the context of aviation human factors research.
Imagine a block of Swiss cheese. For an accident to happen, a hazard has to pass all the way through it; the holes have to line up. Now imagine slicing the cheese into layers, so they can move independently. Each layer is an opportunity to stop the crash. This gives us a number of important insights.
The first, and perhaps the biggest in my view, is that root causes aren’t enough. Many of the layers in the cheese apply to more than one possible root cause. You could start in several different places in the space of possible root causes and pass through that “hole” on the way to disaster. Because the holes can be general problems rather than specific to one chain of events, they can be more important than the root cause. Common-mode failures trump root causes.
The second big insight is that correlation is the enemy. The more independent the layers are, the safer you are. If they move together, the distinction between them collapses and the benefit from them is lost. They need to be mutually supporting, but independent.
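To put rough numbers on that, here’s a minimal Monte Carlo sketch. None of it comes from Reason’s work; the layer count, the hole probability, and the crude way correlation is mixed in are all invented for illustration. With four independent layers that each fail one time in ten, only about one hazard in ten thousand gets all the way through; make the layers move together and you’re back to roughly one in ten.

```python
import random

def accident_rate(n_layers=4, hole_prob=0.1, correlation=0.0, trials=100_000):
    """Estimate how often a hazard passes every layer of defence.

    correlation=0.0: the layers fail independently.
    correlation=1.0: they all share one weakness and fail together.
    Values in between crudely mix the two.
    """
    accidents = 0
    for _ in range(trials):
        shared_hole = random.random() < hole_prob  # a common-mode weakness
        got_through = True
        for _ in range(n_layers):
            if random.random() < correlation:
                hole_here = shared_hole                  # this layer moves with the others
            else:
                hole_here = random.random() < hole_prob  # this layer fails on its own
            if not hole_here:
                got_through = False  # this slice of cheese stopped it
                break
        accidents += got_through
    return accidents / trials

if __name__ == "__main__":
    for rho in (0.0, 0.5, 1.0):
        print(f"correlation {rho:.1f}: accident rate ~ {accident_rate(correlation=rho):.4f}")
```

The exact numbers don’t matter; the point is how fast the benefit of the extra layers evaporates as they start to move together.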
Reason observed that even accidents that were considered utterly unpredictable came after repeated near-thing events. Some people would go so far as to say they always do. Whether or not black swans are expected, you may well find a whole flock of dark grey ones flew by when you weren’t watching. This leads us to another important insight: always investigate near-thing events.
Near-things are much more common than accidents. Therefore, that’s where the information is. In the Reason model, it’s not just negative information, either. The fact that one of the layers worked – that ill fortune thwacked into a wall of Emmental, hard cheese, old fruit – is very important. Something worked.
Reason’s view of expertise is interesting. Airmanship – the stuff you have if you have the right stuff – also known as professionalism, craftsmanship, or judgment, is the last line of defence, the last layer in the cheese. It is enormously valuable, because general intelligence is general. It’s also subject to all the human limitations – fatigue, limited cognitive resources, bias, conflict. Some of it can be worked into Stanovich & West’s domain of rationality, Kahneman’s System One, through massive experience. However, it’s precisely this form of reasoning that relies on pattern recognition, which limits its generality. And perhaps the worst of it is that you don’t know if you’re a pseudo-expert until things go badly wrong.
Investigating near-things is difficult, though. It’s status-challenging, and it requires disclosure of cock-ups we’d all prefer to forget. In quite a lot of environments, commercial and operational requirements conflict with it. And the kind of institutions you need may be vulnerable to strategic behaviour. The CHIRP process – confidential incident reporting in aviation – might not work so well without a certain amount of trust that people aren’t using it to badmouth their competitors.
So here’s a question. If we had a near-miss financial crisis today, would we know? And who’s we, honky? Probably the counterparty would know, and obviously the author of the near miss, but not necessarily the next counterparty, the central bank, the regulator, the government, or the political nation. In fact it might be the worst of all worlds – the problems are trade secrets, the people who have them keep quiet, the people who know about competitors’ problems keep quieter still and try to exploit them, thus making it worse.
But would we know? The New York Stock Exchange did a reasonable report on the so-called flash crash, but then it wasn’t the securities world that caused all the trouble.
Just some half-assed thoughts:
Suppose a financial crisis happens because (a sufficient number of) banks have assembled a combination of assets and liabilities (balance sheets) such that an unexpected shock to certain asset prices would be enough to put everybody’s solvency and/or liquidity in doubt and knock over the dominoes, again.
Is a near miss when the unexpected shock to asset prices isn’t quite large enough to trip the mechanism?
Or is a near miss when the combination of assets and liabilities is almost that vulnerable, but not quite?
I suppose a near miss is both together – if the shock had been a little bit larger or the balance sheet structure a little bit more vulnerable, we’d have sailed through your cheese holes.
But it would be useful to have some idea of near misses of the vulnerability sort, in the absence of shocks. I suppose that’s what all these stress tests were about.
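Putting that in code, as a toy only – the single risky asset class, the balance-sheet numbers, and the 25% near-miss margin below are all invented for illustration: a bank’s vulnerability is the smallest shock that would wipe out its capital, a near miss is anything within some margin of that breaking point, and a stress test is just applying hypothetical shocks to find the vulnerability sort of near miss before a real shock turns up.

```python
from dataclasses import dataclass

@dataclass
class Bank:
    name: str
    capital: float       # loss-absorbing buffer
    risky_assets: float  # exposure to the asset class that gets shocked

    @property
    def breaking_shock(self) -> float:
        """Smallest proportional fall in the risky asset price that
        wipes out the capital -- one crude measure of vulnerability."""
        return self.capital / self.risky_assets

def classify(bank: Bank, shock: float, margin: float = 0.25) -> str:
    """Classify one bank against one realised shock.

    Anything within `margin` of the breaking point counts as a near miss --
    i.e. a slightly bigger shock, or a slightly weaker balance sheet, would
    have tipped it over (the 'both together' point above).
    """
    if shock >= bank.breaking_shock:
        return "failure"
    if shock >= bank.breaking_shock * (1 - margin):
        return "near miss"
    return "comfortable"

def stress_test(banks, hypothetical_shocks, margin: float = 0.25) -> None:
    """The vulnerability sort of near miss, in the absence of a real shock:
    apply made-up shocks and see who would fail or nearly fail."""
    for shock in hypothetical_shocks:
        results = {b.name: classify(b, shock, margin) for b in banks}
        print(f"hypothetical shock {shock:.0%}: {results}")

if __name__ == "__main__":
    banks = [Bank("A", capital=8.0, risky_assets=100.0),   # breaks at an 8% fall
             Bank("B", capital=20.0, risky_assets=100.0)]  # breaks at a 20% fall
    stress_test(banks, hypothetical_shocks=[0.05, 0.07, 0.15])
```

A real stress test has many more moving parts, but the bookkeeping – how far from the breaking point were we? – is the same idea.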