A new research paper from Caltech, which you can read here, has cast doubt on the characterisation of the Internet as a scale-free network. What the fuck is one of them things, and why care?
Almost anyone who knows anything beyond “there’s this thing called the interwebs, it’s like TV that you read” knows that the basic idea of the Internet is that there’s no centralised authority – instead, packets of data are passed between specialised computers called routers, all of which are in principle equal. This implies that the Internet is what really deep geeks call a random network – one where the links are spread more or less evenly, so no node looks much different from any other and any point can reach any other by umpteen routes. That was what the original gangster developers, people like Doug Engelbart and Vint Cerf, were trying to achieve: if no node is more important than any other, there are no critical points of failure, and the system can simply route around any nodes that are destroyed.
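If you want to see what that resilience looks like in practice, here’s a quick toy sketch of my own (nothing to do with the Caltech paper), using Python and the networkx library: build a random graph, knock out nearly a third of the “routers” at random, and check how much of it still hangs together.

```python
# Toy illustration, not from any of the papers: random networks shrug off random failures.
import random
import networkx as nx

random.seed(42)

# A random (Erdos-Renyi) network: 1000 "routers", each pair linked with probability 0.01.
G = nx.erdos_renyi_graph(1000, 0.01, seed=42)

# Knock out 300 of the routers at random.
casualties = random.sample(list(G.nodes()), 300)
G.remove_nodes_from(casualties)

# How much of what's left can still reach each other?
largest = max(nx.connected_components(G), key=len)
print(f"{len(largest)} of the {G.number_of_nodes()} surviving routers are still in one connected lump")
```

Run it a few times with different seeds and the answer barely budges – with no special nodes, there’s nothing special to lose.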
Back in 1999, though, a bunch of academics discovered that the net didn’t actually work like that in practice. If you plotted the level of traffic against the number of routers experiencing that level, you got a graph with a big spike near the origin and a long tail: a small core group of routers were carrying a disproportionately huge amount of traffic – more like a centralised telecoms network than the flat TCP/IP architecture. This topology, where a small set of the nodes have many more links than the rest, is called a scale-free network.
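If you fancy seeing that spike-and-long-tail shape without digging up the 1999 data, here’s a rough stand-in of mine: a preferential-attachment (Barabási–Albert) graph is the textbook way of growing a scale-free network, and counting links per node gives you the same lopsided picture.

```python
# Rough stand-in for the 1999 plot: links per node in a scale-free
# (Barabasi-Albert, preferential-attachment) graph.
import networkx as nx

G = nx.barabasi_albert_graph(10_000, 2, seed=1)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print("Ten best-connected nodes:", degrees[:10], "links each")
print("The median node:", degrees[len(degrees) // 2], "links")

# degree_histogram(G)[k] = number of nodes with exactly k links:
# a big pile-up at small k, then a long straggling tail.
for k, count in enumerate(nx.degree_histogram(G)[:8]):
    print(f"{count:5d} nodes with {k} links")
```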
So, who cares? Well, people like John Robb of Global Guerrillas fame do, because all kinds of other stuff works like that: power grids, gas and oil pipelines, rail, air and road networks. In a random network, you basically have to destroy a huge chunk of it before it stops working – in a scale-free network, you can shut the whole thing down with a whack from a hammer in the right place. The 1999 results, therefore, suggested that the Internet was much less secure than anyone had thought.
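To put a number on that hammer metaphor, here’s a little experiment of my own devising (mine, not the Caltech team’s): grow a scale-free graph of 5,000 nodes, then compare pulling out 500 nodes at random with pulling out the 500 best-connected hubs.

```python
# Toy experiment: a scale-free network shrugs off random failures
# but suffers badly when you take the hammer to its hubs.
import random
import networkx as nx

random.seed(0)

def giant_fraction(G):
    """Fraction of the surviving nodes still in the largest connected piece."""
    largest = max(nx.connected_components(G), key=len)
    return len(largest) / G.number_of_nodes()

n, removals = 5000, 500

# Random failure: remove 500 random nodes.
G1 = nx.barabasi_albert_graph(n, 2, seed=0)
G1.remove_nodes_from(random.sample(list(G1.nodes()), removals))
print(f"After {removals} random failures:   {giant_fraction(G1):.0%} still hang together")

# Targeted attack: remove the 500 best-connected hubs instead.
G2 = nx.barabasi_albert_graph(n, 2, seed=0)
hubs = sorted(G2.degree(), key=lambda pair: pair[1], reverse=True)[:removals]
G2.remove_nodes_from(node for node, _ in hubs)
print(f"After {removals} targeted removals: {giant_fraction(G2):.0%} still hang together")
```

If the scale-free story is right, the first number should stay way up high while the second takes a serious beating – which is exactly the hubs-and-hammers fragility Robb worries about.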
The Caltech team’s simulations, though, seem to suggest this was wrong. I have a little theory about this: the Internet is a random network that wants to be a scale-free network. We’ve all seen those demonstrations of how some of your packets go right the way round the world while others go straight to their destination. Now, remember that the Internet protocols are meant to behave like a random network – but the demand for traffic, and the supply of connectivity, aren’t evenly distributed around the world. For example: any traffic from Europe, North Africa or western Asia to North America has to go through the big transatlantic cables (if really pushed, some of it might make it the long way round – but only under real provocation). So the routers at each end of that link will be very heavily trafficked all the time.
If one of them fails, though, the system routes around the blockage, with packets shifting to the other cables and even heading off down the SAFE/SAW or FLAG cables towards Asia and the transpacific links – effectively, the network is at its most efficient when it behaves as if it were scale-free, but it can revert to randomness when it has to. Clever man, that Cerf bloke.
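If you want to poke at the “random network that wants to be scale-free” idea, here’s a crude toy model of mine (nothing to do with the Caltech simulations): two dense clusters of routers – call them “Europe” and “North America” – joined by one short “transatlantic” link and one long detour the other way round the planet. Shortest-path routing piles nearly all the intercontinental traffic onto the two routers at either end of the short link; cut that link and the very same routing rules quietly send everything the long way.

```python
# Crude toy model (mine, not Caltech's): two continents, one short transatlantic link,
# one long detour the other way round the planet.
import networkx as nx

G = nx.Graph()

# Two dense clusters of routers, "eu0..eu9" and "na0..na9".
eu = [f"eu{i}" for i in range(10)]
na = [f"na{i}" for i in range(10)]
for cluster in (eu, na):
    for a in cluster:
        for b in cluster:
            if a < b:
                G.add_edge(a, b)

# One short transatlantic link...
G.add_edge("eu0", "na0")
# ...and one long chain of hops the other way round.
nx.add_path(G, ["eu9"] + [f"ap{i}" for i in range(6)] + ["na9"])

# Under shortest-path routing, the routers at either end of the short link
# carry a wildly disproportionate share of the intercontinental paths.
bc = nx.betweenness_centrality(G)
busiest = sorted(bc, key=bc.get, reverse=True)[:3]
print("Busiest routers:", [(r, round(bc[r], 2)) for r in busiest])

print("eu3 -> na7 before the failure:", nx.shortest_path(G, "eu3", "na7"))

# Break the transatlantic link: traffic reroutes the long way, unprompted.
G.remove_edge("eu0", "na0")
print("eu3 -> na7 after the failure: ", nx.shortest_path(G, "eu3", "na7"))
```

Scale-free while it’s cheap, random when it has to be – which is roughly what I mean by “wants to be”.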
SFNs are a pretty fascinating idea once you start thinking about them. Here’s a question: seeing as almost everything is one, why haven’t there been more efforts by the terrorists to bring them down? My answer is as follows: first, scale-free networks are extremely vulnerable at those key points but much less so everywhere else – if you don’t get it exactly right, you’ve wasted your time. And secondly, the information you need is much harder to come by these days – remember that master’s student whose thesis on US fibre-optic networks landed him in a bunker with the FBI? With those factors in mind, it’s a better choice in cost-benefit terms to lug a bomb onto the tube.
Practical exercise: what or who represents the Viktor Bout system’s critical node?