Annoyingly, Adam Elkus is going to delete this thread.
increasingly thinking that the best or only justification at this point for people that aren’t techies learning about computer programs is understanding the way in which the social world is being flattened into categories that computers can understand.
— Adam Elkus (@Aelkus) April 22, 2019
I disagree with him a bit, specifically on this point:
For a start, computers emerged into an environment rich in the kind of problems they solved well, rather like bacteria learning to digest dead trees after the Carboniferous. All the things they were meant to impose on society – instrumental rationality, bureaucracy, quantification – were already present in some profusion and had been for centuries, if you think of things like double-entry bookkeeping or printed paper forms, let alone punched cards, steam engine governors, or analogue calculators in navy fire-control tables.
As a result, the intellectual armament of criticism already existed, it just needed adjustment to the new target. Surprisingly quickly, an explicitly political language of criticism emerged, a theoretical foundation was laid, and methods of action were developed. Very often the same people were on both sides of the debate. The problem was that the deterministic, logical, computerlike nature of computers threatened to give systems of administration a new, apparently magical authority. The solution was to demystify the machines, to hand out the book of spells on the street corner, above all to demonstrate and insist on the point that they were machines, comprehensible artefacts created by human labour, using skills you could acquire.
I think we’ve basically got this down, after the People’s Computer Company, Logo, the BBC Model B, the web, the RPi, etc. This is good, in a boringly vocational way, in the name of inclusivity, and most of all, in the name of informed citizenship. The problem is that there is a third option in Elkus’s typology: to change the computer.
Whether computers proliferate to make use of the piles of computerlike data our already-computerlike institutions throw off, or whether they shape the institutions, there are limits. Lots of things are not very computerlike, and whichever side you start from, you will run into this. So there has been an enormous intellectual effort to simulate less computerlike behaviour with computers – to get them to recognise patterns, work with educated guesses, learn from the data, and the like. The problem is that such systems – inferential or inductive ones – are by definition less computerlike. In important ways, the powers behind technology are bending the computers out of shape.
Rather than following rules, they guess. Rather than operating deterministically, they are probabilistic. Rather than representing designers’ intentions, with an error term, they pick up influences from the environment, which might not be the environment intended. Understanding computerlike systems is useful in understanding these inferential ones, but only up to a point, and often misleading. A very common failure mode is that people trying to interact with them impose their expectations of computerlike behaviour on them, with the eerie consequence that they start behaving more like computers themselves. This is the opposite of the person who behaves as if a simple, deterministic program were infallible, an unquestionable source of authority, or a magical entity.
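To make the squishiness concrete, here’s a minimal sketch in Python – the scenario, names, and data are all made up for illustration – contrasting a classical rule, whose behaviour is exactly what its author wrote down, with a toy classifier whose behaviour is whatever the training examples happened to imply.

```python
import random

# Classical, computerlike: the rule *is* the designer's intention,
# and the same input always produces the same output.
def approve_deterministic(income: float) -> bool:
    return income >= 30_000  # a threshold somebody deliberately chose

# Inferential, squishy: a toy classifier that learns its threshold from
# labelled examples, so its behaviour reflects whatever environment
# produced the data - which may not be the environment anyone intended.
def fit_threshold(examples):
    yes = [x for x, label in examples if label]
    no = [x for x, label in examples if not label]
    # Midpoint between the class means: a guess, not a stated rule.
    return (sum(yes) / len(yes) + sum(no) / len(no)) / 2

def approve_inferred(income: float, threshold: float) -> bool:
    return income >= threshold

random.seed(1)
# Hypothetical training data: past decisions, noisy and possibly biased.
history = [(random.gauss(45_000, 8_000), True) for _ in range(200)] + \
          [(random.gauss(22_000, 8_000), False) for _ in range(200)]
learned = fit_threshold(history)

print(approve_deterministic(31_000))      # True, and always will be
print(approve_inferred(31_000, learned))  # depends on the sampled history
```

Both functions answer the same question, but only the first one’s behaviour can be read off from its source; the second one’s threshold is an artefact of whatever ended up in the training data, and a different history would give a different answer.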
I don’t think we have an educative project for this. A complicating factor is that the first phase – demystifying classical, if-then-else computing – is still (more than ever!) necessary, and is a prerequisite for any critical understanding of the squishy inferential stuff. The two elements may be in conflict.
My favourite example here is the Strowger exchange. Strowger invented his exchange not to make money out of it, but because he was worried that the human telephone operators were sending his calls to his rival’s undertaking business. So he invented a machine – a machine blind like justice, and correspondingly impartial. We now have a century of expectation that machines are blind and impartial, even though immense ranks of data and computing power are now bent towards influencing your quarter-second decision about where to click, to the advantage of vast corporations.