Due to an unaccountable outbreak of laziness, when I saw the story about an IT consultant from Wolverhampton and his chatroom-monitoring programs (Nanniebots) that appeared to pass the Turing test, I didn't blog it. (Original New Scientist story and transcript here, The Register, BBC news, Need To Know) Basically, his claim was that he had developed a suite of bots, autonomous software agents, that logged into chatrooms and took part in conversations. If (I suppose) anything on a watchlist intended to signal paedophile activity came up, the bot would log it and raise the alarm. Clear enough, but this implied that the chatbot could converse with a human being without their noticing it was a machine. Alan Turing famously theorised that the defining case for artificial intelligence was a machine capable of carrying on a conversation indistinguishable from that of a human being. Was this a brilliant breakthrough in AI, or a fake?
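The watchlist half of that claim, note, is the trivial half. A few lines of Python along these lines would do it – a minimal sketch, with invented patterns and names, nothing to do with whatever ChatNannies actually runs:

# Minimal sketch of watchlist-based flagging. My own illustration,
# not ChatNannies' code; the patterns are invented for the example.
import re
from datetime import datetime, timezone

WATCHLIST = [r"\bhow old are you\b",
             r"\bour little secret\b",
             r"\bdon'?t tell your (mum|dad|parents)\b"]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in WATCHLIST]

def check_message(channel, nick, text, alerts):
    """Append an alert and return True if the line matches the watchlist."""
    for pattern in PATTERNS:
        if pattern.search(text):
            alerts.append((datetime.now(timezone.utc), channel, nick, text))
            return True
    return False

alerts = []
check_message("#teenchat", "guest42", "it can be our little secret", alerts)
print(alerts)  # one flagged line, timestamped

The hard part is not the watchlist but the chatting, which is exactly where the Turing test comes in.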
Well, I for one was really hoping that this might be a genuine burst of the future. The idea of a chap in a shed in Wolverhampton discovering everything that Cambridge, Imperial, Stanford, MIT and everywhere else with a multizillion budget couldn't (after all, Frank Whittle built the jet engine in a shed in Leicestershire) was too great to turn aside, as was the hope that this time – this time – we might not make a horrible potmess out of it and spend the next hundred years paying the Yanks billions annually for the use of our own inventions.

Unfortunately, a quantity of debunking matter has appeared. For one thing, he has supposedly been caught boasting about non-existent software before. And the Guardian's Bad Science column, which had already secured a promise of an independent trial of the bot, has outed him as a Holocaust denier, quoting some pretty ugly posts he made on newsgroups frequented by that particular online community. Puke! (He claims the posts aren't his, but they came from his IP address.) Jim Wightman, the chap concerned, has been posting heavily on comment threads associated with the subject. Or at least someone using his identity has – on the Internet, no-one knows you're a dog. (They know you're a blog, though.) Check this out:
I have a crazy theory. The other project of ChatNannies is humans monitoring and reporting on chatrooms. What if the bots are actually humans? You could describe this as collecting pop culture references from the internet. An experiment in tapping the power of a distributed human brain network to create an “AI”. Perhaps the figure of 100,000 is his report database capacity.
posted by Zombywuf on March 25, 2004 03:31 PM

I think Jim Wightman is actually a bot.
posted by Michael Williams on March 25, 2004 03:54 PM

Has anyone thought of the wider ethical implications of this?
a) When “nanniebots” are running on many chatrooms, children obviously won’t know if they’re talking to another child or a nanniebot. Is it right to deceive millions of children in this way?
b) What if the system was used for evil instead of good? “Nanniebots” could befriend children, set up real-world meetings, and let pedophiles know the time/location automatically by email. Takes all the hard work out of “grooming” children…
But luckily, I doubt either of these are real problems, given that the whole idea is so infeasible and must be a hoax of some sort.
posted by James on March 26, 2004 09:05 AM

James: Why not bot children that lure in pedophiles? Then the bots could all just “seduce” each other and cut out the middle-man.
posted by Michael Williams on March 26, 2004 10:04 AM
It’s getting too postmodern in here! Meanwhile, over at overstated.net, Cameron Marlow reported a conversation with what was claimed to be one of Mr. W’s menagerie. This has been heavily debated on other sites, especially the moment when the alleged bot came up with an error message, which was widely seen as a transparent attempt to make the beast seem more mechanical. Other critics suggested that it lacked the damage-control procedures to be expected from a machine: the devices used to elicit repetition when the machine fails to understand a remark, what linguists would call a help strategy. These tags litter our own speech, oiling the wheels of conversation, organising turn-taking and clearing up misunderstandings – “Can you repeat that?” or “Huh?”, for example. I’m not so sure: there appear to me to be several points where the alleged machine does apply such a strategy. Take this exchange (there’s a sketch of a help strategy in code after it):
[cameronfactor] i’m learning about stuff and junk
[Guest8474860] !
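A help strategy of this sort is, for what it’s worth, about the cheapest thing in a chatbot to implement. Here’s a minimal sketch – the word list, names and threshold are all mine, invented for illustration, and have nothing to do with whatever ChatNannies actually runs:

# Sketch of a conversational "help strategy": when the bot can't make
# sense of a remark, it asks for repetition or clarification instead
# of guessing. Entirely illustrative.
import random

HELP_TAGS = ["!", "Huh?", "Can you repeat that?", "What do you mean?"]

# Stand-in for a real parser: score how much of the remark we "know".
KNOWN_WORDS = {"i", "you", "am", "are", "what", "learning", "about", "like"}

def confidence(text):
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in KNOWN_WORDS for w in words) / len(words)

def reply(text, threshold=0.5):
    if confidence(text) < threshold:
        # Repair move: elicit something more tractable to work on.
        return random.choice(HELP_TAGS)
    return "Tell me more."

print(reply("i'm learning about stuff and junk"))  # prints one of the help tags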
Now that “!” sounds to me like a tactic to elicit something less inscrutable to work on. I agree with most of those commenters, though, that the end of the conversation – where Cameron first accuses the “bot” of being a bot, and then claims to be one himself – is handled with almost suspicious cleanliness.

That brings up the logical problem of the whole case: the only evidence that would convincingly demonstrate that Mr. Wightman is not a gifted hoaxer is a failure that showed the bot up for what it was – and that would, of course, invalidate his claims! In a sense, that should be enough; Karl Popper would have thought so. We have a hypothesis capable of refutation, and a repeatable way of attempting to refute it. All good, no? The problem is that Popper’s method excludes the possibility of bad faith, and requires complete openness. Unless we can trust the reporting of both outcomes – that it works, or that it doesn’t – we need some positive proof, and the only way to secure that is to exclude the possibility of the bot being a human operator. What’s needed is a double-blind trial with the machinery under impartial supervision. Mr. Wightman seems horrified at the idea.

And his behaviour – or at least that of the chap calling himself Wightman – on threads discussing the issue is not helpful. He claims that critics are “below him” and that he “looks forward to nuclear holocaust”. Nice. It fits with the Nazi newsgroup posts, and doesn’t bode well for any real proof.
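PS: for clarity, the double-blind trial I have in mind would look roughly like this. Entirely my own sketch – the relay and the judging are placeholders; each judge is secretly connected to either the bot or a human, converses through a neutral relay, then guesses which it was, and nobody learns the assignments until the end:

# Sketch of a double-blind Turing trial. Illustrative only.
import random

def run_trial(judges=20, seed=1):
    rng = random.Random(seed)
    correct = 0
    for _ in range(judges):
        actual = rng.choice(["bot", "human"])  # hidden assignment
        # ...conversation via the relay goes here...
        guess = rng.choice(["bot", "human"])   # placeholder verdict
        correct += (guess == actual)
    return correct / judges

# Judges stuck at chance (around 0.5) would mean the bot really is
# indistinguishable; anything near 1.0 and the claim collapses.
print(run_trial())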