Alice in Virtual Land

A programme called Alice has won the latest round of competitions to produce a software system capable of holding a conversation with a human. The contest draws its inspiration from British mathematical genius Alan Turing’s hypothesis – the Turing Test – that if a conversation with a machine fooled a human into believing he or she was talking to another human, then that machine was effectively ‘intelligent’. No-one has yet won the Gold or Silver awards, but the Bronze is given out to the best attempt each year.

I’ve always thought the Turing Test was rather simplistic myself, and argued as much in an essay on AI and cognitive psychology back in college. Conversational ability does not prove sentience: George Bush can talk (sort of, not very well, but he does better when prompted – or is he merely repeating a script like a trained parrot and therefore not exhibiting any intelligent behaviour? Discuss), but would we class him as sentient? Besides, it is a long way from mimicking a human skill to having actual AI. Until reasonably recently most programmes took the route of adopting an impaired persona – PARRY famously simulated a paranoid patient, while its predecessor ELIZA played the complementary role of a therapist whose habit of answering questions with questions served the same purpose. This means that when the software is unable to give a convincing response to a question, the failure can be put down to the persona. In fact I recall playing a home computer game based on this idea many years ago – back in the old Sinclair ZX Spectrum days, in fact. It was a game called ID: you held long ‘conversations’ with a personality who had amnesia and possibly other impairments, such as verbal aphasia, and tried to ascertain who and what they were and had been. Anyone else remember that one?
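For the curious, the trick behind these programmes is surprisingly shallow. Here is a minimal Python sketch of an ELIZA-style responder – the rules and canned phrases are my own invented illustrations, not Weizenbaum’s actual script – showing how a handful of pattern-match rules plus a deflecting fallback can keep a ‘conversation’ going without any understanding at all:

```python
import random
import re

# A toy ELIZA-style responder: a few regex rules that echo the user's
# own words back, plus deflecting fallbacks. Rules and phrasing are
# illustrative inventions, not the original ELIZA script.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

# When no rule matches, deflect -- exactly the evasion that a 'patient'
# persona, or a therapist who answers questions with questions, excuses.
FALLBACKS = [
    "I see. Please go on.",
    "Why do you say that?",
    "Can you elaborate on that?",
]

def respond(utterance: str) -> str:
    """Return the first matching rule's reply, else a random deflection."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I am worried about my exams"))
print(respond("Hello there"))  # no rule matches, so it deflects
```

The point of the sketch is that nothing in it models meaning: a question outside the rule set simply triggers a deflection, and the persona makes the deflection seem plausible rather than broken.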

Even if we do have software which can talk to us as easily as, say, HAL 9000 and understands natural-language input, it is merely another, albeit more sophisticated, form of interface with our machinery. It does not prove intelligence – we need a lot more to argue for that in a machine, not least a sense of self-awareness. Although how you would ever prove that I do not know – I can’t really prove I have such a faculty myself (shut up in the back, Descartes, your idea is rather simplistic and proves nothing). And even if we can create a real AI and prove that it is sentient, many people will refuse to believe it, for religious reasons or simple stupidity or bigotry. And how will we react to such a creation if we make it? I suspect that will be a huge moral quandary for humanity. If we recognise an AI as sentient then we can no longer class it as a mere device there to serve us, can we? That would be tantamount to creating a new form of slavery. But could we bring ourselves to see an AI as equal in rights to a human? Would the AI see us as equal? And would it sound like Majel Barrett-Roddenberry in Star Trek, or HAL 9000?