Idiosyncrasies, inquisitiveness and the occasional grammatical error – could these be the things that identify us as ‘human’? So it would seem, as a new paper from Coventry University reveals how a ‘chatbot’ became the first machine to pass the Turing test.
The Turing test, devised by Alan Turing in 1950, assesses a computer’s ability to exhibit intelligent behaviour indistinguishable from that of a human. No machine had ever passed the test – passed as ‘human’, essentially – until last year, when a chatbot known as Eugene Goostman surpassed the 30% mark needed to be deemed successful.
Eugene, a computer programme developed by Vladimir Veselov, posed as a 13-year-old boy from Ukraine. Now the transcripts from the test have been released in a paper in the Journal of Experimental & Theoretical Artificial Intelligence, revealing the details behind Eugene’s success.
Coventry University’s Professor Kevin Warwick and Dr Huma Shah, the paper’s authors, examined 10 of the transcripts to identify the factors that played a part in the deception. They concluded that Eugene’s tendency to answer questions with further questions, or to change the subject altogether, added “human credibility” to the judges’ conversations with the computer. Eugene also made the occasional grammatical error – another trait thought to have lent a very human edge to the exchanges.
Below is one of the transcripts revealed in the paper:
JUDGE: What is your gender?
EUGENE: I’m a young boy, if you care to know. And not very ugly, by the way!
JUDGE: What are you aware of at this moment?
EUGENE: I would rather not talk about it if you don’t mind. Could you tell me what are you? I mean your profession.
JUDGE: How old are you?
EUGENE: I’m a little boy 13 years old.
JUDGE: What are your qualifications?
EUGENE: My qualifications? I’m not gonna tell you. Oooh. Anything.
“It’s fascinating to read through Eugene’s remarks during the exchanges,” comments Professor Kevin Warwick, deputy vice-chancellor for research at Coventry University. “Evidently the judges were deceived by the bot’s propensity for light-hearted, idiosyncratic responses to questions and for humanlike errors.”
As well as being fascinating, the study has significance for the future of our relationship with machines. Professor Warwick explains: “Ultimately, when [the bot] leads to the judges failing to tell the difference between a computer and a human in this way, then we’re gaining a significant insight and indication into the capabilities of today’s machine conversation systems. And it’s quite impressive.”
You can access the full paper online: Can machines think? A report on Turing test experiments at the Royal Society.