Dante D’Orazio takes note of this weekend’s big news out of London:
Eugene Goostman seems like a typical 13-year-old Ukrainian boy – at least, that’s what a third of the judges at a Turing Test competition in London this Saturday thought. Goostman says that he likes hamburgers and candy and that his father is a gynecologist, but it’s all a lie. This boy is a program created by computer engineers led by Russian Vladimir Veselov and Ukrainian Eugene Demchenko.
That a third of judges were convinced that Goostman was a human is significant – at least 30 percent of judges must be swayed for a computer to pass the famous Turing Test. The test, created by legendary computer scientist Alan Turing in 1950, was designed to answer the question “Can machines think?” and is a well-known staple of artificial intelligence studies. Goostman passed the test at the Turing Test 2014 competition in London on Saturday, and the event’s organizers at the University of Reading say it’s the first computer to succeed.
Kabir Chibber looks back to Turing’s exact prediction:
He said in 1950:
I believe that in about 50 years’ time it will be possible to program computers… to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.
While this didn’t happen by the year 2000, it seems Turing was off by only 14 years.
Nathan Mattise has more on this weekend’s breakthrough:
Eugene was one of five supercomputers tackling the challenge at this weekend’s event, held precisely 60 years after Turing’s death on June 7, 1954. It was designed by a team in Saint Petersburg, Russia, led by creator Vladimir Veselov (who was born in Russia and now lives in the US). An earlier version of Eugene is hosted online for anyone to interact with, according to The Independent (though with interest understandably high right now, we’ve been unable to access it).
“Eugene was ‘born’ in 2001. Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything,” Veselov said according to the event press release. “We spent a lot of time developing a character with a believable personality. This year we improved the ‘dialog controller’ which makes the conversation far more human-like when compared to programs that just answer questions. Going forward we plan to make Eugene smarter and continue working on improving what we refer to as ‘conversation logic.’”
Polly Mosendz suggests Goostman wouldn’t have passed the test if he weren’t a teenbot:
Developer Veselov explained that, “Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything.” So if the judges asked him something he was not programmed to know, they might write that off as a consequence of his age rather than his lack of humanity.
Pranav Dixit comments that “a chatbot successfully pretending to be a 13-year-old boy for whom English is a second language ain’t exactly HAL 9000,” but calls the event “an obviously exciting breakthrough.” Robert T. Gonzalez and George Dvorsky elaborate:
The chatbot is not thinking in the cognitive sense; it’s a sophisticated simulator of human conversation run by scripts. In other words, this is far from the milestone it’s been made out to be. That said, it is important, because it supports the idea that we have entered an era in which it will become increasingly difficult to discern chatbots from real humans.
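To make the “run by scripts” point concrete, here is a minimal sketch of a pattern-matching chatbot in the ELIZA tradition. The rules and responses below are invented for illustration – this is not Goostman’s actual code, and a real entry like Eugene layers a persona and a “dialog controller” on top – but the underlying mechanism is the same: keyword matching and canned deflections, not cognition.

```python
import random
import re

# Hypothetical keyword-to-response rules, invented for illustration.
RULES = [
    (r"\bhow old\b", ["I'm 13. Why do you ask?"]),
    (r"\byour father\b", ["My father is a gynecologist. Funny job, eh?"]),
    (r"\b(hamburger|candy)\b", ["I love hamburgers and candy!"]),
]

# Fallbacks that keep the "13-year-old non-native speaker" persona plausible
# whenever no rule matches -- the trick the critics point to.
DEFLECTIONS = [
    "I don't know about that. Ask me something else!",
    "Hmm, my English is not so good. What do you mean?",
]

def reply(message: str) -> str:
    """Return the first scripted response whose pattern matches, else deflect."""
    for pattern, responses in RULES:
        if re.search(pattern, message.lower()):
            return random.choice(responses)
    return random.choice(DEFLECTIONS)

print(reply("How old are you?"))
print(reply("Is a shoebox bigger than Mt Everest?"))  # no rule fires: deflection
```

Note that a question like Aaronson’s shoebox-versus-Everest test falls straight through to a deflection – the script has no model of the world to consult, only patterns it was given.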
“Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime [and the] Turing Test is a vital tool for combatting that threat,” said competition organizer Kevin Warwick on the subject of the test’s implications for modern society. “It is important to understand more fully how online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true…when in fact it is not.”
Update from a reader:
This chatbot absolutely did NOT pass the Turing test – not even close. Nor is it a breakthrough in any technical or conceptual sense. “Passing the Turing test” does not mean fooling more than 30% of judges within 5 minutes – that’s just what Turing thought might be possible by 2000. Passing the Turing test means fooling a capable judge after an extended, thorough interrogation.
As hilariously demonstrated by MIT computer scientist Scott Aaronson, this chatbot cannot even tell you whether a shoebox is bigger than Mt Everest, or how many legs a camel has.
Another passes along this article, which “pretty much blows all the claims out of the water – and makes clear the whole thing was a PR stunt by a ‘scientist’ who specializes in PR stunts.”