I don’t know if this is bad form, but I’m archiving another comment here, this time from +Jonathan Langdale’s post linked below.
https://plus.google.com/u/0/109667384864782087641/posts/5fDf3r3AsHe
I need to write up a longer post on Turing, but there are two distinct parts to the test:
1) The machine uses language like a natural language user (it can carry on a conversation).
2) We take that language use to be an indicator of intelligence.
The whole history of attempting to build machines that “pass” the Turing Test has been an engineering project aimed at criterion 1. It’s certainly not a trivial problem, but I think it is taken to be much more complicated than it needs to be. For instance, when I am talking to someone who doesn’t know English well, the language might be grammatically messy, even indecipherable, but I nevertheless extend a lot of charity and presume my interlocutor’s general intelligence anyway.
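To see how shallow that engineering can be while still sustaining a conversation, here is a minimal ELIZA-style sketch in Python. The patterns and replies are invented for illustration, not taken from any real system:

```python
import re

# A hypothetical, minimal ELIZA-style responder: shallow pattern matching
# and substitution of the kind this engineering tradition began with.
# The rules below are invented for illustration.
RULES = [
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "How long have you felt {0}?"),
    (r"\bbecause (.+)", "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    """Reply by substituting into the first matching rule's template."""
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback keeps the conversation going

print(respond("I am worried about machines"))
# -> Why do you say you are worried about machines?
```

Substitution tricks like this carry no understanding at all, and yet people famously extended ELIZA exactly the conversational charity I describe above.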
So on this criterion, I have argued that some machines are already language users and have been for some time. We aren’t on the “brink” of passing the test; we’ve shot right by it, and it is now commonplace to talk in semi-conversational language to our devices and to expect those devices to understand (at least to some extent) the meaning and intention of our words. Google in particular is not just a language user; its use of language is highly influential in the community of language users, and denying that Google uses language therefore threatens to misunderstand what language use is.
I’m currently having an extended philosophical discussion along these lines in this thread:
https://plus.google.com/u/0/117828903900236363024/posts/RT5hG9a4dNd
But even granted that you build a conversational machine, it is still an open question whether we take that machine to be intelligent. Turing recognized very clearly that, regardless of the machine’s ability, human beings have a preternatural bias against machines, a deep prejudice that will lead them to deny any and all status to machines even when their performances clearly deserve praise and admiration. Turing’s concern about such biases is perfectly explicit in his writings, and appears in the syllogism that closes one of his later letters, quoted at the beginning of this video:
So if the claim of this article is that we are on the brink of somehow overcoming our technophobia, that seems wildly optimistic, and nothing whatsoever in the article gives reasons for thinking it to be true. Even if we have a machine that talks, our biases will prevent us from recognizing it as intelligent. As I said, we are already surrounded by such machines, and yet we incorrigibly continue to treat intelligent machines as a distant prospect.
Matt Uebel originally shared this post:
Artificial Intelligence Could Be on Brink of Passing Turing Test
One hundred years after Alan Turing was born, his eponymous test remains an elusive benchmark for artificial intelligence. Now, for the first time in decades, it’s possible to imagine a machine making the grade.