Jon links to Forbes’ special edition on AI. I’ll go through most of these, commenting when appropriate. For instance:
While the switch to “stupid” statistically based computing has given us tools like Google, it came with a steep price, namely, abandoning the cherished notion that computers will one day be like people, the way early AI pioneers wanted them to be. No one querying Google would ever for a minute confuse those interactions with a Q&A session with another person. No matter how much Google engineers fine-tune their algorithms, that will never change. Google is inordinately useful, but it is not remotely intelligent, as we human beings understand that term. And despite decades of trying, no one in AI research has even the remotest idea of how to bridge that gap.
…
Since AI essayists like to make predictions, here’s mine. No one alive as these words are being written will live to see a computer pass the Turing Test. What’s more, the idea of a humanlike computer will increasingly come to be seen as a kitschy, mid-20th-century idea, like hovercraft and dinner pills on The Jetsons.
This is basically what I’ve been saying for a decade, with a few caveats. First, I don’t think we can make much sense of the ‘unbridgeable gap’ lamented in the first paragraph, as if intelligence were a single-dimensional spectrum with a large black void somewhere near the top. That’s a silly, antiquated little picture, and revising it makes Gomes’ thesis that much stronger. Intelligence is task-specific: computers, humans, animals, and everything else are good at solving certain kinds of problems and bad at solving others. Since success at some problems does not imply success at other problems (even closely related ones), intelligence can’t be understood in single-dimensional terms. Deep Blue can play chess but not checkers; my dog can fetch the paper but not milk from the corner store.
Importantly, the strengths of computational techniques do not translate straightforwardly into solutions to the tasks for which human intelligence evolved. It takes a lot of work to get from the basic logical structures of a computing machine to the complex, robust behaviors humans are capable of. Early computer scientists were famously optimistic (to the point of naïveté) about designing such an implementation, and it is easy to laugh at them from 60 years in the future. But we should remember that, at the time, we had a very dim understanding of how the brain/mind worked. The competing psychological theories included behaviorism and Freud, neither of which was particularly good at explaining the subtleties of the mind, and neither of which would have been that difficult to program into a machine. In fact, it was precisely because of early stumbling blocks in developing artificial intelligence that psychologists went back to the drawing board to develop a cognitive theory of the mind that treats mental processes as essentially computational manipulations of mental representations. That research paradigm has been incredibly successful, both at explaining human minds and at helping to develop ever more powerful artificially intelligent systems. As far as the research is concerned, there is no unbridgeable gap stymieing progress; instead, there has been steady progress in machine intelligence, with significant advances in every area of “paradigmatically human” intelligence.
But advances in human-like intelligence are only a sub-sub-area of the much more radical advances in cognitive theory and technology, a field that has learned to solve problems important to humans without replicating anything like the human mind. Google solves problems that humans simply can’t solve, and that we wouldn’t expect any human to attempt. I appreciate Gomes’ recognition that this doesn’t represent a failure of artificial intelligence so much as it reveals that human-like intelligence isn’t really a goal that needs to be achieved.
This shouldn’t be surprising. Technology advances according to its own unique rhythms and advantages, and the evolutionary pressures on technology look nothing at all like the evolutionary pressures on the early ancestors of humans. It confuses me no end that we still believe all intelligent systems ought to converge on the same point, conveniently the point that humans have already ‘achieved’. We don’t need our technology to replicate our own behavior; we want our technology to be useful, and to work with us on problems we find important. What matters most is that those problems get solved, by any means available, and that ultimately means we need a variety of techniques, some of which look completely unlike humans, to get in on the action.
That said, I’d hesitate to make Gomes’ prediction; such predictions are a constant source of hilarity and embarrassment. More likely, someone will produce a machine that convincingly passes the Turing Test but has very little practical use. The mainstream press will report such a machine as a novelty, amid a good deal of objection and controversy and quite a bit of nerdy fan-boy fascination (indeed, such machines are highlighted in the press quite regularly). After the novelty wears off, no one will give it much thought, and many will still passionately object to the possibility of machine intelligence despite the evidence. My prediction is that a machine will pass the Turing Test quite convincingly and quite soon[1], and for the most part the public will remain unimpressed.
Meanwhile, the technological breakthroughs that produced such a machine will slip into all sorts of peripheral technologies we are already familiar with (in your car, your cell phone, your computer, your smart toilet), and they will radically improve and dramatically change our way of life. And we will become even more desensitized to the increasingly intelligent machines we surround ourselves with every day.
[1] In another of the Forbes articles on the Turing Test, Warwick claims that Turing originally “dared to suggest that within 100 years a human simply wouldn’t be able to tell the difference between another human and a machine.”
Well, that’s a little generous. From Turing’s 1950 paper: “Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs.”
Turing clearly meant to suggest that we would have thinking machines by 2000, not merely within a hundred years. Nevertheless, I believe that Turing’s prediction came true even before he predicted it would. Since at least the Kasparov–Deep Blue match in 1997, general educated opinion (and not just niche philosophers/cognitive scientists) has spoken of thinking machines without being contradicted, and I dare you to find any topic that won’t generate SOME response from educated objectors.
Even though his prediction came true, the onus is on US to recognize this fact. And, you know, good luck with that.