Jon links to Forbes’ special edition on AI. I’ll go through most of these, commenting when appropriate. For instance:

> **Dumb Like Google**
>
> While the switch to “stupid” statistically based computing has given us tools like Google, it came with a steep price, namely, abandoning the cherished notion that computers will one day be like people, the way early AI pioneers wanted them to be. No one querying Google would ever for a minute confuse those interactions with a Q&A session with another person. No matter how much Google engineers fine-tune their algorithms, that will never change. Google is inordinately useful, but it is not remotely intelligent, as we human beings understand that term. And despite decades of trying, no one in AI research has even the remotest idea of how to bridge that gap.
>
> …
>
> Since AI essayists like to make predictions, here’s mine. No one alive as these words are being written will live to see a computer pass the Turing Test. What’s more, the idea of a humanlike computer will increasingly come to be seen as a kitschy, mid-20th-century idea, like hovercraft and dinner pills on The Jetsons.

This is basically what I’ve been saying for a decade, with a few caveats. First, I don’t think we can make much sense of the ‘unbridgeable gap’ lamented in the first paragraph, as if intelligence were a single-dimensional spectrum with a large black void somewhere near the top. That’s a silly, antiquated picture, and revising it makes Gomes’ thesis that much stronger. Intelligence is task-specific: computers, humans, animals, and everything else are good at solving certain kinds of problems and bad at solving other kinds. And since solving some problems does not necessarily imply success at other problems (even when those problems are closely related), intelligence […]