I stumbled on the transcript of the News Hour segment that occurred just after Kasparov conceded defeat to Deep Blue. They had Dennett and Dreyfus on, and they go at it with their standard arguments. It is really the culmination of what I will officially call the Old School Debate on AI, or OSDAI. It is really quite entertaining, and Dennett really just nails Dreyfus.
MARGARET WARNER: Hubert Dreyfus, what do you think is the significance of this? There’d been a lot of commentary about it. “Newsweek” Magazine called it the “brain’s last stand.” What do you see as the significance of this outcome?
HUBERT DREYFUS, University of California, Berkeley: Well, I think that’s a lot of hype, that it’s the brain’s last stand. It’s a significant achievement all right for the use of computers to rapidly calculate in a domain–and this is the important thing–completely separate from everyday human experience. It has no significance at all, as far as the question: will computers become intelligent like us in the world that we’re in? The reason the computer could win at chess–and everybody knew that eventually computers would win at chess–is because chess is a completely isolated domain. It doesn’t connect up with the rest of human life, therefore, like arithmetic, it’s completely formalizable, and you could, in principle, exhaust all the possibilities. And in that case, a fast enough computer can run through enough of these calculable possibilities to see a winning strategy or to see a move toward a winning strategy. But the way our everyday life is, we don’t have a formal world, and we can’t exhaust the possibilities and run through them. So what this shows is in a world in which calculation is possible, brute force meaningless calculation, the computer will always beat people, but when–in a world in which relevance and intelligence play a crucial role and meaning in concrete situations, the computer has always behaved miserably, and there’s no reason to think that that will change with this victory.
MARGARET WARNER: Daniel Dennett, what do you see as the significance? And respond, if you would, to Mr. Dreyfus’s critique.
DANIEL DENNETT, Tufts University: Certainly. It seems to me that right now is a time for the skeptics to start moving the goal posts. And I think Bert Dreyfus is doing just that. A hundred and fifty years ago Edgar Allan Poe was sure in his bones that no machine could ever play chess, and only 30 years ago so was Hubert Dreyfus, and he said so in the earlier edition of his book. Then he’s changed his mind, and, as he says, it’s–this is really no surprise. People in the computer world have known for a couple of decades that this–this day was going to happen. Now it’s happened. I think that the idea that Professor Dreyfus has that there’s something special about the informal world is an interesting idea, but we just have to wait and see. The idea that there’s something special about human intuition that is not capturable in the computer program is a sort of illusion, I think, when people talk about intuition. It’s just because they don’t know how something’s done. If we didn’t know how Deep Blue did what it did, we’d be very impressed with its intuitive powers, and we don’t know how people live in the informal world very well. And as we learn more about it, we’ll probably be able to reproduce that in a computer as well.
MARGARET WARNER: Mr. Dreyfus, do you think he’s right that perhaps we don’t–still just don’t completely understand what it is that humans do when they think, as we think of thinking?
HUBERT DREYFUS: I think that we don’t fully understand it in the sense that Dan Dennett and people in the AI community mean, if I fully understand.
MARGARET WARNER: By AI you mean artificial intelligence.
HUBERT DREYFUS: Right. That is, we don’t–we are not able to analyze it in terms of context-free features and rules for manipulating these features. But I don’t think that’s just a limitation of our current knowledge. That’s where I differ with Dan. There is something about the everyday world which is tied up with the kind of being we are. We’ve got bodies, and we move around in this world, and the way that world is organized is in terms of our implicit understanding of things like we move forward more easily than backward, and we have to move toward a goal, and we have to overcome obstacles. Those aren’t facts that we understand. We understand that just by the way we are, like we understand that insults make us angry. You can state those as facts. But I think there’s a whole underlying domain of what we are as emotional embodied beings which you can’t completely articulate as facts and which underlies our ability to make sense of facts and our ability to find any facts relevant at all. Can I say one word about this–
MARGARET WARNER: Please.
HUBERT DREYFUS: –this story. I never said that computers couldn’t play chess. I’ve got a quote here. I said, “In ‘65, still no computer can play even amateur chess.” That was a report on what was going on in 1965. I’ve had to put up for 35 years with this story that I said computers could never play chess. In fact, I said from the beginning it’s a formal game, and of course, computers could play, in principle, could play, world champion chess.
MARGARET WARNER: All right. Let me bring Mr. Friedel back in here. Mr. Friedel, did Gary Kasparov think the computer was thinking?
FREDERIC FRIEDEL: Not thinking but that it was showing intelligent behavior. When Gary Kasparov plays against the computer, he has the feeling that it is forming plans; it understands strategy; it’s trying to trick him; it’s blocking his ideas, and then to tell him, now, this has nothing to do with intelligence, it’s just number crunching, seems very semantic to him. He says the performance is what counts. I see it behaves like something that’s intelligent. If you put–if you put a curtain up, he plays the game and then you open the curtain, and it’s a human being. He says, ah, that was intelligent, and if it’s a box, he says, no, that was just number crunching. It’s the performance he’s interested in.
MARGARET WARNER: Daniel Dennett, I know you’re not a chess expert, but I mean, do you feel that in this situation the computer was thinking in the way that Mr. Friedel said Gary Kasparov thought it was, I mean, that it was somehow independently making judgments? I’m probably using the wrong terminology here.
DANIEL DENNETT: No. I think that’s fine. I think that Kasparov has put his finger on it too. It’s the performance that counts. And Kasparov is not kidding himself when he sees–when he confronts Deep Blue and feels that Deep Blue is, indeed, parrying his threats and recognizing what they are and trying to trick him, this is an entirely appropriate way to deal with that. And if Professor Dreyfus–
MARGARET WARNER: But do you think it was capable of trying to trick Kasparov?
DANIEL DENNETT: Certainly.
MARGARET WARNER: And Mr. Dreyfus, your view on that.
HUBERT DREYFUS: No. I think it was brute force, but the important thing is I’m willing to say, okay, it’s the performance that counts. But it’s performance in a completely circumscribed, formal domain; mere meaningless calculation can produce performance full of trickery there, but not performance in the everyday world.
MARGARET WARNER: Daniel Dennett, briefly in the time we have left, where do you think we are in the continuum of developing thinking computers–what percent of the way are we? Fifty percent?
DANIEL DENNETT: No. I don’t think that’s the right way to look at it. In fact, Deep Blue and chess programming in general is a sort of offshoot of the most interesting work in artificial intelligence, and largely for the reasons that Bert Dreyfus says. I think the most interesting work is the work that, for instance, Rodney Brooks and his colleagues and I are doing at MIT with the humanoid robot Cog, and as Dreyfus says–you’ve got to be embodied to live in a world, to develop real intelligence, and Cog does have a body. That’s why Cog is a robot. Now, if Bert will tell us what Cog can never do and promise in advance that he won’t move the goal posts and he won’t say, well, this wasn’t done in the right style, so it doesn’t count, if he’ll just give us a few tasks that are now and forever beyond the capacity of Cog, then we’ll have a new test.
MARGARET WARNER: All right. We have just a few seconds. Mr. Dreyfus, give us two tasks it’ll never be capable of, very quickly.
HUBERT DREYFUS: Okay. If Cog is programmed as a symbolic rule-using robot and not as a brain-imitating robot, it won’t be able to understand natural language. There’s no reason why a computer that’s simulating the way the neurons in the brain work won’t be intelligent. I’m saying that what’s called symbolic manipulation won’t be intelligent.
MARGARET WARNER: All right. Thanks. We have to leave it there.
I’ll comment on this (and probably edit it down) after lunch.
I’ll also jot down the following links for future reference:
http://www.slate.com/id/3650/entry/23905/
http://www.slate.com/id/3650/entry/23906/
These are the subsequent email exchange between Dennett and Dreyfus following the News Hour segment.