My mailbox today contained a small clipping from the letters-to-the-editor section of The New Yorker. It didn’t come with an issue number, or any indication of which article it was responding to; I’ll let you know if I find those references.
Update: The response is to an article entitled “Your Move: How Computer Chess Programs Are Changing the Game” from Dec. 12, 2005. (Thanks, Maschas)
Total lack of emotional involvement in the game may give chess programs a strategic advantage over human players, but it is also precisely what robs them of anything like genuine intelligence. Can we even say that such programs are “playing” the game when they neither know nor care what it means to win or lose, or even just to do something or be thwarted? Real animal intelligence involves the organism responding affectively to its environment. Computer programs literally could not care less, which is why they are mere simulations of intelligence.
Taylor Carman
Associate Professor of Philosophy
Barnard College, Columbia University
New York City
This gives me hope, because this view is still alive and well among even the distinguished academics in our field. It is of course no surprise that Carman is a Heidegger scholar. But let’s attack his arguments here nice and methodically. I’ll start with the easy one.
1. Machines aren’t really “playing the game” because they don’t know or care what it means to win or lose.
We should hold off on answering the question about ‘playing’ until we know what’s at stake in that question. Carman just assumes that participation requires care and emotional investment (more on that below); I don’t think the case is quite so open and shut. ‘Knowledge’ here is much easier. Of course the machine knows what it means to win: that’s the goal of the program. If it didn’t know what it means to win, it would have no way of evaluating its moves as getting closer to or farther from that goal. And that sort of evaluation is all the machine does; it seems a serious misunderstanding of both the machine’s internal programming and its external behavior to say ‘it doesn’t know what it means to win’.
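To make the point concrete, here is a minimal sketch, in Python, of what evaluating moves against the goal of winning looks like inside a chess program. The board representation, piece values, and helper functions are my own illustrative assumptions, not the internals of any actual engine:

```python
# Toy sketch of goal-relative evaluation in a chess program:
# positions are scored relative to the goal of winning, and
# candidate moves are ranked by how much closer they get to it.

# Conventional material values; real engines evaluate far more.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def evaluate(board, side):
    """Score a position from `side`'s point of view.

    `board` is assumed to map squares to (owner, piece) pairs,
    a deliberately simplified representation for illustration.
    """
    score = 0
    for owner, piece in board.values():
        value = PIECE_VALUES[piece]
        score += value if owner == side else -value
    return score

def best_move(board, side, legal_moves, apply_move):
    """Pick the legal move whose resulting position scores highest.

    `legal_moves` and `apply_move` are assumed helpers; the point
    is only that every candidate is measured against the goal.
    """
    return max(legal_moves(board, side),
               key=lambda move: evaluate(apply_move(board, move), side))
```

The program’s entire behavior is organized around this goal-relative scoring, which is the operational sense in which it ‘knows’ what winning means.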
Perhaps Carman will respond, as most do, that the above is merely a metaphor we use to understand the machine’s behavior, and reflects nothing of the machine itself. The machine doesn’t know how good it feels to win, or how bad it feels to lose; in other words, it lacks emotional involvement. It has no stake in the game. And without that affective dimension, we can’t even understand the machine’s behavior as knowledge, much less as genuine intelligence. In other words, Carman’s argument rests on emotional involvement as central and necessary for intelligence, and derivatively for cognitive states like knowledge. So let’s turn to whether emotional involvement is necessary for intelligence.
2. Emotional involvement is required for anything like genuine intelligence
Carman is alluding to ‘care’ here as an affective necessary condition of intelligence, which we will attack in a moment; but first we need to disabuse ourselves of the idea that ‘genuine’ intelligence marks a sensible distinction. Intelligence, as understood in cognitive science and artificial intelligence, is merely the ability of a system to construct a plan for achieving some goal, or for solving some problem. A system is more or less intelligent insofar as it is more or less capable of realizing that goal given various starting conditions and environmental constraints (processing speed, time, memory, efficiency, and so on). An evolutionary psychologist would add that there is no such thing as ‘general intelligence’; intelligence is always domain-specific: a system is more or less intelligent at some particular task, or at realizing some particular goal, and always in some particular (environmental) context. In any case, anything conforming to these general parameters is considered ‘intelligent’, and there is simply no sense in a distinction between genuine intelligence and ersatz intelligence.
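That cog-sci notion is easy to state operationally. Here is a generic sketch in Python, with all names my own and purely for illustration, of intelligence as goal-directed planning, where an explicit resource constraint (search depth) bounds how capable the system can be at the task:

```python
def plan(state, goal_reached, successors, depth):
    """Depth-limited search: try to construct a plan (a list of
    actions) that realizes the goal within a resource budget.

    `goal_reached(state)` returns a bool and `successors(state)`
    yields (action, next_state) pairs; both are assumed hooks.
    """
    if goal_reached(state):
        return []                # goal achieved; nothing left to do
    if depth == 0:
        return None              # resource budget exhausted
    for action, next_state in successors(state):
        subplan = plan(next_state, goal_reached, successors, depth - 1)
        if subplan is not None:
            return [action] + subplan
    return None                  # no plan found under these constraints

# "More intelligent at this task" then just means: realizes the goal
# from more starting states, or under tighter depth/time/memory budgets.
```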
Carman thinks this isn’t the whole story. Real intelligence is not just the solving of some planning task; it necessarily involves some story about the way those problems are solved. In real intelligence, problems are solved affectively: there is some personal investment or emotional attachment to the details of the plan and its ultimate fruition. Affective involvement is the hallmark of ‘real animal intelligence’, and machines clearly don’t have that. But notice the goalposts have shifted, or at least been clarified. We are no longer talking about intelligence in the cog-sci sense, but specifically about ‘animal’ intelligence. No one, to my knowledge, has tried to build a system that plays chess like an animal; they try to build systems that play good chess.
But why should emotional involvement be necessary for intelligence? Science, ideally, is disinterested inquiry; should the mathematician chugging through the details of the Riemann Hypothesis with care only for the formal structure and validity of his arguments be considered less intelligent than the one who is overwhelmed with passion and zeal? Of course, Carman’s argument goes much deeper than that. Carman’s claim is that there is some particular way, unique to humans (or perhaps animals generally), that embodies the whole range of affective qualities that might shape and augment a particular planning strategy: we care. Any two people might come to some task with different interests and concerns and affective dispositions, and therefore approach some problem with different levels of involvement; this qualitative distinction itself is not enough to deprive either of ‘genuine’ intelligence, since both have some investment in the matter, and both care to some extent.
Emotions here are understood as arising within a planning structure, as augmenting or filtering the agent’s relation to the various levels of its plan of action, as well as its relation to the constraints and to the environment in which the task is carried out. But surely the machine augments information in some way: by encoding the chess board and moves into a language it understands, by embodying that representational system in some architecture, and so on. The machine does, in a certain (but very real) sense, make the game its own, by filtering the game and certain aspects of the context through its hardware. Granted, the machine’s internalization of the game looks radically unlike any filtering relation we are familiar with. But that in itself is not enough to buttress the claim that there is a difference in kind between the human’s and the machine’s involvement with the game of chess. If two people with perhaps radically disjoint motives and affects can be considered to genuinely play the game, and play it intelligently, then we need another argument to show that the machine’s approach to the game is not a mere qualitative distinction but a radical difference in kind: that is, that in saying the machine “plays the game” we are actually making a category mistake. Carman does not (and cannot) support this claim without begging the question.
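To see what ‘encoding the game into a language it understands’ amounts to, here is one standard engine technique, the bitboard, sketched in Python. The snippet is a toy illustration of the idea, not any particular program’s representation:

```python
# A bitboard packs the 64 squares of a chess board into a single
# 64-bit integer, one bit per square. Questions about the position
# then become single-instruction operations on a machine word,
# which is one concrete sense of "filtering the game through
# hardware".

def square_index(file, rank):
    """Map a square like ('e', 2) to a bit position 0..63."""
    return (rank - 1) * 8 + (ord(file) - ord('a'))

# Encode the two starting pawn ranks as bitmasks.
white_pawns = sum(1 << square_index(f, 2) for f in 'abcdefgh')
black_pawns = sum(1 << square_index(f, 7) for f in 'abcdefgh')

# "Is there a white pawn on e2?" is now a single bitwise AND.
print(bool(white_pawns & (1 << square_index('e', 2))))  # True
```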
Carman would object: the machine doesn’t make the game its own at all! This comes out in his distinction between the machine’s ersatz intelligence and ‘real animal intelligence’, as the latter involves affective responses to an environment. Even granting that the machine filters incoming information in certain ways, Carman’s intuition seems to be that the machine is not interacting or engaged with an environment at all. Thus we see the deep bias and chauvinism against machines revealed. Only under the assumption that machines do not interact with the world, but only with the pure realm of mathematics or Platonic forms, does this bias begin to take hold. The machine merely calculates, Carman thinks. Thus it is not an embodied, worldly agent engaged with an environment, and thus does not genuinely filter the world through ‘affective’ hardware, and thus does not embody genuine human intelligence, knowledge, or the capacity for participation in an activity like a game. The machine suffers for lack of a body.
But why would anyone hold the view that machines are not worldly? Well, I have my suspicions; I think this bias against machines traces back at least to the early modern conception of nature as itself a machine, and to our contemporary reaction against Cartesianism. I think this bias seriously misunderstands the (genuine, substantive) role machines play in our lives and in this world. It seems clear to me that the machine is engaged in a real game of chess (and not just its abstract form), and with this engagement come all the attributes of agency: intelligence, knowledge, participation. Of course it is correct that there are substantive structural distinctions between animals and machines, and in arguing that machines are agents (or ‘genuine’ agents) I am not holding that the machine’s understanding of chess mirrors our own. But surely there is overlap (both concern chess), and in any case a mere difference of understanding is itself insufficient to revoke both care and the very possibility of participation. It is obvious that humans and machines approach chess in distinct ways; that is precisely what makes watching them play together so interesting.