Andreas Schou writes:

+Daniel Estrada finds this unnecessarily reductive and essentialist, and argues for a quacks-like-a-duck definition: if it does a task which humans do, and effectively orients itself toward a goal, then it’s “intelligence.” After sitting on the question for a while, I think I agree — for some purposes.

If your purpose is to build a philosophical category, “intelligence,” which at some point will entitle nonhuman intelligences to be treated as independent agents and valid objects of moral concern, reductive examination of the precise properties of nonhuman intelligences will yield consistently negative results. Human intelligence is largely illegible and was not, at any point, “built.” A capabilities approach which operates at a higher level of abstraction will flag the properties of a possibly-legitimate moral subject long before a close-to-the-metal approach will. (I do not believe we are near that point, but that’s also beyond the scope of this post.)

But if your purpose is to build artificial intelligences, the reductive details matter in terms of practical ontology, if not necessarily ethics: a capabilities ontology creates a giant, muddy categorical mess which prevents engineers from distinguishing trivial parlor tricks like Eugene Goostman from meaningful accomplishments. The underspecified capabilities approach, without particulars, simply hands the reins over to the part of the human brain which draws faces in the clouds.

Which is a problem. Because we are apparently built to greedily anthropomorphize. Historically, humans have treated states, natural objects, tools, the weather, their own thoughts, and their own unconscious actions as legitimate “persons.” (Seldom all at the same time, but still.) If we assigned the trait “intelligence” to every category which we had historically anthropomorphized, that would leave us treating the United States, Icelandic elf-stones, Watson, Zeus, our internal models of other people’s actions, and Ouija boards as being “intelligent.” Which […]