“The short answer is no one really knows what kind of emotions people want in robots,” said Maja Mataric, a computer science professor at the University of Southern California. But scientists are trying to figure it out: Dr. Mataric was speaking last week at a conference on human-robot interaction in Salt Lake City.

There are signs that in some cases, at least, a cranky or sad robot might be more effective than a happy or neutral one. At Carnegie Mellon University, Rachel Gockley, a graduate student, found that in certain circumstances people spent more time interacting with a robotic receptionist — a disembodied face on a monitor — when the face looked and sounded unhappy. And at Stanford, Clifford Nass, a professor of communication, found that in a simulation, drivers in a bad mood had far fewer accidents when they were listening to a subdued voice making comments about the drive.

“When you’re sad, you do much better working with a sad voice,” Dr. Nass said. “You don’t feel like hanging around with somebody who says, ‘Hi! How are you!’ ”

That illustrates the longer answer to the question of what humans want in their robots: emotions like those they encounter in other humans. “People respond to robots in precisely the same way they respond to people,” Dr. Nass said.