"The short answer is no one really knows what kind of emotions people want in robots," said Maja Mataric, a computer science professor at the University of Southern California. But scientists are trying to figure it out: Dr. Mataric was speaking last week at a conference on human-robot interaction in Salt Lake City.

There are signs that in some cases, at least, a cranky or sad robot might be more effective than a happy or neutral one. At Carnegie Mellon University, Rachel Gockley, a graduate student, found that in certain circumstances people spent more time interacting with a robotic receptionist (a disembodied face on a monitor) when the face looked and sounded unhappy.

And at Stanford, Clifford Nass, a professor of communication, found that in a simulation, drivers in a bad mood had far fewer accidents when they were listening to a subdued voice making comments about the drive. "When you're sad, you do much better working with a sad voice," Dr. Nass said. "You don't feel like hanging around with somebody who says, 'Hi! How are you!'"

That illustrates the longer answer to the question of what humans want in their robots: emotions like those they encounter in other humans. "People respond to robots in precisely the same way they respond to people," Dr. Nass said.