February 12, 2008
steal the souls of the free

In The Know: Are We Giving The Robots That Run Our Society Too Much Power? via SuicideBots
February 12, 2008
he’s going to cause the system to fall

Asimo can now operate in an environment with people as well as other Asimos. Robots working together will wirelessly share data such as battery levels and the closest unit to a given task. Each works autonomously based on the networked information. Another new AI function allows Asimo to estimate the path of people walking toward it based on their speed and direction and to avoid them by stepping back if necessary. And when Asimo’s battery level falls below a certain level, it will return to its recharging station and power up. via LTM
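The coordination scheme described there is simple enough to sketch. Below is a minimal, hypothetical Python sketch of the idea: each unit shares its battery level and position, the closest sufficiently charged robot takes a task, and anything below a threshold heads home to recharge. Every name, number, and threshold here is invented for illustration; this is obviously not Honda’s actual Asimo software.

from dataclasses import dataclass

# Assumed recharge threshold (fraction of full charge); the article only
# says "below a certain level".
LOW_BATTERY = 0.2

@dataclass
class Robot:
    name: str
    position: tuple   # (x, y) in metres
    battery: float    # 0.0 .. 1.0
    charger: tuple    # home recharging station

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def assign(task_pos, robots):
    """Pick the nearest unit with enough charge, mirroring the
    'closest unit to a given task' rule from the shared wireless data."""
    available = [r for r in robots if r.battery > LOW_BATTERY]
    return min(available, key=lambda r: distance(r.position, task_pos),
               default=None)

def step(robot, task_pos=None):
    """Each unit decides autonomously from the networked information."""
    if robot.battery <= LOW_BATTERY:
        return f"{robot.name}: returning to charger at {robot.charger}"
    if task_pos is not None:
        return f"{robot.name}: heading to task at {task_pos}"
    return f"{robot.name}: idle"

robots = [
    Robot("asimo-1", (0.0, 0.0), 0.9, (0.0, 0.0)),
    Robot("asimo-2", (5.0, 1.0), 0.15, (6.0, 0.0)),
]
task = (4.0, 2.0)
chosen = assign(task, robots)
for r in robots:
    print(step(r, task if r is chosen else None))

Running this, asimo-2 is below the assumed threshold and returns to its charger, so asimo-1 takes the task even though it is farther away; that is the whole trick of sharing battery levels along with positions.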
February 1, 2008
From NewScientist: Artificial letters added to life’s alphabet

The molecular pair that worked surprised Romesberg. “We got it and said, ‘Wow!’ It would have been very difficult to have designed that pair rationally.”
January 27, 2008
Hopefully this post will encourage Lally to start posting regularly on her blog again, which is now on the blogroll. This is a major step in our relationship. Lally Andreevna. To that end:
January 16, 2008
From NYT: Monkey’s Thoughts Propel Robot, a Step That May Help Humans “It’s walking!” Dr. Nicolelis said. “That’s one small step for a robot and one giant leap for a primate.” This is the same guy who got a monkey to control a robot arm with its thoughts alone back in ’03.
December 12, 2007
From What is it like to be a Thermostat? by David Chalmers:

What Lloyd’s approach brings out is that when we try to isolate the kind of processing that is required for conscious experience, the requirements are remarkably hard to pin down, and a careful analysis does not throw up processing criteria that are more than minimal. What are some reasonable-seeming functional criteria for conscious experience? One traditional criterion is reportability, but this is far too strong to be an across-the-board requirement. It seems reasonable to suppose that dogs and cats have conscious experience, even in the absence of an ability to report.

If we are seriously discussing panpsychism, why should we think that ‘reportability’ must be a strong requirement? To me, reportability seems very weak. My cat Gus lets me know he wants to go outside by knocking things off my desk. Gus is letting me know about his current internal state. If it is reasonable to suppose that Gus is having conscious experiences, then ‘wanting to go outside’ is a very likely candidate for an internal state that is associated with a phenomenological experience. So Gus exhibits exactly the sort of behavior we are looking for in an ability to report. If conscious states, as Chalmers assumes, are functionally independent of linguistic behavior, then there is no reason to assume that reportability as a criterion of consciousness rests on an ability to use language. Gus reports his internal states all the time, in a variety of ways, most of which annoy the shit out of me, and none of which are linguistic, but all of which can very easily be taken as evidence of an internal conscious state. Only when reportability is a weak requirement does the possibility of panpsychism become a live option, because it’s very easy to exhibit behavior […]
December 12, 2007
From Norms, Networks, and Trails by Adrian Cussins:

If the ‘rules’ don’t pre-empt what is properly possible in the ‘game’, then the ‘rules’ become part of what is negotiated by the ‘players’. If the ‘rules’ become part of what is negotiated by the ‘players’, then we end up with the comical but also absurd activity of “Calvinball” from the Calvin and Hobbes cartoon strip.

Counter-examples:

1) The US Constitution contains provisions for its own revision and amendment.
2) Wikipedia encourages active discussion of its policies and guidelines.

Perhaps these processes are comical and absurd, but I don’t think they undermine the normative structure of the game as such. Am I wrong?