August 11, 2008
Ran into this quote from Whitehead:

"It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilisation advances by extending the number of operations we can perform without thinking about them."

From Alfred North Whitehead’s An Introduction to Mathematics, p. 61. There is an echo of this sentiment in Turing’s approach to artificial intelligence.

In any case, I found this quote on the Nudge blog, based on a book by Thaler and Sunstein. A nudge is any environmental cue that disposes a person to a particular response. They describe it like this:

"By a nudge we mean anything that influences our choices. A school cafeteria might try to nudge kids toward good diets by putting the healthiest foods at front. We think that it’s time for institutions, including government, to become much more user-friendly by enlisting the science of choice to make life easier for people and by gently nudging them in directions that will make their lives better."

They call their position ‘libertarian paternalism’ (ugh), and it is all about limiting control in particular ways without compromising freedom of choice. More specifically, it is about how to design environments that foster intelligent decision making. This might be one of those dangerous ideas, but when have you ever had a reason to distrust a Chicago economist? Some examples and a lecture below.

Social cues are particularly salient nudges, but our machines are getting better at providing motivational feedback. I really like traffic examples as a case of almost seamless human-machine-infrastructure integration, which works really well in the ‘nudge’ vocabulary. For instance: If white lines are removed from the centre of a road, […]
June 13, 2008
Roadrunner supercomputer puts research at a new scale

On Saturday, Los Alamos researchers used PetaVision to model more than a billion visual neurons, surpassing the scale of 1 quadrillion computations a second (a petaflop/s). On Monday scientists used PetaVision to reach a new computing performance record of 1.144 petaflop/s. The achievement throws open the door to eventually achieving human-like cognitive performance in electronic computers. PetaVision only requires single-precision arithmetic, whereas the LINPACK code used to officially verify Roadrunner’s speed uses double-precision arithmetic.

“Roadrunner ushers in a new era for science at Los Alamos National Laboratory,” said Terry Wallace, associate director for Science, Technology and Engineering at Los Alamos. “Just a week after formal introduction of the machine to the world, we are already doing computational tasks that existed only in the realm of imagination a year ago.”

PetaVision models the human visual system—mimicking more than 1 billion visual neurons and trillions of synapses. Both my phil mind and phil tech class are ridiculously out of date. (Thx Steve via /.)
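For a rough sense of where that petaflop figure comes from, here is a back-of-envelope sketch. Only the neuron count is from the Los Alamos release; the synapse count, update rate, and operations per synapse are my own assumed orders of magnitude:

```python
# Back-of-envelope estimate of why a neuron-scale visual model lands near a
# petaflop. Only the neuron count comes from the release; the rest are
# assumed orders of magnitude for illustration.

neurons = 1e9               # "more than a billion visual neurons"
synapses_per_neuron = 1e4   # assumption; the release only says "trillions of synapses"
updates_per_second = 100    # assumed model update rate (Hz)
ops_per_synapse = 1         # assume roughly one multiply-accumulate per synapse per update

ops_per_second = neurons * synapses_per_neuron * updates_per_second * ops_per_synapse
print(f"{ops_per_second:.1e} ops/s  (~{ops_per_second / 1e15:.1f} petaflop/s)")
```

With these assumptions the model needs on the order of 10^15 operations per second, which is exactly the regime Roadrunner was built for.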
June 13, 2008
Check this out (Thanks Steve!) Insight into how we tell whether something’s alive When viewers see the unscrambled pictures, they readily discern whether the point-light display represents a living thing or a random moving pattern. In fact, the task is so easy that it’s not actually very useful for researchers trying to understand the visual system. What Chang and Troje want to know is whether viewers use a “local” system or a “global” system to identify biological motion. In other words, are viewers looking at an isolated part of the display like the human’s ankles, or are they considering the concerted motion of all the points together? … Other research has found that the motion of the ankle appears to be a key in identifying biological motion. This may be because nearly all walking vertebrates swing their legs forward in a similar manner: they don’t actually use their muscles, but instead simply rely on gravity, thus conserving energy. Chang and Troje speculate that perhaps it is this distinctive arc that viewers focus in on when they identify biological motion.
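As a rough illustration of what "scrambled" means in point-light studies of this kind (my own toy reconstruction, not Chang and Troje’s actual stimuli): each dot keeps its own local trajectory, but its position is displaced at random, so local cues like the ankle’s arc survive while the global, whole-body configuration is destroyed.

```python
# Toy scrambling of a point-light display (illustrative only): per-dot motion
# is preserved, the global configuration is destroyed.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical display: 12 dots tracked over 100 frames, array of (x, y) positions.
t = np.linspace(0, 2 * np.pi, 100)
base_xy = rng.uniform(0, 2, size=(12, 2))            # each dot's home position
phase = rng.uniform(0, 2 * np.pi, size=12)
motion = 0.1 * np.stack([np.sin(t[None, :] + phase[:, None]),
                         np.cos(t[None, :] + phase[:, None])], axis=-1)
walker = base_xy[:, None, :] + motion                # shape (12, 100, 2)

def scramble(display, rng):
    """Move each dot to a random new location while keeping its own trajectory."""
    local_motion = display - display[:, :1, :]       # motion relative to each dot's start
    new_starts = rng.uniform(0, 2, size=(display.shape[0], 1, 2))
    return new_starts + local_motion

scrambled = scramble(walker, rng)
print(walker.shape, scrambled.shape)                 # (12, 100, 2) (12, 100, 2)
```

If viewers can still pick out the living thing in the scrambled version, they are presumably relying on local cues; if not, the global pattern matters.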
June 13, 2008
Robot Swarms Invade Kentucky

One thing that the robots don’t know yet is how to define the boundaries of the network, so they often spread out from the center and then get disconnected. The robots can communicate with one another (they know their neighbors, but don’t know about everybody else) but not with everybody at once. So if they need to find a robot that is not in their neighborhood, they must relay the info via their neighbors. To find the answer, they go around and query one another: the robot that is searching just asks a robot next to it. The network reconfigures in real time, and the robot moves around the network until it finds the robot in question. They can also form protective areas/fences. And, of course, they can also exit in an orderly fashion, so McLurkin has his robots leave the stage by ID. Two special robots know they are special and the rest know that they are ordinary. So they query all neighbors about their ID and then place themselves between the two neighbors—one with a greater ID than theirs and one with a lower ID—until the whole "squad" is arranged.
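A minimal sketch of the relay-style lookup described above (a toy model of my own, not McLurkin’s actual code): each robot knows only its immediate neighbours, so a query for a distant robot has to hop neighbour to neighbour until the target ID turns up.

```python
# Toy model of relaying a query through a swarm where each robot only knows
# its immediate neighbours. The neighbour graph here is hypothetical; in the
# real swarm it changes as robots move, which is why the network has to
# reconfigure in real time.

from collections import deque

# Hypothetical neighbour graph: robot ID -> IDs it can talk to directly.
neighbours = {
    1: [2, 3],
    2: [1, 4],
    3: [1, 4],
    4: [2, 3, 5],
    5: [4],
}

def relay_query(start, target):
    """Return the chain of robots a query passes through from start to target,
    or None if the target is unreachable (e.g. the swarm has split)."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        robot = path[-1]
        if robot == target:
            return path
        for hop in neighbours[robot]:        # ask each neighbour in turn
            if hop not in visited:
                visited.add(hop)
                frontier.append(path + [hop])
    return None

print(relay_query(1, 5))   # [1, 2, 4, 5]
```

The "disconnected" failure mode in the excerpt corresponds to `relay_query` returning None: if a robot wanders out of everyone’s neighbourhood, no chain of relays can reach it.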
June 13, 2008
The Soul in the Machine

When I was ushered into the room, the professor motioned me to a chair, his hands playing nervously, his shoulders rising with each breath. “Ask me anything you like,” he said, fixing me with an intent look, before staring at the floor despondently when I began to chuckle.

“How many actuators do you have?” I said.

“I have 50 pneumatic actuators in my upper body, including 17 in my head, five of which I use to move my lips for speech, and four actuators to make my shoulder move in a natural fashion.”

“Do you believe in God?”

“Um, er…,” Ishiguro put his finger to his face in embarrassment. “Good question. Maybe you should ask the professor that one?”

The “professor” was being operated in a nearby room by a young research assistant. I met the real Ishiguro the next day. He argued that Japan’s easy acceptance of robots had religious roots. In both Buddhism and Shintoism, the soul is everywhere and “just as we don’t distinguish between humans and rocks, so we don’t distinguish between humans and robots.” By contrast, Honda had sought the Vatican’s advice ten years ago before introducing Asimo’s forerunner to Europe. …

In Japan people “feel love for robots”, as Doc put it, and want to care for them. “We Japanese want to live alongside robots.” They give robots human qualities–kawaii, “cute”, is perhaps Japan’s most squealed word. Robots are not threatening or alienating, they create feelings of security, comfort and companionship. Their cuteness tips over into the cloying. Don’t misunderstand me. I was not taken with Western notions of robots as a threat–of Daleks and Terminators. But I could take them or leave them.
June 13, 2008
I meant to post something on this a while ago, and never did, but let me save it for posterity. More on Blue Brain:

“The column has been built and it runs,” Markram says. “Now we just have to scale it up.” Blue Brain scientists are confident that, at some point in the next few years, they will be able to start simulating an entire brain. “If we build this brain right, it will do everything,” Markram says. I ask him if that includes self-consciousness: Is it really possible to put a ghost into a machine? “When I say everything, I mean everything,” he says, and a mischievous smile spreads across his face. He has a talent for speaking in eloquent soundbites, so that the most grandiose conjectures (“In ten years, this computer will be talking to us.”) are tossed off with a casual air. But then I notice, tucked in the corner of the room, is a small robot. The machine is about the size of a microwave, and consists of a beige plastic tray filled with a variety of test tubes and a delicate metal claw holding a pipette. The claw is constantly moving back and forth across the tray, taking tiny sips from its buffet of different liquids. I ask Schürmann what the robot is doing. “Right now,” he says, “it’s recording from a cell. It does this 24 hours a day, seven days a week. It doesn’t sleep and it never gets frustrated. It’s the perfect postdoc.” The science behind the robotic experiments is straightforward. The Blue Brain team genetically engineers Chinese hamster ovary cells to express a single type of ion channel—the brain contains more than 30 different types of channels—then they subject the cells to a variety of physiological conditions. That’s when the robot […]
June 12, 2008
Speaking of long articles worth reading, Vanity Fair has assembled a good oral history of the Internet to celebrate its 50th anniversary.

How the web was won

Leonard Kleinrock: September 2, 1969, is when the first I.M.P. was connected to the first host, and that happened at U.C.L.A. We didn’t even have a camera or a tape recorder or a written record of that event. I mean, who noticed? Nobody did. Nineteen sixty-nine was quite a year. Man on the moon. Woodstock. Mets won the World Series. Charles Manson starts killing these people here in Los Angeles. And the Internet was born. Well, the first four everybody knew about. Nobody knew about the Internet.
June 12, 2008
Excellent article on the Internet up on The Atlantic (thanks, Lally!) that ties the internet into the long history of automated “choreography” characteristic of the industrialized world.

Is Google Making Us Stupid?

Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek, Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.” Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it? Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps […]
June 7, 2008
now that the drama is over Meanwhile, Obama’s Chicago headquarters made technology its running mate from the start. That wasn’t just for fund raising: in state after state, the campaign turned over its voter lists — normally a closely guarded crown jewel — to volunteers, who used their own laptops and the unlimited night and weekend minutes of their cell-phone plans to contact every name and populate a political organization from the ground up. “The tools were there, and they built it,” says Joe Trippi, who ran Howard Dean’s 2004 campaign. “In a lot of ways, the Dean campaign was like the Wright brothers. Four years later, we’re watching the Apollo project.” Even Obama admits he did not expect the Internet to be such a good friend. “What I didn’t anticipate was how effectively we could use the Internet to harness that grassroots base, both on the financial side and the organizing side,” Obama says. “That, I think, was probably one of the biggest surprises of the campaign, just how powerfully our message merged with the social networking and the power of the Internet.”
April 14, 2008
From He Wrote 200,000 Books (but Computers Did Some of the Work) (NYT)

While nothing announces that Mr. Parker’s books are computer generated, one reader, David Pascoe, seemed close to figuring it out himself, based on his comments to Amazon in 2004. Reviewing a guide to rosacea, a skin disorder, Mr. Pascoe, who is from Perth, Australia, complained: “The book is more of a template for ‘generic health researching’ than anything specific to rosacea. The information is of such a generic level that a sourcebook on the next medical topic is just a search and replace away.” When told via e-mail that his suspicion was correct, Mr. Pascoe wrote back, “I guess it makes sense now as to why the book was so awful and frustrating.” Mr. Parker was willing to concede much of what Mr. Pascoe argued. “If you are good at the Internet, this book is useless,” he said, adding that Mr. Pascoe simply should not have bought it. But, Mr. Parker said, there are people who aren’t Internet savvy who have found these guides useful. It is the idea of automating difficult or boring work that led Mr. Parker to become involved. Comparing himself to a distant disciple of Henry Ford, he said he was “deconstructing the process of getting books into people’s hands; every single step we could think of, we automated.” “Using a little bit of artificial intelligence, a computer program has been created that mimics the thought process of someone who would be responsible for doing such a study,” Mr. Parker says. “But rather than taking many months to do the study, the computer accomplishes this in about 13 minutes.” Thanks, Jon
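Mr. Pascoe’s “search and replace away” complaint suggests the basic mechanics, which might look something like the sketch below. This is my own guess at a template-fill approach; the template text and function name are invented for illustration and are not Mr. Parker’s actual system.

```python
# A guessed-at sketch of template-driven "book" generation, in the spirit of
# the reviewer's "search and replace" description. Everything here is
# invented for illustration.

TEMPLATE = """\
A Researcher's Guide to {topic}

Chapter 1. Understanding {topic}: look up "{topic}" and its synonyms in
standard medical glossaries.
Chapter 2. Finding studies: search the research databases for "{topic}" and
summarize the most-cited results.
Chapter 3. Further resources: list patient groups and clinics concerned
with {topic}.
"""

def generate_guide(topic: str) -> str:
    # Each step a human researcher would take is reduced to a slot to fill.
    return TEMPLATE.format(topic=topic)

print(generate_guide("rosacea"))
```

Swap in the next medical topic and you have the next book, which is roughly why the result reads as generic research instructions rather than anything specific to rosacea.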