This line of thinking traces back to Dreyfus’ What Computers Can’t Do, and specifically to his reading of Heidegger’s care structure in Being and Time. Dreyfus’ views gained popularity during the first big AI wave and successfully put a lid on a lot of the hype around AI. I would say Dreyfus’ critiques are partly responsible for the terminological shift toward “machine learning” over AI, and also for the shift in focus toward robotics and embodied cognition throughout the 90s.
https://en.wikipedia.org/wiki/Hubert_Dreyfus%27s_views_on_artificial_intelligence
But Dreyfus’ critiques don’t really have purchase anymore, and I’m surprised to see Sterling dusting them off. It’s hard to say that a driverless car doesn’t “care” about the conditions on the road; literally all of its sensors and equipment are tuned to careful and persistent monitoring of road conditions. It remains in a ready state of action, equipped to interpret and respond to the world as a fully engaged participant. It is hard to read such a machine as a lifeless formal symbol manipulator. Haraway said it best: our machines are disturbingly lively, and we ourselves frighteningly inert.
I think +Bruce Sterling underappreciates just how well we do understand the persistent complexities of biological organization. Driverless cars might be clunky and unreliable, but they are also orders of magnitude less complex than even a simple organism. The difference is more quantitative than qualitative, and is by no means mysterious or poorly understood. In a biological system, functional integration happens simultaneously at multiple scales; in a vehicle it might happen at two or three at most. This low organizational resolution makes it easier to see the structural inefficiencies and design choices in a technological system.
But this isn’t a rule for all technology. Software in particular isn’t subject to such design constraints. This is why we see neural nets making huge advances not just in vision and object recognition, but also in interpolation, natural language processing, and a host of other real AI puzzles that have gone unsolved for decades. We’re living in a second golden age of AI, releasing charming bots of all shapes and sizes into the circus of social media. And in this zoo they are already passing for human (http://goo.gl/fSr1Qy) and having a measurable influence on social trends and events.
Twitter bots care about the same things we do. They flock to Bieber and Gaga, they have partisan allegiances in all the hot-button political debates, and they curate audience engagement with all the gusto of a teen taking a selfie. When these bots pass for human, it’s because their memetic flocking is indistinguishable from our own.
If these bots don’t care, none of us do.
// Originally posted here in response to this interview with Bruce Sterling. Via Robert Smart.