One of my biggest problems in philosophy is that not very many people do what I do. The Cyberneticists in the 50s came close, but the continental philosophers are prone to use ‘technology’ as shorthand for a discussion of whatever aspect of society they want to talk about, and consequently they never really engaged the problem of technology directly. I draw a lot of my own work from the analytic work in Phil Mind from the 80s and 90s (which in turn was a response to the AI guys in the 50s and 60s), but really that’s only because that’s the literature I know best; I am definitely taking oblique lines to that whole discussion.
I’m sort of embarrassed to admit it, but there are definite similarities between what I am working on and the Singularists. I’ve talked about the Singularity before, so I won’t go into my quasi-Davidsonian spiel about how there’s no real sense to be made of entirely incomprehensible intelligences. I’ve never really felt comfortable making these arguments, because I have trouble taking anyone who calls themselves a ‘futurist’ or ‘transhumanist’ seriously. But apparently other people are taking them seriously, because they are holding a big conference this weekend at Stanford. I don’t know how much credit to give this fact: their sponsors, aside from the hosting institution (?), are kind of ridiculous, and they quote people like Gates and Hawking on the home page who have nothing to do with the conference, which is rather disingenuous. Plus, I’m sure these guys take their increasing popularity and ‘success’ as evidence that their claims are accurate, which is just self-confirming bullshit.
But then again, maybe I can see this as an employment opportunity: make a name for myself arguing against these guys. I dunno. At the very least, I can agree with Hofstadter’s more skeptical quote on the front page:
A growing number of highly respected technological figures, including Ray Kurzweil and Hans Moravec, have in recent years forecast that computational intelligence will, in the coming two or three decades, not only match but swiftly surpass human intelligence, and that civilization will at that point be radically transformed in ways that our puny minds cannot possibly imagine. This bold hypothesis, now often called “The Singularity,” strikes some as wonderful and strikes others as abhorrent. But whether it is wonderful or abhorrent, is the singularity scenario even remotely plausible, or is it just science fiction? If the singularity scenario is plausible, is the time frame proposed ridiculous or realistic?
To any thoughtful person, the singularity idea, even if it seems wild, raises a gigantic, swirling cloud of profound and vital questions about humanity and the powerful technologies it is producing. Given this mysterious and rapidly approaching cloud, there can be no doubt that the time has come for the scientific and technological community to seriously try to figure out what is on humanity’s collective horizon. Not to do so would be hugely irresponsible.