David Chalmers at Singularity Summit 2009 — Simulation and the Singularity.
First, an uncontroversial assumption: humans are machines. We are machines that create other machines, and as Chalmers points out, all that is necessary for an ‘intelligence explosion’ is that the machines we create have the ability to create still better machines. In the arguments below, let G be this self-amplifying feature, and let M1 be human machines.
The following arguments unpack some further features of the Singularity argument that Chalmers doesn’t explore directly. I think that, when made explicit and taken together, they show Chalmers’ approach to the Singularity to be untenable, and his ethical worries to be unfounded.
The Obsolescence Argument:
(O1) Machine M1 builds machine M2 of greater G than M1.
(O2) Thus, M2 is capable of creating machine M3 of greater G than M2, leaving M1 “far behind”.
(O3) Thus, M1 is rendered obsolete.
A machine is rendered obsolete relative to a task if it can no longer meaningfully contribute to that task. Since the task under consideration here is “creating greater intelligence”, and since M2 can perform this task better than M1, M1 no longer has anything to contribute. Thus, M1 is ‘left behind’ in the task of creating greater G. The obsolescence argument is at the heart of the ethical worries surrounding the Singularity, and is explicit in the passage from I. J. Good that Chalmers quotes. Worries that advanced machines will harm us or take over the world may follow from this conclusion, though they need not. Obsolescence itself, however, does seem to follow necessarily from an intelligence explosion, and this on its own may be cause for alarm.
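To see what the argument needs, here is a minimal formal sketch that compresses (O1)–(O3) into a single inference; the bridging premise and the predicate names (B for “builds”, Obs for “is obsolete”) are my additions, not Chalmers’:

```latex
\[
\begin{aligned}
&\text{(O1)}     && B(M_1, M_2) \wedge G(M_2) > G(M_1) \\
&\text{(Bridge)} && \forall x\,\forall y\,\bigl(B(x,y) \wedge G(y) > G(x) \rightarrow Obs(x)\bigr) \\
&\text{(O3)}     && \therefore\; Obs(M_1)
\end{aligned}
\]
```

Note that (O3) follows only given the bridging premise that being outperformed at the creation task entails obsolescence. The Interdependence Principle introduced below is, in effect, a denial of exactly that premise.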
The No Precedence Argument:
(NP1) M1 was not built by any prior machine M0. In other words, M1 is not itself the result of exploding G.
(NP2) Thus, when M1 builds M2, this particular act of creation is not leaving anything “far behind”.
(NP3) Thus, when M2 builds M3 and initiates an ‘intelligence explosion’, this is an unprecedented (that is, a singular) event.
The No Precedence Argument goes hand-in-hand with the evolutionary considerations that motivate Chalmers’ positive suggestions at the end of his lecture. M1 was produced by dumb evolutionary processes. Designing intelligent machines, the defining feature of M1, is supposedly not itself a dumb process, even if M1 uses methods inspired by dumb evolutionary algorithms. Thus, when M1 builds M2, this is a straightforward application of G; call it “linear G”. When M2 creates M3 and produces exploding (or exponential) G, this is an unprecedented event. Since it is unprecedented, we don’t know what to expect from exploding G. This too is cause for alarm.
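The contrast between linear and exploding G can be made concrete with a toy recurrence; the symbols G_n, δ, and ε are my illustration, not Chalmers’ notation:

```latex
\[
\text{Linear } G:\qquad G_{n+1} = G_n + \delta
\;\Longrightarrow\; G_n = G_0 + n\delta
\]
\[
\text{Exploding } G:\qquad G_{n+1} = (1+\epsilon)\,G_n
\;\Longrightarrow\; G_n = (1+\epsilon)^n G_0
\qquad (\delta,\epsilon > 0)
\]
```

On this toy model, the explosion begins the moment each generation’s improvement becomes proportional to its own capacity rather than a fixed increment.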
Both the Obsolescence argument and the No Precedence argument are packed into Chalmers’ formulation of the Singularity, and both give reasons to be worried about this event. Chalmers seems to implicitly endorse both arguments, and indeed argues that there are grounds for taking precautions in response to these possible threats. Furthermore, both arguments have the implication that if such an event is possible, it certainly hasn’t happened yet, and likely won’t happen in the near future. We currently play an essential role in the development of technology, and show no signs of becoming obsolete in the design of future technologies. As a result we have not experienced (and have no grounds for expecting) any radical discontinuities in the development of technology. Chalmers is explicit that if the Singularity is possible, it is a distant concern. This is evidence that he is committed to something like the above arguments.
I will argue that the conclusions of both arguments are false. First, I will argue that the ‘intelligence explosion’ predicted by the Singularists is not unprecedented, but is fundamental to the nature and use of technology. From this, I will argue that we are not in danger of becoming obsolete; the introduction of better technology does not leave us behind any more than the introduction of the wheel or the computer left us behind. Instead, it changes who ‘we’ are.
My arguments are largely inspired by Andy Clark’s discussion of technology, which Chalmers knows well but leaves out of his talk. Let’s formulate a Clarkian principle of technology to help us along:
The Interdependence Principle: Human intelligence and the technology it creates are fundamentally interdependent.
There is a superficial sense of interdependence that makes this principle obviously true. Machines need us to build and use them, and we need to use and build machines in order to survive. But Clark would say this mutual dependence is closer to a kind of symbiotic relationship, where who we are is essentially tied to the tools we use. Our very capacity for G is not a feature of our naked brains, but is the result of thousands of years of developing an intimate (Clark’s term) relationship with technology. Our best machines today are not designed by any isolated human brain. They are designed by collectives of brains in cahoots with a variety of technological machines that assist in the design, development, and construction of still better machines. It is only through these elaborate cooperative enterprises, incorporating both humans and machines, that our steady technological progress is possible. The computer I am writing on could not have been built without the computers that came before. In other words, O1 is true; not just possibly in the distant future, but actually and in our world today. AI is already a fact about our world, in such a straightforward and familiar way that we hardly recognize it. Here, I’ll introduce you.
If the Interdependence Principle is true, and I believe it is, then (NP1) and (NP3) are worse than simply false; they represent a fundamental conceptual error in thinking about the relationships between humans and their technology. If M1 represents humanity at its current level of technological development, then clearly we ARE the result of the humans and machines of a different technological age. We stand on the shoulders of giants, and some of those giants are robots. And the next generation of machines will exploit and incorporate our own generation; we will be swept along as technology marches forward. In other words, there is no substantive distinction to draw between M1 and M2, or indeed between M1 and M3. These are not unprecedented leaps into an unknown future; they are the signposts of a very familiar pattern of human behavior.
This does not suggest that the future of technological development can be predicted with any accuracy or certainty, and I am not suggesting that we try. One of the benefits of having philosophers speak on the Singularity is our deeply ingrained skepticism about induction and our general distaste for futurism; inductive extrapolation and futurist speculation run rampant among the Singularity enthusiasts. Chalmers does us all a service by staying cool about the future.
If there is no distinction to draw between ourselves and the machines with which we coexist, then we are not threatened by the possibility of obsolescence, for we are necessarily carried along with the intelligence explosion; indeed, we partly constitute it. This does not mean it will be a painless transition. Technological change has always caused enormous suffering; witness Detroit or China. Such examples show that technological change does leave people behind (e.g., the Digital Divide), but this is because technology is part of humanity, and it therefore participates in the same social and political institutions as we do. This is certainly cause for concern, but not of the variety that Chalmers suggests. The very idea of incomprehensible technology, of the sort that the Singularists argue for, rests on a denial of the Interdependence Principle. Endorsing the principle makes it clear that O2 and O3 are likewise false.
Since both arguments violate the Interdependence Principle, both can be rejected. I suggest that if we reject both arguments, then there is nothing left of the Singularity to worry about. Of course, there are still worries about the use of intelligent machines, and it may very well be possible to initiate an intelligence explosion (on my view, we already have!). However, there are, and will be, no discontinuities leaving us “far behind”, because continuity is part and parcel of our continued use of technology. The curve might be exponential, but from our perspective it will always seem gradual, because we are traveling in the same frame of reference.
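The frame-of-reference point admits a simple gloss (my notation, offered only as an illustration): exponential growth has a constant relative rate, so each step looks, proportionally, just like the last.

```latex
\[
G(t) = G_0\,e^{kt}
\quad\Longrightarrow\quad
\frac{G(t+\Delta)}{G(t)} = e^{k\Delta}
\quad\text{for every } t.
\]
```

An observer whose own capacities scale with G therefore sees the same proportional change at every stage; nothing in the curve itself marks a discontinuity.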
If correct, the Interdependence Principle not only defeats the Singularity, but also reveals Chalmers’ positive suggestions for dealing with the Singularity to be hopelessly naive. It is impossible to build machines that we can isolate from having any effect on the real world. One implication of the Interdependence Principle is that even the most exclusive, proprietary, well-protected machines can have dramatic consequences for humanity, even if they are rarely used (witness the atomic bomb). Whatever thin veneer of safety Chalmers thinks he can derive from the metaphysically suspect distinction between the real world and the virtual world betrays a deep misunderstanding of the very nature of technological change. Insofar as interest in the Singularity is symptomatic of a deep fear of the unknown technological future, we would be wise not to reinforce the mysticism surrounding machines that Chalmers’ argument represents.