David Chalmers at Singularity Summit 2009 — Simulation and the Singularity.

First, an uncontroversial assumption: humans are machines. We are machines that create other machines, and as Chalmers points out, all that is necessary for an ‘intelligence explosion’ is that the machines we create have the ability to create still better machines. In the arguments below, let G be this self-amplifying feature, and let M1 be human machines. The following arguments unpack some further features of the Singularity argument that Chalmers doesn’t explore directly. I think that, when made explicit and taken together, these show Chalmers’ approach to the Singularity to be untenable, and his ethical worries to be unfounded.

The Obsolescence Argument:

(O1) Machine M1 builds machine M2 of greater G than M1.
(O2) Thus, M2 is capable of creating machine M3 of greater G than M2, leaving M1 “far behind”.
(O3) Thus, M1 is rendered obsolete.

A machine is rendered obsolete relative to a task if it can no longer meaningfully contribute to that task. Since the task under consideration here is “creating greater intelligence”, and since M2 can perform this task better than M1, M1 no longer has anything to contribute. Thus, M1 is ‘left behind’ in the task of creating greater G.

The obsolescence argument is at the heart of the ethical worries surrounding the Singularity, and is explicit in Good’s quote. Worries that advanced machines will harm us or take over the world may follow from this conclusion, but they need not. However, obsolescence does seem to follow necessarily from an intelligence explosion, and this on its own may be cause for alarm.

The No Precedence Argument:

(NP1) M1 was not built by any prior machine M0. In other words, M1 is not itself the result of exploding G.
(NP2) Thus, when M1 builds […]