Discussing the singularity is often confusing because it makes claims about both technology and artificial intelligence, and it’s hard to see how the two fit together. In fact, some philosophers have argued that technology is entirely irrelevant to studying the mind with the techniques of artificial intelligence. The idea is that cognitive science is medium independent; it doesn’t matter if you run the program on my laptop or yours or a ten-year-old computer, it’s the same program and it can be explained by the same theory. So success in artificial intelligence is theoretically independent of technological advances. I don’t think anyone buys this story any more, but it raises the issue of exactly how the two are related. It is a long story, but this is how I see it:
Machines can perform certain tasks better than people. When they do, we often replace the human labor with its machine counterpart. This has been part of the history of technology: most advances involve machines that are faster, stronger, or more durable than people. These machines don’t have to be ‘smart’, although they might be improved by making them smarter.
But with the advent of computers, machines started processing information. And the going theory is that the human brain also operates as a kind of information processing machine. That doesn’t mean the human brain is a computer, or that computers are brains. It just means the two are explained by the same basic theory. And in fact, we can get computers to simulate various aspects of the information-processing routines that brains perform. Computer vision is one of the wild successes of this paradigm.
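To make that concrete, here is a minimal sketch of the sort of low-level routine computer vision borrows from this picture: running an image through an edge-detecting filter, loosely analogous to how orientation-tuned cells in early visual cortex are often modeled. It’s a toy in Python (NumPy assumed), with a made-up image, not a claim about how any real vision system is built.

```python
import numpy as np

def filter2d(image, kernel):
    """Slide the kernel over the image and sum the products at each position
    (a plain sliding-window filter, no padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A tiny synthetic "image": dark on the left half, bright on the right half.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A Sobel-style kernel that responds strongly to vertical edges.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = filter2d(image, sobel_x)
print(edges)  # large values mark the boundary between the two halves
```

The point of the analogy is only that the same kind of description, filters over signals, applies both to the machine and to the early stages of biological vision.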
The ‘singularity’ supposedly hits when computers are equivalent to the human brain. Why is this event special? Well, what does it mean for computers to be equivalent to the brain?
Kurzweil suggests that processing power, in terms of computations per second, is the right measure for finding a point of convergence. I think everyone agrees that this is an implausible measure of equivalence to the brain. It doesn’t matter how many computations you can do; what matters is which computations you do, and how you do them. Now, it is true that as our computing technology gains more computational resources, it becomes more likely that we can train our machines to perform the right tasks. But looking at raw processing power isn’t a helpful guide to finding those tasks.
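Just to put the measure on the table, here is a back-of-the-envelope sketch in Python of the kind of crossover calculation it invites. The specific numbers (roughly 10^16 calculations per second for the brain, today’s hardware, the doubling time) are assumptions in the spirit of Kurzweil-style estimates, not settled facts.

```python
import math

# Rough assumptions in the spirit of Kurzweil-style estimates (not settled numbers):
BRAIN_CPS = 1e16           # assumed calculations per second for a human brain
MACHINE_CPS_TODAY = 1e13   # assumed calculations per second for current hardware
DOUBLING_TIME_YEARS = 1.5  # assumed doubling time for raw processing power

# How long until the raw-throughput curves cross?
doublings_needed = math.log2(BRAIN_CPS / MACHINE_CPS_TODAY)
years_to_crossover = doublings_needed * DOUBLING_TIME_YEARS
print(f"~{years_to_crossover:.0f} years to 'equivalence' by this measure")
```

The arithmetic is trivial, which is rather the point: a crossover date falls out of any pair of exponential assumptions, and nothing in it tells you which computations the machine will be performing.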
Perhaps a better measure of ‘equivalence’ is having identical sets of tasks they can perform. But as OOCC has been arguing, this is basically a stupid goal. Machines can perform lots of tasks that brains just can’t, so why artificially constrain them to our limited abilities? And there are things we do that make no sense to hand over to machines. Who cares if we can’t build a machine that can chew bubble gum and skip rope? These are goals we’d never bother to achieve, so it’s hard to see why the singularity would follow once we get there.
A more plausible measure is to say that there is a specific subset of tasks that brains can perform, and when machines can perform those tasks they are equivalent in the relevant sense. This is the basic idea behind ‘homuncular functionalism’, and it is the motivation behind the Turing test. That’s fine, and the issue becomes specifying which tasks belong in the subset. The cognitive sciences have been addressing exactly this issue for quite a while now.
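For concreteness, here is a minimal sketch of the shape of the imitation-game protocol behind the Turing test. The respondents and the judge are invented stubs; the interesting part is only the structure of the test: anonymized respondents and a judge forced to guess from behavior alone.

```python
import random

class ScriptedRespondent:
    """A stand-in respondent that answers from a canned script (purely hypothetical)."""
    def __init__(self, script):
        self.script = script

    def answer(self, question):
        return self.script.get(question, "I'd rather not say.")

def imitation_game(questions, human, machine, judge):
    """One run of a Turing-style test: the judge questions two hidden
    respondents and then guesses which label belongs to the machine."""
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:                      # hide which is which
        labels = {"A": machine, "B": human}
    transcript = [(q, {k: r.answer(q) for k, r in labels.items()})
                  for q in questions]
    guess = judge(transcript)                      # the judge returns "A" or "B"
    return labels[guess] is machine                # True if the machine was caught

human = ScriptedRespondent({"Do you dream?": "Sometimes, about missing deadlines."})
machine = ScriptedRespondent({"Do you dream?": "Query not recognized."})
blind_judge = lambda transcript: random.choice("AB")   # any judging policy plugs in here
print(imitation_game(["Do you dream?"], human, machine, blind_judge))
```

Everything substantive hides inside the machine’s answer function and the judge’s policy, which is just another way of saying that the test defers the hard question of which tasks matter.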
But remember, we are dealing with computational technology, and we are looking at what happens when the information-processing capacities of our technology are as good as or better than our own. Well, what is special about that point?
In fact, it doesn’t look like anything particularly surprising happens. It means our machines will be incredibly good at dealing with information, and that they will deal with it at a level and speed quite similar to the way I deal with information, so it will be easier to interact with them at the right level. This will allow me to do incredible things that I couldn’t do before. It also means that a lot of the information-processing tasks we currently perform would be better performed by a machine, in the same way that technology has always replaced human labor. And this will undoubtedly cause huge social and economic upheavals. Hell, it already has. But this has always been the history of technological change, and there doesn’t seem to be any reason to think it generates a point of singularity. This has been the pattern of technology since we started using tools, and it hasn’t outstripped us yet.
Specifically, singularists like to talk about machines that design the next generation of machines, as if this will be an identifiable breakthrough in technology. In fact, we have always used current technology to help design, develop, and test the next generation of technology. Why should we think that anything fundamentally changes when machines are more responsible for the design process?
Presumably, it is the very fact that machines are responsible that marks the important difference between the historical development of technology and the post-human singularity event. Machines have always been designed and used as tools for our own purposes, but when machines take control of the design process they will suddenly have a kind of independence or autonomy that they didn’t have before. It’s what makes them responsible in their own right, and not just derivatively responsible through us as their tools.
Remember the Chinese Room argument, where Searle distinguishes between “strong” and “weak” AI. Here, I’ll quote it for you:
According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.
The distinction between strong and weak AI marks exactly where we stop treating machines as tools and start treating them as independent agents. It’s when we start attributing the machine’s various states to the machine itself, and not to some human responsible for controlling it. Forget this business about ‘minds’; what matters is that an autonomous machine is not a tool.
I think this is the right way to look at artificial intelligence, situated within the larger technological context. I think the ultimate goal of artificial intelligence is not to create human-like creatures, but to create autonomous, independent machines. I also think that building significantly autonomous machines is the kind of radical transition that may generate an event like the ‘singularity’. But note a few things:
Autonomy is not just information processing. It’s not really a task at all; it wouldn’t appear in the subset of things the brain does. It’s a different sort of thing entirely. Autonomy involves practical reason, which cannot be neatly described in information-theoretic terms. It involves the ability to make judgments and act on those judgments. It’s about setting your own goals, and figuring out how to optimize your ability to realize those goals. It’s also about settling the criteria for making judgments, and about committing to those standards. This is not something that happens because you can compute things faster. Cognition is certainly involved in making judgments, but that doesn’t mean that judgments are best described in information-processing terms.
And, for the record, none of this is exclusively the domain of human beings or biological creatures. Machines are quite capable of practical reason. However, the kind of practical reason machines engage in is largely constrained by our (human) interests, so machines are usually not making decisions for themselves, except trivially as part of a larger goal that we set. As a result, machines are never attributed the independence necessary to make them fully autonomous agents.
Of course, that’s not entirely true. Machines are increasingly responsible for more important decisions in our lives, and lots of important judgments once made by humans have been automated and put in the hands of expert systems. The Aldo Calmino case is a particularly salient example of the transition; during the first rumblings of the stock market crash last year, people started to blame computers.
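To give a feel for how small these hand-offs can be, here is a hypothetical sketch of the kind of automated rule that ends up making a consequential decision with no human signing off on it. The thresholds, the position, and the broker interface are all invented for illustration.

```python
# A hypothetical automated trading rule; numbers and broker API are invented.
POSITION = {"symbol": "ACME", "shares": 10_000, "cost_basis": 42.00}
STOP_LOSS_PCT = 0.08     # sell if the price falls 8% below cost basis
PANIC_DROP_PCT = 0.05    # or if the market index drops 5% in a session

def should_liquidate(price, index_change):
    below_stop = price < POSITION["cost_basis"] * (1 - STOP_LOSS_PCT)
    market_panic = index_change < -PANIC_DROP_PCT
    return below_stop or market_panic

def on_tick(price, index_change, broker):
    if should_liquidate(price, index_change):
        # The "judgment" to dump the whole position happens right here,
        # and no person approves it tick by tick.
        broker.sell(POSITION["symbol"], POSITION["shares"])
```

When thousands of systems run rules like this at once, a dip can become a cascade, which is roughly why ‘the computers’ end up taking the blame.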
But we haven’t really started to recognize certain technological systems as being autonomous agents, probably because there isn’t any pressing need to acknowledge them as such. Once machines are put in a position that forces us to acknowledge the autonomy of their decisions, that’s when we will have genuine artificial intelligence, whether or not it behaves at all like a human. It will be when a machine makes an important decision, and no one takes credit for that decision. For the record, this is basically Rorty’s point about incorrigibility, but on a larger scale.