Will computers develop, or even surpass, human intelligence?
The question of machine intelligence seems to presuppose that machines are going to become some kind of superhuman intelligent entity. Compare this human-machine relationship to that of the horse and the car. When people talk about machine intelligence, particularly with regard to replicating the human mind, I always think of a group of horses standing in a field looking out over a freeway. One horse says to another, "Sure, they can move quickly, but they'll never actually be horses." The irony is that we use horsepower to describe what the car can do.
This little story shows that in the case of the horse, the machine has already surpassed the natural entity in the particular characteristics of interest to humans. By analogy, it may well be the case that machines have already surpassed the characteristics of human intelligence that are of interest to humans. In particular, being indistinguishable from a human (the Turing Test) is not generally a characteristic that humans find particularly interesting. (Although search engines, call centers, and toys are three applications in which it is appropriate for machines to mimic certain specific aspects of human behavior.)
Moreover, a key component of Kurzweil's singularity is that computers will take over the high-level decision-making currently done by humans. But this is something that people specifically do not want computers to do for them. A computerized travel agent, for example, is a wonderful tool, and even better if it converses with the human in fluent English, but the human still wants to retain the final choice. He never delegates the high-level choices to a computer. These highest-level choices are something I call the "transaction cost" of progress.
Shrink Rap Radio 126: Interview with Tom Barbalet.