Kurzweil argues that a "singularity" will occur when technology, and computers in particular, advances at an exponential rate, outstripping the human ability to understand or control it.
I disagree with Kurzweil. My feeling is that he underestimates something that might be called "transaction cost": the cost of explaining what it is that needs to be done. A good understanding of this cost comes from having managed a group of intelligent beings toward specific goals. Left to their own devices, these beings (human, in this case, but it really doesn't matter) would produce all sorts of results that seemed appropriate, and perhaps even creative, to them. However, what I needed rarely coincided with the group's own idea of the result. This was not because members of the group lacked intelligence, creativity, or imagination, but because the result I wanted to achieve seemed arbitrary or whimsical to them. Moreover, the members of the group invariably believed that it would have taken them a very long time (maybe forever) to figure out on their own what I wanted them to do. So it was up to me to direct them, and this took up all my time. I call this time spent directing the group "transaction cost" because it is the cost, in time, of the activities surrounding the beginning and end of a task -- getting started, and then accepting (or refining) the result.
Sure, computers can get smarter and smarter to the point where they totally outstrip human intelligence, and as Tom Barbalet points out, this may already have happened. But enormous computing capacity sits idle. Why? Because no human has taken the time and effort to explain to these superintelligent computers what needs to be done. Kurzweil and other singularity proponents seem to think that intelligence by itself has some sort of momentum that will propel it to self-replicate and make "better" copies of itself, taking over the role of evolution at an incredibly fast pace. But not only does this profoundly misunderstand evolution, which has no goal in mind; it also glosses over that word "better." You see, figuring out what is better is no mean feat. This is what management is all about. As a director of an I.T. department, and as a project manager, I can tell you that all the intelligence in the world will not help me determine which choice is "better" for the project or for the company. And if intelligent computers went off in their own direction, I would call the result "useless" or "errant" and put a quick stop to it.
The decision-making time needed to make choices -- to direct the activities of intelligent people and computers -- is the transaction cost that puts a damper on runaway improvement in capabilities. This decision-making relies not so much on intelligence as on an assessment of the needs at the time, which boil down to human needs, which are subjective and may even seem whimsical.
Reference: Shrink Rap Radio 126, interview with Tom Barbalet.