I have yet to hear a cogent argument disputing any of the claims made by Ray Kurzweil. Even top computer scientists in the academic literature acknowledge the validity of Kurzweil's basic arguments and commend him for articulately exploring the possibilities of a "Kurzweilian Singularity". Here is an example of a good article to check out:

Goertzel, B. 2007. Human-level artificial general intelligence and the possibility of a technological singularity: A reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil. Artificial Intelligence, 171: 1161-1173.

I am currently doing extensive research on the technological singularity for a book. According to most computer scientists in the field, there is as much consensus that it will occur as climate scientists have regarding the anthropogenic causes of global warming. Vernor Vinge, one of the most prominent futurists in computer science, coined the phrase "technological singularity" back in 1993 in his now-famous paper:

Vinge, V. 1993. The Coming Technological Singularity: How To Survive In The Post-Human Era. Vision-21 Symposium, NASA Research Center and the Ohio Aerospace Institute, 30-31 March 1993.

Other scientists, though, had been aware that something "singularity-like" was on the horizon well before 1993:

Ulam, S. (1958): "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."

Good, I.J. (1965): "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.
Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

Vernor Vinge has also discussed reasons why a singularity-like event would NOT happen, the main such scenario being that we simply don't "find the soul in the hardware". I wrote about my thoughts on this here: http://www.theadvancedapes.com/theratchet/2012/12/13/singula... And here is the citation for the Vernor Vinge talk:

Vinge, V. 2007. What If the Singularity Does NOT Happen. Seminars About Long-Term Thinking, The Long Now Foundation.

His talk was given at a recent Long Now Foundation event; I have discussed the organization before, if you are interested in learning more about them: http://www.theadvancedapes.com/theratchet/2012/12/10/thinkin...

Overall, people who hear about the singularity for the first time are often just scared of it because the idea is so massive and so overwhelming, and because it changes our entire species. Actually, it does more than change our species: it makes our species irrelevant and ushers us into a post-human era. I recently told my mom and her husband about the book I am writing, and I saw the common shock response that most people have when they hear about the singularity for the first time. My mom's husband actually got mad and irrational. He later calmed down, but his reaction is typical of most people (even academics) who are exposed to this type of thinking for the first time.

Here are some other interesting articles about the singularity, for anyone who wants to do further research on the issue as it is being discussed today:

Heylighen, F. 2008. Chapter 13: Accelerating socio-technological evolution: From ephemeralization and stigmergy to the Global Brain.
In Modelski, G., Devezas, T. & Thompson, W.R. (eds.), Globalization As Evolutionary Process. New York: Routledge.

Vinge, V. 2008. Signs of the Singularity. IEEE Spectrum, Special Report: The Singularity. 1-6.

Sandberg, A. & Bostrom, N. 2008. Whole brain emulation: A roadmap. Technical Report. Future of Humanity Institute, Oxford University. http://www.fhi.ox.ac.uk/reports/2008-3.pdf

Sandberg, A. 2010. An overview of models of technological singularity. The Third Conference on Artificial General Intelligence (AGI-10). http://agi-conf.org/2010/wp-content/uploads/2009/06/agi10singmodels2.pdf

I also summarized Kurzweil's main ideas in an article earlier this year (for anyone who hasn't read The Singularity Is Near): http://www.theadvancedapes.com/theratchet/2012/10/19/an-idea...
Well put. IMO, a person's opinion of Watson's intelligence will likely correspond to their reaction to Singularity. If someone is looking for an essence of intelligence that separates Watson from us, then it is likely they will not be willing to follow the conceptual course that leads us to Singularity. However, if one is willing to accept that there is no difference between intelligence and the imitation of intelligence, then understanding Singularity requires only that you follow the facts as they stand.

The human mind is a construct. It is a learning machine. It is not intelligent, but acts intelligently. Because we perceive intelligent actions, we infer that the brain contains something intelligent. It does not. It reacts to its environment, and to internal models of its environment. Intelligence resides in action, not in the brain. The Singularity is the result of artificial entities that act in ways that we perceive as intelligent, at least for a short while. These entities then evolve to act on perceptions and internal models that we cannot understand. It is not necessary to achieve human-like intelligence to bring about Singularity. It is only necessary to create a learning machine with a potential that is greater than our own.

I agree that the reaction to Singularity that many have is often irrational and fear-driven. It's understandable. It terrifies me. However, that does not make it any less likely.

A word on the nature of Watson's "understanding" is in order here. A lot has been written claiming that Watson works through statistical knowledge rather than "true" understanding. Many readers interpret this to mean that Watson is merely gathering statistics on word sequences. But "statistical information" in the case of Watson refers to distributed coefficients in self-organizing methods such as Markov models.
One could just as easily refer to the distributed neurotransmitter concentrations in the human cortex as "statistical information." Indeed, we resolve ambiguities in much the same way that Watson does, by considering the likelihood of different interpretations of a phrase.
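To make that last point concrete, here is a toy sketch of what "resolving ambiguity by considering likelihood" can mean. This is emphatically NOT Watson's actual pipeline; the word senses, context probabilities, and priors below are all invented for illustration. An ambiguous word simply receives whichever interpretation makes its context most probable:

```python
# Toy word-sense disambiguation by likelihood (illustrative only, not Watson).
# All probabilities below are invented for the example.

# P(context word | sense) of the ambiguous word "bank", from a hypothetical corpus
likelihoods = {
    "financial": {"deposit": 0.30, "loan": 0.25, "river": 0.01, "water": 0.02},
    "riverside": {"deposit": 0.05, "loan": 0.01, "river": 0.40, "water": 0.30},
}
# P(sense), also hypothetical
priors = {"financial": 0.6, "riverside": 0.4}

def disambiguate(context_words):
    """Return the sense of 'bank' with the highest prior-times-likelihood score."""
    scores = {}
    for sense, prior in priors.items():
        score = prior
        for word in context_words:
            # Small floor for words absent from the table, so no score hits zero.
            score *= likelihoods[sense].get(word, 0.001)
        scores[sense] = score
    return max(scores, key=scores.get)

print(disambiguate(["river", "water"]))    # -> riverside
print(disambiguate(["deposit", "loan"]))   # -> financial
```

The point of the sketch is only that "statistical" need not mean "shallow": the numbers encode distributed knowledge about how senses and contexts co-occur, which is closer to what Markov-model coefficients do than to mere word-sequence counting.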
What if we have the trajectory of technological progress incorrect? Say, instead of the rate of technological advance being exponential, it's actually sigmoidal (or something similar), which I think is possible. For example, in many chemical reactions the rate accelerates as the reaction proceeds and releases energy, but then the acceleration slows due to the lack of available reactants. I think a similar thing could be afoot with technological advance, given that faster machines require more power, which will eventually deplete the available resources, or perhaps create so much waste heat as to slow down the machines themselves. The most powerful processors today have a heat flux comparable to that of a nuclear reactor; a fast enough processor will melt itself. Maybe these machines will figure out how to minimize energy input, maximize energy extraction (from the Sun, for example), and improve cooling efficiency, but still, I don't think it's a given that technological advance can accelerate forever. Nothing is infinite, nor can be. Maybe we're on the upslope of a sigmoidal curve and are extrapolating too far into the future based on a small sample size.
It is an interesting proposition. However, the main problem with it is that we know of too many alternative energy sources (e.g., solar, fusion) that could easily give us the energy capabilities necessary to continue acceleration for centuries, millennia, or indefinitely. The capability of producing AI and of merging with technologies that will enhance our own biology is only decades away (really, it is already happening). Information technology, and technology more generally, is very predictable. Also, we don't have a small sample size: technology has existed for over 2 million years, and since its inception its growth has been exponential. There is really no reason to expect that the growth is sigmoidal.

Furthermore, I feel that the analogies with exponential growth in other evolutionary processes are relevant. Biological evolution itself seems to proceed exponentially. For 1.5 billion years of the 3.5 billion years life has been on Earth, it was single-celled; in the remainder we get the explosion of diversity that we see today. The way I see it, what is going to happen in the 21st century is analogous to the transformation of life from single-celled to multi-celled. I think we are creating a globally interconnected brain (i.e., most likely via some future improved internet that is incorporated into our consciousness). We aren't just going to merge with technology and AI; we are also going to become an even more interconnected superorganism than we already are. Time, space, etc. will become irrelevant in every respect within our species. All of the technologies that will make this a reality already exist and are developing rapidly. For example: HUBSKI. I have met none of you, yet I feel a strong bond with many of you.
We share ideas nearly instantaneously and collaborate to create better projects than we could have if we relied only on our immediate networks in physical reality (e.g., the thenewgreen podcast, and hopefully a theadvancedapes podcast too!). This is happening everywhere; Hubski is but one small example. One of the best examples to date is Wikipedia.