The only thing I ever come away with from discussions about "the coming Singularity" is a feeling of unease when utopian language is used by the people engineering the technology, who are in no way, shape, or form sociologists. Of course, no one can accurately predict the effects of disruptive technology, but there needs to be some serious thinking about the workaday ramifications of a technology that will be embedded into us and augment us in such an internalized fashion. This isn't the steam engine or airplanes or antibiotics or the telephone; this is actually scarier (I am not afraid to use that word) than that, and with much deeper ramifications. The whole field of AI has a pop-culture presence that is low-level philosophical garbage, and a high-minded engineering side that is obsessed with the end product as, at best, a curiosity.
Ten years? I get that he's saying that if we actually used the resources we have available we could achieve this, but still that seems so fast. What do you think? Also, I don't think the "negative" outcome of human warfare over whether or not to produce these AIs is realistic. It won't be something that happens overnight; it will be a gradual occurrence. By the time these questions arise, we humans will have already integrated AI heavily into our lives/selves. Don't you think?
I agree with him that if we collectively put as much money into genetics, nanotechnology, and robotics as we collectively invest in our military, we would have a singularity in 10 years. I also think we could seriously begin Martian colonization with permanent, safe settlements in 10 years if we put as much money into space travel as we do into the military. But we won't do either.

My view is that conceptualizing human/AI conflict as a binary is foolish (as de Garis does). Brain-interfacing technologies already exist and it's only 2013. Brain-interfacing technologies will actually make us cyborgs and enhance our intelligence to the level of our first A.I. systems in 2030. So in my view it will be a merger (this is generally referred to as the "Kurzweilian scenario") (Goertzel did a fantastic overview of all singularity scenarios here). It makes no sense for A.I. to be in direct conflict with us - they will still be dependent on the system they are emerging from - and evolutionary pressures will force us to enhance our intelligence to keep up anyway.

In my talks with Ben he does express a great deal more pessimism than he does in this documentary. He and Francis Heylighen (from the Global Brain Institute) frequently argue because Heylighen believes it will be a utopian-like era compared to our current existence, while Goertzel thinks it's 50/50 (positive/negative). Heylighen justifies this assertion by comparing the behaviour of neurons in the brain to humans in the Global Brain. We will all be interacting on the Internet all the time - and that system will be very intelligent - and it will be in the system's best interest to keep all of its neurons around, not to destroy them (just as our own brains do). In fact, it will be in the system's best interest to enhance all experience to the greatest degree possible, because it is the intelligent agents' interactions that create its intelligence. Are there major concerns with the future of nanotech and A.I.? Of course. But I don't think a Terminator scenario is likely at all.