These are really legit researchers, and I respect all the names I recognize. But... those same researchers are in the habit of philosophizing, cranking, and prognosticating out their asses. It's their hobby. When they talk philosophy, they tend towards science fiction.
So SSC cherry-picks quotes from a few theoretical AI researchers who like waxing philosophical, adds a smattering of AI salesmen in the middle of an overly aggressive pitch, and wraps up with an appeal to the authority of ancient computer scientists speaking back when A* search was still a really neat idea. Many of them are older folks who aren't really into the current trends and have therefore been more on the theoretical/philosophical end of AI for decades. The prevailing view among academics, in my experience, is that we're nowhere close to general AI. A common metaphor: they're proud to have climbed the highest mountain, and announce it as progress toward reaching the moon. AI researchers really are less worried about malevolent AI than about losing funding when all this hype about malevolent AI falls through.
For fuck's sake, can we all get over the fearmongering? Any action carries risk, especially new stuff.
This is all you need to know. People who are actually familiar with AI aren't 'afraid' at all. Some big tech names are 'afraid' because they don't know AI and just go off Terminator and shit like everyone else. Ultimately it all depends on what type of AI you're talking about.

Andrew Ng builds artificial intelligence systems for a living. He taught AI at Stanford, built AI at Google, and then moved to the Chinese search engine giant Baidu to continue his work at the forefront of applying artificial intelligence to real-world problems. So when he hears people like Elon Musk or Stephen Hawking, people who are not intimately familiar with today's technologies, talking about the wild potential for artificial intelligence to, say, wipe out the human race, you can practically hear him facepalming.
You know, I've been thinking about this a lot, maybe for more than 20 years. I'm not an AI researcher, but I am familiar with the state of play in the artificial intelligence tech that's out there. What we can do right now is really, really cool. We've made great strides at mimicking many human functions that are essential for an AI to operate in the world. But none of these systems is capable of developing any desire other than what we've given it.

The thing I'm more worried about is modified humanity: transhumanism. When I say "worried", I mean I think it's inevitable that we will develop technology that interfaces deeply with the human brain, and that we'll be offloading parts of our mental function (memory, computation, etc.) to external processors as required. I think this technology will be achievable long before any general AI is ever developed, and that eventually there will be people with capabilities that far exceed the wildest dreams of any science fiction writer. So the risk, if any, comes from humans, just as it always has.

Personally, I'll go for the revision B iBrain machine interface by Apple. The rev A will have bugs and probably be non-upgradable.
I think I'll just wait until Samsung releases their take on the Apple rev B. It will have better hardware and more flexibility... Jokes aside, AI risk does have more serious dimensions than the Skynet scenario. Many people are concerned about the economic and ethical implications of such technology.