You know, I have been thinking about this a lot - maybe for more than 20 years. I'm not an AI researcher, but I am familiar with the state of play in artificial intelligence, and what we can do right now is really cool. We've made great strides at mimicking many human functions essential for an AI to operate in the world. But none of these systems are capable of developing any desire other than what we've given them. The thing I'm more worried about is modified humanity - transhumanists. When I say "worried", I mean I think it's inevitable that we will develop technology that interfaces deeply with the human brain, and we will offload parts of our mental function - memory, compute, etc. - to external processors as required. I think this technology will be achievable long before any true AI is developed, and that there will eventually be people with capabilities far exceeding the wildest dreams of any science fiction writer. So I think the risk, if any, comes from humans - just as it always has. Personally, I'll go for the revision B iBrain machine interface by Apple. The rev A will have bugs, and probably be non-upgradable.
I think I will just wait until Samsung releases their take on the Apple rev B. It will have better hardware and more flexibility... Jokes aside, AI risk does have more serious implications than the Skynet scenario. Many people are concerned about the economic and ethical considerations behind such technology.