Here's my problem with every popular conversation about AI:
In the popular conception, the distance between "machines that think" and "a T-1000 with a shotgun" is about half a leap of faith. The basic assumption is that the minute we've achieved "artificial intelligence" (which nobody bothers to define), it will Skynet the fuck out of everything and it'll be all over for America and apple pie.
Call it the Maximum Overdrive problem: "artificial intelligence" means weed whackers bent on our deaths. Never mind that there are exactly zero links between thinking machines and autonomous kill-bots. Yes, you could build an autonomous kill-bot with artificial intelligence, but there's no obligation for IBM to hand Deep Blue the launch codes.
This is Yudkowsky's beef: grant artificial intelligence, and it will be cleverer than us and assume total control of the universe within days or weeks of its invention. What I've never had adequately explained to me, however, is why a hyperintelligent toaster oven would have the ability to exterminate the human race even if it had the motivation.
And this is my beef: every time we discuss a Mirror article that links to BBC4 dramas and Terminator imagery, we skip that conversation.