Wow - the science paranoia trifecta of malevolent AI, von Neumann machines and grey goo! Someone's been reading Stephen Hawking...

The only way for an AI to take control is to either (A) magically convert everyone it talks to into an instantaneous AI death cult or (B) instantaneously automate a world that has so far been stubbornly resistant to automation. I think Yudkowsky considers (A) a real possibility, which is one reason I lost interest in Yudkowsky... and I think everyone else assumes (B) is one switch-flip away.

What really bugs me is that these conversations never even progress to the thought-provoking stuff:

- So thanks to the Singularity, we have a hyperintelligent AI. Why do we have only one?
- Why do we assume that the minute we have hyperintelligent AI, all agents of it will work together towards a common goal? Has anyone ever observed two overly-smart people for more than ten minutes?
- Why is it assumed that humans, we distrustful, machine-hating beings, don't start the clock on another hyperintelligent AI specifically seeded to protect humanity from hyperintelligent AIs?