Good points (and welcome to Hubski)! That book by Bostrom looks very intriguing. Still, I got a different message from the article. The way I see his argument, put into terms of strong and weak AI, is as follows:

P1: Lots of people fear all AI.
P2: But there is a difference between AI with and without autonomy (strong vs. weak AI).
P3: Strong AI will not be relevant in the coming decades.
C: Therefore there is no need to worry, because weak AI will not kill us and strong AI isn't relevant yet.

We shouldn't let fear dominate the AI research debate, because:

> ...if unjustified fears lead us to constrain AI, we could lose out on advances that could greatly benefit humanity—and even save lives. Allowing fear to guide us is not intelligent.

It's even in the title: AI won't exterminate us. I think what the author is trying to argue is that, for our generation, fear is unjustified, not relevant, and potentially a hindrance to progress in the field of AI. Maybe I'm reading too much into it; I'd like to know what you think.