yakov  ·  3414 days ago  ·  post: I'm an artificial intelligence researcher and I'm not afraid. Here's why.

The author deftly sidesteps the central debate by asserting that

    the emergence of 'full artificial intelligence' over the next twenty-five years is far less likely than an asteroid striking the earth and annihilating us,
and arguing that weak AI poses no threat because it's not autonomous or conscious. But nobody is arguing against weak AI! It's precisely strong AI (i.e. full AI) that poses a threat, and whether it arrives within 25 years or within 250, it's worth taking seriously. For an overview of the dangers, I recommend Nick Bostrom's Superintelligence: Paths, Dangers, Strategies.

Further muddling the issue, the author defines an autonomous agent as one that creates its own goals and has free will, but then presents malware as an example, so he must have something quite different in mind when he talks about autonomy. But what? And note that a full AI would pose an existential threat even if it couldn't create its own ultimate goals: striving for an assigned goal may easily be just as dangerous. As for free will, there is no reason a machine could not have it. Nor would a lack of free will necessarily make a machine safe.

I'm dismayed to see such a post from a researcher as prominent in the field as Oren Etzioni.