yakov  ·  3634 days ago  ·  link  ·    ·  parent  ·  post: I'm an artificial intelligence researcher and I'm not afraid. Here's why.

The author deftly sidesteps the central debate by asserting that

    the emergence of 'full artificial intelligence' over the next twenty-five years is far less likely than an asteroid striking the earth and annihilating us,
and arguing that weak AI poses no threat because it's not autonomous or conscious. But nobody is arguing against weak AI! It's precisely strong AI (i.e., full AI) that poses a threat, and whether it happens within 25 years or within 250, it's worth taking seriously. For an overview of the dangers, I recommend Nick Bostrom's Superintelligence: Paths, Dangers, Strategies.

Further muddling the issue, the author defines an autonomous agent as one that creates its own goals and has free will, but then presents malware as an example, so he must have something quite different in mind when he talks about autonomy. But what? And note that a full AI would pose an existential threat even if it couldn't create its own ultimate goals: striving for an assigned goal may easily be just as dangerous. As for free will, there is no reason a machine could not have it. Nor would a lack of free will necessarily be safe.

I'm dismayed to see such a post from a researcher as prominent in the field as Oren Etzioni.

veen  ·  3634 days ago  ·  link  ·  

Good points (and welcome to Hubski)! That book by Bostrom looks very intriguing. Still, I got a different message from the article. The way I see his argument, put in terms of strong and weak AI, is as follows:

P1: Lots of people fear all AI.

P2: But there is a difference between AI with and without autonomy (strong vs. weak AI).

P3: Strong AI will not be relevant in the coming decades.

C: Therefore there is no need to worry, because weak AI will not kill us and strong AI isn't relevant yet.

We shouldn't let fear dominate the AI research debate, because:

    ...if unjustified fears lead us to constrain AI, we could lose out on advances that could greatly benefit humanity—and even save lives. Allowing fear to guide us is not intelligent.

It's even in the title: AI won't exterminate us. I think what the author is trying to argue is that, for our generation, fear is unjustified, irrelevant, and potentially a hindrance to progress in the field of AI. Maybe I'm reading too much into it; I'd like to know what you think.

yakov  ·  3634 days ago  ·  link  ·  

    Good points (and welcome to Hubski)!

Thanks! Looks like a great community with lots of interesting people.

Yeah, that's precisely the argument. Weak AI is safe and useful, strong AI won't happen anytime soon, so we shouldn't be worried. Sounds reasonable. And yet... it does nothing to address these points:

P1. Strong AI is not unlikely to be created within the next few decades,

P2. If strong AI is created, it may pose an existential threat,

P3. Creating a safe ("friendly") strong AI seems like it would be surprisingly difficult, and

P4. We should be aware of the dangers, and think long and hard about how we can avoid them.

Note that P1 is a widely held belief among experts (see Bostrom, chapter 1), so it cannot be dismissed out of hand as "hypothetical." Of course it's hypothetical. So what?

I didn't just come up with the points above. They are central to the argument. And the article does nothing to address them, instead accusing some very smart people (namely Elon Musk and Stephen Hawking) of fear-mongering.

No doubt plenty of people don't understand that present-day AI is safe, and such people ought to be corrected. But not by sweeping the entire issue under the rug.

kleinbl00  ·  3634 days ago  ·  link  ·  

My problem with the "AI community", philosophically, boils down to a few things:

1) There has never been a time in my life when break-even fusion and artificial intelligence weren't just around the corner.

2) Drawing attention to break-even fusion gets physicists to say "yeah, we were ambitious." Drawing attention to artificial intelligence gets AI researchers to argue that earlier definitions were inaccurate.

3) Considering how malleable the definitions are, breaking things down into "weak AI" (i.e., things we need not worry about) and "strong AI" (i.e., Skynet) seems arbitrary and wrong-headed. It's not like agency is a binary characteristic, yet in order to have this discussion, the first thing the AI camp always does is say "don't worry about this, worry about this."

I dunno. I'm disheartened by the rapidity with which the argument devolves into whether the angels dancing on the head of a pin are malevolent or benevolent. At least the "don't worry, be happy" crowd tends to focus on concrete things, while Stephen "grey goo" Hawking and his posse tend to argue about hypothetical dangers from hypothetical solutions to hypothetical problems.

user-inactivated  ·  3633 days ago  ·  link  ·  

There is acknowledgement that early AI was overly ambitious, which is why, since the AI winter, AI has focused more on applications than on trying to figure out how to write programs that are "really" intelligent. People like Ray Kurzweil may still be flying the flag, but that's not what most people in AI are actually working on, and you don't see many working scientists making predictions about when, or if, it's going to happen. The problem is that you hear a lot from the cheerleaders and very little from the experts, because "we found a slightly different way to recognize text in photographs, which works really well for extracting street addresses" doesn't make for sexy press releases.

The distinction between weak and strong AI is really about what you're trying to achieve. With weak AI we want programs that act like they're intelligent; we want to be able to make them smart enough for whatever application we have in mind. Strong AI wants to give you a holographic Lexa Doig. I am sure a large chunk of the AI community would not say no to a holographic Lexa Doig, but the problems you have some clue how to solve are much more attractive than the problems you don't. Skim the table of contents of some recent issues of AI Magazine and see which you see more of.

For what it's worth, I'm in the "you might as well be arguing about whether it's all, like, a simulation, like The Matrix, man" camp.

cgod  ·  3634 days ago  ·  link  ·  

Air gap your strong AI. AI will be great to DO all kinds of stuff, but there is no reason we have to give it the ability to do things independently of our oversight. Let the weak AI manage traffic and power grids and health care, and listen to what the strong AI has to say about changes it thinks will be beneficial to us. If we don't give strong AI the reins, then it can't lead us too badly off course. Sadly, we aren't too good at air-gapping important systems right now, and without serious work we will probably only get worse at it as we grow more dependent on networks. To pin down what "advisory-only" would mean in practice, here's a toy sketch (all names made up, nothing from the article): the strong AI can only return suggestions, a human sits between its output and anything that acts, and only the weak AI has actuators.
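    # Hypothetical sketch of the "air gap" idea above: the strong AI is advisory-only.
    # All names here are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Proposal:
        """A change the advisory (strong) AI suggests; it cannot apply it itself."""
        description: str
        expected_benefit: str

    class AdvisoryAI:
        """Air-gapped adviser: it can only emit proposals, never act."""
        def propose(self) -> list[Proposal]:
            return [Proposal("retime traffic lights on Main St", "cuts congestion")]

    class HumanGate:
        """The only path from advice to action is explicit human approval."""
        def review(self, p: Proposal) -> bool:
            answer = input(f"Apply '{p.description}'? ({p.expected_benefit}) [y/N] ")
            return answer.strip().lower() == "y"

    class WeakAIExecutor:
        """The narrow, non-autonomous system that actually touches the grid, traffic, etc."""
        def apply(self, p: Proposal) -> None:
            print(f"Applying: {p.description}")

    if __name__ == "__main__":
        adviser, gate, executor = AdvisoryAI(), HumanGate(), WeakAIExecutor()
        for p in adviser.propose():
            if gate.review(p):       # a human stays in the loop
                executor.apply(p)    # only the weak AI has actuators
            else:
                print(f"Rejected: {p.description}")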

kleinbl00  ·  3634 days ago  ·  link  ·  

maxwell  ·  3633 days ago  ·  link  ·  

Thanks for that link; it's fascinating. I had a wee google and found this, which is an interesting supplement to the former.

thundara  ·  3633 days ago  ·  link  ·  

    Air gap your strong AI.

Isn't that the starting premise of Neuromancer? :P