comment by mk
mk  ·  2391 days ago  ·  link  ·    ·  parent  ·  post: The Exact Opposite Is True

I guess it depends on whether you see a solution, or if the outcome for us is inevitable. If AI is in actuality such a threat, I can't see how it can be prevented.

Personally, I am not convinced that the threat is as close as many seem to think. I've yet to see a worrisome example.

ideasware  ·  2390 days ago  ·  link  ·  

But when you see one "worrisome example," it will already be too late. That's the very nature of AGI -- that's what you're obviously missing. I can't describe how much I am scared by your ridiculous finding, but you seem to believe it's perfectly OK. It's artificial intelligence, not ordinary human intelligence -- and once it's achieved for the very first time, it will only get stronger, not weaker. You silly humans seem to think it's just fine and dandy -- when exactly the opposite is true.

mk  ·  2390 days ago  ·  link  ·  

Can you suggest a book or something I might read that makes the case?

ideasware  ·  2390 days ago  ·  link  ·  

Well, "Superintelligence" by Nick Bostrom is a fairly recent book (one I've read and liked), but right now I'd suggest that long articles are superior to books. A recent three-part article is http://www.lawandfuturetechnology.com/2017/05/military-ai-arms-race-will-ai-lower-threshold-going-war/

Just google "a book that's against the military AI arms race" and you will come up with lots of articles that will freeze your blood, if you have any feeling for how ominous it really is.

OftenBen  ·  2390 days ago  ·  link  ·  

    I've yet to see a worrisome example.

So, this is part of the concern, is it not? The idea is that a sufficiently advanced AGI system would be capable of, and in some senses obligated to, flying under the radar until such time as it could guarantee its own survival, assuming it has a sense of self-preservation similar to a human's. Loosely speaking, part of the 'threat' is that we won't see a worrisome example until the cost of a failure is astronomical.

mk  ·  2390 days ago  ·  link  ·  

Sure, but IMO that misunderstands the nature of intelligence. Intelligence is sense and interaction. Without such interaction, there is no comprehension of the environment in question. A model-less intelligence is like a boneless elephant.