I do understand, and I think the same thing as a starting point. But how do you deal with the ironclad fact that top-secret national defense and security systems (here in the USA, and also in China, Russia, and terrorist organizations like ISIS and al-Qaeda) will not obey these rules? They can't, because otherwise they would violate the other rule of doing the very best that they can with AI on the table. AI-enabled automation has to be able to act without human intervention, precisely because that's the quickest way. That's the problem at hand, and I have no solution to it at present.

ideasware:

By the way, here's the conversation I had with Carlos Perez on Medium:

(C) We have this absurd expectation that human regulation (probably dating back to 1750 BC, i.e. the Code of Hammurabi) is going to magically control an advanced intelligence. Probably the only solution at present is to do the same as the genetic engineering folks do. Don’t ever let it get into the wild (i.e. get onto the internet)!

(P) Unfortunately that’s completely impossible, and I think you (obviously) know that too. That is the issue, and it’s quite serious, to say the very least — it’s the most important issue, by far, in the world. That’s why consumer products are fine to regulate, but national defense is different, and there really is no solution.

(C) Why do you think Musk is betting on Neural interfaces? https://waitbutwhy.com/2017/04/neuralink.html

There is a mistaken notion here that AGI will eventually behave like humans. That could either be a very good thing or a very bad thing. The reality, however, is that AGI will behave entirely differently. Deep Learning systems seem to emulate some human skills well, but they are very different.

(P) Oh, I’ve read it, of course, several times. Although Tim Urban did a wonderful job, it’s still a little abstruse, to say the least — not his finest moment. I actually think that it’s just ANI that will work for defense departments, for the US as well as putative enemies, and it’s a very worrisome thing precisely because it’s almost here already. By the time AGI rolls around in roughly 25 years, we’ll all admit that either immortal cyborgs are enough to keep us going, or true AI will go on but human beings are done. I don’t know.
