"Truly autonomous artificial intelligence is very much in its infancy" Hinchcliffe explained "and isn't fully realised by either attackers or defenders yet."
That's exactly right. You ain't seen nothing yet, and you won't for several years either, but 10 years from now it will be mature already, and twenty years from now weaponized AI will literally blow your socks off in a bad way. I can't emphasize enough how horrible it will be; nothing will even come close to how lethal and horrific it will become; the greatest horror of our lifetime, a true dystopia of absolutely epic proportions. This is why we have to do everything conceivably possible to iron out BEFOREHAND the differences between our people -- but I know we are not going to be able to do that. That is the dystopian problem, and there is no solution.
I do -- I'm very familiar with AI; I have been working on it for 5 years as CEO of ideasware, a consulting company in AI, robotics, and nanotechnology. I think that you have a ways to go yet; try and keep up, because I'm going to communicate as loudly as I can. And I must say I'm surprised and a little worried that you apparently take both sides of the coin. How about just listening and learning and talking in a sane, calm voice?
Used to be we'd just call this brute-forcing, but then they don't get to run a graphic of the Terminator. It seems to me that the less working knowledge one has of machine learning, the more likely one is to play the AI card. What is described in this article is literally failing until you succeed. This is an argument that humans will use machines to tell them how to better imitate humans. I can see their point, but it's a long walk from there to Skynet.

This statement directly contradicts the first statement I quoted -- he's arguing you can't brute-force a penetration using machine learning because there isn't enough parametric data to train the machine.

And this is you staring into the dead gaze of the T1000 instead of reading what is said, which is largely "how can we tie some buzzwords into boring IT security shit?"

Itsik Mantin, director of research at Imperva, points to a demonstration at Defcon last week as to how "AI can be used to design malware, utilising the results of thousands of hide and seek attempts by malware to sneak past anti-virus solutions by changing itself until it finds the right fit that allows it to sneak below the anti-virus radar."
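For what it's worth, the "failing until you succeed" loop that quote describes is conceptually simple. Here is a deliberately minimal sketch in Python; the `is_flagged` and `mutate` helpers are hypothetical stand-ins for illustration only, not any real anti-virus or malware API.

```python
import random

def is_flagged(sample: bytes) -> bool:
    """Hypothetical stand-in for querying a detector; toy rule for illustration."""
    return b"EVIL" in sample

def mutate(sample: bytes) -> bytes:
    """Hypothetical stand-in for a random or learned transformation of the payload."""
    i = random.randrange(len(sample))
    return sample[:i] + bytes([random.randrange(256)]) + sample[i + 1:]

def evade(sample: bytes, max_tries: int = 10_000):
    """'Failing until you succeed': keep mutating until the detector stops flagging it."""
    for _ in range(max_tries):
        if not is_flagged(sample):
            return sample
        sample = mutate(sample)
    return None

print(evade(b"...EVIL..."))
```

Whether you call that "AI" or just a mutate-and-retry loop is exactly the argument being had above.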
Darren Anstee, CTO at Arbor Networks, reckons that social engineers will have an eye on ML, telling us it "could also be used to improve spear-phishing attempts, allowing attackers to more closely mimic the style of emails and documents, so that they appear even more like those from real colleagues."
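To make the style-mimicry idea concrete, here is a toy sketch: fit a tiny bigram (Markov) model to a couple of invented "colleague" emails and sample text in a similar register. The `past_emails` data is made up purely for illustration; a real attacker would presumably use far more capable language models.

```python
import random
from collections import defaultdict

# Invented example data standing in for a colleague's past emails.
past_emails = [
    "hi team please review the attached report before friday thanks",
    "hi all quick reminder the invoice is attached please approve today",
]

# Build a bigram table: each word maps to the words that followed it.
model = defaultdict(list)
for email in past_emails:
    words = email.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)

def sample(start: str = "hi", length: int = 12) -> str:
    """Generate text by repeatedly picking a likely next word."""
    word, out = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])
        out.append(word)
    return " ".join(out)

print(sample())
```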
Not to forget that AI might be a security technology that is a better organic fit for defence. "The feedback loop that drives learning and automatic improvement of algorithms isn't really there for attackers," Eric Ogren, senior analyst with the Information Security team at 451 Research, told SC Media, continuing: "Once an action is initiated, how do you know if it works? How do you know how it fails? How do you get better the next time?"
You ain't seen nothing yet, and you won't for several years either, but 10 years from now it will be mature already, and twenty years from now weaponized AI will literally blow your socks off in a bad way. I can't emphasize enough how horrible it will be; nothing will even come close to how lethal and horrific it will become; the greatest horror of our lifetime, a true dystopia of absolutely epic proportions.
Even if I were to believe you, the basic point of my explanation still stands, which is: "you won't for several years either, but 10 years from now it will be mature already, and twenty years from now weaponized AI will literally blow your socks off in a bad way. I can't emphasize enough how horrible it will be; nothing will even come close to how lethal and horrific it will become; the greatest horror of our lifetime, a true dystopia of absolutely epic proportions."
It is odd just how much of a polar opposite you seem to be to me, and yet how similar. Firstly, the fascination with AI; this is the link which allows the opposites to form. You seem to believe that AI will bring about an era of exponential growth, or at least that such growth will occur within AI, and that this growth makes it an existential threat to mankind. I believe that AI will ultimately be restricted by the physical laws of nature and by a lack of data and computing resources, and that exponential growth will not occur in the near future -- or if it does, that AI will grow more and more human/brain-like in nature as that growth occurs, and we will simply have learned the secrets of our own thoughts.

You seem to believe that AI needs to be restricted and held back, due to its status as a threat. I think, even granting that status as a threat -- though I don't believe it will come to pass -- AI can and should be recognized as a legitimate and rightful taker of our world, and that such machines would be legitimate descendants. Hell, if a war were to break out, I probably wouldn't side with humanity, given that it would be a war founded in power and politics rather than one intended to drive all of man extinct.

So, here's a bit of a question for you, from one crazy fucker to another. Let's see how much these opposites persist. What are your opinions on the nature of AI in terms of sentience and emotional or moral significance? When is an AI no longer a machine, and worthy of your respect and care?
Excellent; now that finally gives me something that I can reply to and sink my teeth into. Wonderful. Yes, I do indeed believe that it's an existential threat like never before, precisely because I believe it is very significantly different from, and will soon be astronomically better than, the human brain. It will be built from deep knowledge of the human brain derived by human beings, but soon with significant differences too, including very simple performance improvements like light speed, which will make it unbelievably better than poor biological humans, with our cranium size and our biologically slow processing speed mucking it up. And it will be exponential, as Ray Kurzweil and Stuart Russell and many, many other people of high repute are sure it is going to be. It's obvious -- whether it takes 25 years (as I believe) or 50 years is subject to some debate, but it will come true very soon, and it's going to be amazing AND horrific in the same breath. And I think IN REAL LIFE you would side with the human beings in a war, even though you pretend otherwise. That's exactly why I say you are breezily ironic, but in real life you would side with the humans 100%. I would too. But the AI is going to win the horrific war and it's not even close -- and I think it's very possible in our lifetime.
We have trouble training AI that can reliably tell cats from washing machines. The resources we devote to learning even simple tasks are absolutely huge. The human brain handles thousands of such tasks, all going at once, in real time, on less energy than a lightbulb. That is, at least, decades away from where we are today. And that's being really optimistic.

You are very confident in something that has yet to happen. Very often, in matters like these, the ideals of the layman in regards to what is possible simply do not result in practicality. "Light speed" may sound cool, just like jetpacks and solar roadways and flying cars, but I'm almost certain that when you get down to the tooth and nail, the humble little chemical signal in the neuron is one of the best ways to go about building the "arbitrary function models" that we call AI today. Our brain size is constrained, in part, by resources. It is not enough to be smart, but to be smart and powerful. To be smart and observant. Any AI will likely have to have a ratio of "thinking to acting" parts similar to ours if it wants to have similar success, or it will have to depend on human society to be its body, of sorts.

Remember that the number one constraint on learning is data, not intelligence or thought power. We, society and our minds, are optimized not to think as much as possible, but to collect and filter as much data as possible. Discoveries are made by accident, or with a sudden realization, not because we sat and put endless brainpower into the topic. Not most of the time, anyway. There's an assumption at the core of the AI singularity, and that is that you can practically produce knowledge from within a vacuum. I don't see that as being very likely.

Exponential growth is easy to see if you cannot also see the things which serve to limit that growth. Life itself should be theoretically capable of growing forever, at exponential rates. If I were an alien, not understanding the nature of overpopulation, I'd fear just that result from humanity.

War is not productive. Any AI, or theoretical all-powerful being, would see that a mass sterilization program -- or simply making it not feasible or economically sensible to have kids, while easy to avoid having them -- would be a far better way to exterminate our little species. Hell, the being of human creation that came to control us (society) is already doing just that to control populations, now that we have exited an era in which having more and more humans is a productive action: legalize abortion, stop demonizing gay people, encourage birth control, more education, more choice about having kids, more expense to have kids, more women working, etc. We've already set the standard for our own demise, no AI needed; all the AI has to do is tweak the numbers. Note, all the things above are awesome and great and lead to a better society; they aren't bad and shouldn't be opposed.
I must say I really question your use of the term "layman". I am the CEO of ideasware, focused on consulting for AI, robotics, and nanotechnology companies. For 8 years before that I was CEO of memememobile.com, which had patents on individually trained voice recognition. I personally raised $3 million, and sold it off for a very handsome profit. Previously I worked as Director of ERM at Siebel, CTO at Cipient, Director at KPMG, long-term consultant (3 years) at Cisco, and Manager at Disney. I am not, by any means, a fool. When I say "exponential growth" I specifically mean it as recognized by Ray Kurzweil, a Director at Google, and many of his AI friends. It's pretty much a given for information sciences: Moore's Law has been around for 50 years, with the cost of computation dropping roughly 50% every 18 months, without exception. For the past 110 years it has been like that (check YouTube to see, it's quite obvious), and experts predict it will be like that for quite some time in the future.
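For what it's worth, here is the back-of-the-envelope arithmetic behind the rate cited above, assuming purely for illustration that compute per dollar doubles every 18 months. It only works out the multiplier implied by that assumption; it says nothing about whether the trend actually holds.

```python
# Illustrative assumption: compute per dollar doubles every 18 months.
DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years: float) -> float:
    """Multiplicative improvement after `years` at one doubling per period."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for horizon in (10, 25, 50):
    print(f"{horizon} years -> ~{growth_factor(horizon):,.0f}x")

# Roughly: 10 years -> ~100x, 25 years -> ~100,000x, 50 years -> ~10 billion x.
```

That is the shape of the argument on both sides: whether those multipliers keep compounding, or whether data, energy, and physics cap them, is the actual disagreement in this thread.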
If you'd like to argue from authority, here's a DeepMind researcher talking about the question, saying much the same as what I have on AI needing data. It's about ten minutes in.
Oh, and BTW, I do NOT believe that it needs to be held back -- of course that's not possible anyway, as you rightly point out. But I do believe that it needs to be learned about (as Elon Musk says too, and as is the natural way to do it) and then LIGHTLY regulated so we can keep a government eye on the proceedings. I do NOT believe that's going to be enough, unfortunately, but it will help a little without mucking things up.
It doesn't, though. Because your fears are based on uninformed hyperbole, and when I point out that they're based on uninformed hyperbole, you say "still stands." This isn't a "do you believe me or don't you" problem. This is a "can you parse the information in front of you" problem.