kleinbl00 · 2678 days ago · post: Weaponised AI. Davey Winder asks the industry - is that a thing yet?

    Itsik Mantin, director of research at Imperva, points to a demonstration at Defcon last week as to how "AI can be used to design malware utilising the results of thousands of hide and seek attempts by malware to sneak past anti-virus solutions by changing itself until it finds the right fit that allows it to sneak below the anti-virus radar."

Used to be we'd just call this brute-forcing but then they don't get to run a graphic of the Terminator.

It seems to me that the less working knowledge one has of machine learning, the more likely one is to play the AI card. What is described in this article is literally failing until you succeed.
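That "hide and seek" loop is exactly the trial-and-error being described: mutate the payload, ask the scanner, repeat until it stops flagging. A minimal sketch of the idea, assuming a toy signature scanner — `SIGNATURE`, `detected`, `mutate`, and `evade` are all invented for illustration, not anything from the Defcon demo:

```python
import random

# Toy "anti-virus": flags any payload containing a known signature byte string.
# Everything here is a hypothetical stand-in for the demo described above.
SIGNATURE = b"\xde\xad\xbe\xef"

def detected(payload: bytes) -> bool:
    """Stand-in signature scanner."""
    return SIGNATURE in payload

def mutate(payload: bytes, rng: random.Random) -> bytes:
    """Flip one random byte -- the malware 'changing itself' step."""
    i = rng.randrange(len(payload))
    return payload[:i] + bytes([payload[i] ^ 0xFF]) + payload[i + 1:]

def evade(payload: bytes, max_tries: int = 10_000, seed: int = 0):
    """Keep mutating until the scanner stops flagging the payload."""
    rng = random.Random(seed)
    for attempt in range(1, max_tries + 1):
        if not detected(payload):
            return payload, attempt
        payload = mutate(payload, rng)
    raise RuntimeError("no evading variant found")

variant, tries = evade(b"HEADER" + SIGNATURE + b"BODY")
```

No learning, no model — just failing until you succeed, which is the point.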

    Darren Anstee, CTO at Arbor Networks, reckons that social engineers will have an eye on ML telling us it "could also be used to improve spear-phishing attempts, allowing attackers to more closely mimic the style of emails and documents, so that they appear even more like those from real colleagues."

This is an argument that humans will use machines to tell them how to better imitate humans. I can see their point, but it's a long walk from there to Skynet.

    Not to forget that AI might be a security technology that is a better organic fit for defence. "The feedback loop that drives learning and automatic improvement of algorithms isn't really there for attackers," Eric Ogren, senior analyst with the Information Security team at 451 Research told SC Media, continuing "Once an action is initiated, how do you know if it works? How do you know how it fails? How do you get better the next time?"

This statement directly contradicts the first statement I quoted - he's arguing you can't brute-force a penetration using machine learning because there isn't enough parametric data to train the machine.

    You ain't seen nothing yet, and you won't for several years either, but 10 years from now it will be mature already, and twenty years from now the weaponized AI will literally blow your socks off in a bad way. I can't emphasize how horrible it will be; nothing will even come close to how lethal and horrific it will become; the greatest horror of our lifetime, a true dystopia of absolutely epic proportions.

And this is you staring into the dead gaze of the T1000 instead of reading what is said, which is largely "how can we tie some buzzwords into boring IT security shit?"