by kleinbl00
“It shouldn’t be able to take an image, slightly tweak the pixels, and completely confuse the network,” he said. “Neural networks blow all previous techniques out of the water in terms of performance, but given the existence of these adversarial examples, it shows we really don’t understand what’s going on.”

The research has its limitations: for now, attackers need to know the inner workings of the algorithm they’re trying to fool. However, past attacks have been shown to work on black-box systems — proprietary algorithms whose internals are unknown to the attacker. Athalye says the team will pursue that area of research next.
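The core idea — nudging each pixel slightly in the direction that most damages the model's prediction — can be sketched in a few lines. This is a minimal, hypothetical illustration using a toy linear classifier in NumPy (gradient-sign style perturbation), not the researchers' actual attack code or models; the names and the epsilon value are assumptions for the sketch.

```python
import numpy as np

# Toy "network": a fixed linear classifier over a flat 64-pixel image.
# (A hypothetical stand-in for the deep networks discussed in the article.)
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # classifier weights
x = rng.normal(size=64)   # the input "image" as a pixel vector

def score(img):
    """Positive score -> class A, negative score -> class B."""
    return float(w @ img)

# For a linear model, the gradient of the score w.r.t. the input is just w.
# A gradient-sign attack moves every pixel a tiny step (eps) against the
# current prediction -- each pixel changes by at most eps.
eps = 0.5
x_adv = x - eps * np.sign(w) * np.sign(score(x))

print("original score:", score(x))
print("adversarial score:", score(x_adv))
print("max per-pixel change:", np.abs(x_adv - x).max())
```

The per-pixel change is capped at `eps`, yet the many small nudges accumulate and can flip the classifier's decision — the "slightly tweak the pixels" effect the quote describes. A white-box attacker can compute this gradient exactly; black-box attacks must estimate or transfer it.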