Universal Adversarial Perturbations

Double Targeted Universal Adversarial Perturbations

Despite their impressive performance, deep neural networks (DNNs) are widely known to be vulnerable to adversarial attacks, which makes them difficult to deploy in security-sensitive applications such as autonomous driving. …
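
The title refers to a single universal perturbation optimized to push images of one chosen source class toward one chosen target class. Below is a minimal sketch of that idea, assuming PyTorch, a pretrained ImageNet classifier, and a data loader containing only source-class images; the class index, the L-infinity budget, and the step schedule are illustrative assumptions, and the full method additionally preserves predictions on non-source images, which this sketch omits.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Frozen pretrained classifier; only the perturbation is optimized.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

eps, step, epochs = 10 / 255, 1 / 255, 5                  # illustrative budget and schedule
target_class = 954                                        # hypothetical sink-class index
delta = torch.zeros(1, 3, 224, 224, requires_grad=True)   # one shared (universal) perturbation


def train_double_targeted_uap(source_loader):
    """Optimize delta so that source-class images are classified as target_class."""
    for _ in range(epochs):
        for images, _ in source_loader:                   # batches drawn from the source class only
            logits = model((images + delta).clamp(0, 1))
            targets = torch.full((images.size(0),), target_class, dtype=torch.long)
            loss = F.cross_entropy(logits, targets)       # pull predictions toward the sink class
            loss.backward()
            with torch.no_grad():
                delta -= step * delta.grad.sign()         # signed-gradient step on the perturbation
                delta.clamp_(-eps, eps)                   # keep the perturbation within the L-inf budget
                delta.grad.zero_()
    return delta.detach()
```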

Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations

A wide variety of works have explored why adversarial examples exist, but no consensus explanation has emerged. We propose to treat the DNN logits as a feature-representation vector and to exploit it to analyze the …
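
To make the logits-as-feature-vector idea concrete, the sketch below (assuming PyTorch and a pretrained ImageNet classifier) extracts the logit vector of a clean image, of a universal perturbation fed to the network on its own, and of the perturbed image, and compares them with cosine similarity; the placeholder inputs and the similarity-based comparison are illustrative assumptions, not the paper's exact analysis protocol.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()


def logit_feature(x: torch.Tensor) -> torch.Tensor:
    """Return the pre-softmax logit vector of a single input as a 1-D feature vector."""
    with torch.no_grad():
        return model(x).squeeze(0)


image = torch.rand(1, 3, 224, 224)                # placeholder clean image
uap = 0.04 * torch.randn(1, 3, 224, 224)          # placeholder universal perturbation

f_image = logit_feature(image)                    # feature of the clean image
f_uap = logit_feature(uap)                        # feature of the perturbation fed alone
f_adv = logit_feature((image + uap).clamp(0, 1))  # feature of the perturbed image

# Cosine similarity between logit vectors shows whether the perturbed image's response
# lies closer to the clean image's feature or to the perturbation's own feature.
print("cos(image, adv):", F.cosine_similarity(f_image, f_adv, dim=0).item())
print("cos(uap,   adv):", F.cosine_similarity(f_uap, f_adv, dim=0).item())
```

If the perturbed image's logits align more closely with the perturbation's logits than with the clean image's, that would suggest the perturbation's features, rather than the image's, dominate the network's prediction.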