Check out our latest research on adversarial robustness and generalization of deep networks.


Logan Engstrom, Dimitris Tsipras, Ludwig Schmidt, Aleksander Madry. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations. CoRR abs/1712.02779, 2017.

Engstrom et al. demonstrate that spatial transformations such as translations and rotations can be used to generate adversarial examples. Personally, however, I think the paper does not address the question of where adversarial perturbations “end” and generalization issues “start”. For larger translations and rotations, the problem is clearly one of generalization. Small ones can also be interpreted as adversarial perturbations – especially when they are computed with the intention of fooling the network. Still, the distinction is not clear ...
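The core idea is easy to reproduce: instead of optimizing pixel-wise noise, search over a small grid of rotations and translations and keep the transformed image that changes the model's prediction. A minimal sketch in plain NumPy – the function names, the toy classifier, and the grid are my own illustration, not the paper's code:

```python
import numpy as np

def rotate_translate(img, angle_deg, dx, dy):
    """Rotate a 2D grayscale image about its center, then translate it.

    Uses nearest-neighbor inverse mapping; pixels mapped from outside
    the source image are left at zero.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    theta = np.deg2rad(angle_deg)
    cos, sin = np.cos(theta), np.sin(theta)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    for y in range(h):
        for x in range(w):
            # Inverse-map each output pixel back into the source image.
            xs = cos * (x - cx - dx) + sin * (y - cy - dy) + cx
            ys = -sin * (x - cx - dx) + cos * (y - cy - dy) + cy
            xi, yi = int(round(xs)), int(round(ys))
            if 0 <= xi < w and 0 <= yi < h:
                out[y, x] = img[yi, xi]
    return out

def grid_search_attack(img, label, predict, angles, shifts):
    """Exhaustively try (angle, dx, dy) combinations and return the first
    transformation that flips the classifier's prediction, or None."""
    for a in angles:
        for dx in shifts:
            for dy in shifts:
                adv = rotate_translate(img, a, dx, dy)
                if predict(adv) != label:
                    return (a, dx, dy), adv
    return None, None
```

As a toy demonstration, a "classifier" that checks which half of the image is brighter is fooled by a three-pixel shift:

```python
img = np.zeros((8, 8))
img[1, 4] = 1.0  # bright spot in the top half
predict = lambda x: int(x[:4].sum() >= x[4:].sum())

params, adv = grid_search_attack(img, predict(img), predict,
                                 angles=[0], shifts=[-3, 0, 3])
```

In the paper, the exhaustive grid search is the strongest variant of the attack; the cheaper "worst-of-k" variant samples k random transformations instead of enumerating the full grid.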

Also find this summary on ShortScience.org.

What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below: