Logan Engstrom, Dimitris Tsipras, Ludwig Schmidt, Aleksander Madry. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations. CoRR abs/1712.02779, 2017.

Engstrom et al. demonstrate that simple spatial transformations, i.e., translations and rotations, suffice to generate adversarial examples. Personally, however, I think that the paper does not address the question of where adversarial perturbations “end” and generalization issues “start”. For large translations and rotations, the problem is clearly one of generalization. Small ones can also be interpreted as adversarial perturbations – especially when they are computed with the intention of fooling the network. Still, the distinction is not clear ...
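To make the idea concrete, here is a minimal sketch of a spatial attack as a grid search over rotation angles and translations, one of the simple strategies considered in this line of work. It is not the paper's exact procedure; the `classify` callable, the image, and the search grid are placeholders I introduce for illustration:

```python
from itertools import product

from scipy.ndimage import rotate, shift


def spatial_attack(image, label, classify, angles, translations):
    """Grid search over rotations (in degrees) and translations (in pixels)
    for a transformed copy of `image` that `classify` mislabels.

    Returns (adversarial_image, angle, (dx, dy)) on success, else None.
    """
    for angle, dx, dy in product(angles, translations, translations):
        # Rotate around the image center, keeping the original shape,
        # then translate; out-of-bounds regions are filled with zeros.
        candidate = rotate(image, angle, reshape=False, order=1)
        candidate = shift(candidate, (dy, dx), order=1)
        if classify(candidate) != label:
            return candidate, angle, (dx, dy)
    return None  # no fooling transformation found on this grid
```

For small grids this exhaustive search is cheap; the transformations stay visually benign, which is exactly why it is debatable whether a successful transformation should count as an adversarial example or as a generalization failure.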

Also find this summary on ShortScience.org.
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.