I will be presenting our work on adversarial robustness at ICML'19 and CVPR'19 in Long Beach beginning next week!


Beranger Dumont, Simona Maggio, Pablo Montalvo. Robustness of Rotation-Equivariant Networks to Adversarial Perturbations. CoRR abs/1802.06627 (2018).

Dumont et al. compare different adversarial spatial-transformation attacks (including rotations and translations) against both standard and rotation-equivariant convolutional neural networks. On MNIST, CIFAR-10, and ImageNet, they consider translations and rotations, as well as the attack of [1] based on spatial transformer networks. Additionally, they consider rotation-equivariant convolutional neural networks; however, neither the attacks nor the networks are introduced in detail. The results are interesting in that translation- and rotation-based attacks are significantly more successful on CIFAR-10 than on MNIST or ImageNet. The authors, however, do not offer a satisfying explanation for this observation.
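To make the attack idea concrete, the following is a minimal sketch of a rotation- and translation-based adversarial attack as a simple grid search: try a set of rotation angles and translations and return the first transform that changes the classifier's prediction. The function names, the toy classifier, and the parameter grid are my own illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def spatial_attack(image, label, predict, angles, shifts):
    """Grid search over rotations/translations (illustrative sketch, not the
    authors' code) for a transform that causes misclassification."""
    for angle in angles:
        for dx, dy in shifts:
            # Rotate (bilinear interpolation, keep shape) then translate.
            transformed = shift(
                rotate(image, angle, reshape=False, order=1),
                (dy, dx), order=1)
            if predict(transformed) != label:
                # Found an adversarial spatial transform.
                return transformed, angle, (dx, dy)
    return None  # no successful transform in the search grid
```

In practice one would search over a classifier's logits on batches of images; here the point is only that the attack space is a low-dimensional grid of geometric transforms rather than per-pixel perturbations.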

  • [1] C. Xiao, J.-Y. Zhu, B. Li, W. He, M. Liu, D. Song. Spatially Transformed Adversarial Examples. ICLR, 2018.
Also find this summary on ShortScience.org.

What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below!