AUGUST 2019

READING

Ali Shafahi, Amin Ghiasi, Furong Huang, Tom Goldstein. Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training? openreview.net/forum?id=BJlr0j0ctX.

Shafahi et al. propose label smoothing and logit squeezing, combined with Gaussian noise augmentation, as an efficient alternative to adversarial training against adversarial examples. Specifically, they argue (based on a linear approximation of the perturbation’s impact on the model’s logits) that logit squeezing, i.e., regularizing the logits to be small, and label smoothing, i.e., training on a combination of one-hot and uniform labels, can lead to similar robustness as adversarial training when additionally using Gaussian noise. These schemes have the advantage of not requiring additional forward and backward passes to compute adversarial examples during training. In experiments, they show that these methods outperform adversarial training on Cifar10, also considering the achieved accuracy (which adversarial training usually reduces). However, I also want to note that the proposed methods have been shown to be ineffective in [1] and the paper was withdrawn.
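To make the combined objective a bit more concrete, below is a minimal PyTorch-style sketch of a training loss with label smoothing, logit squeezing, and Gaussian noise augmentation. The hyper-parameters (smoothing factor, squeezing weight, noise standard deviation) and function names are illustrative assumptions of mine, not values taken from the paper.

```python
# Minimal sketch: label smoothing + logit squeezing + Gaussian noise augmentation.
# Hyper-parameters are placeholders, not the paper's settings.
import torch
import torch.nn.functional as F

def smoothed_labels(targets, num_classes, smoothing=0.1):
    """Combine one-hot labels with a uniform distribution (label smoothing)."""
    one_hot = F.one_hot(targets, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    return (1.0 - smoothing) * one_hot + smoothing * uniform

def training_loss(model, images, targets, num_classes,
                  smoothing=0.1, squeeze_weight=0.05, noise_std=0.1):
    # Gaussian noise augmentation of the inputs (no adversarial example needed).
    noisy = images + noise_std * torch.randn_like(images)
    logits = model(noisy)
    # Cross-entropy against the smoothed labels.
    log_probs = F.log_softmax(logits, dim=1)
    ce = -(smoothed_labels(targets, num_classes, smoothing) * log_probs).sum(dim=1).mean()
    # Logit squeezing: penalize the norm of the logits to keep them small.
    squeeze = logits.norm(dim=1).mean()
    return ce + squeeze_weight * squeeze
```

Note that, in contrast to adversarial training, this loss requires only a single forward and backward pass per batch, which is where the claimed efficiency comes from.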

  • [1] Marius Mosbach, Maksym Andriushchenko, Thomas Alexander Trost, Matthias Hein, Dietrich Klakow: Logit Pairing Methods Can Fool Gradient-Based Attacks. CoRR abs/1810.12042 (2018).
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.