Shafahi et al. propose label smoothing and logit squeezing together with Gaussian noise augmentation as an efficient alternative to adversarial training for defending against adversarial examples. Specifically, they argue (based on a linear approximation of the perturbation’s impact on the model’s logits) that logit squeezing, i.e., regularizing the logits to be small, and label smoothing, i.e., training on a combination of one-hot and uniform labels, can achieve robustness similar to adversarial training when additionally using Gaussian noise augmentation. These schemes have the advantage of not requiring the additional forward and backward passes needed to compute adversarial examples during training. In experiments, they show that these methods outperform adversarial training on CIFAR-10, also in terms of clean accuracy (which adversarial training usually reduces). However, I also want to note that the proposed methods have since been shown to be ineffective in  and the paper was withdrawn.
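To make the two regularizers concrete, here is a minimal NumPy sketch of a training loss combining label smoothing, a logit-squeezing penalty, and Gaussian input noise. The smoothing factor `alpha`, the squeezing weight `beta`, and the noise scale are illustrative placeholders, not the paper’s hyperparameters, and the linear “model” is only a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_labels(y, num_classes, alpha=0.1):
    # Label smoothing: mix one-hot targets with the uniform distribution,
    # (1 - alpha) * one_hot + alpha / num_classes.
    one_hot = np.eye(num_classes)[y]
    return (1.0 - alpha) * one_hot + alpha / num_classes

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def training_loss(logits, y, num_classes, alpha=0.1, beta=0.05):
    # Cross-entropy against the smoothed targets ...
    targets = smoothed_labels(y, num_classes, alpha)
    log_probs = np.log(softmax(logits) + 1e-12)
    ce = -(targets * log_probs).sum(axis=1).mean()
    # ... plus a logit-squeezing penalty that pushes logits toward zero.
    squeeze = beta * np.square(logits).mean()
    return ce + squeeze

# Gaussian noise augmentation: perturb inputs before the forward pass.
x = rng.normal(size=(4, 8))               # toy batch of inputs
x_noisy = x + 0.3 * rng.normal(size=x.shape)
W = rng.normal(size=(8, 3)) * 0.1         # toy linear "model"
logits = x_noisy @ W
y = np.array([0, 1, 2, 1])
loss = training_loss(logits, y, num_classes=3)
```

Note that, unlike adversarial training, computing this loss needs no inner attack loop: the only extra cost over standard training is sampling the Gaussian noise.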