Chun et al. study adversarial robustness and robustness against noise and corruptions for various regularization methods, including label smoothing, MixUp, and adversarial logit pairing. As adversarial attack the authors consider only FGSM; corruption robustness is tested on the CIFAR-10-C dataset. Additionally, out-of-distribution detection is evaluated. Regarding adversarial robustness, the experiments are hard to interpret: FGSM is a very weak attack, and the observed improvements often seem insignificant. It is also striking that an adversarially trained model exhibits very high robust test error against FGSM. Overall, the authors conclude that regularization methods usually succeed only on the task they target; for example, adversarial training or adversarial logit pairing improves adversarial robustness but not necessarily robustness against corruptions, and vice versa.
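To illustrate why FGSM is considered such a simple attack, here is a minimal NumPy sketch of the method on a toy logistic-regression model (the weights and epsilon are illustrative choices, not taken from the paper): FGSM perturbs the input by a single step of size eps in the direction of the sign of the input gradient of the loss.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a binary logistic-regression model.

    With binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (sigmoid(w @ x + b) - y) * w; FGSM adds
    eps times the sign of that gradient to x.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                     # dLoss/dx for cross-entropy
    return x + eps * np.sign(grad_x)

# Toy example: a point correctly classified as class 1 (logit > 0)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.6)
clean_logit = w @ x + b      # 1.5, classified as class 1
adv_logit = w @ x_adv + b    # negative, classified as class 0
```

With this epsilon the single gradient-sign step already flips the prediction; against deep networks, however, iterative attacks such as PGD are typically far stronger than this one-step method, which is why FGSM-only evaluations are hard to interpret.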