
Sanghyuk Chun, Seong Joon Oh, Sangdoo Yun, Dongyoon Han, Junsuk Choe, Youngjoon Yoo. An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods. ICML Workshop, 2019.

Chun et al. study the robustness of various regularization methods, including label smoothing, MixUp, and adversarial logit pairing, against adversarial examples as well as noise and common corruptions. As adversarial attack, the authors consider only FGSM; corruptions are evaluated on the CIFAR-C dataset, and out-of-distribution detection is tested as well. Regarding adversarial robustness, the experiments are hard to interpret: FGSM is a very simple and weak attack, and the observed improvements often seem insignificant. It is also striking that even an adversarially trained model exhibits very high robust test error against FGSM. Overall, the authors conclude that regularization methods usually only succeed on the task they target; for example, adversarial training or adversarial logit pairing improves adversarial robustness, but not necessarily robustness against corruptions, and vice versa.
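For reference, FGSM amounts to a single signed-gradient step on the loss, which is why it is generally regarded as a weak baseline attack. The following is a minimal PyTorch sketch of such an attack, not taken from the paper; the epsilon budget and the [0, 1] image range are assumptions:

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon):
    """One-step FGSM: perturb inputs along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Single signed-gradient step, clipped to the assumed [0, 1] image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

Because the attack takes only one gradient step, a model can appear robust against FGSM while remaining vulnerable to stronger, iterative attacks such as PGD, which is one reason the reported improvements are difficult to interpret.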

What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.