
Nic Ford, Justin Gilmer, Nicholas Carlini, Ekin Dogus Cubuk. Adversarial Examples Are a Natural Consequence of Test Error in Noise. CoRR abs/1901.10513 (2019).

Ford et al. show that the existence of adversarial examples can be directly linked to test error on noise and other types of random corruption. Additionally, obtaining models that are robust against random corruptions is difficult, and even adversarially robust models may not be entirely robust against such corruptions. Furthermore, many “defenses” against adversarial examples perform poorly on random corruptions, indicating that these defenses do not actually produce robust models but merely make gradient-based attacks harder (gradient masking).
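
To make the connection concrete, below is a minimal sketch of how one might probe this relationship empirically. It assumes a PyTorch classifier `model` and a `test_loader` (both hypothetical names) and a chosen noise level; it estimates the error rate under Gaussian noise and converts it, via the Gaussian isoperimetric bound the paper builds on (error rate $\mu$ under noise of standard deviation $\sigma$ implies a median distance to the nearest error of at most $-\sigma \Phi^{-1}(\mu)$), into an upper bound on how far away adversarial examples can be.

```python
import torch
from scipy.stats import norm

def error_rate_under_noise(model, loader, sigma, samples=10, device="cuda"):
    """Estimate the error rate of `model` on inputs corrupted by
    additive Gaussian noise with standard deviation `sigma`."""
    model.eval()
    errors, total = 0, 0
    with torch.no_grad():
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            for _ in range(samples):
                noisy = inputs + sigma * torch.randn_like(inputs)
                predictions = model(noisy).argmax(dim=1)
                errors += (predictions != targets).sum().item()
                total += targets.numel()
    return errors / total

# Gaussian isoperimetric bound: an error rate mu in noise of standard
# deviation sigma implies that the median L2 distance from noisy points
# to the nearest error is at most -sigma * Phi^{-1}(mu), i.e., even a
# small error rate in noise already implies nearby adversarial examples.
# `model` and `test_loader` are assumed to be defined elsewhere.
sigma = 0.1
mu = error_rate_under_noise(model, test_loader, sigma)
median_distance_bound = -sigma * norm.ppf(mu)
print(mu, median_distance_bound)
```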

Also find this summary on ShortScience.org.
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.