I will be presenting our work on adversarial robustness at ICML'19 and CVPR'19 in Long Beach beginning next week!



CVPR Paper “Disentangling Adversarial Robustness and Generalization”

Our paper on adversarial robustness and generalization was accepted at CVPR’19. In the revised paper, we show that adversarial examples usually leave the manifold, including a brief theoretical argument. However, adversarial examples can also be found on the manifold; in that case, robustness is nothing else than generalization. For (off-manifold) adversarial examples, in contrast, we show that generalization and robustness are not necessarily contradicting objectives. As an example, on synthetic data, we adversarially train a model that is both robust and accurate. This article gives a short abstract and provides the paper including its appendix.
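To make the notion of adversarial examples concrete, below is a minimal sketch of the fast gradient sign method (FGSM) on a hypothetical toy model; the logistic-regression "network", its weights, and the inputs are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

# Hypothetical toy model: logistic regression on 2-D inputs,
# attacked with the fast gradient sign method (FGSM).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm(w, b, x, y, eps):
    # Gradient of the cross-entropy loss w.r.t. the input x
    # (for logistic regression this is (p - y) * w).
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    # Perturb each coordinate by eps in the direction increasing the loss.
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])
y = 1.0

x_adv = fgsm(w, b, x, y, eps=0.1)
print("clean loss:", loss(w, b, x, y))
print("adversarial loss:", loss(w, b, x_adv, y))
```

Adversarial training, as used in the paper, replaces (a fraction of) the clean training examples with such perturbed ones during optimization; whether the perturbation stays on or leaves the data manifold is exactly the distinction the paper studies.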
