
TAG »COMPUTER VISION«

MAY 2020

PROJECT

Adversarial training on location-optimized adversarial patches.

More ...

ARTICLE

Adversarial Examples Leave the Data Manifold

Adversarial examples are commonly assumed to leave the manifold of the underlying data, although this has not been confirmed experimentally so far. This means that deep neural networks perform well on the manifold; however, slight perturbations in directions leaving the manifold may cause misclassification. In this article, based on my recent CVPR'19 paper, I want to empirically show that adversarial examples indeed leave the manifold. For this purpose, I will present results on a synthetic dataset with known manifold as well as on MNIST with approximated manifold.
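To make the idea concrete, here is a minimal sketch, not the paper's method: it approximates a (here linear, synthetic) manifold with PCA and compares clean and perturbed points by their reconstruction distance to that manifold. The data, dimensions, and perturbation below are all stand-ins; a real experiment would use the actual dataset and an optimized attack.

    # Minimal sketch: approximate the data manifold with PCA and compare
    # clean vs. perturbed examples by reconstruction distance. A larger
    # distance indicates points lying farther off-manifold.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Stand-in data: 1000 points near a hypothetical 8-dim linear manifold in R^64.
    basis = rng.normal(size=(8, 64))
    clean = rng.normal(size=(1000, 8)) @ basis
    clean += 0.01 * rng.normal(size=clean.shape)  # small on-manifold noise

    # Approximate the manifold from the clean data.
    pca = PCA(n_components=8).fit(clean)

    def manifold_distance(x):
        """L2 distance between x and its projection onto the PCA subspace."""
        reconstructed = pca.inverse_transform(pca.transform(x))
        return np.linalg.norm(x - reconstructed, axis=1)

    # Stand-in "adversarial" perturbation: here just a small random step
    # (a real attack would optimize it); it almost surely has a component
    # orthogonal to the manifold, so the distance grows.
    perturbed = clean + 0.5 * rng.normal(size=clean.shape) / np.sqrt(clean.shape[1])

    print("clean mean distance:    ", manifold_distance(clean).mean())
    print("perturbed mean distance:", manifold_distance(perturbed).mean())

On a real dataset such as MNIST, the linear PCA subspace would be replaced by a learned approximation of the manifold, but the off-manifold test stays the same: measure how far examples lie from their projection.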

More ...

MARCH 2020

PROJECT

Confidence calibration of adversarial training for “generalizable” robustness.

More ...