Adversarial examples are intended to be imperceptible perturbations that cause misclassification while not changing the true class. Still, there is no consensus on which changes count as imperceptible, or at what point the true class actually changes or becomes unrecognizable. In this article, I want to explore what levels of $L_\infty$, $L_0$ and $L_1$ adversarial noise actually make sense on popular computer vision datasets such as MNIST, Fashion-MNIST, SVHN, and CIFAR-10.
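To make these norms concrete, here is a minimal sketch of how the $L_\infty$, $L_0$ and $L_1$ norms of a perturbation can be computed with NumPy; the image shape, the $\epsilon$ of 0.03 and the clipping to $[0, 1]$ are illustrative assumptions, not recommendations.

```python
import numpy as np

def perturbation_norms(x, x_adv):
    """Compute L_inf, L_0 and L_1 norms of the perturbation delta = x_adv - x.

    Both inputs are assumed to be float arrays in [0, 1] of identical shape,
    e.g. (height, width, channels) for a single image.
    """
    delta = (x_adv - x).reshape(-1)
    l_inf = np.abs(delta).max()      # largest per-pixel change
    l_0 = np.count_nonzero(delta)    # number of changed pixel values
    l_1 = np.abs(delta).sum()        # total absolute change
    return l_inf, l_0, l_1

# Illustrative usage with a random "image" and a small L_inf-bounded perturbation;
# epsilon = 0.03 (roughly 8/255) is only an example value, not a recommendation.
x = np.random.rand(32, 32, 3).astype(np.float32)
delta = np.random.uniform(-0.03, 0.03, size=x.shape).astype(np.float32)
x_adv = np.clip(x + delta, 0.0, 1.0)
print(perturbation_norms(x, x_adv))
```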
Confidence-calibrated adversarial training (CCAT) addresses two problems of training on adversarial examples: the lack of robustness against adversarial examples unseen during training, and the reduced (clean) accuracy. In particular, CCAT biases the model towards predicting low confidence on adversarial examples so that they can be rejected by confidence thresholding. In this article, I want to share the slides of the corresponding ICML talk.
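To illustrate the rejection step, the following sketch shows generic confidence thresholding for a PyTorch classifier; the model and the threshold value are placeholders and not taken from the paper, where the threshold is typically calibrated on held-out clean examples.

```python
import torch
import torch.nn.functional as F

def predict_with_rejection(model, x, threshold=0.9):
    """Predict labels but reject low-confidence inputs.

    Returns predicted labels and a boolean mask of accepted examples.
    The threshold of 0.9 is an arbitrary placeholder; in practice it would be
    chosen on held-out clean data for a target false-rejection rate.
    """
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)    # per-class confidences
        confidence, labels = probs.max(dim=1) # top-1 confidence and label
        accepted = confidence >= threshold    # reject everything below the threshold
    return labels, accepted
```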
Our paper on confidence-calibrated adversarial training was accepted at ICML’20. In the revised paper, the proposed method tackles the problem of obtaining robustness that generalizes to attacks not seen during training. This is achieved by biasing the network towards low-confidence predictions on adversarial examples and rejecting these low-confidence examples at test time. This article gives a short abstract and includes the paper and code.
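For intuition, below is a minimal sketch of how such a low-confidence target distribution could be constructed as a convex combination of the one-hot label and the uniform distribution, with the interpolation weight depending on the perturbation's $L_\infty$ norm relative to the training budget $\epsilon$; the power-law schedule and the value of `rho` are assumptions for illustration and may differ from the exact formulation in the paper.

```python
import torch

def low_confidence_target(y, delta, epsilon, num_classes, rho=10.0):
    """Convex combination of the one-hot label and the uniform distribution.

    The interpolation weight decays with the L_inf norm of the perturbation
    delta relative to the training budget epsilon; rho controls how quickly
    the target approaches the uniform distribution (value here is illustrative).
    """
    one_hot = torch.nn.functional.one_hot(y, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    # per-example L_inf norm of the perturbation
    norms = delta.view(delta.size(0), -1).abs().max(dim=1).values
    lam = (1.0 - torch.clamp(norms / epsilon, max=1.0)) ** rho
    lam = lam.view(-1, 1)
    return lam * one_hot + (1.0 - lam) * uniform
```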