Recorded ICML’20 Talk “Confidence-Calibrated Adversarial Training”

In our ICML’20 paper, confidence-calibrated adversarial training (CCAT) addresses two problems of “regular” adversarial training: first, it improves robustness against adversarial examples unseen during training; second, it increases clean accuracy. CCAT biases the model towards predicting low confidence on adversarial examples, such that adversarial examples can be rejected by confidence thresholding. This article shares my talk on CCAT as recorded for ICML’20.
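To make the training bias concrete: as I describe in the paper, CCAT replaces the usual one-hot target on adversarial examples with a convex combination of the one-hot label and the uniform distribution, where the weight on the label decays with the perturbation size. Here is a minimal NumPy sketch of that target distribution; the function name, the default value of rho, and the power-law transition are illustrative rather than a verbatim reproduction of our implementation:

```python
import numpy as np

def ccat_target(one_hot, delta_norm, epsilon, rho=10.0):
    """Training target for an adversarial example with perturbation
    norm delta_norm inside an epsilon-ball.

    Interpolates between the one-hot label (no perturbation) and the
    uniform distribution (perturbation at the ball's boundary). The
    interpolation weight lam follows a power-law decay in delta_norm;
    rho controls how quickly confidence drops off.
    """
    num_classes = one_hot.shape[-1]
    lam = (1.0 - min(1.0, delta_norm / epsilon)) ** rho
    uniform = np.full(num_classes, 1.0 / num_classes)
    return lam * one_hot + (1.0 - lam) * uniform

# Clean input keeps the one-hot target; at the boundary the target
# becomes uniform, i.e., minimal confidence:
label = np.array([1.0, 0.0, 0.0])
print(ccat_target(label, delta_norm=0.0, epsilon=0.03))   # one-hot
print(ccat_target(label, delta_norm=0.03, epsilon=0.03))  # uniform
```

Training then minimizes cross-entropy against this soft target, which is what pushes predictions towards low confidence on (and, empirically, beyond) the training perturbations.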


A clear advantage of virtual conferences is that all talks and keynotes get recorded and are available on demand. For me, ICML'20 was the first conference for which I had to record my talk. The paper I presented at ICML'20 deals with robustness to adversarial examples. In particular, compared to standard adversarial training, it improves robustness to adversarial examples not seen during training — for example, $L_2$ adversarial examples when training on $L_\infty$ ones. This is achieved by biasing the network towards uniform predictions on adversarial examples during training. As shown in the paper, this behavior extends beyond the $L_\infty$ ball used for adversarial examples during training. As a result, adversarial examples can easily be rejected by confidence thresholding. Find the talk, paper, and slides below.
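The rejection step at test time is simple confidence thresholding: an input is rejected whenever the model's maximum softmax probability falls below a threshold. A minimal NumPy sketch (the logits and the threshold value are illustrative, not taken from the paper's evaluation):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_with_rejection(logits, threshold=0.9):
    """Return the predicted class per example, or -1 (reject) when the
    maximum softmax confidence is below the threshold."""
    probs = softmax(logits)
    confidences = probs.max(axis=-1)
    predictions = probs.argmax(axis=-1)
    return np.where(confidences >= threshold, predictions, -1)

# A confidently predicted clean example versus a near-uniform,
# low-confidence prediction, as CCAT encourages on adversarial inputs:
logits = np.array([[8.0, 0.5, 0.2],   # high confidence -> accepted
                   [1.0, 1.1, 0.9]])  # near-uniform -> rejected
print(predict_with_rejection(logits, threshold=0.9))  # [ 0 -1]
```

Because CCAT drives predictions on perturbed inputs towards the uniform distribution, their confidence lands below the threshold and they fall into the rejected bucket, while confident clean predictions pass through.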

Slides · Paper on ArXiv


There are also some interesting talks from many of my colleagues.

What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.