ARTICLE

ArXiv Pre-Print “Confidence-Calibrated Adversarial Training”

Adversarial training is the de facto standard for obtaining models robust against adversarial examples. However, on complex datasets it incurs a significant loss in accuracy, and the resulting robustness does not generalize to attacks not seen during training. This paper introduces confidence-calibrated adversarial training. By forcing the model's confidence on adversarial examples to decay with their distance from the training data, the loss in accuracy is reduced and robustness generalizes to unseen attacks and larger perturbations.
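The excerpt does not spell out how this confidence decay is implemented, but a minimal sketch of the idea is to replace the one-hot label on adversarial examples with a soft target that moves toward the uniform distribution as the perturbation grows. The L_inf norm, the budget `epsilon`, and the decay exponent `rho` below are illustrative assumptions, not the paper's exact formulation; the model would then be trained with cross-entropy against this target.

```python
import numpy as np

def calibrated_target(label, delta, num_classes, epsilon, rho=10.0):
    """Soft training target whose confidence decays with perturbation size.

    For a clean input (||delta||_inf = 0) this is the usual one-hot label;
    as the perturbation approaches the budget epsilon, the target moves
    toward the uniform distribution, i.e. minimal confidence.
    """
    one_hot = np.eye(num_classes)[label]
    uniform = np.full(num_classes, 1.0 / num_classes)
    # Interpolation weight: 1 for clean inputs, 0 at (or beyond) the budget.
    lam = (1.0 - min(1.0, np.abs(delta).max() / epsilon)) ** rho
    return lam * one_hot + (1.0 - lam) * uniform

# Example (hypothetical numbers): a perturbation at half the budget already
# yields a noticeably flattened target distribution for a 10-class problem.
delta = np.full(32 * 32 * 3, 0.015)
print(calibrated_target(label=3, delta=delta, num_classes=10, epsilon=0.03))
```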
