Adversarial Robustness in PyTorch Article Series

This project is a collection of articles, with accompanying PyTorch code, introducing and discussing adversarial examples, adversarial training, and confidence-calibrated adversarial training.
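To give a rough idea of the first two topics, the sketch below generates L∞-bounded adversarial examples with the fast gradient sign method (FGSM) and uses them for a single adversarial training step. It is a minimal illustration, not code from this repository; the model, optimizer, batch, and the ε = 8/255 budget are placeholder assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon=8/255):
        # Craft L_inf-bounded adversarial examples via the fast gradient sign method.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Move each pixel by epsilon in the direction that increases the loss.
        adversarial = images + epsilon * images.grad.sign()
        return adversarial.clamp(0, 1).detach()

    def adversarial_training_step(model, optimizer, images, labels, epsilon=8/255):
        # One adversarial training step: train on adversarial instead of clean inputs.
        model.eval()
        adversarial = fgsm_attack(model, images, labels, epsilon)
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adversarial), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

In practice, the articles discuss stronger iterative attacks such as PGD; FGSM is used here only because it keeps the example short.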

Large parts of this repository are taken from my ICML'20 [1] and ICCV'21 [2] papers as well as my student's ECCV'20 workshop paper [3]:

PyTorch code on GitHub
  • [1] D. Stutz, M. Hein, B. Schiele. Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks. ICML, 2020.
  • [2] D. Stutz, M. Hein, B. Schiele. Relating Adversarially Robust Generalization to Flat Minima. ICCV, 2021.
  • [3] S. Rao, D. Stutz, B. Schiele. Adversarial Training Against Location-Optimized Adversarial Patches. ECCV Workshops, 2020.