Adversarial Robustness in PyTorch Article Series

This project is a collection of articles, with accompanying PyTorch code, that introduce and discuss adversarial examples, adversarial training, and confidence-calibrated adversarial training:

  • Monitoring PyTorch Training with Tensorboard
  • Easily Saving and Loading PyTorch Models
  • 2.56% on CIFAR-10 with AutoAugment
  • $L_p$ Adversarial Examples on CIFAR-10
  • Adversarial Training on CIFAR-10
  • Confidence-Calibrated Adversarial Training on CIFAR-10
  • Proper Robustness Evaluation
  • Distal Adversarial Examples
  • Adversarial Patches and Frames
  • Adversarial Transformations
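To give a flavor of the topic the series covers, here is a minimal sketch of generating an $L_\infty$ adversarial example with the one-step fast gradient sign method (FGSM) in PyTorch. This is an illustrative example, not the series' own implementation; the model and data below are placeholders.

```python
import torch
import torch.nn as nn


def fgsm_attack(model, x, y, epsilon):
    """One-step L-infinity attack (FGSM): perturb x by epsilon in the
    direction of the sign of the loss gradient. Minimal sketch only."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Gradient ascent step on the loss, then clip back to valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(8, 3)          # toy stand-in for a classifier
    x = torch.rand(4, 8)             # toy stand-in for input images
    y = torch.randint(0, 3, (4,))
    x_adv = fgsm_attack(model, x, y, epsilon=0.03)
    print((x_adv - x).abs().max())   # bounded by epsilon
```

Iterating this step with a projection onto the epsilon-ball yields the PGD attack commonly used for the adversarial training discussed in the articles above.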

Large parts of this repository are taken from my ICML'20 [1] and ICCV'21 [2] papers, as well as my student's ECCV'20 workshop paper [3]:

PyTorch code on GitHub
  • [1] D. Stutz, M. Hein, B. Schiele. Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks. ICML, 2020.
  • [2] D. Stutz, M. Hein, B. Schiele. Relating Adversarially Robust Generalization to Flat Minima. ICCV, 2021.
  • [3] S. Rao, D. Stutz, B. Schiele. Adversarial Training Against Location-Optimized Adversarial Patches. ECCV Workshops, 2020.