22nd July 2021
As part of the Qualcomm Innovation Fellowship 2019, I gave a talk on the research produced throughout the academic year 2019/2020. The talk covers two exciting works on robustness: robustness against various types of adversarial examples using confidence-calibrated adversarial training (CCAT), and robustness against bit errors in the model's quantized weights. The latter is important for reducing the energy consumption of accelerators for neural networks. In this article, I want to share the slides corresponding to the talk.
Download
The slides can be downloaded here, and the corresponding papers are available on arXiv:
Slides
David Stutz, Matthias Hein, Bernt Schiele. Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks. ICML, 2020.
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele. Bit Error Robustness for Energy-Efficient DNN Accelerators. MLSys, 2021.