Despite their outstanding performance, deep neural networks (DNNs) are susceptible to adversarial examples: imperceptibly perturbed inputs that cause misclassification. Similarly, though far less studied, DNNs are fragile with respect to perturbations of their weights. This talk highlights my recent research on both input and weight robustness and investigates how the two problems are related. On the subject of adversarial examples, I discuss a confidence-calibrated version of adversarial training that obtains robustness beyond the adversarial perturbations seen during training. Next, regarding weight robustness, I address robustness against random bit errors in the (quantized) weights, which plays an important role in improving the energy efficiency of DNN accelerators. Surprisingly, improved weight robustness can also be beneficial for robustness against adversarial examples. Specifically, weight robustness can be thought of as flatness of the loss landscape with respect to perturbations of the weights. Using an intuitive flatness measure for adversarially trained DNNs, I demonstrate that flatness in the weight loss landscape improves adversarial robustness and helps to avoid robust overfitting.
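To make the random bit error model concrete, the following is a minimal sketch (not the authors' implementation) of the kind of fault model studied in the MLSys 2021 paper: weights are quantized to 8 bits, and each bit of each stored weight is flipped independently with some probability p. All function names and the probability value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, scale=127.0):
    """Symmetric 8-bit quantization of weights assumed to lie in [-1, 1]."""
    return np.clip(np.round(w * scale), -128, 127).astype(np.int8)

def inject_bit_errors(q, p, rng):
    """Flip each of the 8 bits of every quantized weight independently
    with probability p, modeling random bit errors in accelerator memory."""
    u = q.view(np.uint8)
    flips = np.zeros_like(u)
    for k in range(8):
        # Bit k of each byte is set in the flip mask with probability p.
        flips |= (rng.random(u.shape) < p).astype(np.uint8) << k
    return (u ^ flips).view(np.int8)

w = rng.uniform(-1, 1, size=1000)
q = quantize(w)
q_err = inject_bit_errors(q, p=0.01, rng=rng)
# Fraction of weights corrupted; roughly 1 - (1 - p)^8 in expectation.
error_rate = np.mean(q != q_err)
```

Lowering the memory supply voltage increases p but saves energy, which is why robustness to such bit errors translates into energy efficiency.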
David Stutz, Matthias Hein, Bernt Schiele. Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks. ICML, 2020.
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele. Bit Error Robustness for Energy-Efficient DNN Accelerators. MLSys, 2021.
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele. Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators. arXiv, 2021.
David Stutz, Matthias Hein, Bernt Schiele. Relating Adversarially Robust Generalization to Flat Minima. ICCV, 2021.
The original recording can be found on the seminar's webpage: Talk Recording