Learning Optimal Conformal Classifiers, Dataiku (Invited Talk).
Conformal Training: Learning Optimal Conformal Classifiers, International Seminar on Distribution-Free Statistics (Invited Talk). [Recording]
Relating Adversarial Robustness and Flat Minima, ICCV. [Recording]
Random Bit Errors for Energy-Efficient DNN Accelerators, CVPR CV-AML Workshop (Outstanding Paper Talk). [Recording]
Random Bit Errors for Energy-Efficient DNN Accelerators, MLSys. [Recording]
Random and Adversarial Bit Error Robustness of DNNs, TU Dortmund (Invited Talk). [Slides]
Confidence-Calibrated Adversarial Training and Bit Error Robustness for Energy-Efficient DNNs, Lorentz Center Workshop on Robust Artificial Intelligence (Invited Talk). [Recording]
Confidence-Calibrated Adversarial Training / Mitigating Random Bit Errors in Quantized Weights, Qian Xuesen Laboratory (China Academy of Space Technology, Invited Talk).
Confidence-Calibrated Adversarial Training, ICML Workshop on Uncertainty and Robustness in Deep Learning (Contributed Talk).
Confidence-Calibrated Adversarial Training, ICML. [Recording]
Confidence-Calibrated Adversarial Training, University of Tübingen (Invited Talk). [Slides]
Weakly-Supervised Shape Completion, ZF Friedrichshafen (Invited Talk, Part of MINT Award IT 2018, German).
Weakly-Supervised Shape Completion, Max Planck Institute for Intelligent Systems (Master Thesis Talk). [Slides]
Weakly-Supervised Shape Completion, RWTH Aachen University (Master Thesis Talk). [Slides]
In April, I was invited to talk about my work on random and adversarial bit error robustness of (quantized) deep neural networks in Katharina Morik’s group at TU Dortmund. The talk is motivated by DNN accelerators, specialized chips for DNN inference. To improve energy efficiency, these accelerators operate at low voltage, which induces random bit errors in the quantized weights, so DNNs are required to be robust to such errors. Moreover, RowHammer-like attacks additionally require robustness against adversarial bit errors. While a recording is not available, this article shares the slides used for the presentation.
In January, I had the opportunity to interact with many other robustness researchers from academia and industry at the Robust Artificial Intelligence Workshop. As part of the workshop, organized by Airbus AI Research and TNO (the Netherlands organization for applied scientific research), I also prepared a presentation covering two of my PhD projects: confidence-calibrated adversarial training (CCAT) and bit error robustness of neural networks to enable low-energy neural network accelerators. In this article, I want to share the presentation; all other talks from the workshop can be found here.
In October this year, I was invited to talk at IBM’s FOCA workshop about my latest research on bit error robustness of (quantized) DNN weights. Here, the goal is to develop DNN accelerators capable of operating at low voltage. However, lowering voltage induces bit errors in the accelerators’ memory. While such bit errors can be avoided through hardware mechanisms, these approaches are usually costly in terms of energy and chip area. Thus, training DNNs to be robust to such bit errors would enable low-voltage operation, reducing energy consumption without the need for hardware techniques. In this 5-minute talk, I give a short overview.
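The bit error model behind this line of work can be simulated directly in software. The following is a minimal, illustrative sketch (my own simplification, not the evaluation code from the paper): each bit of each quantized 8-bit weight flips independently with probability p, mimicking low-voltage memory faults.

```python
import numpy as np

def flip_random_bits(weights_q, p, bits=8, seed=0):
    """Inject random bit errors into quantized (uint8) weights.

    Each of the `bits` bits of every weight flips independently with
    probability p, modelling low-voltage memory errors.
    """
    rng = np.random.default_rng(seed)
    flips = rng.random((weights_q.size, bits)) < p
    # Pack the per-bit flip decisions into an integer XOR mask per weight.
    mask = (flips * (1 << np.arange(bits))).sum(axis=1).astype(weights_q.dtype)
    return weights_q ^ mask.reshape(weights_q.shape)

# Toy example: perturb a small uint8 weight matrix with a 10% bit error rate.
weights = np.array([[12, 200], [37, 90]], dtype=np.uint8)
perturbed = flip_random_bits(weights, p=0.1)
```

Training with such perturbed weights in the forward pass is, roughly, what makes the network tolerate bit errors at inference time.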
Our ICML’20 paper introduces confidence-calibrated adversarial training (CCAT), which addresses two problems of “regular” adversarial training: first, robustness against adversarial examples unseen during training is improved, and second, clean accuracy is increased. CCAT biases the model towards predicting low confidence on adversarial examples such that adversarial examples can be rejected by confidence thresholding. This article shares my talk on CCAT as recorded for ICML’20.
Confidence-calibrated adversarial training (CCAT) addresses two problems when training on adversarial examples: the lack of robustness against adversarial examples unseen during training, and the reduced (clean) accuracy. In particular, CCAT biases the model towards predicting low-confidence on adversarial examples such that adversarial examples can be rejected by confidence thresholding. In this article, I want to share the slides of the corresponding ICML talk.
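The rejection step at test time is simple to state in code. Below is a minimal sketch of confidence thresholding (function name and the threshold value are my own, for illustration; the paper calibrates the threshold on held-out examples): an input is rejected whenever the maximum predicted probability falls below the threshold.

```python
import numpy as np

def reject_by_confidence(probs, threshold=0.9):
    """Predict the argmax class, but reject low-confidence inputs.

    probs: (N, K) array of per-class probabilities.
    Returns predicted labels, with -1 marking rejected inputs.
    """
    confidence = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    predictions[confidence < threshold] = -1  # rejected
    return predictions

# A confident clean prediction and a low-confidence (e.g. adversarial) one.
probs = np.array([
    [0.97, 0.02, 0.01],  # accepted, predicted class 0
    [0.40, 0.35, 0.25],  # rejected: confidence below threshold
])
print(reject_by_confidence(probs, threshold=0.9))  # prints: [ 0 -1]
```

Because CCAT trains the model to assign low confidence to adversarial examples, this one-line thresholding rule is what turns the calibrated confidences into actual robustness.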
Recently, I had the opportunity to present my work on confidence-calibrated adversarial training at the Bosch Center for Artificial Intelligence and the University of Tübingen, specifically, the newly formed Tübingen AI Center. As part of the talk, I outlined the motivation and strengths of confidence-calibrated adversarial training compared to standard adversarial training: robustness against previously unseen attacks and improved accuracy. I also touched on the difficulties faced during robustness evaluation. This article provides the corresponding slides and gives a short overview of the talk.
In April, I visited Prof. Bernt Schiele’s Computer Vision and Multimodal Computing Department at the Max Planck Institute for Informatics in Saarbrücken. Aside from a presentation on my recent superpixel benchmark, I also met many interesting people and learned a lot about a career in research.
In the course of a seminar on “Selected Topics in Image Processing”, I worked on iPiano, an algorithm for non-convex and non-smooth optimization proposed by Ochs et al. iPiano combines forward-backward splitting with an inertial force. This article presents the corresponding seminar paper including an implementation in C++ with applications to image denoising, image segmentation and compressed sensing.
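The iPiano update is compact: a gradient step on the smooth part f, plus an inertial term weighted by β, followed by the proximal operator of the non-smooth part g. As a minimal sketch (the seminar implementation is in C++; this toy Python version with an ℓ1-denoising example is mine, with hand-picked step sizes α and β):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ipiano(grad_f, prox_g, x0, alpha=0.1, beta=0.5, iterations=200):
    """iPiano for minimizing f(x) + g(x), with f smooth (possibly
    non-convex) and g admitting a cheap proximal operator.

    Update: x_{k+1} = prox_{alpha*g}(x_k - alpha*grad_f(x_k) + beta*(x_k - x_{k-1}))
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iterations):
        x_next = prox_g(x - alpha * grad_f(x) + beta * (x - x_prev), alpha)
        x_prev, x = x, x_next
    return x

# Toy sparse denoising: min_x 0.5 * ||x - b||^2 + lam * ||x||_1.
b = np.array([3.0, 0.05, -2.0, 0.01])
lam = 0.5
x = ipiano(grad_f=lambda x: x - b,
           prox_g=lambda x, a: soft_threshold(x, a * lam),
           x0=np.zeros_like(b))
# x approaches the closed-form solution soft_threshold(b, lam) = [2.5, 0, -1.5, 0].
```

The inertial term β(x_k − x_{k−1}) is the “heavy-ball” momentum that distinguishes iPiano from plain forward-backward splitting and helps it escape poor stationary points on non-convex problems.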
In the course of my second seminar on “Current Topics in Computer Vision and Machine Learning”, offered by the Computer Vision Group at RWTH Aachen University, I wrote a report entitled “Neural Codes for Image Retrieval”. The work is motivated by recent research by Babenko et al., and the report as well as the corresponding slides can be found here.