IAM

DAVID STUTZ

I am looking for full-time (applied) research opportunities in industry, involving (trustworthy and robust) machine learning or (3D) computer vision, starting early 2022. Check out my CV and get in touch on LinkedIn!

TAG » COMPUTER VISION «

ARTICLE

Code Released: Random Bit Error Robustness

The code for my MLSys’21 paper on bit error robustness of deep neural networks has been released on GitHub. The repository includes various fixed-point quantization schemes, routines for quantization-aware and random bit error training, and utilities for bit manipulation and operations for PyTorch tensors.
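To give a rough idea of what such bit manipulation utilities look like, here is a minimal sketch of injecting random bit errors into 8-bit quantized weights; the function name and the simple per-bit flip model are my own illustration, not the repository's actual API:

```python
import torch

def inject_random_bit_errors(weights_q: torch.Tensor, p: float) -> torch.Tensor:
    """Flip every bit of 8-bit quantized weights independently with probability p.

    weights_q: uint8 tensor holding the quantized weights.
    p: per-bit error rate, e.g. 0.01 for 1% random bit errors.
    """
    assert weights_q.dtype == torch.uint8
    # Sample, for every weight, which of its 8 bits to flip.
    bits = torch.arange(8, dtype=torch.uint8, device=weights_q.device)
    flip = (torch.rand(*weights_q.shape, 8, device=weights_q.device) < p).to(torch.uint8)
    mask = (flip << bits).sum(dim=-1).to(torch.uint8)
    # XOR with the mask flips exactly the selected bits.
    return weights_q ^ mask

# Example: inject 1% random bit errors into random 8-bit weights.
w = torch.randint(0, 256, (64, 64), dtype=torch.uint8)
w_err = inject_random_bit_errors(w, p=0.01)
```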

More ...

ARTICLE

Recorded ICCV’21 Talk “Relating Adversarially Robust Generalization to Flat Minima”

In October this year, my work on relating adversarially robust generalization to flat minima in the (robust) loss surface with respect to weight perturbations was presented at ICCV’21. Since the paper was accepted as an oral presentation, I recorded a 12-minute talk highlighting the main insights: how (robust) flatness can avoid robust overfitting in adversarial training and improve robustness against adversarial examples. In this article, I want to share the recording.

More ...

27th July 2021

PROJECT

Random and adversarial bit error robustness of DNNs for energy-efficient and secure DNN accelerators.

More ...

27th July 2021

PROJECT

Robust generalization and overfitting linked to flatness of robust loss surface in weight space.

More ...

ARTICLE

Recorded CVPR’21 CV-AML Workshop Outstanding Paper Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In June this year, my work on bit error robustness of deep neural networks (DNNs) was recognized as an outstanding paper at the CVPR’21 Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (AML-CV). As part of the workshop, I prepared a 15-minute talk highlighting how robustness against bit errors in DNN weights can improve the energy efficiency of DNN accelerators. In this article, I want to share the recording.

More ...

ARTICLE

ArXiv Pre-Print “Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators”

Deep neural network (DNN) accelerators are popular due to their reduced cost and energy consumption compared to GPUs. To further reduce energy consumption, the operating voltage of the on-chip memory can be lowered. However, this injects random bit errors that directly impact the (quantized) DNN weights. As a result, improving DNN robustness against these bit errors can significantly improve energy efficiency. Similarly, these chips are subject to bit-level hardware- or software-based attacks, in which case robustness against adversarial bit errors is required to improve the security of DNN accelerators. Our paper, presented in this article, addresses both problems.
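To illustrate why adversarial bit errors are such a powerful attack vector, consider a single targeted flip of the most significant bit of one quantized weight; the scale and values below are made up purely for illustration:

```python
scale = 0.01                 # assumed scale of an 8-bit fixed-point quantization
q = 3                        # quantized weight, dequantized value 0.03
q_attacked = q ^ (1 << 7)    # adversarially flip the most significant bit
print(scale * q, scale * q_attacked)  # 0.03 vs. 1.31: one bit flip changes the weight drastically
```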

More ...

ARTICLE

ArXiv Pre-Print “Relating Adversarially Robust Generalization to Flat Minima”

Recent work on robustness against adversarial examples identified a severe problem in adversarial training: (robust) overfitting. That is, while robustness on the training set continuously increases during training, robustness on the test set eventually starts to decrease. In this pre-print, we relate robust overfitting and good robust generalization to flatness around the found minimum in the robust loss landscape with respect to perturbations in the weights.
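As a rough illustration of the kind of flatness considered here, the following sketch estimates an average-case notion of robust flatness by measuring how much a (robust) loss increases under random relative weight perturbations; robust_loss, loader and xi are placeholders, and the exact measure used in the paper differs in its details:

```python
import copy
import torch

def average_robust_flatness(model, robust_loss, loader, xi=0.5, samples=10):
    """Average increase of a (robust) loss under random relative weight perturbations.

    robust_loss(model, x, y) is assumed to return e.g. the adversarial cross-entropy
    loss on the batch (x, y); xi controls the relative perturbation size.
    """
    x, y = next(iter(loader))
    reference = robust_loss(model, x, y).item()
    increases = []
    for _ in range(samples):
        perturbed = copy.deepcopy(model)
        with torch.no_grad():
            for param in perturbed.parameters():
                # Random perturbation, scaled relative to the parameter's magnitude.
                param.add_(xi * param.abs().mean() * torch.randn_like(param))
        increases.append(robust_loss(perturbed, x, y).item() - reference)
    return sum(increases) / len(increases)
```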

More ...

ARTICLE

Updated Pre-Print “Bit Error Robustness for Energy-Efficient DNN Accelerators”

Recently, deep neural network (DNN) accelerators have received considerable attention due to their reduced cost and energy consumption compared to mainstream GPUs. To further reduce energy consumption, the included memory (storing weights and intermediate computations) is operated at low voltage. However, this causes bit errors in memory cells, directly impacting the stored (quantized) DNN weights and resulting in a significant decrease in DNN accuracy. In this paper, we tackle the problem of DNN robustness against random bit errors. By using a robust fixed-point quantization, training with aggressive weight clipping as regularization, and injecting random bit errors during training, we significantly increase robustness, enabling energy-efficient DNN accelerators.
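The following sketch illustrates the overall training recipe, assuming a simplified symmetric fixed-point quantization; the quantization scheme, the hyper-parameters and the exact way gradients are computed are my simplifications and differ from the paper:

```python
import torch

def quantize(w, w_max, bits=8):
    """Simplified fixed-point quantization of weights from [-w_max, w_max] into uint8."""
    q = torch.round((w.clamp(-w_max, w_max) + w_max) / (2 * w_max) * (2**bits - 1))
    return q.to(torch.uint8)

def dequantize(q, w_max, bits=8):
    return q.to(torch.float32) / (2**bits - 1) * (2 * w_max) - w_max

def train_step(model, optimizer, loss_fn, x, y, w_max=0.1, p=0.01):
    """One training step with random bit error injection and aggressive weight clipping."""
    clean_weights = [param.detach().clone() for param in model.parameters()]
    # Inject random bit errors into a quantized copy of the weights.
    with torch.no_grad():
        for param in model.parameters():
            q = quantize(param, w_max)
            flip = (torch.rand(*q.shape, 8, device=q.device) < p).to(torch.uint8)
            mask = (flip << torch.arange(8, dtype=torch.uint8, device=q.device)).sum(-1).to(torch.uint8)
            param.copy_(dequantize(q ^ mask, w_max))
    # Compute gradients on the perturbed weights.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Restore the clean weights and apply the update to them.
    with torch.no_grad():
        for param, clean in zip(model.parameters(), clean_weights):
            param.copy_(clean)
    optimizer.step()
    # Aggressive weight clipping acts as regularization and limits the quantization range.
    with torch.no_grad():
        for param in model.parameters():
            param.clamp_(-w_max, w_max)
    return loss.item()
```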

More ...

ARTICLE

What Lp Adversarial Examples Make Sense on Common Vision Datasets?

Adversarial examples are intended to be imperceptible perturbations that cause misclassification while not changing the true class. Still, there is no consensus on which changes are considered imperceptible, or when the true class actually changes or is no longer recognizable. In this article, I want to explore what levels of $L_\infty$, $L_0$ and $L_1$ adversarial noise actually make sense on popular computer vision datasets such as MNIST, Fashion-MNIST, SVHN or Cifar10.
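To get a feeling for such noise levels, one can simply project random noise onto balls of different radii and inspect the perturbed images; the projections and the epsilon values below are my own illustration, not taken from the article:

```python
import torch

def project_linf(delta, eps):
    """Project a perturbation onto the L_inf ball of radius eps."""
    return delta.clamp(-eps, eps)

def project_l0(delta, k):
    """Keep only the k entries with largest absolute change (an L_0 'ball' of size k)."""
    if k <= 0:
        return torch.zeros_like(delta)
    threshold = delta.abs().flatten().topk(k).values.min()
    return delta * (delta.abs() >= threshold)

# Example: random noise at different L_inf levels on a Cifar10-sized image.
x = torch.rand(3, 32, 32)
delta = torch.randn_like(x)
for eps in [0.01, 0.03, 0.1]:
    x_perturbed = (x + project_linf(delta, eps)).clamp(0, 1)  # inspect/plot x_perturbed
```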

More ...

ARTICLE

ICML Talk “Confidence-Calibrated Adversarial Training”

Confidence-calibrated adversarial training (CCAT) addresses two problems of training on adversarial examples: the lack of robustness against adversarial examples unseen during training, and the reduced (clean) accuracy. In particular, CCAT biases the model towards predicting low confidence on adversarial examples, such that adversarial examples can be rejected by confidence thresholding. In this article, I want to share the slides of the corresponding ICML talk.
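At test time, the rejection itself is plain confidence thresholding; here is a minimal sketch, where the threshold of 0.6 is purely illustrative (in practice it would be calibrated on held-out clean examples):

```python
import torch
import torch.nn.functional as F

def predict_with_rejection(model, x, threshold=0.6):
    """Predict labels but reject inputs whose maximum softmax confidence is below threshold.

    Rejected examples (e.g. adversarial inputs driven to low confidence by CCAT) get label -1.
    """
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    confidence, labels = probs.max(dim=1)
    accepted = confidence >= threshold
    return torch.where(accepted, labels, torch.full_like(labels, -1)), accepted
```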

More ...