DAVID STUTZ

I am looking for full-time (applied) research opportunities in industry, involving (trustworthy and robust) machine learning or (3D) computer vision, starting early 2022. Check out my CV and get in touch on LinkedIn!

TAG »DNN ACCELERATORS«

27th JULY 2021

PROJECT

Random and adversarial bit error robustness of DNNs for energy-efficient and secure DNN accelerators.

More ...

ARTICLE

Recorded CVPR’21 CV-AML Workshop Outstanding Paper Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In June this year, my work on bit error robustness of deep neural networks (DNNs) was recognized as an outstanding paper at the CVPR’21 Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (AML-CV). As part of the workshop, I prepared a 15-minute talk highlighting how robustness against bit errors in DNN weights can improve the energy efficiency of DNN accelerators. In this article, I want to share the recording.

More ...

ARTICLE

ArXiv Pre-Print “Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators”

Deep neural network (DNN) accelerators are popular due to their reduced cost and energy consumption compared to GPUs. To further reduce energy consumption, the operating voltage of the on-chip memory can be lowered. However, this injects random bit errors, directly impacting the (quantized) DNN weights. As a result, improving DNN robustness against these bit errors can significantly improve energy efficiency. Similarly, these chips are subject to bit-level hardware- or software-based attacks. In this case, robustness against adversarial bit errors is required to improve the security of DNN accelerators. Our paper presented in this article addresses both problems.
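To make the error model concrete, here is a minimal PyTorch sketch of injecting random bit errors into quantized weights. The helper name, the simple unsigned fixed-point quantizer, and the defaults are assumptions for illustration, not the implementation from the paper:

import torch

def inject_random_bit_errors(weights, p, num_bits=8, w_max=1.0):
    # Hypothetical helper: quantize weights to num_bits fixed-point values
    # in [-w_max, w_max], flip every bit independently with probability p,
    # and dequantize again.
    scale = (2 ** num_bits - 1) / (2 * w_max)
    q = torch.round((weights.clamp(-w_max, w_max) + w_max) * scale).to(torch.int64)
    for bit in range(num_bits):
        flips = torch.rand(q.shape) < p  # each bit flips with probability p
        q = q ^ (flips.to(torch.int64) << bit)
    return q.to(weights.dtype) / scale - w_max

Lowering the memory voltage corresponds to increasing p, with bit error rates growing roughly exponentially as the voltage decreases.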

More ...

ARTICLE

Recorded MLSys’21 Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In this MLSys’21 paper, we consider the robustness of deep neural networks (DNNs) against bit errors in their quantized weights. This is relevant in the context of DNN accelerators, i.e., specialized hardware for DNN inference: in order to reduce energy consumption, the accelerator’s memory may be operated at very low voltages. However, this induces exponentially increasing rates of bit errors that directly affect the DNN weights, reducing accuracy significantly. To improve bit error robustness, we propose a robust fixed-point quantization scheme, weight clipping as regularization during training, and random bit error training. This article shares my talk recorded for MLSys’21.
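As a schematic of how these components could fit together, consider the following PyTorch sketch, reusing the inject_random_bit_errors helper sketched above; the function name, loss handling, and hyper-parameters are illustrative assumptions, not the exact procedure from the paper:

import torch

def random_bit_error_training_step(model, inputs, targets, optimizer,
                                   criterion, p=0.01, w_max=0.1):
    # Back up the clean weights and inject random bit errors for this step.
    backups = [param.detach().clone() for param in model.parameters()]
    with torch.no_grad():
        for param in model.parameters():
            param.copy_(inject_random_bit_errors(param, p, w_max=w_max))

    # Forward and backward pass on the perturbed weights.
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()

    # Restore the clean weights, apply the update, and clip all weights to
    # [-w_max, w_max], i.e., weight clipping as regularization.
    with torch.no_grad():
        for param, backup in zip(model.parameters(), backups):
            param.copy_(backup)
    optimizer.step()
    with torch.no_grad():
        for param in model.parameters():
            param.clamp_(-w_max, w_max)
    return loss.item()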

More ...

ARTICLE

Recorded RobustAI Workshop Talk “Confidence-Calibrated Adversarial Training and Bit Error Robustness of DNNs”

In January, I had the opportunity to interact with many other robustness researchers from academia and industry at the Robust Artificial Intelligence Workshop, organized by Airbus AI Research and TNO (the Netherlands Organisation for Applied Scientific Research). As part of the workshop, I prepared a presentation on two of my PhD projects: confidence-calibrated adversarial training (CCAT) and bit error robustness of neural networks to enable low-energy neural network accelerators. In this article, I want to share the presentation; all other talks from the workshop can be found here.
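For context on the first project, CCAT trains the network to become less confident as the adversarial perturbation grows, so that low-confidence inputs can be rejected at test time. Below is a minimal sketch of the target distributions used during training, assuming an L_inf threat model; the function name and the exponent rho are illustrative:

import torch
import torch.nn.functional as F

def ccat_targets(labels, delta, epsilon, num_classes, rho=10.0):
    # Interpolate between the one-hot label (for clean inputs) and the
    # uniform distribution (at the perturbation budget epsilon), based on
    # the L_inf norm of the adversarial perturbation delta.
    norms = delta.flatten(1).abs().amax(dim=1)
    lam = (1.0 - torch.clamp(norms / epsilon, max=1.0)) ** rho
    one_hot = F.one_hot(labels, num_classes).to(delta.dtype)
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    return lam.unsqueeze(1) * one_hot + (1.0 - lam.unsqueeze(1)) * uniform

Adversarial training then minimizes cross-entropy against these soft targets instead of the one-hot labels.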

More ...

ARTICLE

Recorded FOCA’20 Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In October this year, I was invited to talk at IBM’s FOCA workshop about my latest research on bit error robustness of (quantized) DNN weights. Here, the goal is to develop DNN accelerators capable of operating at low voltage. However, lowering the voltage induces bit errors in the accelerator’s memory. While such bit errors can be avoided through hardware mechanisms, these approaches are usually costly in terms of energy and area. Thus, training DNNs to be robust to such bit errors would enable low-voltage operation, reducing energy consumption without the need for hardware techniques. In this 5-minute talk, I give a short overview.
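As a rough illustration of how such robustness can be quantified, the sketch below estimates accuracy under random bit errors at rate p, reusing the inject_random_bit_errors helper sketched earlier; the function name and defaults are hypothetical:

import torch

@torch.no_grad()
def accuracy_under_bit_errors(model, loader, p, w_max=0.1, trials=10):
    # Estimate accuracy with weights perturbed by random bit errors at
    # rate p, averaged over several independent error patterns.
    backups = [param.detach().clone() for param in model.parameters()]
    accuracies = []
    for _ in range(trials):
        for param, backup in zip(model.parameters(), backups):
            param.copy_(inject_random_bit_errors(backup, p, w_max=w_max))
        correct = total = 0
        for inputs, targets in loader:
            correct += (model(inputs).argmax(dim=1) == targets).sum().item()
            total += targets.numel()
        accuracies.append(correct / total)
    for param, backup in zip(model.parameters(), backups):
        param.copy_(backup)  # restore the clean weights
    return sum(accuracies) / len(accuracies)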

More ...

ARTICLE

Updated Pre-Print “Bit Error Robustness for Energy-Efficient DNN Accelerators”

Recently, deep neural network (DNN) accelerators have received considerable attention due to their reduced cost and energy consumption compared to mainstream GPUs. In order to further reduce energy consumption, the included memory (storing weights and intermediate computations) is operated at low voltage. However, this causes bit errors in memory cells, directly impacting the stored (quantized) DNN weights and resulting in a significant decrease in DNN accuracy. In this paper, we tackle the problem of DNN robustness against random bit errors. By using a robust fixed-point quantization, training with aggressive weight clipping as regularization, and injecting random bit errors during training, we increase robustness significantly, enabling energy-efficient DNN accelerators.
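The intuition behind aggressive weight clipping can be made concrete with a small, assumed symmetric fixed-point quantizer: with a fixed quantization range [-w_max, w_max], the absolute weight change caused by any single bit flip is bounded by roughly w_max, so shrinking w_max directly shrinks the worst-case damage per flipped bit:

import torch

def quantize_dequantize(weights, num_bits=8, w_max=0.1):
    # Symmetric fixed-point quantizer over the fixed range [-w_max, w_max];
    # a sketch, not the exact quantization scheme from the paper.
    scale = (2 ** num_bits - 1) / (2 * w_max)
    q = torch.round((weights.clamp(-w_max, w_max) + w_max) * scale)
    return q / scale - w_max

For example, with num_bits=8, flipping the most significant bit changes the integer value by 128 and the weight by 128 / scale, which is about 0.1 for w_max=0.1 but about 1.0 for w_max=1.0.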

More ...

ARTICLE

ArXiv Pre-Print “On Mitigating Random and Adversarial Bit Errors”

Deep neural network (DNN) accelerators are specialized hardware for inference and have received considerable attention in recent years. In order to reduce energy consumption, these accelerators are often operated at low voltage, which causes the on-chip accelerator memory to become unreliable. Additionally, recent work demonstrated attacks targeting individual bits in memory. In both cases, the induced bit errors can significantly reduce the accuracy of DNNs. In this paper, we tackle both random (due to low voltage) and adversarial bit errors in DNNs. By explicitly taking such errors into account during training, we can improve robustness significantly.
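To illustrate the adversarial error model, here is a simple greedy bit-flip attack sketch in PyTorch: each round tries a random set of candidate single-bit flips in one layer's quantized weights and commits the flip that increases the loss most. This is a generic illustration of bit-level attacks, not the specific attack considered in the paper, and all names and defaults are assumptions:

import torch

@torch.no_grad()
def greedy_bit_flip_attack(model, inputs, targets, criterion,
                           num_flips=5, candidates=64, num_bits=8, w_max=0.1):
    param = next(model.parameters())  # attack the first layer for brevity
    scale = (2 ** num_bits - 1) / (2 * w_max)
    q = torch.round((param.clamp(-w_max, w_max) + w_max) * scale).to(torch.int64)
    flat = q.flatten()

    def write_back():  # dequantize and overwrite the model's weights
        param.copy_((flat.to(param.dtype) / scale - w_max).view_as(param))

    for _ in range(num_flips):
        best_loss, best = float('-inf'), None
        idx = torch.randint(0, flat.numel(), (candidates,)).tolist()
        bits = torch.randint(0, num_bits, (candidates,)).tolist()
        for i, b in zip(idx, bits):
            flat[i] ^= 1 << b  # tentatively flip one bit ...
            write_back()
            loss = criterion(model(inputs), targets).item()
            flat[i] ^= 1 << b  # ... and undo it
            if loss > best_loss:
                best_loss, best = loss, (i, b)
        i, b = best
        flat[i] ^= 1 << b  # commit the most damaging flip
    write_back()

Note that this sketch leaves the flipped weights in place; in practice, one would back up and restore the clean weights around the attack.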

More ...

26th JUNE 2020

PROJECT

Random and adversarial bit errors in quantized DNN weights.

More ...