Recorded MLSys’21 Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In this MLSys’21 paper, we consider the robustness of deep neural networks (DNNs) against bit errors in their quantized weights. This is relevant in the context of DNN accelerators, i.e., specialized hardware for DNN inference: to reduce energy consumption, the accelerator’s memory may be operated at very low voltages. However, this induces exponentially increasing bit error rates that directly affect the DNN weights, significantly reducing accuracy. We propose a robust fixed-point quantization scheme, weight clipping as regularization during training, and random bit error training to improve bit error robustness. This article shares my talk recorded for MLSys’21.

Talk

At MLSys'21, which was held virtually, I had the opportunity to present our work on the bit error robustness of deep neural networks (DNNs). This was a collaboration with researchers from the IBM T. J. Watson Research Center. The slides, paper, and talk can be found below:

Unfortunately, the SlidesLive talk can no longer be embedded here. Please follow the link below to watch it.

Talk Recording
Slides
Paper on ArXiv

What is your opinion on this article? Did you find it interesting or useful? Let me know your thoughts in the comments below.