Recorded MLSys’21 Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In this MLSys’21 paper, we consider the robustness of deep neural networks (DNNs) against bit errors in their quantized weights. This is relevant in the context of DNN accelerators, i.e., specialized hardware for DNN inference: to reduce energy consumption, the accelerator’s memory may be operated at very low voltages. However, this induces exponentially increasing rates of bit errors that directly affect the DNN weights and reduce accuracy significantly. To improve bit error robustness, we propose a robust fixed-point quantization scheme, weight clipping as regularization during training, and random bit error training. This article shares my talk recorded for MLSys’21.
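To make the setup concrete, below is a minimal NumPy sketch, not the paper's implementation, of how random bit errors in 8-bit fixed-point quantized weights can be simulated; the function names, the bit error rate p, and the clipping range w_max are illustrative assumptions.

```python
import numpy as np

def quantize(w, bits=8, w_max=0.5):
    """Symmetric fixed-point quantization of weights in [-w_max, w_max]."""
    scale = (2 ** (bits - 1) - 1) / w_max
    q = np.clip(np.round(w * scale), -(2 ** (bits - 1) - 1), 2 ** (bits - 1) - 1)
    return q.astype(np.int8), scale

def inject_bit_errors(q, p, rng):
    """Flip each bit of int8 weights independently with probability p."""
    raw = q.view(np.uint8)                    # two's-complement byte view
    flips = rng.random(raw.shape + (8,)) < p  # per-bit flip decisions
    masks = (flips * (1 << np.arange(8))).sum(axis=-1).astype(np.uint8)
    return (raw ^ masks).view(np.int8)

# Illustrative usage on random "clipped" weights.
rng = np.random.default_rng(0)
w = np.clip(rng.normal(0.0, 0.1, size=10000), -0.5, 0.5)
q, scale = quantize(w)
w_err = inject_bit_errors(q, p=0.01, rng=rng).astype(np.float32) / scale
print("max weight deviation:", np.abs(w - w_err).max())
```

Note how the clipping range w_max bounds the quantization scale: with smaller weights, each flipped bit corresponds to a smaller absolute weight change, which is the intuition behind weight clipping as regularization; random bit error training would then apply perturbations like these to the weights during training.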

Talk

At MLSys'21, which was held virtually, I had the opportunity to present our work on bit error robustness of deep neural networks (DNNs). This was a collaboration with colleagues from the IBM T. J. Watson Research Center. The slides, paper, and talk can be found below:

Unfortunately, the SlidesLive talk can no longer be embedded here. Please follow the link below to watch the talk.

Talk Recording
Slides
Paper on ArXiv
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.