
TAG » DNN ACCELERATORS «

ARTICLE

Recorded FOCA’20 Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In October 2020, I was invited to talk at IBM’s FOCA workshop about my latest research on bit error robustness of (quantized) DNN weights. Here, the goal is to develop DNN accelerators capable of operating at low voltage. However, lowering the voltage induces bit errors in the accelerators’ memory. While these bit errors can be avoided through hardware mechanisms, such approaches are usually costly in terms of energy and area. Training DNNs to be robust to such bit errors would therefore enable low-voltage operation, and thus reduced energy consumption, without the need for hardware mitigations. In this 5-minute talk, I give a short overview.

More ...

ARTICLE

Updated Pre-Print “Bit Error Robustness for Energy-Efficient DNN Accelerators”

Recently, deep neural network (DNN) accelerators have received considerable attention due to their reduced cost and energy consumption compared to mainstream GPUs. To further reduce energy consumption, the included memory (storing weights and intermediate computations) is operated at low voltage. However, this causes bit errors in the memory cells, directly impacting the stored (quantized) DNN weights and resulting in a significant decrease in DNN accuracy. In this paper, we tackle the problem of DNN robustness against random bit errors. By using robust fixed-point quantization, training with aggressive weight clipping as regularization, and injecting random bit errors during training, we increase robustness significantly, enabling energy-efficient DNN accelerators.
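
The random bit errors considered here act directly on the fixed-point representation of the weights. As a rough illustration (not the paper’s implementation), the following NumPy sketch quantizes weights to signed m-bit integers and flips each bit independently with probability p; the function names, the symmetric quantization range, and the parameter values are assumptions made only for this example.

```python
import numpy as np

def quantize(weights, bits=8, w_max=0.1):
    """Symmetric fixed-point quantization into signed m-bit integers (illustrative)."""
    q_max = 2 ** (bits - 1) - 1
    scale = q_max / w_max
    return np.clip(np.round(weights * scale), -q_max, q_max).astype(np.int64), scale

def inject_random_bit_errors(q, bits=8, p=0.01, rng=None):
    """Flip every bit of the quantized weights independently with probability p."""
    rng = np.random.default_rng() if rng is None else rng
    u = q & ((1 << bits) - 1)                  # two's-complement bit pattern
    flips = rng.random(q.shape + (bits,)) < p  # which bits to flip
    masks = (flips * (1 << np.arange(bits))).sum(axis=-1)
    u = u ^ masks
    # re-interpret the perturbed bit pattern as a signed integer
    return np.where(u >= (1 << (bits - 1)), u - (1 << bits), u)

# Example: perturb quantized weights and map them back to floating point.
w = np.random.randn(1000) * 0.05
q, scale = quantize(w)
w_perturbed = inject_random_bit_errors(q, p=0.01) / scale
```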

More ...

ARTICLE

ArXiv Pre-Print “On Mitigating Random and Adversarial Bit Errors”

Deep neural network (DNN) accelerators are specialized hardware for inference and have received considerable attention in the past years. To reduce energy consumption, these accelerators are often operated at low voltage, which causes the included accelerator memory to become unreliable. Additionally, recent work has demonstrated attacks targeting individual bits in memory. In both cases, the induced bit errors can significantly reduce DNN accuracy. In this paper, we tackle both random (due to low voltage) and adversarial bit errors in DNNs. By explicitly taking such errors into account during training, we can improve robustness significantly.
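
To make the idea of taking bit errors into account during training concrete, here is a minimal, simplified PyTorch sketch of one training step: a random fraction of the weights is perturbed within the clipping range as a crude stand-in for bit errors, gradients are computed at the perturbed point, the update is applied to the clean weights, and aggressive weight clipping is applied afterwards. The perturbation model, all names, and the default values are illustrative assumptions, not the method from the paper.

```python
import torch

def bit_error_training_step(model, optimizer, loss_fn, x, y,
                            p_flip=0.01, w_max=0.1):
    """One simplified robust-training step (illustrative sketch)."""
    # 1. Save clean weights and randomly perturb a fraction of them,
    #    mimicking the weight changes caused by random bit errors.
    clean = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p in model.parameters():
            mask = torch.rand_like(p) < p_flip
            noise = (torch.rand_like(p) * 2 - 1) * w_max
            p.copy_(torch.where(mask, noise, p))

    # 2. Forward/backward pass with the perturbed weights.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()

    # 3. Restore the clean weights and apply the gradient update to them.
    with torch.no_grad():
        for p, c in zip(model.parameters(), clean):
            p.copy_(c)
    optimizer.step()

    # 4. Aggressive weight clipping as regularization.
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-w_max, w_max)
    return loss.item()
```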

More ...

26th June 2020

PROJECT

Random and adversarial bit errors in quantized DNN weights.

More ...