Mitigating Random and Adversarial Bit Errors in Quantized DNN Weights

Quick links: Paper


Figure 1: Left: Exemplary SRAM bit error rate $p$ and normalized energy per SRAM access when scaling voltage below $V_{\text{min}}$, the minimum voltage for reliable operation. Middle: Original weight values, quantized with 16 bits in $[-0.15, 0.15]$ on CIFAR10, plotted against the corresponding weights with random bit errors at rate $p = 1\%$; 5.5M weights are shown. Color indicates error magnitude (from zero in violet to roughly 0.225 in yellow). Right: Impact of bit errors on test error for CIFAR10. Reducing the quantization range to $[-0.15, 0.15]$ (clipping) and random bit error training (RandBET) clearly improve robustness.
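The error model behind the middle panel can be illustrated with a minimal NumPy sketch, assuming a uniform fixed-point quantizer over $[-0.15, 0.15]$ in which each stored bit flips independently with probability $p$ (function names and details here are illustrative, not the paper's implementation):

```python
import numpy as np

def quantize(w, w_max=0.15, bits=16):
    """Map weights in [-w_max, w_max] to unsigned integers in [0, 2^bits - 1]."""
    scale = (2**bits - 1) / (2 * w_max)
    return np.round((np.clip(w, -w_max, w_max) + w_max) * scale).astype(np.uint16)

def dequantize(q, w_max=0.15, bits=16):
    """Invert the uniform quantizer back to floating point."""
    scale = (2**bits - 1) / (2 * w_max)
    return q.astype(np.float64) / scale - w_max

def inject_bit_errors(q, p, bits=16, rng=None):
    """Flip each stored bit independently with probability p (random bit errors)."""
    rng = rng or np.random.default_rng(0)
    flips = rng.random((q.size, bits)) < p                  # which bits flip
    masks = (flips * (1 << np.arange(bits))).sum(axis=1).astype(np.uint16)
    return (q.reshape(-1) ^ masks).reshape(q.shape)

rng = np.random.default_rng(42)
w = rng.uniform(-0.15, 0.15, size=1000)
q = quantize(w)
q_err = inject_bit_errors(q, p=0.01)
w_err = dequantize(q_err)  # perturbed weights, as in the middle panel of Figure 1
```

A flip in a high-order bit perturbs the dequantized weight by a large amount (up to roughly the full quantization range), which is why even a small bit error rate can be devastating without clipping or robust training.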

The design of deep neural network (DNN) accelerators, i.e., specialized hardware for inference, has received considerable attention in past years due to savings in cost, area, and energy compared to mainstream hardware. We consider the problem of random and adversarial bit errors in quantized DNN weights stored on accelerator memory. Random bit errors arise when optimizing accelerators for energy efficiency by operating at low voltage. Here, the bit error rate increases exponentially with voltage reduction, causing devastating accuracy drops in DNNs. Additionally, recent work demonstrates attacks on voltage controllers to adversarially reduce voltage. Adversarial bit errors have been shown to be realistic through attacks targeting individual bits in accelerator memory. Besides describing these error models in detail, we take first steps towards DNNs robust to random and adversarial bit errors by explicitly taking bit errors into account during training. Our random or adversarial bit error training improves robustness significantly, potentially leading to more energy-efficient and secure DNN accelerators.
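The idea of taking random bit errors into account during training (RandBET) can be sketched in very simplified form: in each step, the forward pass uses weights perturbed by random bit flips, while the gradient updates the clean weights, straight-through style. The toy example below trains a single scalar weight on linear regression; all hyperparameters, function names, and the 8-bit setting are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
BITS, W_MAX = 8, 0.15  # assumed 8-bit fixed-point quantization into [-W_MAX, W_MAX]
SCALE = (2**BITS - 1) / (2 * W_MAX)

def quantize(w):
    return np.round((np.clip(w, -W_MAX, W_MAX) + W_MAX) * SCALE).astype(np.uint8)

def dequantize(q):
    return q.astype(np.float64) / SCALE - W_MAX

def flip_bits(q, p):
    # flip each stored bit independently with probability p
    flips = (rng.random((q.size, BITS)) < p).astype(np.int64)
    masks = flips @ (1 << np.arange(BITS))
    return q ^ masks.astype(np.uint8)

# toy data: learn y = 0.1 * x with a single scalar weight
x = rng.normal(size=256)
y = 0.1 * x

w, lr = 0.0, 0.1
for _ in range(200):
    # forward pass sees the perturbed, dequantized weight ...
    w_err = dequantize(flip_bits(quantize(np.array([w])), p=0.01))[0]
    # ... but the squared-error gradient updates the clean weight
    grad = np.mean(2.0 * (w_err * x - y) * x)
    w -= lr * grad
```

Training on the perturbed forward pass encourages weights whose low-voltage, bit-flipped versions still produce accurate predictions, which is the intuition behind the robustness gains in Figure 1 (right).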


The paper is available on ArXiv:

Paper on ArXiv

@article{Stutz2020ARXIV,
    author    = {David Stutz and Nandhini Chandramoorthy and Matthias Hein and Bernt Schiele},
    title     = {On Mitigating Random and Adversarial Bit Errors},
    journal   = {CoRR},
    volume    = {abs/2006.13977},
    year      = {2020}
}


Coming soon!

News & Updates

June 26, 2020. The paper is available on ArXiv.