
OCTOBER 2019

READING

Adnan Siraj Rakin, Zhezhi He, Deliang Fan. Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search. CoRR abs/1903.12269 (2019).

Rakin et al. introduce the bit-flip attack, which degrades a network's performance by flipping only a few weight bits. On CIFAR-10 and ImageNet, common architectures such as ResNets or AlexNet are quantized to 8 bits per weight (or fewer). Then, on a subset of the validation set, gradients of the training loss with respect to the weights are computed; within each layer, the bits with the largest absolute gradients are selected as flip candidates. Across layers, the candidate flip that incurs the maximum increase in training loss is chosen and applied, and this search is repeated progressively, one flip at a time. This way, a network's performance can be degraded to chance level with as few as 17 flipped bits (on ImageNet, using AlexNet).
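To make the search procedure concrete, here is a minimal PyTorch sketch of one attack iteration. The toy quantizer, the brute-force candidate selection (trying all 8 bits of the highest-gradient weight per layer), and all function names are my own simplifying assumptions, not the authors' implementation; the paper's in-layer search ranks bits by their gradients directly rather than enumerating them.

```python
# Minimal sketch of one progressive-bit-search iteration on 8-bit
# quantized weights. The toy quantizer and search loop are simplifying
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

BITS = 8
QMIN, QMAX = -(2 ** (BITS - 1)), 2 ** (BITS - 1) - 1  # [-128, 127]

def quantize(w):
    """Per-tensor symmetric quantization to signed 8-bit codes."""
    scale = (w.abs().max() / QMAX).item()
    q = torch.clamp(torch.round(w / scale), QMIN, QMAX).to(torch.int32)
    return q, scale

def flip_bit(code, bit):
    """Flip one bit of a signed 8-bit code (two's complement)."""
    u = (int(code) & 0xFF) ^ (1 << bit)  # toggle in the unsigned view
    return u - 256 if u >= 128 else u    # sign-extend back to signed

def loss_on(model, x, y):
    with torch.no_grad():
        return F.cross_entropy(model(x), y).item()

def progressive_bit_flip(model, x, y):
    """In every layer, take the weight with the largest loss gradient,
    try flipping each of its 8 bits, and commit the single flip (over
    all layers) that increases the training loss the most."""
    model.zero_grad()
    F.cross_entropy(model(x), y).backward()
    best_loss, best_flip = loss_on(model, x, y), None
    for name, p in model.named_parameters():
        if p.grad is None or p.dim() < 2:  # skip biases
            continue
        q, scale = quantize(p.data)
        idx = p.grad.abs().view(-1).argmax().item()
        old = p.data.view(-1)[idx].item()
        for bit in range(BITS):
            new_code = flip_bit(q.view(-1)[idx].item(), bit)
            p.data.view(-1)[idx] = new_code * scale  # tentative flip
            loss = loss_on(model, x, y)
            if loss > best_loss:
                best_loss, best_flip = loss, (name, idx, new_code * scale)
            p.data.view(-1)[idx] = old  # revert before the next candidate
    if best_flip is not None:  # commit the strongest flip permanently
        name, idx, value = best_flip
        dict(model.named_parameters())[name].data.view(-1)[idx] = value
    return best_loss

# Toy usage: a few flips already increase the loss of a small classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128),
                      nn.ReLU(), nn.Linear(128, 10))
x, y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
for step in range(5):
    print(f"loss after flip {step + 1}: "
          f"{progressive_bit_flip(model, x, y):.3f}")
```

Note that committing only the single strongest flip per iteration is what makes the search "progressive": each new search round sees the network as already damaged by all previous flips, which is why so few flips suffice.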

Also find this summary on ShortScience.org.
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.