APRIL 2020

READING

Fei Zuo, Bokai Yang, Xiaopeng Li, Qiang Zeng. Exploiting the Inherent Limitation of L0 Adversarial Examples. RAID 2019: 293-307.

Zuo et al. propose a two-stage system for detecting $L_0$ adversarial examples. It builds on two observations: (a) $L_0$ adversarial examples typically change individual pixels drastically, and (b) the perturbed pixels are usually isolated and scattered across the image. Based on these observations, the authors train a Siamese network that receives both the input image and a pre-processed version of it, where the pre-processing is assumed to change benign images only slightly. Concretely, an inpainting mechanism serves as pre-processor: pixels where at least one color channel exhibits extremely small or large values are masked and inpainted using a state-of-the-art inpainting method, as shown in Figure 1. The Siamese network then learns to detect adversarial examples from the differences between the input image and its inpainted version.
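
To make the pre-processing step concrete, here is a minimal sketch of the masking-and-inpainting pre-processor. Note that OpenCV's cv2.inpaint and the thresholds low and high are my own illustrative choices; the paper only requires some state-of-the-art inpainting method and does not prescribe these values.

```python
import cv2
import numpy as np


def preprocess(image, low=5, high=250):
    """Mask pixels with extreme channel values and inpaint them.

    image: HxWx3 uint8 image; low/high are hypothetical thresholds,
    not taken from the paper.
    """
    # A pixel is suspicious if any color channel is extremely small or large.
    extreme = (image <= low) | (image >= high)
    mask = np.any(extreme, axis=2).astype(np.uint8) * 255
    # Inpaint the masked pixels; the paper permits any state-of-the-art
    # inpainting method, cv2.inpaint is just one readily available choice.
    return cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```

The intuition is that a benign image contains few extreme, isolated pixels, so inpainting barely changes it, whereas an $L_0$ adversarial example loses most of its perturbed pixels.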

Figure 1: Examples of inpainted $L_0$ adversarial examples.
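
To illustrate how the detector can consume the pair of original and inpainted images, here is a sketch of a Siamese comparison in PyTorch. The encoder architecture and the distance-based decision are illustrative assumptions on my part; the paper's exact network and training objective are not reproduced here.

```python
import torch
import torch.nn as nn


class SiameseDetector(nn.Module):
    """Shared encoder applied to the input and its inpainted version.

    The architecture below is illustrative and not taken from the paper.
    """

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x, x_inpainted):
        # Benign images barely change under inpainting, so their embeddings
        # stay close; L0 adversarial examples yield a large distance.
        return torch.norm(self.encoder(x) - self.encoder(x_inpainted), dim=1)
```

At test time, an input would be flagged as adversarial if this distance exceeds a threshold calibrated during training.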

Also find this summary on ShortScience.org.
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.