Juncheng Li, Frank R. Schmidt, J. Zico Kolter. Adversarial camera stickers: A physical camera-based attack on deep learning systems. ICML 2019: 3896-3904.

Li et al. propose adversarial camera stickers: when computed adversarially and physically attached to the camera lens, these stickers cause the captured images to be mis-classified. As illustrated in Figure 1, the stickers are realized as roughly circular patches of uniform color. The location, color and radius of each circular sticker are optimized by gradient descent. To ensure that the optimized stickers remain effective when physically applied, the effect of the camera optics on the stickers is modeled realistically.
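The core idea, optimizing the parameters of a uniform circular patch blended onto the image so that a classifier's score for the true class drops, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the stand-in "classifier" is a random linear map, the blending is a simple alpha overlay rather than the paper's realistic camera model, and only the sticker color is optimized, using finite-difference gradients.

```python
import numpy as np

def apply_sticker(img, cx, cy, radius, color, alpha=0.6):
    """Alpha-blend a uniform circular patch of the given color onto img (H, W, 3)."""
    h, w, _ = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = ((xx - cx) ** 2 + (yy - cy) ** 2) <= radius ** 2
    out = img.copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * np.clip(np.asarray(color), 0, 1)
    return out

rng = np.random.default_rng(0)
H = W = 16
img = rng.random((H, W, 3))

# Hypothetical stand-in for a trained classifier: a fixed random linear map to 10 scores.
Wc = rng.standard_normal((10, H * W * 3)) * 0.01
true_class = 0

def true_logit(color):
    """Score of the true class after the sticker is applied."""
    return (Wc @ apply_sticker(img, 8, 8, 5, color).ravel())[true_class]

# Gradient descent on the sticker color (finite-difference gradients for brevity);
# the paper additionally optimizes location and radius.
color = np.array([0.5, 0.5, 0.5])
initial = true_logit(color)
for _ in range(50):
    grad = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = 1e-3
        grad[i] = (true_logit(color + e) - true_logit(color - e)) / 2e-3
    color -= 0.5 * grad  # step so that the true-class score decreases

final = true_logit(color)
```

After optimization, `final` is lower than `initial`, i.e. the sticker color has been tuned to suppress the true-class score, which is the same objective the paper pursues over location, color and radius jointly.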

Figure 1: Illustration of adversarial stickers on the camera (left) and the effect on the taken photo (right).

Also find this summary on ShortScience.org.

What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below!