IAM

06thAPRIL2020

READING

Xin Liu, Huanrui Yang, Ziwei Liu, Linghao Song, Yiran Chen, Hai Li. DPATCH: An Adversarial Patch Attack on Object Detectors. SafeAI@AAAI 2019.

Liu et al. propose DPatch, an adversarial patch attack against state-of-the-art object detectors. Similar to existing adversarial patches, where a patch with fixed pixels is placed in an image in order to evade (or change) classification, the authors compute their DPatch using an optimization procedure. During optimization, the patch is placed at random locations on all training images, e.g., of PASCAL VOC 2007, and its pixels are updated to maximize the detector's loss (in either a targeted or an untargeted setting). In experiments, this approach is able to fool several different detectors using small $40\times40$ pixel patches, as illustrated in Figure 1.
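The core training loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: DPatch backpropagates through real detectors such as Faster R-CNN or YOLO, whereas here a hypothetical linear scoring function stands in for the detector so that the gradient with respect to the patch region is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 16  # toy image size (real images are much larger)
P = 4       # toy patch size (the paper uses 40x40 patches)

# Hypothetical stand-in for the detector: a linear score w . x,
# so the gradient of the loss w.r.t. any image region is just w there.
w = rng.standard_normal((H, W))

def loss(image):
    # Higher value = worse detection; untargeted DPatch maximizes this.
    return float((w * image).sum())

patch = np.zeros((P, P))
images = [rng.random((H, W)) for _ in range(8)]

lr = 0.1
for step in range(50):
    img = images[step % len(images)].copy()
    # Place the patch at a random location, as in DPatch training.
    y = int(rng.integers(0, H - P))
    x = int(rng.integers(0, W - P))
    img[y:y + P, x:x + P] = patch
    # Gradient ascent on the patch pixels only; keep pixels in [0, 1].
    grad = w[y:y + P, x:x + P]
    patch = np.clip(patch + lr * grad, 0.0, 1.0)
```

With a real detector, `loss` would be the detector's training loss and `grad` would come from backpropagation; the random placement is what makes the resulting patch location-independent at test time.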

Figure 1: Illustration of the use case of DPatch.

Also find this summary on ShortScience.org.

What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below: