Xiao et al. propose adversarial examples based on spatial transformations. This work is closely related to the adversarial deformations of . In particular, a flow field, allowing an individual displacement per pixel, is optimized to cause a misclassification. The size of the perturbation is measured directly on the flow field rather than on the resulting image. Examples on MNIST are shown in Figure 1; it is clearly visible that most pixels are moved individually and that no smoothness is enforced on the flow. The authors also show that commonly used defense mechanisms are largely ineffective against these attacks. Unfortunately, and in contrast to , they do not consider adversarial training on their own adversarial transformations as a defense.
Figure 1: Examples of the computed adversarial examples/transformations on MNIST for three different models. Note that these are targeted attacks.
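The core idea can be sketched in a few lines: instead of adding noise to pixel values, one optimizes a per-pixel flow field and samples the image at the displaced coordinates. The sketch below, in PyTorch, is a simplified illustration under my own assumptions; the model, step count, learning rate, and the simple squared-magnitude flow penalty (standing in for the paper's flow-based distance) are illustrative choices, not the authors' exact implementation.

```python
# Hedged sketch of a spatial-transformation attack in the spirit of the
# summarized work: optimize a per-pixel flow field so that the spatially
# transformed image is (mis)classified as a chosen target class.
# All hyperparameters and the penalty term are illustrative assumptions.
import torch
import torch.nn.functional as F


def flow_attack(model, x, target, steps=50, lr=0.01, tau=0.05):
    """Optimize a per-pixel flow field pushing x toward class `target`.

    x: (N, C, H, W) input batch; target: (N,) target labels.
    tau weights a simple squared-magnitude penalty on the flow
    (a stand-in for measuring the perturbation on the flow field).
    """
    n, c, h, w = x.shape
    # Identity sampling grid in [-1, 1] coordinates (grid_sample convention).
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, h, w, 2)

    # Per-pixel displacement field, initialized to the identity transform.
    flow = torch.zeros(n, h, w, 2, requires_grad=True)
    opt = torch.optim.Adam([flow], lr=lr)
    for _ in range(steps):
        # Differentiable bilinear sampling at the displaced coordinates.
        x_adv = F.grid_sample(x, base_grid + flow, align_corners=True)
        logits = model(x_adv)
        # Targeted attack: cross-entropy to the target plus flow penalty.
        loss = F.cross_entropy(logits, target) + tau * flow.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.grid_sample(x, base_grid + flow, align_corners=True).detach()
```

Because each pixel gets its own displacement and the penalty only discourages large flows, nothing in this sketch enforces smoothness, which matches the visibly irregular per-pixel movements in Figure 1.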
What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below.