IAM

25 August 2018

READING

Battista Biggio, Fabio Roli. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning. CoRR abs/1712.03141, 2017.

Biggio and Roli provide a comprehensive survey and discussion of work in adversarial machine learning. In contrast to related surveys [1,2], which focus primarily on the security of deep neural networks, they explicitly relate these recent developments to adversarial machine learning in general, a line of work that can be traced back to 2004, e.g. to adversarial attacks on spam filters. As a result, the terminology used by Biggio and Roli differs slightly from that of publications focusing on deep neural networks. It also turns out that many approaches recently discussed in the deep learning community, such as adversarial training as a defense, have already been introduced earlier for other machine learning algorithms. The paper additionally gives a concise discussion of different threat models that is worth reading.

  • [1] N. Akhtar and A. Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. arXiv.org, abs/1801.00553, 2018.
  • [2] X. Yuan, P. He, Q. Zhu, R. R. Bhat, and X. Li. Adversarial examples: Attacks and defenses for deep learning. arXiv.org, abs/1712.07107, 2017.
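To make the idea of adversarial training concrete, here is a minimal sketch, not taken from the paper: a toy logistic regression model is trained on inputs perturbed by the fast gradient sign method (FGSM). All names, the synthetic data, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

def fgsm(x, grad, epsilon):
    """Fast gradient sign method: perturb x in the direction that
    increases the loss, bounded by epsilon in the L-infinity norm."""
    return x + epsilon * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy, linearly separable data (illustrative, not from the paper).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)
b = 0.0
lr, epsilon = 0.1, 0.1

for _ in range(100):
    # Gradient of the cross-entropy loss w.r.t. the inputs.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    # Adversarial training: update the model on FGSM-perturbed inputs
    # instead of the clean ones.
    X_adv = fgsm(X, grad_x, epsilon)
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))
```

The same scheme carries over to deep networks, where the input gradient comes from backpropagation; as Biggio and Roli point out, variants of it predate the deep learning literature.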

What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below or get in touch with me: