IAM

I will be presenting our work on adversarial robustness at ICML'19 and CVPR'19 in Long Beach beginning next week!
22nd MARCH 2019

READING

Yash Sharma, Pin-Yu Chen. Attacking the Madry Defense Model with L1-based Adversarial Examples. CoRR abs/1710.10733 (2017).

Sharma and Chen provide an experimental comparison of several state-of-the-art attacks against the adversarial training defense of Madry et al. [1]. They consider the Carlini-Wagner attacks [2], elastic-net attacks [3], as well as projected gradient descent [1]. Their main experimental finding, that the defense by Madry et al. can be broken by increasing the allowed perturbation size (i.e., epsilon), should not be surprising: an adversarially trained network can only be expected to defend reliably against attacks from the threat model used during training. The sketch below illustrates where epsilon enters the attack.
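To make the role of epsilon concrete, here is a minimal sketch of an L-infinity projected gradient descent attack in PyTorch, in the spirit of Madry et al. [1]. This is not the authors' implementation; model, x, y, epsilon, alpha, and steps are assumed placeholders.

    import torch
    import torch.nn.functional as F

    def pgd_linf(model, x, y, epsilon, alpha, steps):
        """Sketch of an L_inf PGD attack (in the spirit of Madry et al. [1]).

        epsilon: radius of the L_inf ball onto which the perturbation is projected;
        alpha:   step size per iteration;
        steps:   number of gradient ascent steps.
        """
        # random start inside the epsilon-ball around x, as used by Madry et al.
        x_adv = (x.detach() + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)

        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            with torch.no_grad():
                # ascend the loss, then project back onto the epsilon-ball around x
                x_adv = x_adv + alpha * grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
                x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv.detach()

Calling this attack with an epsilon larger than the one used during adversarial training is a one-argument change and corresponds to the evaluation setting in which the defense breaks down; the L1-based elastic-net attack [3] goes a step further and leaves the L-infinity threat model altogether.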

  • [1] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
  • [2] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy (SP), 39–57, 2017.
  • [3] P.Y. Chen, Y. Sharma, H. Zhang, J. Yi, and C.J. Hsieh. EAD: Elastic-net attacks to deep neural networks via adversarial examples. arXiv preprint arXiv:1709.04114, 2017.
Also find this summary on ShortScience.org.

What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below: