
18th September 2019

READING

Yujia Liu, Weiming Zhang, Shaohua Li, Nenghai Yu. Enhanced Attacks on Defensively Distilled Deep Neural Networks. CoRR abs/1711.05934 (2017).

Liu et al. propose a white-box attack against defensive distillation. Specifically, the attack combines the objective of the Carlini-Wagner attack [1] with a slightly different reparameterization that enforces an $L_\infty$ constraint on the perturbation; the idea is sketched below. In experiments, defensive distillation is shown not to be robust against this attack.

  • [1] Nicholas Carlini, David A. Wagner. Towards Evaluating the Robustness of Neural Networks. IEEE Symposium on Security and Privacy 2017: 39-57.
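
To make the reparameterization concrete, the following is a minimal sketch of how a Carlini-Wagner-style margin loss can be combined with a tanh change of variables that keeps the perturbation inside an $L_\infty$ ball of radius $\epsilon$. The function name, the use of PyTorch, the untargeted margin loss, and all hyperparameters are illustrative assumptions, not the exact formulation from the paper.

```python
# Sketch: C&W-style margin loss with a tanh reparameterization that
# bounds the perturbation in L_infty. Model, epsilon, and the
# optimization schedule are assumptions for illustration only.
import torch
import torch.nn as nn


def linf_cw_attack(model: nn.Module, x: torch.Tensor, label: int,
                   epsilon: float = 0.03, kappa: float = 0.0,
                   steps: int = 100, lr: float = 0.01) -> torch.Tensor:
    """Untargeted attack on a single example x of shape (1, C, H, W)."""
    w = torch.zeros_like(x, requires_grad=True)  # unconstrained variable
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        # Reparameterization: tanh maps w into (-1, 1), so |delta| < epsilon
        # holds by construction without projecting after each step.
        delta = epsilon * torch.tanh(w)
        x_adv = torch.clamp(x + delta, 0.0, 1.0)  # stay in valid image range

        logits = model(x_adv)
        true_logit = logits[0, label]
        # Largest logit among all other classes.
        other_logit = logits[0, torch.arange(logits.size(1)) != label].max()

        # C&W-style margin loss: push the true class below the best other
        # class by at least the confidence margin kappa.
        loss = torch.clamp(true_logit - other_logit + kappa, min=0.0)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return torch.clamp(x + epsilon * torch.tanh(w), 0.0, 1.0)
```

Compared to the box-constrained change of variables in the original Carlini-Wagner attack, scaling tanh by $\epsilon$ directly bounds the maximum per-pixel change, which matches the $L_\infty$ threat model considered here.
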
Also find this summary on ShortScience.org.

What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below: