IAM · AUGUST 2019 · READING

Qi-Zhi Cai, Chang Liu, Dawn Song. Curriculum Adversarial Training. IJCAI, 2018.

Cai et al. propose "curriculum adversarial training", in which adversarial training is applied against increasingly strong attacks. Specifically, for a gradient-based, iterative attack such as projected gradient descent (PGD), a common proxy for attack strength is the number of iterations. To avoid forgetting previously seen adversarial examples and losing clean accuracy, the authors train adversarially against attacks with a growing number of iterations. In each round (called a "lesson" in the paper), the network is trained adversarially against an attack with a fixed number of iterations until it reaches high accuracy against that attack; then the number of iterations is increased and the next lesson begins. In experiments, this method is shown to outperform standard adversarial training.
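A minimal sketch of the curriculum loop on a toy problem; the Gaussian-blob data, the logistic-regression model, the step sizes, the attack budget, and the 95% "lesson passed" threshold are all illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs (hypothetical setup).
n, d = 200, 2
X = np.vstack([rng.normal(-2, 1, (n // 2, d)), rng.normal(2, 1, (n // 2, d))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

w = np.zeros(d)  # linear model: prediction = sign(w . x)

def grad_w(w, X, y):
    # Gradient of the mean logistic loss log(1 + exp(-y w.x)) wrt weights.
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))  # = sigmoid(-y w.x)
    return -(s * y) @ X / len(y)

def pgd_attack(w, X, y, eps=0.5, alpha=0.25, iters=1):
    # L_inf PGD: ascend the loss wrt the inputs, projecting back
    # into the eps-ball around the clean examples after each step.
    X_adv = X.copy()
    for _ in range(iters):
        s = 1.0 / (1.0 + np.exp(y * (X_adv @ w)))
        g = -(s * y)[:, None] * w[None, :]   # dloss/dx per example
        X_adv = np.clip(X_adv + alpha * np.sign(g), X - eps, X + eps)
    return X_adv

def accuracy(w, X, y):
    return np.mean(np.sign(X @ w) == y)

# Curriculum: one "lesson" per attack strength (number of PGD iterations).
# A lesson ends once the model is accurate against its own attack.
for iters in [1, 2, 4, 8]:
    for epoch in range(500):
        X_adv = pgd_attack(w, X, y, iters=iters)
        w -= 0.5 * grad_w(w, X_adv, y)
        if accuracy(w, pgd_attack(w, X, y, iters=iters), y) >= 0.95:
            break  # lesson passed; move on to a stronger attack
```

Standard adversarial training would instead use the strongest attack (here, 8 iterations) from the start; the curriculum only ever trains against an attack the model can nearly master, which is what the paper credits for the improved results.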

Also find this summary on ShortScience.org.
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.