Cai et al. propose curriculum adversarial training, in which adversarial training is applied against increasingly strong attacks. For a gradient-based, iterative attack such as projected gradient descent (PGD), a common proxy for attack strength is the number of attack iterations. To avoid forgetting previously seen adversarial examples and losing accuracy, the authors apply adversarial training with varying numbers of attack iterations. In each round (called a "lesson" in the paper), the network is trained adversarially against an attack with a fixed number of iterations until it reaches high accuracy against that attack; then the number of iterations is increased and the next lesson begins. In experiments, this method is shown to outperform standard adversarial training.
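
The curriculum schedule described above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: it uses a logistic-regression model on synthetic data and a k-step L-infinity PGD attack in NumPy, where the iteration count k plays the role of the curriculum's attack strength. The names (`acc_threshold`, `max_k`, `pgd_attack`) and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of curriculum adversarial training in the spirit of Cai et al.:
# toy logistic regression, k-step L_inf PGD; all names/values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data: two separated Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])

w, b = np.zeros(2), 0.0

def predict_proba(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def pgd_attack(X, y, w, b, eps=0.5, step=0.25, k=1):
    """k-step L_inf PGD against the logistic loss; k is the curriculum knob."""
    X_adv = X.copy()
    for _ in range(k):
        p = predict_proba(X_adv, w, b)
        grad = (p - y)[:, None] * w[None, :]      # d loss / d x
        X_adv = X_adv + step * np.sign(grad)      # gradient-ascent step
        X_adv = np.clip(X_adv, X - eps, X + eps)  # project back to eps-ball
    return X_adv

def train_epoch(X, y, w, b, lr=0.1):
    """One full-batch gradient step on the logistic loss."""
    p = predict_proba(X, w, b)
    w = w - lr * ((p - y) @ X) / len(y)
    b = b - lr * np.mean(p - y)
    return w, b

def accuracy(X, y, w, b):
    return np.mean((predict_proba(X, w, b) > 0.5) == y)

# Curriculum: one "lesson" per attack strength k. Train on adversarial
# examples of the current strength; advance to the next lesson once the
# accuracy against the current k-step attack exceeds a threshold.
acc_threshold, max_k = 0.9, 5
for k in range(1, max_k + 1):
    for _ in range(200):  # cap on epochs per lesson
        X_adv = pgd_attack(X, y, w, b, k=k)
        w, b = train_epoch(X_adv, y, w, b)
        if accuracy(pgd_attack(X, y, w, b, k=k), y, w, b) >= acc_threshold:
            break  # lesson passed; increase attack strength
```

On this toy problem the curriculum converges quickly because the data are well separated; in the paper, the same schedule is applied to deep networks, with each lesson running until robust accuracy against the current attack strength is high.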