Dong et al. introduce momentum into iterative white-box adversarial attacks and also show that attacking ensembles of models improves transferability. Specifically, their contribution is twofold. First, iterative white-box attacks are extended with a momentum term. As in gradient-based optimization, the main motivation is to stabilize the update direction, escape poor local maxima, and converge faster. In experiments, they show that momentum increases the success rates of attacks.
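The momentum update can be sketched as follows; this is a minimal NumPy sketch of the momentum iterative attack, where `grad_fn` is an assumed callable returning the loss gradient with respect to the input (in practice this would come from a network's backward pass), and `eps`, `steps`, `mu` are illustrative defaults:

```python
import numpy as np

def momentum_iterative_attack(x, grad_fn, eps=0.3, steps=10, mu=1.0):
    """Iterative sign-gradient attack with momentum (sketch).

    grad_fn(x) is assumed to return the gradient of the loss w.r.t. x;
    the attack ascends the loss within an L-infinity ball of radius eps.
    """
    alpha = eps / steps            # per-step size so steps * alpha = eps
    g = np.zeros_like(x)           # accumulated momentum
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        # accumulate the L1-normalized gradient into the momentum term
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        x_adv = x_adv + alpha * np.sign(g)
        # project back into the eps-ball around the clean input
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy example: quadratic "loss" L(x) = ||x - target||^2 with gradient
# 2 * (x - target); ascending it pushes x away from target.
target = np.array([1.0, -1.0])
x0 = np.zeros(2)
adv = momentum_iterative_attack(x0, lambda x: 2 * (x - target))
```

Because the gradient direction is consistent in this toy example, the momentum term simply reinforces it; on real loss surfaces the accumulated term damps oscillations between iterations.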
Second, to improve the transferability of adversarial examples in black-box scenarios, Dong et al. propose computing adversarial examples on ensembles of models. In particular, the logits of multiple models are summed (optionally using weights) and attacks are crafted to fool multiple models at once. In experiments, crafting adversarial examples on an ensemble of diverse networks yields higher success rates in black-box scenarios.
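The logit-fusion scheme can be sketched as follows; this is a hedged NumPy sketch where each model is assumed to be a callable returning a logit vector, and the weights default to uniform (the helper names and the toy linear models are illustrative, not from the paper):

```python
import numpy as np

def ensemble_logits(models, x, weights=None):
    """Fuse the logits of several models with a weighted sum (sketch)."""
    if weights is None:
        weights = [1.0 / len(models)] * len(models)  # uniform weights
    return sum(w * m(x) for w, m in zip(weights, models))

def softmax_cross_entropy(logits, label):
    """Cross-entropy of the fused logits against the true label.

    An attack would ascend this loss so the perturbation fools all
    ensemble members simultaneously.
    """
    z = logits - logits.max()                 # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

# Two toy "models" mapping an input to 2-class logits.
m1 = lambda x: np.array([x[0], 0.0])
m2 = lambda x: np.array([0.0, x[0]])
fused = ensemble_logits([m1, m2], np.array([2.0]))
loss = softmax_cross_entropy(fused, label=0)
```

Gradients of this single fused loss flow back through every ensemble member, which is what encourages perturbations that transfer across the models rather than overfitting to one of them.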