Park et al. introduce adversarial dropout, a variant of adversarial training that perturbs dropout masks rather than inputs. Instead of training on adversarial examples, the authors propose an efficient method for computing, during training, the dropout mask that maximizes the loss, and then training against that mask. In experiments, this approach seems to improve generalization performance in semi-supervised settings.
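To make the core idea concrete, here is a minimal sketch of gradient-guided adversarial mask selection on a toy one-hidden-layer network. This is not the authors' implementation: the network, the function names (`forward`, `adversarial_mask`), and the simple greedy flipping heuristic are all illustrative assumptions; the only idea taken from the summary is that the dropout mask itself is chosen adversarially, under a budget on how many units may differ from a randomly sampled mask.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network; weights are illustrative, not from the paper.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, mask):
    """Forward pass with a dropout mask applied to the hidden layer."""
    h = np.maximum(x @ W1, 0.0) * mask      # ReLU hidden units, masked
    logits = h @ W2
    e = np.exp(logits - logits.max())       # stable softmax
    return e / e.sum()

def adversarial_mask(x, y, mask, budget=1):
    """Greedy first-order approximation: flip the mask entries whose
    gradient suggests the largest loss increase, changing at most
    `budget` units relative to the randomly sampled mask."""
    h_pre = np.maximum(x @ W1, 0.0)         # hidden activations before masking
    probs = forward(x, mask)
    grad_logits = probs.copy()
    grad_logits[y] -= 1.0                   # d loss / d logits (softmax + NLL)
    grad_h = grad_logits @ W2.T             # d loss / d masked hidden units
    grad_mask = grad_h * h_pre              # d loss / d mask entries
    # Flipping entry i changes the mask by (1 - 2*mask[i]); the product with
    # the gradient is a first-order estimate of the resulting loss change.
    impact = grad_mask * (1.0 - 2.0 * mask)
    adv = mask.copy()
    for i in np.argsort(impact)[::-1][:budget]:
        if impact[i] > 0:                   # flip only if loss is predicted to rise
            adv[i] = 1.0 - adv[i]
    return adv

x = rng.normal(size=4)
y = 1
mask = (rng.random(8) < 0.5).astype(float)  # randomly sampled dropout mask
adv = adversarial_mask(x, y, mask, budget=2)
```

Training would then minimize the loss under `adv` instead of `mask`, so the network learns to be robust against worst-case unit deletions rather than random ones.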
What is your opinion of the summarized work? Do you know of related work that might be of interest? Let me know your thoughts in the comments below.