Ian J. Goodfellow. A Research Agenda: Dynamic Models to Defend Against Correlated Attacks. CoRR abs/1903.06293 (2019).

Goodfellow motivates the use of dynamic models as a “defense” against adversarial attacks that violate both the identically-distributed and the independence assumptions of machine learning. Specifically, he argues that machine learning largely rests on the assumption that data is sampled identically and independently (i.i.d.) from a data distribution. Evasion attacks, i.e., adversarial examples, mainly violate the assumption that test inputs come from the same distribution: adversarial examples computed within an $\epsilon$-ball around test examples essentially correspond to an adversarial distribution that is larger than (but entails) the original data distribution. In this article, Goodfellow argues that we should also consider attacks violating the independence assumption. As a simple example, the attacker can reuse the same attack over and over again; this yields the idea of correlated attacks mentioned in the paper’s title. Against this more general threat model, Goodfellow argues that dynamic models are required, meaning the model needs to change (or evolve) over time and become a moving target that is harder to attack.
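
To make the distinction concrete, here is a minimal, purely illustrative sketch. The toy linear model, the FGSM-style perturbation, and the specific way of reusing a perturbation are my own assumptions for illustration and do not appear in the paper; the point is only to contrast a per-example evasion attack with the simplest form of a correlated attack, where the same perturbation is reused across queries:

```python
import numpy as np

# Toy linear "model": logits = x @ W. Purely illustrative.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))  # 10 input features, 3 classes

def loss_grad(x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x for label y."""
    logits = x @ W
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[y] -= 1.0        # d loss / d logits
    return W @ p       # chain rule back to the input

epsilon = 0.1          # radius of the L-infinity epsilon-ball

def evasion_attack(x, y):
    """I.i.d.-style evasion: a fresh perturbation inside the epsilon-ball
    around each test example (violates only the 'identical' assumption)."""
    return x + epsilon * np.sign(loss_grad(x, y))

# "Correlated" attack in the simplest sense: compute one perturbation
# and reuse it verbatim on every subsequent query (violates independence).
x0, y0 = rng.normal(size=10), 1
shared_delta = epsilon * np.sign(loss_grad(x0, y0))

for _ in range(5):
    x, y = rng.normal(size=10), 1
    x_adv = x + shared_delta  # same perturbation, different inputs
```

Against the second kind of attacker, a static model offers a fixed target that can be probed repeatedly, which is exactly the setting in which Goodfellow argues for dynamic, evolving models.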

Also find this summary on ShortScience.org.

What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.