AUGUST 2019

READING

Paolo Russu, Ambra Demontis, Battista Biggio, Giorgio Fumera, Fabio Roli. Secure Kernel Machines against Evasion Attacks. AISec@CCS, 2016.

Russu et al. study the robustness of linear and non-linear kernel machines through regularization. In particular, they show that linear classifiers can easily be regularized to be robust: robustness against $L_\infty$-bounded adversarial examples can be achieved through $L_1$ regularization of the weights. More generally, robustness against $L_p$-bounded attacks is obtained through $L_q$ regularization of the weights, where $\frac{1}{p} + \frac{1}{q} = 1$, i.e., $L_q$ is the dual norm of $L_p$. These insights are then generalized to non-linear kernel machines; I refer to the paper for details.
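To make the dual-norm relationship concrete: for a linear classifier $f(x) = \langle w, x \rangle + b$, the worst-case perturbation $\delta$ with $\|\delta\|_\infty \leq \epsilon$ shifts the signed score $y \cdot f(x)$ by exactly $-\epsilon \|w\|_1$, so penalizing $\|w\|_1$ directly bounds the attacker's impact. The following is a minimal sketch of this effect on synthetic data; the dataset, $\epsilon = 0.1$, and the scikit-learn models are my own assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification problem (hypothetical, for illustration).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
y_signed = 2 * y - 1  # labels in {-1, +1}

epsilon = 0.1  # assumed L_infinity budget of the attacker

for penalty in ["l1", "l2"]:
    clf = LogisticRegression(penalty=penalty, C=1.0, solver="liblinear")
    clf.fit(X, y)
    w, b = clf.coef_.ravel(), clf.intercept_[0]

    # For a linear classifier, the worst-case L_infinity perturbation of
    # size epsilon shifts the signed score y * (w @ x + b) by exactly
    # epsilon * ||w||_1 (the dual norm); smaller ||w||_1 means more robust.
    margins = y_signed * (X @ w + b)
    robust_margins = margins - epsilon * np.abs(w).sum()
    print(f"{penalty}: ||w||_1 = {np.abs(w).sum():.2f}, "
          f"robust accuracy = {(robust_margins > 0).mean():.2f}")
```

With the $L_1$ penalty, $\|w\|_1$ typically comes out much smaller, so the worst-case accuracy under the $\epsilon$-budget degrades less than for the $L_2$-regularized model.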

Also find this summary on ShortScience.org.
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.