Yizhen Wang, Somesh Jha, Kamalika Chaudhuri. Analyzing the Robustness of Nearest Neighbors to Adversarial Examples. ICML, 2018.

Wang et al. study the robustness of $k$-nearest neighbors against adversarial perturbations, providing both a theoretical analysis and a robust version of the $1$-nearest neighbor classifier. Specifically, for small $k$ it is shown that nearest neighbors is usually not robust. Here, robustness is judged in a distributional sense: for fixed, small $k$, the distance from a sample to its closest adversarial example tends to zero as the training set size increases. For $k \in \Omega(\sqrt{dn \log n})$, however, it is shown that $k$-nearest neighbors can be robust; the proof, showing where the $\sqrt{dn \log n}$ comes from, can be found in the paper. Finally, they propose a simple but robust $1$-nearest neighbor algorithm. The main idea is to remove training samples that give rise to adversarial examples; in particular, a minimum distance between any two samples with different labels is enforced.
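To make the pruning idea concrete, below is a minimal Python sketch. The greedy removal rule, the helper name `prune_conflicts`, and the toy data are my own illustration of the separation criterion; the paper's actual algorithm selects the retained subset more carefully and comes with formal guarantees.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.neighbors import KNeighborsClassifier

def prune_conflicts(X, y, r):
    """Greedily remove training points until any two points with
    different labels are more than 2r apart (the enforced minimum
    distance); an illustrative stand-in for the paper's algorithm."""
    keep = np.ones(len(X), dtype=bool)
    # pairs of differently labeled points that are too close to each other
    conflict = (y[:, None] != y[None, :]) & (cdist(X, X) <= 2 * r)
    while True:
        active = conflict & keep[None, :] & keep[:, None]
        counts = active.sum(axis=1)  # remaining conflicts per kept point
        if counts.max() == 0:
            break
        keep[counts.argmax()] = False  # drop the most conflicted point
    return X[keep], y[keep]

# toy usage: prune, then fit a standard 1-nearest neighbor classifier
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

X_pruned, y_pruned = prune_conflicts(X, y, r=0.1)
classifier = KNeighborsClassifier(n_neighbors=1).fit(X_pruned, y_pruned)
print("kept", len(X_pruned), "of", len(X), "training points")
```

Enforcing the $2r$ separation means that any point within distance $r$ of a kept training sample is necessarily classified with that sample's label by $1$-nearest neighbor, so perturbations of norm at most $r$ around the retained training data cannot flip predictions.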

Also find this summary on ShortScience.org.