IAM

JULY 2018

READING

Andras Rozsa, Manuel Günther, Terrance E. Boult. Adversarial Robustness: Softmax versus Openmax. CoRR abs/1708.01697 (2017)

Rozsa et al. describe an adversarial attack against OpenMax that directly targets the logits. Specifically, they consider a network that uses OpenMax instead of a SoftMax layer to compute the final class probabilities; OpenMax enables "open-set" recognition by additionally allowing the network to reject input samples. By directly targeting the logits of the trained network, i.e. iteratively pushing them in a target direction, it does not matter whether a SoftMax or an OpenMax layer is used on top: the network can be fooled in both cases.
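To make the idea concrete, below is a minimal sketch (not the authors' implementation) of what such a logit-targeting attack could look like in PyTorch. It assumes a hypothetical `model(x)` that returns the raw logits and a chosen `target_logits` direction; the loss is computed on the logits themselves, so whatever layer sits on top, SoftMax or OpenMax, receives the manipulated logits.

```python
import torch


def logit_targeted_attack(model, x, target_logits, steps=100, step_size=0.01):
    """Sketch of an attack that perturbs the input so that the network's
    raw logits move towards a target direction.

    Assumptions (not from the paper): `model(x)` returns logits of the same
    shape as `target_logits`, and inputs live in [0, 1].
    """
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(x_adv)
        # Push the logits towards the target direction (mean squared error).
        loss = torch.nn.functional.mse_loss(logits, target_logits)
        grad, = torch.autograd.grad(loss, x_adv)
        # Signed gradient descent step on the input, clamped to the image range.
        x_adv = (x_adv - step_size * grad.sign()).clamp(0, 1)
        x_adv = x_adv.detach().requires_grad_(True)
    return x_adv.detach()
```

Because the objective never touches the SoftMax or OpenMax outputs, the same perturbation fools both variants once the logits have been pushed far enough.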

Also find this summary on ShortScience.org.
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.