

Andras Rozsa, Manuel Günther, Terrance E. Boult. Adversarial Robustness: Softmax versus Openmax. CoRR abs/1708.01697 (2017)

Rozsa et al. describe an adversarial attack against OpenMax that directly targets the logits. Specifically, they consider a network that uses OpenMax instead of a SoftMax layer to compute the final class probabilities; OpenMax enables "open-set" recognition by additionally allowing the network to reject input samples. By attacking the logits of the trained network directly, i.e. iteratively pushing them toward a chosen target, it does not matter whether a SoftMax or an OpenMax layer sits on top: the network can be fooled in both cases.
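A minimal sketch of the logit-targeting idea on a toy linear model (everything here — the model, target values, step size — is illustrative and not taken from the paper): the input is perturbed iteratively so that its logits move toward a chosen target logit vector, which determines the prediction no matter which layer is applied on top.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": a single linear layer producing logits.
# (Illustrative only; the paper attacks deep networks, but the
# logit-targeting principle is the same.)
W = rng.normal(size=(3, 5))  # 3 classes, 5 input features

def logits(x):
    return W @ x

x = rng.normal(size=5)                    # clean input
target = np.array([10.0, -10.0, -10.0])   # desired logits (push toward class 0)

# Iteratively push the logits toward the target via gradient descent on
# the squared logit distance; for a linear model the gradient is analytic.
x_adv = x.copy()
for _ in range(1000):
    grad = W.T @ (logits(x_adv) - target)  # d/dx 0.5 * ||W x - target||^2
    x_adv -= 0.02 * grad

# The adversarial input now produces logits close to the target, so the
# predicted class is fixed regardless of the layer applied on top.
print(np.argmax(logits(x_adv)))
```

Because both SoftMax and OpenMax operate on the logits (OpenMax after recalibrating them), forcing the logits themselves into a target configuration fools either variant.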

Also find this summary on ShortScience.org.

What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below or get in touch with me: