TAG: DEEP LEARNING

ARTICLE

Discussion and Survey of Adversarial Examples and Robustness in Deep Learning

Adversarial examples are test images that have been perturbed slightly to cause misclassification. As these adversarial examples usually pose no problem for humans but easily fool deep neural networks, their discovery has sparked considerable interest in the deep learning and privacy/security communities. In this article, I want to provide a rough overview of the topic, including a brief survey of relevant literature and some ideas for future research directions.
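To make the idea of such a perturbation concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest attacks in the adversarial examples literature; the trained classifier `model`, the input `x`, the label `y`, and the budget `epsilon` are assumed placeholders for illustration and are not taken from the article itself.

```python
# Minimal FGSM sketch in PyTorch (assumed setup: a trained classifier
# `model`, a batch of images `x` with pixel values in [0, 1], and
# integer class labels `y`).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb `x` by one signed gradient step that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp back
    # to the valid pixel range so the image remains well-formed.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

For a small enough `epsilon`, the perturbed image typically looks unchanged to a human while the network assigns it a different label.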

More ...