The code for my ICCV’21 paper relating adversarial robustness to flatness in the (robust) loss landscape is now available on GitHub. The repository includes implementations of various adversarial attacks, adversarial training variants and “attacks” on model weights in order to measure robust flatness.
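To give an idea of what these weight "attacks" look like, here is a minimal, simplified sketch of estimating (robust) flatness by randomly perturbing the model's weights and measuring the worst-case increase in loss. The relative perturbation scale `xi`, the number of trials, and the loss function are assumptions for illustration; the repository implements the exact procedure from the paper.

```python
import copy
import torch

def weight_perturbation_flatness(model, loss_fn, inputs, targets, xi=0.5, trials=10):
    """Rough flatness estimate: perturb each weight tensor by random noise of
    relative norm xi and track the worst-case increase in loss. This is a
    simplified sketch, not the paper's exact procedure."""
    with torch.no_grad():
        reference = loss_fn(model(inputs), targets).item()
        worst = reference
        for _ in range(trials):
            perturbed = copy.deepcopy(model)
            for param in perturbed.parameters():
                # random direction, rescaled relative to the weight norm
                noise = torch.randn_like(param)
                noise = xi * param.norm() * noise / (noise.norm() + 1e-12)
                param.add_(noise)
            worst = max(worst, loss_fn(perturbed(inputs), targets).item())
    # large values indicate a sharp (robust) loss landscape around the weights
    return worst - reference
```

For the robust variant, `loss_fn` would be evaluated on adversarial examples instead of clean inputs.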
The code for our paper on adversarial training on location-optimized adversarial patches is now available on GitHub. The repository includes a PyTorch implementation of our adversarial patch attack with location optimization as well as an adversarial training routine. The experiments on CIFAR10 and GTSRB presented in the paper can easily be reproduced.
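To illustrate the idea, the following is a simplified PyTorch sketch of a patch attack with (random) location optimization: the patch values are updated by signed gradient ascent, and a neighboring patch location is accepted whenever it increases the loss. The patch size, step size, and per-batch location search are assumptions made for brevity; the repository implements the actual attack used in the paper.

```python
import torch

def patch_attack(model, images, labels, patch_size=8, iterations=100, step=0.05):
    """Simplified adversarial patch attack with random location optimization."""
    b, c, h, w = images.shape
    patch = torch.rand(b, c, patch_size, patch_size, device=images.device)
    ys = torch.randint(0, h - patch_size, (b,))
    xs = torch.randint(0, w - patch_size, (b,))

    def apply_patch(patch, ys, xs):
        # paste each example's patch at its current location
        adv = images.clone()
        for i in range(b):
            adv[i, :, ys[i]:ys[i] + patch_size, xs[i]:xs[i] + patch_size] = patch[i]
        return adv

    for _ in range(iterations):
        patch.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(apply_patch(patch, ys, xs)), labels)
        grad, = torch.autograd.grad(loss, patch)
        with torch.no_grad():
            # signed gradient ascent on the patch values, kept in [0, 1]
            patch = (patch + step * grad.sign()).clamp(0, 1)
            # location optimization: try a random neighboring location and
            # keep it if the (batch) loss increases
            ys_new = (ys + torch.randint(-2, 3, (b,))).clamp(0, h - patch_size)
            xs_new = (xs + torch.randint(-2, 3, (b,))).clamp(0, w - patch_size)
            loss_old = torch.nn.functional.cross_entropy(model(apply_patch(patch, ys, xs)), labels)
            loss_new = torch.nn.functional.cross_entropy(model(apply_patch(patch, ys_new, xs_new)), labels)
            if loss_new > loss_old:
                ys, xs = ys_new, xs_new
    return apply_patch(patch, ys, xs)
```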
Adversarial training on location-optimized adversarial patches.
The code for my latest paper on confidence-calibrated adversarial training has been released on GitHub. The repository not only includes a PyTorch implementation of confidence-calibrated adversarial training, but also several white- and black-box attacks to generate adversarial examples and the proposed confidence-thresholded robust test error. Furthermore, these implementations are fully tested and allow reproducing the results from the paper. This article gives an overview of the repository and highlights its features and components.
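As a rough illustration of the evaluation metric, the snippet below sketches a confidence-thresholded robust test error on pre-computed softmax probabilities: a confidence threshold is picked on correctly classified clean examples, and only confident errors count. The array names and the exact normalization are my own simplifications; the repository contains the precise definition from the paper.

```python
import numpy as np

def confidence_thresholded_rte(clean_probs, adv_probs, labels, tpr=0.99):
    """Simplified confidence-thresholded robust test error.
    clean_probs/adv_probs are (N, K) softmax outputs on clean and adversarial
    examples; see the repository for the exact definition used in the paper."""
    clean_pred, clean_conf = clean_probs.argmax(1), clean_probs.max(1)
    adv_pred, adv_conf = adv_probs.argmax(1), adv_probs.max(1)
    correct = clean_pred == labels

    # pick the confidence threshold that retains tpr (e.g., 99%) of the
    # correctly classified clean examples
    tau = np.quantile(clean_conf[correct], 1.0 - tpr)

    # errors are clean misclassifications or misclassified adversarial
    # examples that are NOT rejected by the confidence threshold
    clean_errors = ~correct & (clean_conf >= tau)
    adv_errors = (adv_pred != labels) & (adv_conf >= tau)
    return float((clean_errors | adv_errors).mean())
```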
In deep learning and computer vision, data is often assumed to lie on a low-dimensional manifold embedded in the potentially high-dimensional input space, as is the case for images, for example. However, this manifold is usually not known, which hinders a deeper understanding of many phenomena in deep learning, such as adversarial examples. Based on my recent CVPR’19 paper, I want to present FONTS, an MNIST-like, synthetically created dataset with known manifold to study adversarial examples.
Obtaining high-quality visualizations of 3D data such as triangular meshes or occupancy grids, as needed for publications in computer graphics and computer vision, is difficult. In this article, I want to present a GitHub repository containing some utility scripts for paper-ready visualizations of meshes and occupancy grids using Blender and Python.
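As a rough idea of what such a script looks like, the following minimal example renders an OBJ mesh to a PNG in headless mode, assuming Blender's default scene and the legacy OBJ importer (Blender 2.x/3.x); the repository's utilities handle cameras, lighting, and materials in much more detail.

```python
# Minimal sketch: render an OBJ mesh to a PNG from the command line, e.g.
#   blender --background --python render.py
# Assumes Blender's default scene (with a camera and a light) and the
# legacy OBJ importer available in Blender 2.x/3.x.
import bpy

bpy.ops.import_scene.obj(filepath='mesh.obj')

scene = bpy.context.scene
scene.render.resolution_x = 1024
scene.render.resolution_y = 1024
scene.render.filepath = 'mesh.png'

# optionally move the default camera; properly framing and lighting the mesh
# is what the repository's utility scripts take care of
bpy.data.objects['Camera'].location = (2.0, -2.0, 1.5)

bpy.ops.render.render(write_still=True)
```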
Triangular meshes are commonly used to represent various shapes in computer graphics and computer vision. However, for many deep learning techniques, triangular meshes are not well suited. Therefore, meshes are commonly voxelized into occupancy grids or signed distance functions. This article presents a C++ tool for efficiently voxelizing (watertight) meshes.
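For illustration, the snippet below sketches occupancy-grid voxelization in Python using trimesh's inside/outside test; the grid resolution and the center-based occupancy test are simplifications, and a compiled C++ implementation like the one presented in the article will typically be much faster.

```python
import numpy as np
import trimesh

def voxelize_occupancy(mesh_path, resolution=32):
    """Naive occupancy-grid voxelization of a watertight mesh; a voxel is
    occupied if its center lies inside the mesh. Illustrative sketch only."""
    mesh = trimesh.load(mesh_path, force='mesh')

    # regular grid of voxel centers inside the mesh's bounding box
    lower, upper = mesh.bounds
    steps = [np.linspace(lower[d], upper[d], resolution) for d in range(3)]
    xx, yy, zz = np.meshgrid(*steps, indexing='ij')
    centers = np.stack([xx, yy, zz], axis=-1).reshape(-1, 3)

    # inside/outside test is only well-defined for watertight meshes
    occupancy = mesh.contains(centers).reshape(resolution, resolution, resolution)
    return occupancy
```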
Automatically obtaining high-quality watertight meshes in order to derive well-defined occupancy grids or signed distance functions is a common problem in 3D vision. In this article, I present a mesh fusion approach for obtaining watertight meshes. In combination with a standard mesh simplification algorithm, this approach produces high-quality, but lightweight, watertight meshes.
We are releasing the code and data for our arXiv preprint on weakly-supervised 3D shape completion, a follow-up to our earlier CVPR’18 paper. The article provides links to the GitHub repositories and data downloads as well as detailed descriptions. It also highlights the differences between the two papers.
Learning 3D shape completion under weak supervision; on ShapeNet, ModelNet, KITTI, and Kinect data; published at CVPR and on arXiv.