Taking the adversarial training from this previous article as a baseline, this article introduces a new, confidence-calibrated variant of adversarial training that addresses two significant flaws: First, when trained on L∞ adversarial examples, adversarially trained models are not robust against L2 attacks. Second, adversarial training incurs a significant increase in (clean) test error. Confidence-calibrated adversarial training addresses both problems by encouraging low confidence on adversarial examples, which can subsequently be rejected by thresholding the confidence.
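As a rough illustration of the idea, here is a minimal sketch of how encouraging low confidence on adversarial examples could look as a training loss. The function name `ccat_loss` and the plain uniform target on adversarial examples are assumptions for illustration only; the article's actual formulation (e.g., the gradual transition between one-hot and uniform targets) is more involved.

```python
# Minimal sketch (not the article's exact implementation): standard
# cross-entropy on clean examples, plus a term that pushes the predictive
# distribution on adversarial examples towards the uniform distribution,
# so that confident adversarial inputs can later be rejected by
# thresholding the confidence.
import torch
import torch.nn.functional as F


def ccat_loss(clean_logits, adv_logits, targets, num_classes=10):
    # cross-entropy on clean examples preserves clean accuracy
    clean_loss = F.cross_entropy(clean_logits, targets)
    # encourage low confidence (near-uniform predictions) on adversarial examples
    uniform = torch.full_like(adv_logits, 1.0 / num_classes)
    adv_loss = F.kl_div(F.log_softmax(adv_logits, dim=1), uniform, reduction='batchmean')
    return clean_loss + adv_loss
```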
Top-tier conferences in machine learning and computer vision generally require state-of-the-art results as baselines to assess the novelty and significance of a paper. Unfortunately, obtaining state-of-the-art results on many benchmarks can be tricky and extremely time-consuming, even for rather simple benchmarks such as CIFAR-10. In this article, I want to share PyTorch code for obtaining 2.56% test error on CIFAR-10 using a Wide ResNet (WRN-28-10) with AutoAugment and Cutout for data augmentation.
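For context, the following is a minimal sketch of the kind of augmentation pipeline this refers to, using torchvision's built-in AutoAugment policy for CIFAR-10 together with a simple hand-rolled Cutout transform. The patch size and the exact composition of transforms are assumptions for illustration, not necessarily the settings used to reach 2.56% test error.

```python
# Sketch of a CIFAR-10 training pipeline with AutoAugment and Cutout
# (hyper-parameters are assumptions, not the article's exact configuration).
import torch
import torchvision
from torchvision import transforms


class Cutout:
    """Masks out one random square patch of a C x H x W tensor image."""

    def __init__(self, size=16):
        self.size = size

    def __call__(self, img):
        _, h, w = img.shape
        y = torch.randint(h, (1,)).item()
        x = torch.randint(w, (1,)).item()
        y1, y2 = max(0, y - self.size // 2), min(h, y + self.size // 2)
        x1, x2 = max(0, x - self.size // 2), min(w, x + self.size // 2)
        img[:, y1:y2, x1:x2] = 0.0
        return img


train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.AutoAugment(transforms.AutoAugmentPolicy.CIFAR10),
    transforms.ToTensor(),
    Cutout(size=16),
])

train_set = torchvision.datasets.CIFAR10(
    root='./data', train=True, download=True, transform=train_transform)
```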
In March this year I finally submitted my PhD thesis and successfully defended it in July. Now, more than 6 months later, my thesis is finally available in the university’s library. During my PhD, I worked on various topics surrounding robustness and uncertainty in deep learning, including adversarial robustness, robustness to bit errors, out-of-distribution detection and conformal prediction. In this article, I want to share my thesis and give an overview of its contents.
Several mathematical image processing exercises implemented in C++ and MATLAB.
Tutorials for (deep convolutional) neural networks.
Torch/CUDA implementation of batch normalization for OctNets.
The Berkeley Segmentation Benchmark extended with superpixel metrics.
blenderpy Mesh/Voxel Visualization. Figure 1: visualization examples of an occupancy grid (left) and a mesh (right) of a chair; the right visualization also shows a point cloud observation (in red). Blender is an open-source “3D creation suite”, a tool for creating and manipulating 3D shapes and scenes. While I […]
Tools to pre-process the NYU Depth v2 segmentations for evaluation.
PhD thesis on uncertainty estimation and (adversarial) robustness in deep learning.