IAM

TAG » PYTHON

JUNE 2023

PROJECT

Report of the 2020 Max Planck PhDNet survey results.

More ...

ARTICLE

47.9% Robust Test Error on CIFAR10 with Adversarial Training and PyTorch

Knowing how to compute adversarial examples from the previous article, it would be ideal to train models for which such adversarial examples do not exist. This is the goal of adversarially robust training procedures. In this article, I want to describe a particularly popular approach called adversarial training: training on adversarial examples computed on the fly during training. I will also discuss a PyTorch implementation that obtains 47.9% robust test error (that is, 52.1% robust accuracy) on CIFAR-10 using a WRN-28-10 architecture.

More ...
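The core idea of adversarial training can be sketched in a few lines of PyTorch. The following is a minimal illustration, not the article's implementation: for brevity it uses a single-step sign-gradient attack as the inner maximizer, whereas the article uses multi-step PGD; the function names and the epsilon of 8/255 are my choices.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8/255):
    """Single-step L-infinity attack (a simplification; a stronger
    multi-step PGD attack is used in the article)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    # Perturb in the direction of the loss gradient, stay in [0, 1].
    return (x + epsilon * grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=8/255):
    """One step of adversarial training: compute adversarial examples
    on the fly, then take a gradient step on them."""
    x_adv = fgsm(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the loop above is applied per mini-batch; robust training with a WRN-28-10 additionally needs the usual schedule, weight decay and data augmentation.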

ARTICLE

Some Research Ideas for Conformal Training

With our paper on conformal training, we showed how conformal prediction can be integrated into end-to-end training pipelines. There are many interesting directions for improving and building upon conformal training. Unfortunately, I simply do not have the bandwidth to pursue all of them. So, in this article, I want to share some of these research ideas so that others can pick them up.

More ...
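For readers new to the topic, the baseline that conformal training builds on is plain split conformal prediction. The sketch below shows that baseline only, not the end-to-end training from the paper; the conformity score (one minus the true-class probability) and all names are my choices for illustration.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Calibrate a threshold on held-out softmax scores so that
    prediction sets cover the true label with probability >= 1 - alpha."""
    n = len(cal_labels)
    # Conformity score: one minus the softmax probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, level, method="higher")

def prediction_set(probs, q):
    """All classes whose conformity score falls below the threshold."""
    return np.where(1.0 - probs <= q)[0]
```

Conformal training, in contrast, differentiates through a soft version of this thresholding during training; the research ideas in the article start from that construction.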

ARTICLE

Lp Adversarial Examples using Projected Gradient Descent in PyTorch

Adversarial examples, slightly perturbed images causing misclassification, have received considerable attention over the last few years. While many different adversarial attacks have been proposed, projected gradient descent (PGD) and its variants are widely used for reliable evaluation and for adversarial training. In this article, I want to present my implementation of PGD to generate L∞, L2, L1 and L0 adversarial examples. Besides using several iterations and multiple attempts, the attack returns the worst-case adversarial example across all iterations, and momentum as well as backtracking further strengthen it.

More ...
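The two ingredients that distinguish PGD from plain gradient ascent are the projection onto the Lp ball and, in the variant described above, keeping the worst-case example across iterations. A minimal sketch of both, under my own naming and covering only the L∞ and L2 cases (L1 and L0 need specialized projections, and momentum and backtracking are omitted):

```python
import torch
import torch.nn.functional as F

def project(delta, epsilon, p):
    """Project the perturbation onto the Lp ball of radius epsilon."""
    if p == float("inf"):
        return delta.clamp(-epsilon, epsilon)
    if p == 2:
        flat = delta.flatten(1)
        norms = flat.norm(p=2, dim=1, keepdim=True).clamp_min(1e-12)
        return (flat * (epsilon / norms).clamp(max=1.0)).view_as(delta)
    raise NotImplementedError("L1/L0 projections are more involved")

def pgd(model, x, y, epsilon, p, alpha, steps):
    """PGD that tracks the worst-case (highest-loss) perturbation."""
    delta = torch.zeros_like(x)
    best_delta = delta.clone()
    best_loss = torch.full((x.size(0),), -1.0)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y, reduction="none")
        grad = torch.autograd.grad(loss.sum(), delta)[0]
        # Remember the per-example worst case across all iterations.
        improved = loss.detach() > best_loss
        best_delta[improved] = delta.detach()[improved]
        best_loss[improved] = loss.detach()[improved]
        if p == float("inf"):
            step = alpha * grad.sign()
        else:
            g = grad.flatten(1)
            step = alpha * (g / g.norm(dim=1, keepdim=True).clamp_min(1e-12)).view_as(grad)
        delta = project((delta + step).detach(), epsilon, p)
        # Keep the perturbed image inside the valid [0, 1] box.
        delta = (x + delta).clamp(0, 1) - x
    return (x + best_delta).detach()
```

Running the attack with multiple random restarts and picking the highest-loss result per example corresponds to the "multiple attempts" mentioned above.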

ARTICLE

2.56% Test Error on CIFAR-10 using PyTorch and AutoAugment

Top-tier conferences in machine learning or computer vision generally require state-of-the-art results as a baseline to assess the novelty and significance of a paper. Unfortunately, getting state-of-the-art results on many benchmarks can be tricky and extremely time-consuming, even for rather simple benchmarks such as CIFAR-10. In this article, I want to share PyTorch code for obtaining 2.56% test error on CIFAR-10 using a Wide ResNet (WRN-28-10) with AutoAugment and Cutout for data augmentation.

More ...

ARTICLE

Loading and Saving PyTorch Models Without Knowing the Architecture in Advance

PyTorch is a great tool for deep learning research. However, when running large-scale experiments with various architectures, I keep running into the same problem: how can I run the same experiments, evaluations or visualizations on models without knowing their architecture in advance? In this article, I want to present a simple approach for loading models without having to initialize the correct architecture beforehand. The code for this article is available on GitHub.

More ...
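One way to realize this, sketched below under my own naming and not necessarily matching the article's code: store the model class and its constructor arguments alongside the state dict, so the checkpoint alone is enough to rebuild the architecture.

```python
import torch

def save_model(model, path, **init_kwargs):
    """Save the state dict plus enough metadata to rebuild the model:
    the class object (pickled by reference) and its constructor kwargs."""
    torch.save({
        "class": model.__class__,
        "kwargs": init_kwargs,
        "state_dict": model.state_dict(),
    }, path)

def load_model(path):
    """Rebuild the architecture from the checkpoint, then load the weights."""
    # weights_only=False because the checkpoint contains a class object.
    checkpoint = torch.load(path, weights_only=False)
    model = checkpoint["class"](**checkpoint["kwargs"])
    model.load_state_dict(checkpoint["state_dict"])
    return model
```

The caveat of this scheme is that the class must still be importable under the same module path at load time; pickling classes by reference is why.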

ARTICLE

Monitoring PyTorch Training using Tensorboard

Tensorboard is a great tool for monitoring and debugging deep neural network training. Originally developed for TensorFlow, Tensorboard is now also supported by other libraries such as PyTorch. While the PyTorch integration was shaky in the beginning, it has improved considerably with recent releases. In this article, I want to discuss how to use Tensorboard to monitor training with PyTorch. The article’s code is available on GitHub.

More ...
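The basic usage pattern is small enough to sketch here: create a `SummaryWriter`, log scalars as training progresses, and inspect the run with `tensorboard --logdir runs`. The function below is a minimal illustration; the tag names and `log_dir` layout are my choices, and a real setup would also log validation metrics, histograms or images.

```python
import torch
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter

def train_with_logging(model, optimizer, loader, epochs, log_dir="runs/exp"):
    """Minimal training loop that logs the loss to Tensorboard."""
    writer = SummaryWriter(log_dir=log_dir)
    step = 0
    for epoch in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
            # One scalar per step; view with `tensorboard --logdir runs`.
            writer.add_scalar("train/loss", loss.item(), step)
            step += 1
    writer.close()
```

Each call to `SummaryWriter` creates an event file in `log_dir`; using one subdirectory per experiment keeps runs comparable in the Tensorboard UI.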

ARTICLE

Thoroughly Spell-Checking a PhD Thesis

I have always used aspell to spell-check papers. Due to time pressure, I never bothered setting up a dictionary of words that aspell does not recognize. As papers usually involve only a few LaTeX files, this was an acceptable process. For my PhD thesis, however, I needed a more automatic and thorough process: many more files are involved, and I had to spell-check multiple times, several weeks or months apart, throughout the writing. In this article, I want to share a semi-automatic but thorough process based on aspell and TeXtidote that worked well for me.

More ...

ARTICLE

Python Scripts to Prepare ArXiv Submissions

Generally, papers are written to be published at conferences or in journals. While some journals care about the LaTeX source used to compile the submitted papers, most venues just expect compiled PDFs. ArXiv, however, always requires the full LaTeX source, which is compiled on the ArXiv servers. As the LaTeX source of every ArXiv paper can be downloaded, preparing a submission usually involves removing all comments, removing unused figures/files, and “flattening” the directory structure, since ArXiv does not handle subdirectories well. In this article, I want to share two simple scripts that take care of the latter two problems: removing unused files and flattening.

More ...
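The core of both tasks fits in a few lines of Python. The sketch below is my own illustration, not the article's scripts: it finds files referenced via `\input`, `\include` and `\includegraphics` (so everything else is a candidate for removal) and copies all files into one flat directory, joining path components with underscores. A real script would also handle commented-out lines, missing extensions, and would rewrite the paths inside the .tex files to match the flattened names.

```python
import re
import shutil
from pathlib import Path

def referenced_files(tex_root):
    """Collect file names referenced from the .tex sources."""
    pattern = re.compile(
        r"\\(?:input|include|includegraphics)(?:\[[^\]]*\])?\{([^}]+)\}")
    refs = set()
    for tex in Path(tex_root).rglob("*.tex"):
        for match in pattern.finditer(tex.read_text()):
            refs.add(Path(match.group(1)).name)
    return refs

def flatten(src, dst):
    """Copy every file into a single directory, replacing path
    separators with underscores so ArXiv sees no subdirectories."""
    src, dst = Path(src), Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.rglob("*"):
        if f.is_file():
            flat_name = "_".join(f.relative_to(src).parts)
            shutil.copy2(f, dst / flat_name)
```

Comparing `referenced_files(root)` against the files actually present then yields the unused ones that can be deleted before uploading.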

NOVEMBER 2022

PROJECT

An example of a custom TensorFlow operation implemented in C++.

More ...