Code Released: Conformal Training

The code for our ICLR'22 paper on learning optimal conformal classifiers is now available on GitHub. The repository includes our implementation of conformal training as well as relevant baselines such as coverage training and several conformal predictors for evaluation. Furthermore, it allows reproducing the majority of the experiments from the paper.


Conformal training allows training models explicitly for split conformal prediction (CP). Usually, split CP is used as a separate calibration step - a wrapper around an already trained model - to predict confidence sets of classes instead of making point predictions. CP associates these confidence sets with a so-called coverage guarantee, stating that the true class is included with high probability. However, applying CP only after training prevents the underlying model from adapting to the prediction of confidence sets. Conformal training instead differentiates through the conformal predictor during training, so that model and conformal predictor are trained end-to-end. Specifically, it "simulates" conformalization on mini-batches during training. Compared to standard training, conformal training reduces the average confidence set size (inefficiency) of conformal predictors applied after training. Moreover, it can "shape" the confidence sets predicted at test time, which is difficult with standard CP. We refer to the paper for more background on conformal prediction and a detailed description of conformal training.
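To make the wrapper idea above concrete, here is a minimal NumPy sketch of split CP with a simple threshold conformal predictor: calibrate a threshold on held-out data, then predict all classes whose probability reaches it. The function names and toy data are illustrative, not taken from the repository (which implements everything in JAX), and the soft size loss at the end only gestures at the conformal training idea - the paper's actual end-to-end construction uses smooth sorting/quantile operations.

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split CP calibration step: pick a threshold tau so that sets
    {k : p_k >= tau} contain the true class with probability ~1 - alpha."""
    n = len(cal_labels)
    # Conformity score: predicted probability of the true class.
    scores = cal_probs[np.arange(n), cal_labels]
    # Conservative empirical quantile level used in split CP.
    q = np.floor(alpha * (n + 1)) / n
    return np.quantile(scores, q, method="lower")

def predict_sets(test_probs, tau):
    """Confidence sets: all classes whose probability reaches the threshold."""
    return test_probs >= tau

def smooth_size_loss(probs, tau, temperature=0.1, target_size=1.0):
    """Illustrative differentiable surrogate in the spirit of conformal
    training: soft set membership via a sigmoid, penalizing sets larger
    than target_size (the paper's smooth construction is more involved)."""
    soft_membership = 1.0 / (1.0 + np.exp(-(probs - tau) / temperature))
    sizes = soft_membership.sum(axis=1)
    return np.maximum(sizes - target_size, 0.0).mean()

# Toy 3-class problem: 1000 calibration + 1000 test examples.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=2000)
logits = rng.normal(size=(2000, 3))
logits[np.arange(2000), labels] += 2.0  # make the true class likely
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

tau = calibrate_threshold(probs[:1000], labels[:1000], alpha=0.1)
sets = predict_sets(probs[1000:], tau)
# Empirical coverage and inefficiency (average confidence set size).
coverage = sets[np.arange(1000), labels[1000:]].mean()
inefficiency = sets.sum(axis=1).mean()
loss = smooth_size_loss(probs[1000:], tau)
```

On exchangeable data the empirical coverage lands near the nominal 1 - alpha regardless of how good the model is; what conformal training changes is the inefficiency, by letting the model see a (smooth) version of this thresholding during training.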


The following repository contains our implementation of conformal training and can be used to reproduce the majority of the experiments from the paper:

Conformal Training on GitHub

The corresponding paper is available on arXiv; also check out DeepMind's project page as well as my project page:

Paper on arXiv

@inproceedings{stutz2022conformal,
    author = {David Stutz and Krishnamurthy Dvijotham and Ali Taylan Cemgil and Arnaud Doucet},
    title = {Learning Optimal Conformal Classifiers},
    booktitle = {Proc. of the International Conference on Learning Representations (ICLR)},
    year = {2022},
}
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.