Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, Balaji Lakshminarayanan. Likelihood Ratios for Out-of-Distribution Detection. ICML Workshop, 2019.

Ren et al. propose a simple likelihood ratio test for out-of-distribution detection. The idea is based on the assumption that input samples consist of background information and semantic, category-specific information. Thus, the likelihood $p(x)$ can be factorized as $p(x_B)\,p(x_S)$ with background features $x_B$ and semantic features $x_S$. Then, given an in-distribution model $p_\theta(x)$ and a background model $p_{\theta_0}(x)$, the likelihood ratio test considers

$LLR(x) = \log \frac{p_\theta(x)}{p_{\theta_0}(x)} = \log p_\theta(x) - \log p_{\theta_0}(x)$.

Assuming that both models capture the background information equally well ($p_\theta(x_B) \approx p_{\theta_0}(x_B)$) and substituting the factorization $p(x) = p(x_B)\,p(x_S)$, the background terms cancel, leading to a simple test:

$LLR(x) \approx \log p_\theta(x_S) - \log p_{\theta_0}(x_S)$.
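The score itself is just the difference of the two models' log-likelihoods, thresholded to decide in- vs. out-of-distribution. A minimal sketch, where two toy Gaussian densities stand in for the trained PixelCNN++ models (the model choice and threshold here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def gaussian_log_prob(x, mean, std):
    # Log-density of an isotropic Gaussian, summed over input dimensions.
    return np.sum(-0.5 * np.log(2 * np.pi * std**2)
                  - 0.5 * ((x - mean) / std)**2, axis=-1)

def llr_score(x, log_p_theta, log_p_theta0):
    # LLR(x) = log p_theta(x) - log p_theta0(x); larger means more in-distribution.
    return log_p_theta(x) - log_p_theta0(x)

# Toy stand-ins: a sharp in-distribution density and a broader "background" density.
log_p_in = lambda x: gaussian_log_prob(x, mean=0.0, std=1.0)
log_p_bg = lambda x: gaussian_log_prob(x, mean=0.0, std=3.0)

x_in = np.zeros((1, 10))        # sample near the in-distribution mode
x_ood = np.full((1, 10), 5.0)   # sample far from the in-distribution mode

score_in = llr_score(x_in, log_p_in, log_p_bg)
score_ood = llr_score(x_ood, log_p_in, log_p_bg)
```

In this toy setting `score_in` comes out clearly larger than `score_ood`, so thresholding the ratio separates the two samples even though the raw likelihood $p_\theta(x)$ alone can be misleading.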

In practice, both models are PixelCNN++ models; the background model is trained on inputs perturbed with random noise, which destroys semantic content while preserving background statistics (see the paper for details). Unfortunately, it is not entirely clear how the semantic features $x_S$ are determined when computing the likelihood ratio.
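The perturbation used to train the background model replaces each pixel, with some rate $\mu$, by a value drawn uniformly from the pixel alphabet. A hedged sketch of this corruption (the function name and the default rate are my own choices; the 256-value alphabet matches 8-bit images):

```python
import numpy as np

def perturb_input(x, mu=0.1, num_values=256, rng=None):
    # With probability mu, replace each pixel independently by a value
    # drawn uniformly from {0, ..., num_values - 1}; otherwise keep it.
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) < mu
    noise = rng.integers(0, num_values, size=x.shape)
    return np.where(mask, noise, x)

image = np.zeros((28, 28), dtype=np.int64)  # toy "image"
perturbed = perturb_input(image, mu=0.2, rng=np.random.default_rng(0))
```

Training the background model on such perturbed inputs is what encourages it to capture only the background statistics, so that the background terms approximately cancel in the ratio.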

What is your opinion on the summarized work? Do you know related work that is of interest? Let me know your thoughts in the comments below!