12th April 2017

READING

I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. C. Courville, Y. Bengio. Maxout Networks. ICML, 2013.

Goodfellow et al. propose maxout units as a better alternative to rectified linear units (ReLUs) when training with dropout. A maxout unit essentially performs max pooling across channels:

$h_i(x) = \max_{j \in [1, k]} z_{ij}$

$z_{ij} = x^T W_{\cdot ij} + b_{ij}$

with $W \in \mathbb{R}^{d \times m \times k}$ and $b \in \mathbb{R}^{m \times k}$. When training with dropout, dropout is applied to the input prior to the multiplication by the weights. They also prove that maxout networks are universal approximators, and show experimentally on several datasets that maxout units benefit both performance and training.
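
For illustration, here is a minimal NumPy sketch of a single maxout layer following the definitions above, together with the dropout-before-weights variant; the function names, shapes, and the dropout rate $p$ are my own choices, not from the paper:

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: h_i(x) = max_{j in [1, k]} z_ij with z_ij = x^T W_.ij + b_ij.

    x: input of shape (d,)
    W: weights of shape (d, m, k)
    b: biases of shape (m, k)
    Returns activations of shape (m,), one per maxout unit.
    """
    z = np.einsum('d,dmk->mk', x, W) + b  # the k affine pieces per unit
    return z.max(axis=1)                  # max pooling across the k pieces

def maxout_dropout(x, W, b, p=0.5, rng=np.random.default_rng(0)):
    """Training-time variant: dropout is applied before multiplying by W."""
    mask = (rng.random(x.shape) > p).astype(x.dtype)
    return maxout(x * mask, W, b)

# Example: d = 4 inputs, m = 3 maxout units, k = 2 pieces each.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W = rng.standard_normal((4, 3, 2))
b = rng.standard_normal((3, 2))
h = maxout(x, W, b)  # shape (3,)
```

Note that with $k = 2$ pieces, a maxout unit can recover a ReLU by fixing one piece's weights and bias to zero, which is one way to see that maxout generalizes rectified linear units.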

What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below.