
MARCH 2017

READING

R. Socher, B. Huval, B. P. Bath, C. D. Manning, A. Y. Ng. Convolutional-Recursive Deep Learning for 3D Object Classification. NIPS, 2012.

Socher et al. use an ensemble of random (not trained) recursive neural networks for 3D object classification. Their work is often cited by recent approaches [1,2,3] to 3D shape recognition as one of the first to apply ideas and methods from convolutional neural networks to 3D object recognition. Note, however, that in contrast to the volumetric approaches, Socher et al. treat the depth of RGB-D images as an additional channel. Therefore, the approach might be better described as 2.5D object classification.

The overall architecture is summarized in Figure 1: $K$ convolutional filters are learned in an unsupervised manner using the approach presented in [4]. To this end, patches are extracted from the RGB and depth channels, mean-subtracted, normalized by the standard deviation and finally whitened. Filters are then learned using $k$-means. On the pooled features obtained from these filters, Socher et al. apply several random recursive neural networks, each applying the same weight matrix three times to obtain higher-level features. Finally, a softmax classifier is learned on the concatenated features.
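
To make the unsupervised filter learning concrete, here is a minimal NumPy sketch following the recipe of Coates et al. [4]: normalize and whiten random patches, then cluster them with (spherical) $k$-means. The patch dimensionality, the number of filters, the whitening epsilon and the spherical $k$-means variant are assumptions made for this sketch, not necessarily the authors' exact settings.

```python
import numpy as np

def learn_filters(patches, k=128, n_iter=10, eps_zca=0.1):
    """Learn k convolutional filters from random RGB or depth patches by
    normalization, ZCA whitening and (spherical) k-means clustering,
    in the spirit of Coates et al. [4].

    patches: (N, D) array of flattened patches.
    Returns: (k, D) array of filters (the cluster centroids).
    """
    # Per-patch normalization: subtract the mean, divide by the std.
    patches = patches - patches.mean(axis=1, keepdims=True)
    patches = patches / (patches.std(axis=1, keepdims=True) + 1e-8)

    # ZCA whitening of the normalized patches.
    cov = np.cov(patches, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    whiten = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps_zca)) @ eigvec.T
    patches = patches @ whiten

    # Spherical k-means: the centroids become the learned filters.
    rng = np.random.default_rng(0)
    centroids = patches[rng.choice(len(patches), size=k, replace=False)]
    for _ in range(n_iter):
        centroids /= np.linalg.norm(centroids, axis=1, keepdims=True) + 1e-8
        assignment = np.argmax(patches @ centroids.T, axis=1)
        for j in range(k):
            members = patches[assignment == j]
            if len(members) > 0:
                centroids[j] = members.mean(axis=0)
    return centroids
```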

Figure 1: High-level view of the proposed approach. Both the RGB and the depth channels are separately convolved with the $K$ learned filters. The responses are subsequently pooled. These pooled features are merged and processed by recursive neural networks to learn higher-level features. The recursive neural networks consist of random weights that are not learned; only the softmax classifier on top of these features is learned by backpropagation.
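
The recursive step itself is simple to sketch: a single fixed random weight matrix merges non-overlapping blocks of child vectors into a parent vector, level by level, until one feature vector per recursive neural network remains. In the NumPy sketch below, the $27 \times 27$ grid, the block size of $3$, the number of random networks and the weight scaling are illustrative assumptions and not necessarily the paper's exact configuration.

```python
import numpy as np

def random_rnn_features(X, W, block=3):
    """Compute the feature vector of one random recursive neural network:
    the same fixed random weight matrix W merges non-overlapping
    block x block groups of child vectors into a parent vector (with a
    tanh nonlinearity), level by level, until a single vector remains.

    X: (K, S, S) pooled feature maps (K channels on an S x S grid).
    W: (K, block * block * K) random weight matrix, shared across merges.
    """
    K, S, _ = X.shape
    while S > 1:
        S_new = S // block
        parents = np.empty((K, S_new, S_new))
        for i in range(S_new):
            for j in range(S_new):
                # Stack the block x block child vectors and merge them.
                children = X[:, i * block:(i + 1) * block,
                                j * block:(j + 1) * block]
                parents[:, i, j] = np.tanh(W @ children.reshape(-1))
        X, S = parents, S_new
    return X.reshape(K)

# Usage: an ensemble of random RNNs, each with its own fixed random weights;
# their outputs are concatenated and fed to the softmax classifier.
K, S, n_rnns = 128, 27, 64          # illustrative sizes, not from the paper
pooled = np.random.randn(K, S, S)   # placeholder for the pooled responses
features = np.concatenate([
    random_rnn_features(pooled, 0.1 * np.random.randn(K, 9 * K))
    for _ in range(n_rnns)
])
```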

Interestingly, Socher et al. keep the training effort to a minimum and resort to unsupervised training for the low-level features. Still, they are able to present performance superior to other approaches, which are mostly based on hand-crafted features combined with kernel machines or random forests. They also evaluate several of their design decisions: the ensemble of random recursive neural networks outperforms a single, trained recursive neural network, and the $k$-means learned filters outperform those of a trained convolutional neural network.

  • [1] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, L. J. Guibas. Volumetric and Multi-view CNNs for Object Classification on 3D Data. CVPR, 2016.
  • [2] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. CVPR, 2015.
  • [3] D. Maturana, S. Scherer. 3D Convolutional Neural Networks for landing zone detection from LiDAR. ICRA, 2015.
  • [4] A. Coates, A. Y. Ng, H. Lee. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. AISTATS, 2011.