
M. Savva, F. Yu, Hao Su, M. Aono, B. Chen, D. Cohen-Or, W. Deng, Hang Su, S. Bai, X. Bai, N. Fish, J. Han, E. Kalogerakis, E. G. Learned-Miller, Y. Li, M. Liao, S. Maji, A. Tatsuma, Y. Wang, N. Zhang and Z. Zhou. SHREC'16 Track: Large-Scale 3D Shape Retrieval from ShapeNet Core55. Eurographics Workshop on 3D Object Retrieval (2016).

Savva et al. evaluate several submitted models on the ShapeNet Core55 dataset as part of the SHREC 2016 3D Shape Retrieval Contest. Interestingly, most of the submitted approaches use some sort of multi-view CNN: the 3D models are rendered from multiple viewpoints, and the resulting images are fed through a CNN to compute features or shape classes. An example is the multi-view CNN approach by Su et al. [1]. Notably, no “real” 3D CNN, i.e. one operating directly on volumetric data, is among the submitted approaches ...
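The multi-view idea is easy to sketch in code: a shared 2D CNN computes a feature vector per rendered view, the per-view features are merged by an element-wise maximum (“view pooling”), and a classifier operates on the pooled feature. The PyTorch snippet below is only a minimal sketch of this idea, not the architecture of any of the submissions; the small convolutional stack, the number of views and the input resolution are placeholders.

```python
import torch
import torch.nn as nn


class MultiViewCNN(nn.Module):
    """Minimal sketch of a multi-view CNN in the spirit of Su et al. [1]:
    a shared 2D CNN per view, element-wise max over views ("view pooling"),
    followed by a linear classifier."""

    def __init__(self, num_classes=55, num_views=12):
        super().__init__()
        self.num_views = num_views
        # Shared per-view feature extractor (placeholder for a pre-trained
        # network such as VGG used in the original paper).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, views):
        # views: (batch, num_views, 3, H, W) -- renderings of each shape.
        b, v, c, h, w = views.shape
        x = self.features(views.view(b * v, c, h, w)).view(b, v, -1)
        x = x.max(dim=1).values  # view pooling: element-wise max over views
        return self.classifier(x)


# Usage: 12 renderings each of 4 shapes at 64x64 resolution.
model = MultiViewCNN()
logits = model(torch.randn(4, 12, 3, 64, 64))  # -> (4, 55)
```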

 
  • [1] Hang Su, Subhransu Maji, Evangelos Kalogerakis, Erik G. Learned-Miller. Multi-view Convolutional Neural Networks for 3D Shape Recognition. ICCV, 2015.
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.