IAM

DAVID STUTZ

05th MARCH 2018

READING

Saurabh Gupta, Pablo Andrés Arbeláez, Ross B. Girshick, Jitendra Malik. Aligning 3D models to RGB-D images of cluttered scenes. CVPR, 2015.

Based on the object detections and instance segmentations from [13], Gupta et al. propose a convolutional neural network for coarse pose prediction, which serves as the basis for fitting CAD models to improve scene understanding. The approach, summarized in Figure 1, has three components: an instance segmenter taken from [13], a convolutional neural network trained for pose estimation, and an adapted Iterative Closest Point (ICP) algorithm that fits a set of CAD models.
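The three stages can be sketched as the following pipeline; all component functions here are hypothetical stubs standing in for the parts described above, not the authors' implementation:

```python
# Hypothetical sketch of the three-stage pipeline; the component
# functions are placeholder stubs, not the authors' code.

def segment_instances(rgb, depth):
    # Stand-in for the instance segmentation of [13].
    return [{"mask": None, "label": "chair"}]

def predict_pose(instance):
    # Stand-in for the CNN's binned coarse pose estimate.
    return 0  # pose bin index

def fit_model(instance, coarse_pose):
    # Stand-in for the scale search plus ICP model fitting.
    return ("cad_model_0", 1.0, coarse_pose, (0.0, 0.0, 0.0))

def fit_cad_models(rgb, depth):
    """Run segmentation, coarse pose prediction and CAD fitting in sequence."""
    results = []
    for instance in segment_instances(rgb, depth):
        coarse_pose = predict_pose(instance)
        results.append(fit_model(instance, coarse_pose))
    return results
```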

Figure 1: Illustration of the proposed approach where the first part (instance segmentation) is taken from [13].

The convolutional neural network consists of three convolutional layers followed by pooling, dropout, local response normalization and ReLU activations. It takes as input the 3-channel surface normals (encoding the angle between the normal and each of the three axes) and outputs a binned pose estimate. The model is trained on synthetic data from ModelNet [40]. Bounding boxes with an overlap of 0.7 are sampled randomly and their content is warped to generate positive samples.
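The input encoding and output binning could look as follows; the bin count and angle conventions here are my assumptions, not the paper's exact values:

```python
import numpy as np

# Sketch of the CNN's input encoding and output discretization:
# each unit surface normal becomes three channels of angles with the
# coordinate axes, and the pose is predicted as a discrete bin.
# Bin count and angle conventions are assumptions, not the paper's.

def encode_normals(normals):
    """Map unit normals of shape (H, W, 3) to 3 channels of angles (radians).

    The angle with axis i is arccos(<n, e_i>); values are clipped
    for numerical safety before taking the arccos.
    """
    return np.arccos(np.clip(normals, -1.0, 1.0))

def bin_pose(theta, num_bins=8):
    """Assign a rotation angle in radians to one of num_bins equal bins."""
    theta = theta % (2.0 * np.pi)
    return int(theta // (2.0 * np.pi / num_bins))
```

A normal pointing straight along the z-axis, for example, yields angles of 90 degrees with x and y and 0 degrees with z.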

Given the instance segmentation and a coarse pose estimate, a search over different models and scales is used to infer the optimal scale, rotation and translation for each model. As a cue for the scale, they use the area of the top view of the bounding box; during the search, models are scaled to fit this area. For each model at the selected scale, ICP is used to solve for rotation and translation. The translation is initialized by assuming the object to stand on the floor and using the mean of the instance segmentation; the rotation is initialized from the coarse pose estimate. As multiple models are fitted, a linear classifier is learned to select the best candidate.
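The scale and translation initialization could be sketched as follows; the axis conventions (y up, floor at y = 0) and the isotropic scaling are assumptions on my part:

```python
import numpy as np

# Sketch of the initialization for CAD model fitting described above.
# Axis conventions (y up, floor at y = 0) and isotropic scaling are
# assumptions, not necessarily the authors' exact choices.

def init_scale(model_points, target_topview_area):
    """Scale the model so its top-view (x-z) bounding-box area matches
    the area estimated from the detection's bounding box."""
    extents = model_points.max(axis=0) - model_points.min(axis=0)
    model_area = extents[0] * extents[2]  # x extent times z extent, seen from above
    return np.sqrt(target_topview_area / model_area)

def init_translation(model_points, segment_points, floor_y=0.0):
    """Place the model at the segment's mean in x/z and rest it on the floor."""
    t = segment_points.mean(axis=0) - model_points.mean(axis=0)
    t[1] = floor_y - model_points[:, 1].min()  # model base touches the floor
    return t
```

From this initialization, together with a rotation taken from the coarse pose bin, ICP would then refine rotation and translation per candidate model.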

Figure 2: Qualitative results on the NYU Depth V2 dataset [43].

Qualitative results on the NYU Depth V2 dataset [43] are shown in Figure 2.

  • [13] S. Gupta, R. Girshick, P. Arbeláez, and J. Malik. Learning rich features from RGB-D images for object detection and segmentation. In ECCV, 2014.
  • [32] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In ECCV, 2012.
  • [40] Z. Wu, S. Song, A. Khosla, X. Tang, and J. Xiao. 3D shapenets for 2.5D object recognition and next-best-view prediction. CoRR, abs/1406.5670, 2014.

What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below: