Ross B. Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. CVPR, 2014.

Girshick et al. propose R-CNN, an object detector which (through several follow-up improvements [1, 2]) long defined the state-of-the-art in object detection and related tasks. The original paper not only describes the first “version” of R-CNN but also provides a thorough experimental study of hyper-parameters and design choices. At its core, the object detection pipeline consists of several modules: an object proposal module (such as selective search [39]), a feature extractor, and a classifier. The pipeline additionally includes some post-processing steps such as non-maximum suppression and bounding box regression.
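The non-maximum suppression step can be sketched as follows: a greedy, per-class procedure that keeps the highest-scoring box and discards lower-scoring boxes that overlap it too strongly (the IoU threshold of 0.3 used here is an assumption for illustration; the paper tunes it per class on a validation set):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.3):
    """Greedy non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns indices of the kept boxes, highest-scoring first.
    """
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Keep only boxes whose overlap with the kept box is small enough.
        order = order[1:][iou <= iou_threshold]
    return keep
```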

Based on proposals from selective search, features are extracted using a pre-trained convolutional neural network such as AlexNet (referred to as T-Net) [25] or VGG16 (referred to as O-Net) [43]. The 4096-dimensional feature vector of the last fully connected layer is used as features. All proposed bounding boxes are warped to the input size of the network (details can be found in the appendix). The prediction layer of the network is removed and the (pre-trained) network is fine-tuned for object detection. At test time, the computed features are then fed into a class-specific SVM. For all modules, Girshick et al. detail the used hyper-parameters and training schemes.
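The warping discussed in the appendix first dilates each proposal so that some context remains around the object after warping (the paper uses p = 16 pixels of context at the 227×227 AlexNet input size), then resizes the crop anisotropically. A minimal NumPy sketch, using nearest-neighbor instead of proper image interpolation for self-containedness:

```python
import numpy as np

def warp_proposal(image, box, out_size=227, padding=16):
    """Warp a proposal to a fixed square network input, R-CNN style.

    image: (H, W, C) array; box: [x1, y1, x2, y2] proposal coordinates.
    The box is dilated so that roughly `padding` pixels of context remain
    around it after warping, then resized (nearest-neighbor here; the
    actual implementation uses proper image interpolation).
    """
    h, w = image.shape[:2]
    x1, y1, x2, y2 = box
    # Original-image pixels covered by one warped context pixel.
    scale_x = (x2 - x1) / float(out_size - 2 * padding)
    scale_y = (y2 - y1) / float(out_size - 2 * padding)
    x1 = int(max(0, x1 - padding * scale_x))
    y1 = int(max(0, y1 - padding * scale_y))
    x2 = int(min(w, x2 + padding * scale_x))
    y2 = int(min(h, y2 + padding * scale_y))
    crop = image[y1:y2, x1:x2]
    # Nearest-neighbor resize to out_size x out_size.
    ys = (np.arange(out_size) * crop.shape[0] / out_size).astype(int)
    xs = (np.arange(out_size) * crop.shape[1] / out_size).astype(int)
    return crop[ys][:, xs]
```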

In addition to non-maximum suppression, bounding box regression is employed to improve localization. Girshick et al. detail the approach in the appendix. In short, linear regressors are trained to shift (and scale) a bounding box proposal for better localization.
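With boxes parameterized by center, width, and height, the regression targets from the paper's appendix and their inverse (applying predicted deltas to a proposal) can be sketched as:

```python
import numpy as np

def regression_targets(P, G):
    """Regression targets t = (t_x, t_y, t_w, t_h).

    P, G: proposal and ground-truth boxes as (x_center, y_center, w, h).
    Offsets are normalized by the proposal size; scales are log-ratios.
    """
    Px, Py, Pw, Ph = P
    Gx, Gy, Gw, Gh = G
    return np.array([(Gx - Px) / Pw, (Gy - Py) / Ph,
                     np.log(Gw / Pw), np.log(Gh / Ph)])

def apply_regression(P, d):
    """Invert the transform: shift and scale proposal P by deltas d."""
    Px, Py, Pw, Ph = P
    dx, dy, dw, dh = d
    return np.array([Pw * dx + Px, Ph * dy + Py,
                     Pw * np.exp(dw), Ph * np.exp(dh)])
```

By construction, applying a proposal's own targets recovers the ground-truth box exactly; at test time, the deltas are instead predicted from the pool5 features by class-specific linear regressors.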

The object detector is evaluated on PASCAL VOC and ILSVRC where the approach is shown to outperform the state-of-the-art. For details, refer to the paper. In addition, Girshick et al. provide an ablation study to investigate the design choices. These include a non-fine-tuned model as well as using different layers as features. Interestingly, the fully connected layers are very sensitive to fine-tuning while the last convolutional layer is not. They also provide visualizations where individual units are interpreted as object detectors and the highest-confidence bounding boxes corresponding to these units are visualized as in Figure 1.

Figure 1: Visualization of intermediate units as described in the text.

I can only recommend reading the paper including its experimental section as well as follow-up work [1, 2].

  • [1] Shaoqing Ren, Kaiming He, Ross B. Girshick, Jian Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. CoRR abs/1506.01497 (2015).
  • [2] Ross B. Girshick. Fast R-CNN. CoRR abs/1504.08083 (2015).
  • [25] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
  • [39] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. IJCV, 2013.
  • [43] K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv preprint, arXiv:1409.1556, 2014.
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.