27th October 2016

P. Arbeláez, J. Pont-Tuset, J. T. Barron, F. Marqués, J. Malik. *Multiscale Combinatorial Grouping*. Conference on Computer Vision and Pattern Recognition, 2014.

What is **your opinion** on the summarized work? Or do you know related work that is of interest? **Let me know** your thoughts in the comments below:

Arbeláez et al. propose a novel hierarchical approach to image segmentation where the lowest level is based on the well-known normalized cuts algorithm [1]. While the approach itself produces promising results, I was most interested in the proposed "fast" normalized cuts algorithm.

As derived in [1], the normalized cuts algorithm involves solving for the eigenvalues of the Laplacian $A = D - W$, where $W$ is the weight matrix of the weighted graph corresponding to the image and $D$ is the diagonal matrix holding the total outgoing weight of each pixel. To produce an oversegmentation, the $k$ smallest eigenvalues and the corresponding eigenvectors are sought. For high-resolution images, this problem is computationally challenging (not to say infeasible). However, Arbeláez et al. propose an efficient variant, shown in Algorithm 1.
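To make the baseline concrete, the eigenproblem can be set up with SciPy's sparse tools. This is a minimal sketch assuming a precomputed sparse affinity matrix `W`; the helper name `ncuts_eigenvectors` is mine, not from the papers (note also that [1] actually solves the generalized problem $(D - W)y = \lambda D y$, while the sketch follows the plain Laplacian formulation above):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ncuts_eigenvectors(W, k=2):
    """Smallest-k eigenpairs of the Laplacian A = D - W.

    W : sparse, symmetric affinity (weight) matrix of the image graph
        (its construction from an image is not shown here).
    Returns the k smallest eigenvalues and their eigenvectors.
    """
    # total outgoing weight of each pixel, on the diagonal of D
    d = np.asarray(W.sum(axis=1)).ravel()
    A = sp.diags(d) - W                      # graph Laplacian A = D - W
    # 'SM' = smallest-magnitude eigenvalues; fine for small graphs,
    # but exactly this step becomes infeasible at full image resolution
    vals, vecs = spla.eigsh(A, k=k, which='SM')
    return vals, vecs
```

Even for a modest image of $500 \times 500$ pixels, $A$ is $250{,}000 \times 250{,}000$, which is why the `eigsh` call above is the bottleneck the authors attack.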

Algorithm 1: Efficient normalized cuts by downsampling.

`dncuts` (for downsampled normalized cuts) proceeds as follows: given $A$, it is subsampled by taking only every other pixel (done via `pixel_decimate`, which returns every other pixel index). Let $i$ denote the subsampled indices; then $A[i, i]$ denotes the subsampled matrix. As this leaves neighboring pixels disconnected, Arbeláez et al. consider $A[i, i]^2$ instead. The eigenvectors of $A[i, i]^2$ are computed and then upsampled by multiplying by $A[:, i]$. In Algorithm 1, this procedure is repeated $D$ times.

[1] J. Shi, J. Malik. Normalized Cuts and Image Segmentation. Conference on Computer Vision and Pattern Recognition, 1997.