

Installing Bazel, Masking Graphics Cards for Tensorflow

In this series, I collect problems I come across when using Ubuntu for research and development. In this article: installing Bazel on Ubuntu and masking graphics cards from being considered by Tensorflow.

Installing Bazel is easy:

sudo apt-get install openjdk-8-jdk
wget https://github.com/bazelbuild/bazel/releases/download/0.4.3/bazel-0.4.3-installer-linux-x86_64.sh
./bazel-0.4.3-installer-linux-x86_64.sh --user
rm bazel-0.4.3-installer-linux-x86_64.sh

For building Tensorflow, the created ~/bin directory should be added to the PATH (e.g. in ~/.bashrc):

# Bazel
export PATH=$PATH:~/bin

Another interesting error I came across when using Tensorflow is caused by having multiple GPUs: for example, one GPU dedicated to heavy computation and another, cheaper one driving multiple displays to provide a smooth user interface. The latter is obviously not meant to be used with CUDA. Still, Tensorflow tries to grab all available graphics cards (as described here), which results in the following error:

W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x3fd7d50
E tensorflow/core/common_runtime/direct_session.cc:136] Internal: failed initializing StreamExecutor for CUDA device ordinal 1: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_INVALID_DEVICE
# ...
tensorflow.python.framework.errors_impl.InternalError: Failed to create session.

The solution is to tell CUDA which graphics cards to consider:

export CUDA_VISIBLE_DEVICES=0 # use nvidia-smi to check the correct index
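The same restriction can also be applied from within Python, as long as the variable is set before Tensorflow is imported for the first time, since Tensorflow reads it on import. A minimal sketch (the index 0 is just an example, check nvidia-smi for the correct one):

```python
import os

# Restrict CUDA to the first GPU; this must happen before the first
# `import tensorflow`, which is when the variable is read.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import tensorflow as tf  # now only sees GPU 0
print(os.environ["CUDA_VISIBLE_DEVICES"])
```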

Alternatively, the full GPU id can also be used; the ids can be listed using nvidia-smi -L.
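For scripting, the full GPU id can be extracted from the output of nvidia-smi -L, which prints one line per device. A minimal sketch parsing a line in that format (the device name and UUID below are placeholders, not a real device):

```python
import re

# A line in the format printed by `nvidia-smi -L`; name and UUID
# are placeholders here.
line = "GPU 0: GeForce GTX 1080 (UUID: GPU-00000000-0000-0000-0000-000000000000)"

# Extract the full GPU id, which can be assigned to CUDA_VISIBLE_DEVICES
# instead of the numeric index.
match = re.search(r"\(UUID:\s*(GPU-[0-9a-fA-F-]+)\)", line)
gpu_id = match.group(1) if match else None
print(gpu_id)
```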

What is your opinion on this article? Did you find it interesting or useful? Let me know your thoughts in the comments below!