Seven Story Rabbit Hole

Sometimes awesome things happen in deep rabbit holes. Or not.


Docker on AWS GPU Ubuntu 14.04 / CUDA 6.5

Architecture

After going through the steps in this blog post, you’ll end up with this:

architecture diagram

Setup host

Before you can start your docker container, you will need to go deeper down the rabbit hole.

You’ll first need to complete the steps here:

Setting up an Ubuntu 14.04 box running on a GPU-enabled AWS instance

After you’re done, you’ll end up with a host OS with the following properties:

  • A GPU enabled AWS instance running Ubuntu 14.04
  • Nvidia kernel module
  • Nvidia device drivers
  • CUDA 6.5 installed and verified

Install Docker

Once your host OS is set up, you’re ready to install docker. The latest instructions are available on the Docker website. Currently, for Ubuntu 14.04 you need to:

$ sudo apt-get update && sudo apt-get install curl
$ curl -sSL https://get.docker.com/ | sh

As the post-install message suggests, enable docker for non-root users:

$ sudo usermod -aG docker ubuntu
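Note that group membership is only read at login time, so `usermod -aG` alone won’t affect your current session. A quick sketch of checking whether the change took (the inlined `/etc/group` line is a sample; on a real host you’d `grep '^docker:' /etc/group`):

```shell
# Sample /etc/group entry; on a real host: group_line=$(grep '^docker:' /etc/group)
group_line='docker:x:999:ubuntu'

# If the user appears in the docker group's member list, a fresh login
# session (or `newgrp docker`) will let them run docker without sudo.
if echo "$group_line" | grep -Eq '^docker:.*[:,]ubuntu(,|$)'; then
  echo "ubuntu is in the docker group; log out and back in (or run: newgrp docker)"
fi
```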

Verify correct install via:

$ sudo docker run hello-world

Mount GPU devices

Verify: run the deviceQuery sample on the host

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ ./deviceQuery

You should see something like this:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GRID K520"
  CUDA Driver Version / Runtime Version          6.5 / 6.5
  ... snip ...

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520
Result = PASS

Verify: Find all your nvidia devices

$ ls -la /dev | grep nvidia

You should see:

crw-rw-rw-  1 root root    195,   0 Oct 25 19:37 nvidia0
crw-rw-rw-  1 root root    195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw-  1 root root    251,   0 Oct 25 19:37 nvidia-uvm
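The exact device nodes can vary by instance, and these paths are what you’ll pass to docker below. A small sketch (assuming the standard `/dev/nvidia*` naming) that turns a listing like the one above into full device paths — the inlined sample stands in for a live `ls -la /dev | grep nvidia`:

```shell
# Captured sample listing; on a real host, pipe `ls -la /dev | grep nvidia`
# directly instead of using this variable.
listing='crw-rw-rw-  1 root root    195,   0 Oct 25 19:37 nvidia0
crw-rw-rw-  1 root root    195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw-  1 root root    251,   0 Oct 25 19:37 nvidia-uvm'

# The device name is the last field of each line; prefix it with /dev/.
echo "$listing" | awk '{print "/dev/" $NF}'
```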

Run GPU-enabled docker image

Launch docker container

The easiest way to get going is to use this pre-built docker image, which has the CUDA drivers pre-installed. If you want to build your own, the accompanying Dockerfile is a useful starting point. (Update: Nvidia has released an official docker container, which you should probably use instead, but I haven’t tried it as of this writing. Please post a comment if you get it to work.)

You’ll have to adapt the DOCKER_NVIDIA_DEVICES variable below to match your particular devices.
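If you’d rather not assemble the flags by hand, here is a hedged sketch of a helper that builds the `--device` flags from whatever device nodes you found above (the function name is my own, not part of docker):

```shell
# Hypothetical helper (not part of docker): emit a --device flag for each
# device node passed in, mapping it to the same path inside the container.
nvidia_device_flags() {
  flags=""
  for dev in "$@"; do
    flags="$flags --device $dev:$dev"
  done
  echo "$flags"
}

# On a real host you would pass the actual nodes found via ls, e.g.:
#   DOCKER_NVIDIA_DEVICES=$(nvidia_device_flags /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm)
nvidia_device_flags /dev/nvidia0 /dev/nvidiactl
```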

To start the docker container, run:

$ DOCKER_NVIDIA_DEVICES="--device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm"
$ sudo docker run -ti $DOCKER_NVIDIA_DEVICES tleyden5iwx/ubuntu-cuda /bin/bash

After running the above command, you should be at a shell inside your docker container:

root@1149788c731c:#

Verify CUDA access from inside the docker container

Install CUDA samples

$ cd /opt/nvidia_installers
$ ./cuda-samples-linux-6.5.14-18745345.run -noprompt -cudaprefix=/usr/local/cuda-6.5/

Build deviceQuery sample

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery

You should see the following output:

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520
Result = PASS
