Architecture
After going through the steps in this blog post, you’ll end up with a Docker container running on a GPU-enabled AWS instance, with the host’s Nvidia GPU devices exposed inside the container.
Set up the host
Before you can start your docker container, you will need to go deeper down the rabbit hole.
You’ll first need to complete the steps here:
Setting up an Ubuntu 14.04 box running on a GPU-enabled AWS instance
After you’re done, you’ll end up with a host OS with the following properties:
- A GPU enabled AWS instance running Ubuntu 14.04
- Nvidia kernel module
- Nvidia device drivers
- CUDA 6.5 installed and verified
Install Docker
Once your host OS is set up, you’re ready to install Docker. The latest instructions are available on the Docker website; currently, for Ubuntu 14.04, the install is a short script.
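A minimal sketch of the convenience-script install Docker offered at the time (check the Docker website for the current method before running anything piped to a shell):

```shell
# Install Docker via Docker's convenience script (assumes wget is
# available; the script detects the Ubuntu release automatically).
wget -qO- https://get.docker.com/ | sh
```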
As the post-install message suggests, enable Docker for non-root users by adding your user to the docker group.
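A sketch of the usual approach, assuming your login user is ubuntu (the default on these AWS images):

```shell
# Add the login user to the docker group so docker commands work
# without sudo; log out and back in for the change to take effect.
sudo usermod -aG docker ubuntu
```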
Verify a correct install by running a test container.
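A common smoke test, using the hello-world image Docker publishes for exactly this purpose:

```shell
# Pull and run Docker's hello-world image; on success it prints a
# "Hello from Docker!" message.
docker run hello-world
```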
Mount GPU devices
The Nvidia device files under /dev must exist on the host before they can be shared with a container. Running a CUDA program on the host loads the kernel module and creates them.
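One way to trigger this, a sketch assuming CUDA 6.5 was installed to the default /usr/local/cuda location in the host-setup post:

```shell
# Build and run the bundled deviceQuery CUDA sample on the host;
# running a CUDA program creates the /dev/nvidia* device files
# as a side effect.
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
make
./deviceQuery
```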
You should see output confirming that the GPU was detected.
Verify: find all your Nvidia devices.
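For example:

```shell
# List the Nvidia device files created by the driver.
ls -la /dev | grep nvidia
```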
You should see the Nvidia device nodes listed, typically nvidia0, nvidiactl, and nvidia-uvm.
Run a GPU-enabled Docker image
Launch the Docker container
The easiest way to get going is to use this pre-built Docker image, which has the CUDA drivers pre-installed. Or, if you want to build your own, the accompanying Dockerfile will be a useful starting point. (Update: Nvidia has released an official Docker container, which you should probably use, but I haven’t tried it yet as of this writing. Please post a comment if you get it to work.)
You’ll have to adapt the DOCKER_NVIDIA_DEVICES variable below to match your particular devices (the ones found in the previous step). To start the Docker container, pass each device through with the --device flag.
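A sketch, assuming the three device files found above; the image name is a placeholder, so substitute the CUDA image you pulled or built:

```shell
# Pass each host Nvidia device through to the container with --device.
CUDA_IMAGE=ubuntu-cuda   # placeholder: use your actual CUDA image name
DOCKER_NVIDIA_DEVICES="--device /dev/nvidia0:/dev/nvidia0 \
 --device /dev/nvidiactl:/dev/nvidiactl \
 --device /dev/nvidia-uvm:/dev/nvidia-uvm"
docker run -ti $DOCKER_NVIDIA_DEVICES $CUDA_IMAGE /bin/bash
```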
After running the above command, you should be at a shell inside your Docker container.
Verify CUDA access from inside the docker container
Install CUDA samples
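One way, assuming the CUDA 6.5 toolkit inside the image ships the standard samples-install helper script:

```shell
# Copy the CUDA samples into the home directory inside the container
# (cuda-install-samples-6.5.sh is shipped with the CUDA 6.5 toolkit).
/usr/local/cuda/bin/cuda-install-samples-6.5.sh ~/
```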
Build deviceQuery sample
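A sketch, assuming the samples landed in the default NVIDIA_CUDA-6.5_Samples directory created by the install script:

```shell
# Build and run deviceQuery from inside the container.
cd ~/NVIDIA_CUDA-6.5_Samples/1_Utilities/deviceQuery
make
./deviceQuery
```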
You should see deviceQuery output ending with Result = PASS, confirming that the container has working CUDA access to the GPU.