Seven Story Rabbit Hole

Sometimes awesome things happen in deep rabbit holes. Or not.


Running Neural Style on an AWS GPU Instance

These instructions will walk you through getting neural-style up and running on an AWS GPU instance.

Spin up CUDA-enabled AWS instance

Follow these instructions to install CUDA 7.5 on an AWS GPU instance running Ubuntu 14.04.

SSH into AWS instance

$ ssh ubuntu@<instance-ip>

Install Docker

$ sudo apt-get update && sudo apt-get install curl
$ curl -sSL https://get.docker.com/ | sh

As the post-install message suggests, enable docker for non-root users (log out and back in afterwards for the group change to take effect):

$ sudo usermod -aG docker ubuntu

Verify correct install via:

$ sudo docker run hello-world

Mount GPU devices

First, verify that CUDA can see the GPU by building and running the deviceQuery sample:

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ sudo make
$ sudo ./deviceQuery

You should see something like this:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GRID K520"
  CUDA Driver Version / Runtime Version          6.5 / 6.5
  ... snip ...

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520
Result = PASS

Verify: Find all your nvidia devices

$ ls -la /dev | grep nvidia

You should see:

crw-rw-rw-  1 root root    195,   0 Oct 25 19:37 nvidia0
crw-rw-rw-  1 root root    195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw-  1 root root    251,   0 Oct 25 19:37 nvidia-uvm

Start Docker container

$ export DOCKER_NVIDIA_DEVICES="--device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm"
$ sudo docker run -ti $DOCKER_NVIDIA_DEVICES kaixhin/cuda-torch /bin/bash
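Rather than hard-coding the device list, the flags can be derived from whichever nvidia device nodes exist; a minimal sketch (the helper function name is made up for illustration):

```shell
# Build a "--device X:X" flag for each device node passed in, so the
# list stays in sync with whatever `ls /dev | grep nvidia` reported.
build_nvidia_device_flags() {
  local flags="" dev
  for dev in "$@"; do
    flags="$flags --device $dev:$dev"
  done
  echo "$flags"
}

# On the instance you would pass /dev/nvidia*; shown here with the
# three devices from the previous step:
build_nvidia_device_flags /dev/nvidia0 /dev/nvidiactl /dev/nvidia-uvm
```

With that in place, the container can be started via `sudo docker run -ti $(build_nvidia_device_flags /dev/nvidia*) kaixhin/cuda-torch /bin/bash`.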

Re-install CUDA 7.5 in the Docker container

As reported in the Torch7 Google Group and in Kaixhin/dockerfiles, there is an API version mismatch between the Docker container's and the host's version of CUDA.

The workaround is to re-install CUDA 7.5 via:

$ wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.5-18_amd64.deb
$ sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
$ sudo apt-get update
$ sudo apt-get upgrade -y
$ sudo apt-get install -y opencl-headers build-essential protobuf-compiler \
    libprotoc-dev libboost-all-dev libleveldb-dev hdf5-tools libhdf5-serial-dev \
    libopencv-core-dev  libopencv-highgui-dev libsnappy-dev libsnappy1 \
    libatlas-base-dev cmake libstdc++6-4.8-dbg libgoogle-glog0 libgoogle-glog-dev \
    libgflags-dev liblmdb-dev git python-pip gfortran
$ sudo apt-get clean
$ sudo apt-get install -y linux-image-extra-`uname -r` linux-headers-`uname -r` linux-image-`uname -r`
$ sudo apt-get install -y cuda
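After the reinstall it is worth confirming that the runtime version in the container matches what the host driver supports. One way is to pull the release number out of the `nvcc --version` banner; the parsing is shown below against a captured banner string (an example, not live output) so it can be checked anywhere, while on the instance you would pipe in `nvcc --version | tail -n 1` instead:

```shell
# Extract the CUDA release number from an nvcc version banner.
# The banner string here is a typical example for CUDA 7.5.
banner="Cuda compilation tools, release 7.5, V7.5.17"
echo "$banner" | sed -n 's/.*release \([0-9.]*\),.*/\1/p'
```

Compare the result against the driver version reported by nvidia-smi on the host.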

Verify CUDA inside docker container

Running:

$ nvidia-smi 

Should show info about the GPU driver and not return any errors.

Running this torch command:

$ th -e "require 'cutorch'; require 'cunn'; print(cutorch)"

Should produce this output:

{
  getStream : function: 0x4054b760
  getDeviceCount : function: 0x408bca58
  .. etc
}

Install neural-style

The following should be run inside the docker container:

$ apt-get install -y wget libpng-dev libprotobuf-dev protobuf-compiler
$ git clone --depth 1 https://github.com/jcjohnson/neural-style.git
$ /root/torch/install/bin/luarocks install loadcaffe

Download models

$ cd neural-style
$ sh models/download_models.sh
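The caffemodel is large, and an interrupted download can leave an empty or truncated file behind, so it is worth checking what arrived. The file names below are the VGG-19 files the script fetched at the time of writing; check models/ if the script has changed:

```shell
# Check that each model file exists and is non-empty (-s).
for f in models/VGG_ILSVRC_19_layers.caffemodel \
         models/VGG_ILSVRC_19_layers_deploy.prototxt; do
  if [ -s "$f" ]; then echo "ok: $f"; else echo "missing: $f"; fi
done
```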

Run neural style

First, grab a few images to test with:

$ mkdir images
$ wget https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1280px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg -O images/vangogh.jpg
$ wget http://exp.cdn-hotels.com/hotels/1000000/10000/7500/7496/7496_42_z.jpg -O images/hotel_del_coronado.jpg

Run it:

$ th neural_style.lua -style_image images/vangogh.jpg -content_image images/hotel_del_coronado.jpg
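neural_style.lua takes a number of useful options beyond the two image paths; -image_size, -output_image, -num_iterations, and -gpu are all documented in the neural-style README. The loop below just prints the commands (a dry run, via echo) so they can be sanity-checked before committing GPU time; drop the echo to actually run them:

```shell
# Dry run: print one th invocation per style image, each writing to
# its own output file. Add more styles to the list to batch them.
CONTENT=images/hotel_del_coronado.jpg
for STYLE in images/vangogh.jpg; do
  OUT="out_$(basename "$STYLE" .jpg).png"
  echo th neural_style.lua -style_image "$STYLE" \
       -content_image "$CONTENT" -image_size 512 -output_image "$OUT"
done
```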

CuDNN (optional)

CuDNN can potentially speed things up; once installed, tell neural-style to use it by passing -backend cudnn to neural_style.lua.

Download cuDNN from the NVIDIA developer site (registration required).

Install via:

tar -xzvf cudnn-7.0-linux-x64-v3.0-prod.tgz
cd cuda/
sudo cp lib64/libcudnn* /usr/local/cuda-7.5/lib64/
sudo cp include/cudnn.h /usr/local/cuda-7.5/include
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-7.5/lib64/
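The export above only lasts for the current shell; to make it survive new logins, append it to the shell's rc file (the library path is the one from the install step above):

```shell
# Persist the library path in ~/.bashrc, skipping duplicate entries.
# Single quotes keep $LD_LIBRARY_PATH unexpanded until login time.
LINE='export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-7.5/lib64/'
touch ~/.bashrc
grep -qxF "$LINE" ~/.bashrc || echo "$LINE" >> ~/.bashrc
```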

Install the torch bindings for cuDNN:

luarocks install cudnn

References

  • Neural-Style INSTALL.md
  • ami-84c787ee — this AMI has everything pre-installed; due to time constraints, however, everything is installed on the host rather than under Docker.
