Seven Story Rabbit Hole

Sometimes awesome things happen in deep rabbit holes. Or not.


Adding Vendoring to a Go Project

Install gvt

After doing some research, I decided to try gvt since it seemed simple and well documented, and integrated well with existing tools like go get.

$ export GO15VENDOREXPERIMENT=1
$ go get -u github.com/FiloSottile/gvt

Go get the target project to be updated

I’m going to update todolite-appserver to vendor some of its dependencies, just to see how things go.

$ go get -u github.com/tleyden/todolite-appserver

Vendor dependencies

I’m going to vendor the dependency on kingpin since it has transitive dependencies of its own (github.com/alecthomas/units, etc). gvt handles this by automatically pulling in all of the transitive dependencies.

$ gvt fetch github.com/alecthomas/kingpin

Now my directory structure looks like this:

├── main.go
└── vendor
    ├── github.com
    │   └── alecthomas
    ├── gopkg.in
    │   └── alecthomas
    └── manifest

gvt also writes a vendor/manifest file that pins each dependency’s repository and revision.
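For the curious, the manifest is a small JSON file. From memory it looks roughly like the sketch below (field names may be slightly off, so check your own vendor/manifest rather than trusting this verbatim); the revision shown is the one gvt list reports:

$ cat vendor/manifest
{
    "version": 0,
    "dependencies": [
        {
            "importpath": "github.com/alecthomas/kingpin",
            "repository": "https://github.com/alecthomas/kingpin",
            "revision": "46aba6af542541c54c5b7a71a9dfe8f2ab95b93a",
            "branch": "master"
        },
        ...
    ]
}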

gvt list shows the following:

$  gvt list
github.com/alecthomas/kingpin  https://github.com/alecthomas/kingpin  master 46aba6af542541c54c5b7a71a9dfe8f2ab95b93a
github.com/alecthomas/template https://github.com/alecthomas/template master 14fd436dd20c3cc65242a9f396b61bfc8a3926fc
github.com/alecthomas/units    https://github.com/alecthomas/units    master 2efee857e7cfd4f3d0138cc3cbb1b4966962b93a
gopkg.in/alecthomas/kingpin.v2 https://gopkg.in/alecthomas/kingpin.v2 master 24b74030480f0aa98802b51ff4622a7eb09dfddd

Verify it’s using the vendor folder

I opened up the vendor/github.com/alecthomas/kingpin/global.go and made the following change:

// Errorf prints an error message to stderr.
func Errorf(format string, args ...interface{}) {
  fmt.Println("CALLED IT!!")
  CommandLine.Errorf(format, args...)
}

Now verify that code is getting compiled and run:

$ go run main.go changesfollower
CALLED IT!!
main: error: URL is empty

(note: export GO15VENDOREXPERIMENT=1 is still in effect in my shell)

Restore the dependency

Before I check in the vendor directory to git, I want to reset it to its previous state, before I made the above change to the global.go source file.

$ gvt restore

Now if I open global.go again, it’s back to its original state. Nice!

Add the vendor folder and push

$ git add vendor
$ git commit -m "..."
$ git push origin master

Also, I updated the README to tell users to set the GO15VENDOREXPERIMENT=1 variable:

$ export GO15VENDOREXPERIMENT=1
$ go get -u github.com/tleyden/todolite-appserver
$ todolite-appserver --help

but the instructions otherwise remained the same. If someone tries to use this but forgets to set GO15VENDOREXPERIMENT=1 in Go 1.5, it will still work; it will just use the kingpin dependency in the $GOPATH rather than the vendor/ directory. Ditto for someone using Go 1.4 or earlier.
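If you ever want to double-check which copy a particular build is compiling, go list can show the resolved dependency paths. This is just a sanity check I find handy (the exact output will vary):

$ export GO15VENDOREXPERIMENT=1
$ go list -f '{{join .Deps "\n"}}' github.com/tleyden/todolite-appserver | grep kingpin

With the vendor experiment on, the kingpin paths should be prefixed with github.com/tleyden/todolite-appserver/vendor/; without it (or on Go 1.4 and earlier), you’ll see the bare import paths, meaning the $GOPATH copy is being used.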

Removing a vendored dependency

As it turns out, I don’t even need kingpin in this project, since I’m using cobra. The kingpin dependency was caused by some leftover code I forgot to clean up.

To remove it, I ran:

$ gvt delete github.com/alecthomas/kingpin
$ gvt delete github.com/alecthomas/template
$ gvt delete github.com/alecthomas/units
$ gvt delete gopkg.in/alecthomas/kingpin.v2

In this case, since it was my only dependency, it was easy to identify the transitive dependencies. In general, though, it looks like it’s up to you as a user to track down which ones to remove. I filed gvt issue 16 to hopefully address that.
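In the meantime, if you just want to nuke everything gvt is tracking and re-fetch from scratch, the first column of gvt list is the import path, so something along these lines should work (an untested one-liner, so eyeball the output of gvt list first):

$ gvt list | awk '{print $1}' | xargs -n 1 gvt delete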

Editor annoyances

I have emacs set up using the steps in this blog post, and I’m running into the following annoyances:

  • When I use godef to jump into the code of a vendored dependency, it takes me to source code that lives in the GOPATH, which might be different from what’s under vendor/. Also, if I edit it there, my changes won’t be reflected when I rebuild.
  • I usually search for things in the project via M-x rgrep, but now it’s searching through every repo under vendor/ and returning things I’m not interested in, since most of the time I only want to search within my own project.

Configure Emacs as a Go Editor From Scratch Part 3

This is a continuation from a previous blog post. In this post I’m going to focus on making emacs look a bit better.

Currently:

screenshot

Install a nicer theme

I like the taming-mr-arneson-theme, so let’s install that one. Feel free to browse the emacs themes and find one that you like more.

$ mkdir ~/.emacs.d/color-themes
$ wget -P ~/.emacs.d/color-themes https://raw.githubusercontent.com/emacs-jp/replace-colorthemes/d23b086141019c76ea81881bda00fb385f795048/taming-mr-arneson-theme.el

Update your ~/.emacs.d/init.el to add the following lines to the top of the file:

(add-to-list 'custom-theme-load-path "/Users/tleyden/.emacs.d/color-themes/")
(load-theme 'taming-mr-arneson t)

Now when you restart emacs it should look like this:

screenshot

Directory Tree

Next, install neotree to get a directory tree sidebar. Clone it somewhere convenient:

$ cd ~/DevLibraries
$ git clone https://github.com/jaypei/emacs-neotree.git neotree

Update your ~/.emacs.d/init.el to add the following lines:

(add-to-list 'load-path "/some/path/neotree")
(require 'neotree)

Open a .go file and then enter M-x neotree-dir to show a directory browser:

screenshot

Ref: NeoTree

Octopress Under Docker

I’m setting up a clean install of El Capitan, and want to get my Octopress blog going. However, I don’t want to install it directly on my OSX workstation — I want to have it contained in a docker container.

Install Docker

That’s beyond the scope of this blog post; on my new OSX installation I just followed the standard Docker installation instructions for Mac.

Run tleyden5iwx/octopress

$ docker run -itd -v ~/Documents/blog/:/blog tleyden5iwx/octopress /bin/bash

What’s in ~/Documents/blog/? Basically, the octopress instance I’d set up as described in Octopress Setup Part I.

Bundle install

From inside the docker container:

# cd /blog/octopress
# bundle install

Edit a blog post

On OSX, open up ~/Documents/blog/source/_posts/path-to-post and make some minor edits

Push source

# git push origin source
Username for 'https://github.com': [enter your username]
Password for 'https://username@github.com': [enter your password]

Generate and push to master

Attempt 1

# rake generate
rake aborted!
Gem::LoadError: You have already activated rake 10.4.2, but your Gemfile requires rake 0.9.6. Using bundle exec may solve this.
/blog/octopress/Rakefile:2:in `<top (required)>'
(See full trace by running task with --trace) 

I have no idea why this is happening, but I just conceded defeat against these ruby weirdisms, wished I was using Go (and thought about converting my blog to Hugo), and took their advice and prefixed every command thereafter with bundle exec.

Attempt 2

# bundle exec rake generate && bundle exec rake deploy
Username for 'https://github.com': [enter your username]
Password for 'https://username@github.com': [enter your password]

Success!

Setting Up Uniqush With APNS

This walks you through running Uniqush in the cloud (under Docker) and setting up an iOS app to receive messages via APNS (Apple Push Notification Service).

Run Uniqush under Docker

Config

  • mkdir -p volumes/uniqush
  • wget https://git.io/vgYXN -O volumes/uniqush/uniqush-push.conf

Security note: the above config has Uniqush listening on all interfaces, but depending on your setup you probably want to change that to localhost or something more restrictive.

Docker run

docker run -itd -p 9898:9898 -v ~/volumes/uniqush/uniqush-push.conf:/etc/uniqush/uniqush-push.conf tleyden5iwx/uniqush uniqush-push

Kick off redis (hack)

The right way to do this is to run redis in a separate container and connect the two via a Docker network (a rough sketch of that is below). In the meantime, this little hack will work: shell into the container and kick off redis.

  • container=$(docker ps | grep -i uniqush | awk '{print $1}')
  • docker exec -ti $container bash
  • /etc/init.d/redis-server start (run this inside the container)
  • exit (to get out of the container)
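For reference, here is a rough sketch of the separate-redis-container approach mentioned above. It is untested; the network and container names are ones I made up, and it assumes you edit the Uniqush config so its redis address points at the redis container (e.g. redis:6379) instead of localhost:

$ docker network create uniqush
$ docker run -d --name redis --net uniqush redis
$ docker run -itd --name uniqush --net uniqush -p 9898:9898 \
    -v ~/volumes/uniqush/uniqush-push.conf:/etc/uniqush/uniqush-push.conf \
    tleyden5iwx/uniqush uniqush-push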

Verify Uniqush is running

Run this curl command outside of the docker container to verify that Uniqush is responding to HTTP requests:

$ curl localhost:9898/version
uniqush-push 1.5.2

Create APNS certificate

In my case, I already had an app id for my app (com.couchbase.todolite), but push notifications were not enabled, so I needed to enable them:

screenshot

Create a new push cert:

screenshot

Choose the correct app id:

screenshot

Generate CSR according to instructions in keychain:

screenshot

This will save a CSR on your file system, and the next wizard step will ask you to upload this CSR and generate the certificate. Now you can download it:

screenshot

Double click the downloaded cert and it will be added to your keychain.

This is where I got a bit confused, since I had to also download the cert from the app id section — go to the app id and hit “Edit”, then download the cert and double click it to add to your keychain. (I’m confused because I thought these were the same certs and this second step felt redundant)

screenshot

Create and use provisioning profile

Go to the Provisioning Profiles / Development section and hit the “+” button:

screenshot

Choose all certs and all devices, and then give your provisioning profile an easy to remember name.

screenshot

Download this provisioning profile and double click it to install it.

In Xcode, under Build Settings, choose this provisioning profile:

screenshot

Register for push notifications in your app

Add the following code to your application:didFinishLaunchingWithOptions: method:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    
    // Register for push notifications
    if ([application respondsToSelector:@selector(isRegisteredForRemoteNotifications)])
    {
        // iOS 8 Notifications
        [application registerUserNotificationSettings:[UIUserNotificationSettings settingsForTypes:(UIUserNotificationTypeSound | UIUserNotificationTypeAlert | UIUserNotificationTypeBadge) categories:nil]];
        
        [application registerForRemoteNotifications];
    }
    else
    {
        // iOS < 8 Notifications
        [application registerForRemoteNotificationTypes:
         (UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeAlert | UIRemoteNotificationTypeSound)];
    }

    // rest of your code goes here ...

}

And the following callback method, which will be called if remote notification registration is successful:

- (void)application:(UIApplication *)app didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)deviceToken
{
    
    NSString *deviceTokenStr = [NSString stringWithFormat:@"%@",deviceToken];
    NSLog(@"didRegisterForRemoteNotificationsWithDeviceToken, Device token: %@", deviceTokenStr);
    
    NSString* deviceTokenCleaned = [[[[deviceToken description]
                                      stringByReplacingOccurrencesOfString: @"<" withString: @""]
                                     stringByReplacingOccurrencesOfString: @">" withString: @""]
                                    stringByReplacingOccurrencesOfString: @" " withString: @""];
    
     NSLog(@"didRegisterForRemoteNotificationsWithDeviceToken, Cleaned device token token: %@", deviceTokenCleaned);

}

and this callback, which will be called if registration fails:

- (void)application:(UIApplication *)app didFailToRegisterForRemoteNotificationsWithError:(NSError *)err
{
    NSString *str = [NSString stringWithFormat: @"Error: %@", err];
    NSLog(@"Error registering device token.  Push notifications will not work%@", str);
}

If you now run this app on a simulator, you can expect an error log along the lines of Error registering device token. Push notifications will not work..., since the simulator can’t register for remote notifications.

Run the app on a device and you should see a popup dialog asking if it’s OK to receive push notifications, and the following log messages in the Xcode console:

didRegisterForRemoteNotificationsWithDeviceToken, Device token: <281c8710 1b029fdb 16c8e134 39436336 116001ce bf6519e6 8edefab5 23dab4e9>
didRegisterForRemoteNotificationsWithDeviceToken, Cleaned device token token: 281c87101b029fdb16c8e13439436336116001cebf6519e68edefab523dab4e9

Export APNS keys to .PEM format

Open keychain, select the login keychain and the My Certificates category:

screenshot

  • Right click on the certificate (not the private key) “Apple Development Push Services: (your app id)”
  • Choose Export “Apple Development Push Services: (your app id)”.
  • Save this as apns-prod-cert.p12 somewhere you can access it.
  • When it prompts you for a password, leave it blank (or add one if you want, but this tutorial will assume it was left blank)
  • Repeat with the private key (in this case, TodoLite Push Notification Cert) and save it as apns-prod-key.p12.

Now they need to be converted from .p12 to .pem format.

$ openssl pkcs12 -clcerts -nokeys -out apns-prod-cert.pem -in apns-prod-cert.p12
Enter Import Password: <return>
MAC verified OK
$ openssl pkcs12 -nocerts -out apns-prod-key.pem -in apns-prod-key.p12
Enter Import Password:
MAC verified OK
Enter PEM pass phrase: hello <return>

Remove the PEM passphrase:

$ openssl rsa -in apns-prod-key.pem -out apns-prod-key-noenc.pem
Enter pass phrase for apns-prod-key.pem: hello
writing RSA key

Add PEM files to Uniqush docker container

When you call the Uniqush REST API to add a Push Service Provider, it expects to find the PEM files on its local file system. Use the following commands to get these files into the running container in the /tmp directory:

$ container=$(docker ps | grep -i uniqush | awk '{print $1}')
$ docker cp /tmp/apns-prod-cert.pem $container:/tmp/apns-prod-cert.pem
$ docker cp /tmp/apns-prod-key-noenc.pem $container:/tmp/apns-prod-key-noenc.pem

Create APNS provider in Uniqush via REST API

$ export UNIQUSH_HOSTNAME=ec2-54-73-10-60.compute-1.amazonaws.com:9898
$ curl -v http://$UNIQUSH_HOSTNAME/addpsp -d service=myservice \
    -d pushservicetype=apns \
    -d cert=/tmp/apns-prod-cert.pem \
    -d key=/tmp/apns-prod-key-noenc.pem \
    -d sandbox=true

(Note: I’m using a development cert, but if this was a distribution cert you’d want to use sandbox=false)

You should get a 200 OK response with:

[AddPushServiceProvider][Info] 2016/02/03 20:35:29 From=24.23.246.59:59447 Service=myservice PushServiceProvider=apns:9f49c9c618c97bebe21bea159d3c7a8577934bdf00 Success!

Add Uniqush subscriber

Using the cleaned up device token from the previous step 281c87101b029fdb16c8e13439436336116001cebf6519e68edefab523dab1e9, create a subscriber with the name mytestsubscriber via:

$ curl -v http://$UNIQUSH_HOSTNAME/subscribe -d service=myservice \
    -d subscriber=mytestsubscriber \
    -d pushservicetype=apns \
    -d devtoken=281c87101b029fdb16c8e13439436336116001cebf6519e68edefab523dab1e9

You should receive a 200 OK response with:

[Subscribe][Info] 2016/02/03 20:43:21 From=24.23.246.59:60299 Service=myservice Subscriber=mytestsubscriber PushServiceProvider=apns:9f49c9c618c97bebe21bea159d3c7a8577934bdf00 DeliveryPoint=apns:2cbecd0798cc6731d96d5b0fb01d813c7c9a83af00 Success!

Push a test message

The moment of truth!

First, you need to either background your app by pressing the home button, or add code to your app delegate so an alert is shown when a notification arrives while the app is foregrounded. Then send the push:

$ curl -v http://$UNIQUSH_HOSTNAME/push -d service=myservice \
    -d subscriber=mytestsubscriber \
    -d msg=HelloWorld

You should get a 200 OK response with:

[Push][Info] 2016/02/03 20:46:08 RequestId=56b26710-INbW8UWMUONtH8Ttddd2Qg== From=24.23.246.59:60634 Service=myservice NrSubscribers=1 Subscribers="[mytestsubscriber]"
[Push][Info] 2016/02/03 20:46:09 RequestID=56b26710-INbW8UWMUONtH8Ttddd2Qg== Service=myservice Subscriber=mytestsubscriber PushServiceProvider=apns:9f49c9c618c97bebe21bea159d3c7a8577934bdf00 DeliveryPoint=apns:2cbecd0798cc6731d96d5b0fb01d813c7c9a83af MsgId=apns:apns:9f49c9c618c97bebe21bea159d3c7a8577934bdf-1 Success!

And a push notification on the device!

screenshot


CUDA 7.5 on AWS GPU Instance Running Ubuntu 14.04

Launch stock Ubuntu AMI

  • Launch ami-d05e75b8
  • Choose a GPU instance type: g2.2xlarge or g2.8xlarge
  • Increase the size of the storage (this depends on what else you plan to install; I’d suggest at least 20 GB)

SSH in

$ ssh ubuntu@<instance ip>

Install CUDA repository

$ wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.5-18_amd64.deb
$ sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb

Update APT

$ sudo apt-get update
$ sudo apt-get upgrade -y
$ sudo apt-get install -y opencl-headers build-essential protobuf-compiler \
    libprotoc-dev libboost-all-dev libleveldb-dev hdf5-tools libhdf5-serial-dev \
    libopencv-core-dev  libopencv-highgui-dev libsnappy-dev libsnappy1 \
    libatlas-base-dev cmake libstdc++6-4.8-dbg libgoogle-glog0 libgoogle-glog-dev \
    libgflags-dev liblmdb-dev git python-pip gfortran

You will get a dialog regarding the menu.lst file; just choose the default option it gives you.

Do some cleanup:

$ sudo apt-get clean

DRM module workaround

$ sudo apt-get install -y linux-image-extra-`uname -r` linux-headers-`uname -r` linux-image-`uname -r`

For an explanation of why this is needed, see Caffe on EC2 Ubuntu 14.04 Cuda 7 and search for this command.

Install CUDA

$ sudo apt-get install -y cuda
$ sudo apt-get clean

Verify CUDA

$ nvidia-smi

You should see:

+------------------------------------------------------+
| NVIDIA-SMI 352.63     Driver Version: 352.63         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
| N/A   30C    P0    36W / 125W |     11MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Make sure kernel module and devices are present:

ubuntu@ip-10-33-135-228:~$ lsmod | grep -i nvidia
nvidia               8642880  0
drm                   303102  1 nvidia
ubuntu@ip-10-33-135-228:~$ ls -alh /dev | grep -i nvidia
crw-rw-rw-  1 root root    195,   0 Nov 23 01:59 nvidia0
crw-rw-rw-  1 root root    195, 255 Nov 23 01:58 nvidiactl


Running Neural Style on an AWS GPU Instance

These instructions will walk you through getting neural-style up and running on an AWS GPU instance.

Spin up CUDA-enabled AWS instance

Follow these instructions to install CUDA 7.5 on AWS GPU Instance Running Ubuntu 14.04.

SSH into AWS instance

$ ssh ubuntu@<instance-ip>

Install Docker

$ sudo apt-get update && sudo apt-get install curl
$ curl -sSL https://get.docker.com/ | sh

As the post-install message suggests, enable docker for non-root users (you may need to log out and back in for the group change to take effect):

$ sudo usermod -aG docker ubuntu

Verify correct install via:

$ sudo docker run hello-world

Mount GPU devices

Mount

Building and running the deviceQuery CUDA sample has the side effect of creating the /dev/nvidia-uvm device node, which we will need to pass into the Docker container below:

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ sudo make
$ sudo ./deviceQuery

You should see something like this:

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GRID K520"
  CUDA Driver Version / Runtime Version          6.5 / 6.5
  ... snip ...

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520
Result = PASS

Verify: Find all your nvidia devices

$ ls -la /dev | grep nvidia

You should see:

crw-rw-rw-  1 root root    195,   0 Oct 25 19:37 nvidia0
crw-rw-rw-  1 root root    195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw-  1 root root    251,   0 Oct 25 19:37 nvidia-uvm

Start Docker container

$ export DOCKER_NVIDIA_DEVICES="--device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm"
$ sudo docker run -ti $DOCKER_NVIDIA_DEVICES kaixhin/cuda-torch /bin/bash

Re-install CUDA 7.5 in the Docker container

As reported in the Torch7 Google Group and in Kaixhin/dockerfiles, there is an API version mismatch between the docker container and the host’s version of CUDA.

The workaround is to re-install CUDA 7.5 via:

$ wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.5-18_amd64.deb
$ sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
$ sudo apt-get update
$ sudo apt-get upgrade -y
$ sudo apt-get install -y opencl-headers build-essential protobuf-compiler \
    libprotoc-dev libboost-all-dev libleveldb-dev hdf5-tools libhdf5-serial-dev \
    libopencv-core-dev  libopencv-highgui-dev libsnappy-dev libsnappy1 \
    libatlas-base-dev cmake libstdc++6-4.8-dbg libgoogle-glog0 libgoogle-glog-dev \
    libgflags-dev liblmdb-dev git python-pip gfortran
$ sudo apt-get clean
$ sudo apt-get install -y linux-image-extra-`uname -r` linux-headers-`uname -r` linux-image-`uname -r`
$ sudo apt-get install -y cuda

Verify CUDA inside docker container

Running:

$ nvidia-smi 

Should show info about the GPU driver and not return any errors.

Running this torch command:

$ th -e "require 'cutorch'; require 'cunn'; print(cutorch)"

Should produce this output:

{
  getStream : function: 0x4054b760
  getDeviceCount : function: 0x408bca58
  .. etc
}

Install neural-style

The following should be run inside the docker container:

$ apt-get install -y wget libpng-dev libprotobuf-dev protobuf-compiler
$ git clone --depth 1 https://github.com/jcjohnson/neural-style.git
$ /root/torch/install/bin/luarocks install loadcaffe

Download models

$ cd neural-style
$ sh models/download_models.sh

Run neural style

First, grab a few images to test with

$ mkdir images
$ wget https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1280px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg -O images/vangogh.jpg
$ wget http://exp.cdn-hotels.com/hotels/1000000/10000/7500/7496/7496_42_z.jpg -O images/hotel_del_coronado.jpg

Run it:

$ th neural_style.lua -style_image images/vangogh.jpg -content_image images/hotel_del_coronado.jpg
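By default the result is written to out.png in the current directory. If I’m reading the neural-style README correctly, you can control the output path and size with the -output_image and -image_size flags (the filename below is just an example):

$ th neural_style.lua -style_image images/vangogh.jpg -content_image images/hotel_del_coronado.jpg \
    -output_image images/starry_coronado.png -image_size 512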

CuDNN (optional)

CuDNN can potentially speed things up.

Download cuDNN (the cudnn-7.0-linux-x64-v3.0-prod.tgz tarball) from the NVIDIA developer site; this requires a free NVIDIA developer account.

Install via:

tar -xzvf cudnn-7.0-linux-x64-v3.0-prod.tgz
cd cuda/
sudo cp lib64/libcudnn* /usr/local/cuda-7.5/lib64/
sudo cp include/cudnn.h /usr/local/cuda-7.5/include
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-7.5/lib64/

Install the torch bindings for cuDNN:

luarocks install cudnn
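With the bindings installed, neural-style can be told to use cuDNN via its -backend flag (per the neural-style README):

$ th neural_style.lua -backend cudnn -style_image images/vangogh.jpg -content_image images/hotel_del_coronado.jpg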

References

  • Neural-Style INSTALL.md
  • ami-84c787ee — this AMI has everything pre-installed; however, it is installed on the host rather than under docker, due to time constraints.

Running the Sync Gateway Amazon AMI

How to run the Couchbase Sync Gateway AWS AMI

Kick off AWS instance

  • Browse to the Sync Gateway AMI in the AWS Marketplace
  • Click Continue
  • Change all ports to “MY IP” except for port 4984
  • Make sure you choose a key that you have locally

SSH in and start Sync Gateway

  • Go to the AWS console, find the EC2 instance, and find the instance public ip address. It should look like this: ec2-54-161-201-224.compute-1.amazonaws.com. The rest of the instructions will refer to this as <instance public ip>.
  • ssh ec2-user@<instance public ip> (this should let you in without prompting you for a password. If not, you chose a key at launch time that you don’t have locally)
  • Start the Sync Gateway with this command:
/opt/couchbase-sync-gateway/bin/sync_gateway -interface=0.0.0.0:4984 -url=http://localhost:8091 -bucket=sync_gateway -dbname=sync_gateway
  • You should see output like this:
2015-11-03T19:37:05.384Z ==== Couchbase Sync Gateway/1.1.0(28;86f028c) ====
2015-11-03T19:37:05.384Z Opening db /sync_gateway as bucket "sync_gateway", pool "default", server <http://localhost:8091>
2015-11-03T19:37:05.384Z Opening Couchbase database sync_gateway on <http://localhost:8091>
2015/11/03 19:37:05  Trying with selected node 0
2015/11/03 19:37:05  Trying with selected node 0
2015-11-03T19:37:05.536Z Using default sync function 'channel(doc.channels)' for database "sync_gateway"
2015-11-03T19:37:05.536Z     Reset guest user to config
2015-11-03T19:37:05.536Z Starting profile server on
2015-11-03T19:37:05.536Z Starting admin server on 127.0.0.1:4985
2015-11-03T19:37:05.550Z Starting server on localhost:4984 ...

Verify via curl

From your workstation:

$ curl http://<instance public ip>:4984/sync_gateway/

You should get a response like:

{"committed_update_seq":1,"compact_running":false,"db_name":"sync_gateway","disk_format_version":0,"instance_start_time":1446579479331843,"purge_seq":0,"update_seq":1}

Customize configuration

For more advanced Sync Gateway configuration, you will want to create a JSON config file on the EC2 instance itself and pass that to Sync Gateway when you launch it, or host your config JSON on the internet somewhere and pass Sync Gateway the URL to the file.
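As a rough sketch, the command line flags used above would translate into a config file along these lines. I’m writing this from memory, so double-check the field names against the Sync Gateway documentation before relying on it (the file path is just an example):

$ cat > /home/ec2-user/sync_gateway_config.json <<'EOF'
{
  "interface": "0.0.0.0:4984",
  "databases": {
    "sync_gateway": {
      "server": "http://localhost:8091",
      "bucket": "sync_gateway"
    }
  }
}
EOF
$ /opt/couchbase-sync-gateway/bin/sync_gateway /home/ec2-user/sync_gateway_config.json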

View Couchbase Server UI

In order to log in to the Couchbase Server UI, go to <instance public ip>:8091 and use:

  • Username: Administrator
  • Password: <aws instance id, eg: i-8a9f8335>
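If you don’t have the instance id handy, you can grab it from inside the instance via the standard EC2 metadata endpoint:

$ curl http://169.254.169.254/latest/meta-data/instance-id
i-8a9f8335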

Running Couchbase Server Under Joyent Triton

Joyent has recently announced their new Triton Docker container hosting service. There are several advantages of running Docker containers on Triton over a more traditional cloud hosting platform:

  • Better performance since there is no hardware level virtualization overhead. Your containers run on bare-metal.

  • Simplified networking between containers. Each container gets its own private (and optionally public) ip address.

  • Hosts are abstracted away — you just deploy into the “container cloud”, and don’t care which host your container is running on.

For more details, check out Bryan Cantrill’s talk about Docker and the Future of Containers in Production.

Let’s give it a spin with a “hello world” container, and then with a cluster of Couchbase servers.

Sign up for a Joyent account

Follow the signup instructions on the Joyent website

You will also need to add your SSH key to your account.

Install or Upgrade Docker

If you don’t have Docker installed already and you are on Ubuntu, run:

$ curl -sSL https://get.docker.com/ | sh

See install Docker on Ubuntu for more details.

Upgrade Docker client to 1.4.1 or later

Check your version of Docker with:

$ docker --version
Docker version 1.0.1, build 990021a

If you are on a version before 1.4.1 (like I was), you can upgrade Docker via the boot2docker installers.

Joyent + Docker setup

Get the sdc-docker repo (sdc == Smart Data Center):

$ git clone https://github.com/joyent/sdc-docker.git

Perform setup via:

$ cd sdc-docker
$ ./tools/sdc-docker-setup.sh -k 165.225.168.22 $ACCOUNT ~/.ssh/$PRIVATE_KEY_FILE

Replace values as follows:

  • $ACCOUNT: you can get this by logging into the Joyent web ui and going to the Account menu from the pulldown in the top-right corner. Find the Username field, and use that
  • $PRIVATE_KEY_FILE: the name of the file where your private key is stored, typically this will be id_rsa

Run the command and you should see the following output:

Setting up Docker client for SDC using:
    CloudAPI:        https://165.225.168.22
    Account:         <your username>
    Key:             /home/ubuntu/.ssh/id_rsa

[..snip..]

Wrote certificate files to /home/ubuntu/.sdc/docker/<username>

Docker service endpoint is: tcp://<generated ip>:2376

* * *
Success. Set your environment as follows:

    export DOCKER_CERT_PATH=/home/ubuntu/.sdc/docker/<username>
    export DOCKER_HOST=tcp://<generated-ip>:2376
    alias docker="docker --tls"

Then you should be able to run 'docker info' and see your account
name 'SDCAccount: <username>' in the output.

Export environment variables

As the output above suggests, copy and paste the commands from the output. Here’s an example of what that will look like (but you should copy and paste from your command output, not the snippet below):

$ export DOCKER_CERT_PATH=/home/ubuntu/.sdc/docker/<username>
$ export DOCKER_HOST=tcp://<generated-ip>:2376
$ alias docker="docker --tls"

Docker Hello World

Let’s spin up an Ubuntu docker image that says hello world.

Remember you’re running the Docker client on your workstation, not in the cloud. Here’s an overview of what’s going to happen:

diagram

To start the docker container:

$ docker run --rm ubuntu:14.04 echo "Hello Docker World, from Joyent"

You should see the following output:

Unable to find image 'ubuntu:14.04' locally
Pulling repository library/ubuntu
...
Hello Docker World, from Joyent

Also, since the --rm flag was passed, the container will have been removed after exiting. You can verify this by running docker ps -a. This is important because stopped containers incur charges on Joyent.

Congratulations! You’ve gotten a “hello world” Docker container running on Joyent.

Run Couchbase Server containers

Now it’s time to run Couchbase Server.

To kick off three Couchbase Server containers, run:

$ for i in `seq 1 3`; do \
      echo "Starting container $i"; \
      export container_$i=$(docker run --name couchbase-server-$i -d -P couchbase/server); \
  done

To confirm the containers are up, run:

$ docker ps

and you should see:

CONTAINER ID        IMAGE                                       COMMAND             CREATED             STATUS              PORTS               NAMES
5bea8901814c        couchbase/server   "couchbase-start"   3 minutes ago       Up 2 minutes                            couchbase-server-1
bef1f2f32726        couchbase/server   "couchbase-start"   2 minutes ago       Up 2 minutes                            couchbase-server-2
6f4e2a1e8e63        couchbase/server   "couchbase-start"   2 minutes ago       Up About a minute                       couchbase-server-3

At this point you will have environment variables defined with the container ids of each container. You can check this by running:

$ echo $container_1 && echo $container_2 && echo $container_3
21264e44d66b4004b4828b7ae408979e7f71924aadab435aa9de662024a37b0e
ff9fb4db7b304e769f694802e6a072656825aa2059604ba4ab4d579bd2e5d18d
0c6f8ca2951448e497d7e12026dcae4aeaf990ec51e047cf9d8b2cbdd9bd7668

Get public ip addresses of the containers

Each container will have two IP addresses assigned:

  • A public IP, accessible from anywhere
  • A private IP, only accessible from containers/machines in your Joyent account

To get the public IP, we can use the Docker client. (To get the private IP, you need to use the Joyent SmartDataCenter tools, described below.)

$ container_1_ip=`docker inspect $container_1 | grep -i IPAddress | awk -F: '{print $2}' |  grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`
$ container_2_ip=`docker inspect $container_2 | grep -i IPAddress | awk -F: '{print $2}' |  grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`
$ container_3_ip=`docker inspect $container_3 | grep -i IPAddress | awk -F: '{print $2}' |  grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`

You will now have the public IP addresses of each container defined in environment variables. You can check that it worked via:

$ echo $container_1_ip && echo $container_2_ip && echo $container_3_ip
165.225.185.11
165.225.185.12
165.225.185.13

Connect to Couchbase Web UI

Open your browser to $container_1_ip:8091 and you should see:

Couchbase Welcome Screen

At this point, it’s possible to setup the cluster by going to each Couchbase node’s Web UI and following the Setup Wizard. However, in case you want to automate this in the future, let’s do this over the command line instead.

Setup first Couchbase node

Let’s arbitrarily pick container_1 as the first node in the cluster. This node is special in the sense that other nodes will join it.

This command will do the following:

  • Set the Administrator’s username and password to Administrator / password (you should change this)
  • Set the cluster RAM size to 600 MB

Note: the -u admin -p password should be left as-is, since that is just passing in the default admin name and password for auth purposes.

$ docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
cluster-init -c $container_1_ip \
--cluster-init-username=Administrator \
--cluster-init-password=password \
--cluster-init-ramsize=600 \
-u admin -p password

You should see a response like:

SUCCESS: init 165.225.185.11

Create a default bucket

A bucket is equivalent to a database in typical RDBMS systems.

$ docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
bucket-create -c $container_1_ip:8091 \
--bucket=default \
--bucket-type=couchbase \
--bucket-port=11211 \
--bucket-ramsize=600 \
--bucket-replica=1 \
-u Administrator -p password

You should see:

SUCCESS: bucket-create

Add 2nd Couchbase node

Add the second Couchbase node with this command:

$ docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
server-add -c $container_1_ip \
-u Administrator -p password \
--server-add $container_2_ip \
--server-add-username Administrator \
--server-add-password password 

You should see:

SUCCESS: server-add 165.225.185.12:8091

To verify it was added, run:

$ docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
server-list -c $container_1_ip \
-u Administrator -p password

which should return the list of Couchbase Server nodes that are now part of the cluster:

ns_1@165.225.185.11 165.225.185.11:8091 healthy active
ns_1@165.225.185.12 165.225.185.12:8091 healthy inactiveAdded

Add 3rd Couchbase node and rebalance

In this step we will:

  • Add the 3rd Couchbase node
  • Trigger a “rebalance”, which distributes the (empty) bucket’s data across the cluster
$ docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
rebalance -c $container_1_ip \
-u Administrator -p password \
--server-add $container_3_ip \
--server-add-username Administrator \
--server-add-password password 

You should see:

INFO: rebalancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
SUCCESS: rebalanced cluster
close failed in file object destructor:
Error in sys.excepthook:

Original exception was:

If you see SUCCESS, then it worked. (I’m not sure why the “close failed in file ..” error is happening, but so far it appears that it can be safely ignored.)

Login to Web UI

Open your browser to $container_1_ip:8091 and you should see:

Couchbase Login Screen

Login with:

  • Username: Administrator
  • Password: password

And you should see:

Couchbase Nodes

Congratulations! You have a Couchbase Server cluster up and running on Joyent Triton.

Teardown

To stop and remove your Couchbase server containers, run:

$ docker stop $container_1 $container_2 $container_3
$ docker rm $container_1 $container_2 $container_3

To double check that you no longer have any containers running or in the stopped state, run docker ps -a and you should see an empty list.

Installing the SDC tools (optional)

Installing the Joyent Smart Data Center (SDC) tools will allow you to gain more visibility into your container cluster — for example, being able to view the internal IP of each container.

Here’s how to install the sdc-tools suite.

Install smartdc

First install NodeJS + NPM

Install smartdc:

npm install -g smartdc

Configure environment variables

$ export SDC_URL=https://us-east-3b.api.joyent.com
$ export SDC_ACCOUNT=<ACCOUNT>
$ export SDC_KEY_ID=$(ssh-keygen -l -f $HOME/.ssh/id_rsa.pub | awk '{print $2}')

Replace values as follows:

  • ACCOUNT: you can get this by logging into the Joyent web ui and going to the Account menu from the pulldown in the top-right corner. Find the Username field, and use that

List machines

Run sdc-listmachines to list all the containers running under your Joyent account. Your output should look something like this:

$ sdc-listmachines
[
{
    "id": "0c6f8ca2-9514-48e4-97d7-e12026dcae4a",
    "name": "couchbase-server-3",
    "type": "smartmachine",
    "state": "running",
    "image": "335a8046-0749-1174-5666-6f084472b5ef",
    "ips": [
      "192.168.128.32",
      "165.225.185.13"
    ],
    "memory": 1024,
    "disk": 25600,
    "metadata": {},
    "tags": {},
    "created": "2015-03-26T14:50:31.196Z",
    "updated": "2015-03-26T14:50:45.000Z",
    "networks": [
      "7cfe29d4-e313-4c3b-a967-a28ea34342e9",
      "178967cb-8d11-4f53-8434-9c91ff819a0d"
    ],
    "dataset": "335a8046-0749-1174-5666-6f084472b5ef",
    "primaryIp": "165.225.185.13",
    "firewall_enabled": false,
    "compute_node": "44454c4c-4400-1046-8050-b5c04f383432",
    "package": "t4-standard-1G"
  },
]

Find private IP of an individual machine

$ sdc-getmachine <machine_id> | json -aH ips | json -aH | egrep "10\.|192\."


Setting Up Octopress Under Docker

I got a new computer last week. It’s the latest MacBook retina, and I needed to refresh because I wanted a bigger SSD drive (and after having an SSD drive, I’ll never go back).

Anyway, I’m trying to get my Octopress blog going again, and oh my God, what a nightmare. Octopress was working beautifully for me for years, and then all of the sudden I am at the edge of Ruby Dependency Hell staring at an Octopress giving me eight fingers.

With the help of Docker, I’ve managed to tame this eight legged beast, barely.

Run Docker

See Installing Docker for instructions.

This blog post assumes you already have an Octopress git repo. If you are starting from scratch, then check out Octopress Setup Part I to become even more confused.

Install Octopress Docker image

$ docker run -ti tleyden5iwx/octopress /bin/bash

After this point, the rest of the instructions assume that you are executing commands from inside the Docker Container.

Delete Octopress dir + clone your Octopress repo

The Docker container will contain an Octopress directory, but it’s not needed.

From within the container:

$ cd /root
$ rm -rf octopress/
$ git clone https://github.com/your-github-username/your-github-username.github.io.git octopress
$ cd octopress/

Now, switch to the source branch (which contains the content)

$ git checkout source

Re-install dependencies:

$ bundle install

Prevent ASCII encoding errors:

$ export LC_ALL=C.UTF-8

Clone deploy directory

$ git clone https://github.com/your-github-username/your-github-username.github.io.git _deploy

Rake preview

As a smoke test, run:

$ bundle exec rake preview

NOTE: I have no idea why bundle exec is required here; I just used this in response to a previous error message and its accompanying suggestion.

If this gives no errors, that’s a good sign.

Create a new blog post

$ bundle exec rake new_post["Setting up Octopress under Docker"]

It will tell you the path to the blog post. Now open the file in your favorite editor and add content.

Push to Source branch

The source branch has the source markdown content. It’s actually the most important thing to preserve, because the HTML can always be regenerated from it.

$ git push origin source

Deploy to Master branch

The master branch contains the rendered HTML content. Here’s how to push it up to your github pages repo (remember, in an earlier step you cloned your github pages repo at https://github.com/your-github-username/your-github-username.github.io.git)

$ bundle exec rake generate && bundle exec rake deploy

After the above command, the changes should be visible on your github pages blog (eg, your-username.github.io)

Common errors

If you get:

YAML Exception reading 2014-04-09-a-successful-git-branching-model-with-enterprise-support.markdown: invalid byte sequence in US-ASCII

Run:

$ export LC_ALL=C.UTF-8


Test Coverage for Go With drone.io and coveralls.io

This will walk you through setting up a test coverage report on coveralls.io which will be updated every time a new build happens on drone.io (a continuous integration server similar to TravisCI).

I’m going to use the couchbaselabs/sg-replicate repo as an example, since it currently does not have any test coverage statistics. The goal is to end up with a badge in the README that points to a test coverage report hosted on coveralls.io.

Clone the repo

$ git clone https://github.com/couchbaselabs/sg-replicate.git
$ cd sg-replicate

Test coverage command line stats

$ go test -cover
go tool: no such tool "cover"; to install:
  go get golang.org/x/tools/cmd/cover

Try again:

$ go get golang.org/x/tools/cmd/cover && go test -cover
PASS
coverage: 69.4% of statements
ok    github.com/couchbaselabs/sg-replicate   0.156s

Ouch, 69.4% is barely a C-. (if you round up!)

Coverage breakdown

Text report:

$ go test -coverprofile=coverage.out 
$ go tool cover -func=coverage.out
github.com/couchbaselabs/sg-replicate/attachment.go:15:           NewAttachment           84.6%
github.com/couchbaselabs/sg-replicate/changes_feed_parameters.go:20:  NewChangesFeedParams        100.0%
github.com/couchbaselabs/sg-replicate/changes_feed_parameters.go:30:  FeedType            100.0%
github.com/couchbaselabs/sg-replicate/changes_feed_parameters.go:34:  Limit               100.0%

HTML report:

$ go test -coverprofile=coverage.out 
$ go tool cover -html=coverage.out

This should open up the following report in your default browser:

html report

Coveralls.io setup

  • Login to coveralls.io
  • Create a new repo
  • Get the repo token from the SET UP COVERALLS section

At this point, your empty coveralls repo will look something like this:

empty coveralls repo

Configure Drone.io + Goveralls

If you have not already done so, setup a drone.io build for your repo.

On the drone.io Settings page, make the following changes:

Environment Variables

In the Environment Variables section of the web ui, add:

COVERALLS_TOKEN=<coveralls_repo_token>

Commands

In the commands section, you can replace your existing go test call with:

go get github.com/axw/gocov/gocov
go get github.com/mattn/goveralls
goveralls -service drone.io -repotoken $COVERALLS_TOKEN

Here’s what it should look like:

drone io ui

Kick off a build

Go to the drone.io project page for your repo, and hit Build Now

At the bottom of the build output, you should see:

Job #1.1
https://coveralls.io/jobs/5189501

If you follow the link, you should see something like:

coveralls report

Looks like we just went from a C- to a B! I have no idea why the coverage improved, but I’ll take it.

Add a badge, call it a day

On the coveralls.io project page for your repo, you should see a button near the top called Badge URLS. Click and copy/paste the markdown, which should look something like this:

[![Coverage Status](https://coveralls.io/repos/couchbaselabs/sg-replicate/badge.svg?branch=master)](https://coveralls.io/r/couchbaselabs/sg-replicate?branch=master)

And add it to your project’s README.

badges
