Seven Story Rabbit Hole

Sometimes awesome things happen in deep rabbit holes. Or not.


Goroutines vs Threads

Here are some of the advantages of Goroutines over threads:

  • You can run more goroutines on a typical system than you can threads.
  • Goroutines have growable segmented stacks.
  • Goroutines have a faster startup time than threads.
  • Goroutines come with built-in primitives to communicate safely between themselves (channels).
  • Goroutines allow you to avoid having to resort to mutex locking when sharing data structures.
  • Goroutines are multiplexed onto a small number of OS threads, rather than a 1:1 mapping.
  • You can write massively concurrent servers without having to resort to evented programming.

You can run more of them

On Java you can run thousands or tens of thousands of threads. On Go you can run hundreds of thousands or millions of goroutines.

Java threads map directly to OS threads, and are relatively heavyweight. Part of the reason they are heavyweight is their rather large fixed stack size. This caps the number of them you can run in a single VM due to the increasing memory overhead.

Go OTOH has a segmented stack that grows as needed. Goroutines are “green threads”, which means the Go runtime does the scheduling, not the OS. The runtime multiplexes the goroutines onto real OS threads, the number of which is controlled by GOMAXPROCS. Typically you’ll want to set this to the number of cores on your system, to maximize potential parallelism.

They let you avoid locking hell

One of the biggest drawbacks of threaded programming is the complexity and brittleness of many codebases that use threads to achieve high concurrency. There can be latent deadlocks and race conditions, and it can become near impossible to reason about the code.

Go OTOH gives you primitives that allow you to avoid locking completely. The mantra is don’t communicate by sharing memory, share memory by communicating. In other words, if two goroutines need to share data, they can do so safely over a channel. Go handles all of the synchronization for you, and it’s much harder to run into things like deadlocks.

No callback spaghetti, either

There are other approaches to achieving high concurrency with a small number of threads. Python Twisted was one of the early ones that got a lot of attention. Node.js is currently the most prominent evented framework out there.

The problem with these evented frameworks is that the code complexity is also high, and difficult to reason about. Rather than “straightline” coding, the programmer is forced to chain callbacks, which gets interleaved with error handling. While refactoring can help tame some of the mental load, it’s still an issue.

Running Caffe on AWS GPU Instance via Docker

This is a tutorial to help you get the Caffe deep learning framework up and running on a GPU-powered AWS instance running inside a Docker container.

Architecture

architecture diagram

Setup host

Before you can start your docker container, you will need to go deeper down the rabbit hole.

You’ll first need to complete the steps here:

Setting up an Ubuntu 14.04 box running on a GPU-enabled AWS instance

After you’re done, you’ll end up with a host OS with the following properties:

  • A GPU enabled AWS instance running Ubuntu 14.04
  • Nvidia kernel module
  • Nvidia device drivers
  • CUDA 6.5 installed and verified

Install Docker

Once your host OS is set up, you’re ready to install docker (version 1.3 at the time of this writing).

Setup the key for the docker repo:

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9

Add the docker repo:

$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update

Install docker:

$ sudo apt-get install lxc-docker

Run the docker container

Find your nvidia devices

$ ls -la /dev | grep nvidia

You should see:

crw-rw-rw-  1 root root    195,   0 Oct 25 19:37 nvidia0
crw-rw-rw-  1 root root    195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw-  1 root root    251,   0 Oct 25 19:37 nvidia-uvm

You’ll have to adapt the DOCKER_NVIDIA_DEVICES variable below to match your particular devices.

Here’s how to start the docker container:

$ DOCKER_NVIDIA_DEVICES="--device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm"
$ sudo docker run -ti $DOCKER_NVIDIA_DEVICES tleyden5iwx/caffe-gpu /bin/bash

It’s a large docker image, so this might take a few minutes, depending on your network connection.

Run caffe test suite

After the above docker run command completes, your shell will now be inside a docker container that has Caffe installed.

You’ll want to run the Caffe test suite and make sure it passes. This will validate your environment, including your GPU drivers.

$ cd /opt/caffe
$ make test && make runtest

Expected Result: ... [ PASSED ] 838 tests.

Run the MNIST LeNet example

A more comprehensive way to verify your environment is to train the MNIST LeNet example:

$ cd /opt/caffe/data/mnist
$ ./get_mnist.sh
$ cd /opt/caffe
$ ./examples/mnist/create_mnist.sh
$ ./examples/mnist/train_lenet.sh

This will take a few minutes.

Expected output:

libdc1394 error: Failed to initialize libdc1394 
I1018 17:02:23.552733    66 caffe.cpp:90] Starting Optimization 
I1018 17:02:23.553583    66 solver.cpp:32] Initializing solver from parameters:
... lots of output ...
I1018 17:17:58.684598    66 caffe.cpp:102] Optimization Done.

Congratulations, you’ve got GPU-powered Caffe running in a docker container — celebrate with a cup of Philz!


Docker on AWS GPU Ubuntu 14.04 / CUDA 6.5

Architecture

After going through the steps in this blog post, you’ll end up with this:

architecture diagram

Setup host

Before you can start your docker container, you will need to go deeper down the rabbit hole.

You’ll first need to complete the steps here:

Setting up an Ubuntu 14.04 box running on a GPU-enabled AWS instance

After you’re done, you’ll end up with a host OS with the following properties:

  • A GPU enabled AWS instance running Ubuntu 14.04
  • Nvidia kernel module
  • Nvidia device drivers
  • CUDA 6.5 installed and verified

Install Docker

Once your host OS is set up, you’re ready to install docker (version 1.3 at the time of this writing).

Setup the key for the docker repo:

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9

Add the docker repo:

$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update

Install docker:

$ sudo apt-get install lxc-docker

Run GPU enabled docker image

Find all your nvidia devices

$ ls -la /dev | grep nvidia

You should see:

crw-rw-rw-  1 root root    195,   0 Oct 25 19:37 nvidia0
crw-rw-rw-  1 root root    195, 255 Oct 25 19:37 nvidiactl
crw-rw-rw-  1 root root    251,   0 Oct 25 19:37 nvidia-uvm

Launch docker container

The easiest way to get going is to use this pre-built docker image that has the CUDA drivers pre-installed. Or, if you want to build your own, the accompanying Dockerfile will be a useful starting point.

You’ll have to adapt the DOCKER_NVIDIA_DEVICES variable below to match your particular devices.

To start the docker container, run:

$ DOCKER_NVIDIA_DEVICES="--device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm"
$ sudo docker run -ti $DOCKER_NVIDIA_DEVICES tleyden5iwx/ubuntu-cuda /bin/bash

After running the above command, you should be at a shell inside your docker container:

root@1149788c731c:# 

Verify CUDA access from inside the docker container

Install CUDA samples

$ cd /opt/nvidia_installers
$ ./cuda-samples-linux-6.5.14-18745345.run -noprompt -cudaprefix=/usr/local/cuda-6.5/

Build deviceQuery sample

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery   

You should see the following output

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520
Result = PASS


CUDA 6.5 on AWS GPU Instance Running Ubuntu 14.04

Using a pre-built public AMI

Based on the instructions in this blog post, I’ve created an AMI and shared it publicly. So the easiest thing to do is just use that pre-built AMI:

  • Image: ami-2cbf3e44 (Ubuntu Server 14.04 LTS (HVM) – CUDA 6.5)
  • Instance type: g2.2xlarge
  • Storage: Use at least 8 GB, 20+ GB recommended

If you use the pre-built AMI, then you can skip the rest of this article, since all of these steps are “baked in” to the AMI.

Building from scratch

Or if you prefer to build your own instance from scratch, keep reading.

Create a new EC2 instance:

  • Image: ami-9eaa1cf6 (Ubuntu Server 14.04 LTS (HVM), SSD Volume Type)
  • Instance type: g2.2xlarge
  • Storage: Use at least 8 GB, 20+ GB recommended

Install build-essential:

$ apt-get update && apt-get install build-essential

Get CUDA installer:

$ wget http://developer.download.nvidia.com/compute/cuda/6_5/rel/installers/cuda_6.5.14_linux_64.run

Extract CUDA installer:

$ chmod +x cuda_6.5.14_linux_64.run
$ mkdir nvidia_installers
$ ./cuda_6.5.14_linux_64.run -extract=`pwd`/nvidia_installers

Run Nvidia driver installer:

$ cd nvidia_installers
$ ./NVIDIA-Linux-x86_64-340.29.run

At this point it will pop up an 8-bit UI that will ask you to accept a license agreement, and then start installing.

screenshot

At this point, I got an error:

Unable to load the kernel module 'nvidia.ko'.  This happens most frequently when this kernel module was built against the wrong or
         improperly configured kernel sources, with a version of gcc that differs from the one used to build the target kernel, or if a driver
         such as rivafb, nvidiafb, or nouveau is present and prevents the NVIDIA kernel module from obtaining ownership of the NVIDIA graphics
         device(s), or no NVIDIA GPU installed in this system is supported by this NVIDIA Linux graphics driver release.

         Please see the log entries 'Kernel module load error' and 'Kernel messages' at the end of the file '/var/log/nvidia-installer.log'
         for more information.

After reading this forum post I installed:

$ sudo apt-get install linux-image-extra-virtual

When it prompted me what to do about the grub changes, I chose “choose package maintainers version”.

Reboot:

$ reboot

Disable nouveau

At this point you need to disable nouveau, since it conflicts with the nvidia kernel module.

Open a new file

$ vi /etc/modprobe.d/blacklist-nouveau.conf

and add these lines to it

blacklist nouveau
blacklist lbm-nouveau
options nouveau modeset=0
alias nouveau off
alias lbm-nouveau off

and then save the file.

Disable nouveau kernel modesetting:

$ echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf

Regenerate the initramfs and reboot:

$ update-initramfs -u
$ reboot

One more try — this time it works

Get Kernel source:

$ apt-get install linux-source
$ apt-get install linux-headers-3.13.0-37-generic

Rerun Nvidia driver installer:

$ cd nvidia_installers
$ ./NVIDIA-Linux-x86_64-340.29.run

Load nvidia kernel module:

$ modprobe nvidia

Run CUDA + samples installer:

$ ./cuda-linux64-rel-6.5.14-18749181.run
$ ./cuda-samples-linux-6.5.14-18745345.run

Verify CUDA is correctly installed

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery   

You should see the following output:

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GRID K520
Result = PASS


Debugging Into Android Source

Debugging into the core Android source code can be useful. Here’s how to do it in Android Studio 0.8.2.

Starting out, if we hit a breakpoint where we have a sqlite database object:

screenshot

And if you step in, you get this, which isn’t very useful:

screenshot

To fix that, go to Android SDK, find the API level you are using, and check the Sources for Android SDK box.

screenshot

You must restart Android Studio at this point

Did you restart Android Studio? Now re-run your app in the debugger, and when you try to step into the database.execSQL() method, you should see this:

screenshot

It worked! Now you can debug into any Android code.

Running Couchbase Sync Gateway on Google Compute Engine

First, a quick refresher on what Couchbase Sync Gateway actually is.

So here’s a birds-eye-view of the Couchbase Mobile architecture:

diagram

Sync Gateway allows Couchbase Lite mobile apps to sync data between each other and the Couchbase Server running on the backend.

This blog post will walk you through how to run Sync Gateway in a Docker container on Google Compute Engine.

Create GCE instance and ssh in

Follow the instructions on Running Docker on Google Compute Engine.

At this point you should be ssh’d into your GCE instance.

Create a configuration JSON

Here’s a sample JSON configuration for Sync Gateway which uses walrus as its backing store, rather than Couchbase Server. Later we will swap in Couchbase Server as a backing store.

Run Sync Gateway docker container

gce:~$ sudo docker run -d --name sg -p 4984:4984 -p 4985:4985 tleyden5iwx/couchbase-sync-gateway sync_gateway "https://gist.githubusercontent.com/tleyden/d97d985eb1e0725e858e/raw"

This will return a container id, eg 8ffb83fd1f.

Check the logs to make sure there are no serious errors:

gce:~$ sudo docker logs 8ffb83fd1f

You should see something along the lines of:

02:23:58.905587 Enabling logging: [REST]
02:23:58.905818 ==== Couchbase Sync Gateway/1.00 (unofficial) ====
02:23:58.905863 Opening db /sync_gateway as bucket "sync_gateway", pool "default", server <walrus:/opt/sync_gateway/data>
02:23:58.905964 Opening Walrus database sync_gateway on <walrus:/opt/sync_gateway/data>
02:23:58.909659 Starting admin server on :4985
02:23:58.913260 Starting server on :4984 ...

Expose API port 4984 via Firewall rule

On your workstation with the gcloud tool installed, run:

$ gcloud compute firewall-rules create sg-4984 --allow tcp:4984

Verify that it’s running

Find out external ip address of instance

On your workstation with the gcloud tool installed, run:

$ gcloud compute instances list
name     status  zone          machineType internalIP   externalIP
couchbse RUNNING us-central1-a f1-micro    10.240.74.44 142.222.178.49

Your external ip is listed under the externalIP column, eg 142.222.178.49 in this example.

Run curl request

On your workstation, replace the ip below with your own ip, and run:

$ curl http://142.222.178.49:4984

You should get a response like:

{"couchdb":"Welcome","vendor":{"name":"Couchbase Sync Gateway","version":1},"version":"Couchbase Sync Gateway/1.00 (unofficial)"}

Re-run it with Couchbase Server backing store

OK, so we’ve gotten it working with walrus. But have you looked at the walrus website lately? One click and it’s pretty obvious that this thing is not exactly meant to be a scalable production ready backend, nor has it ever claimed to be.

Let’s dump walrus for now and use Couchbase Server from this point onwards.

Start Couchbase Server

Before moving on, you will need to go through the instructions in Running Couchbase Server on GCE in order to get a Couchbase Server instance running.

Stop Sync Gateway

Run this command to stop the Sync Gateway container and completely remove it, using the same container id you used earlier:

gce:~$ sudo docker stop 8ffb83fd1f && sudo docker rm 8ffb83fd1f

Update config

Copy this example JSON configuration, which expects a Couchbase Server running on http://172.17.0.2:8091, and update it with the ip address of the docker instance where your Couchbase Server is running. To get this ip address, follow these instructions in the “Find the Docker instance IP address” section.

Now upload your modified JSON configuration to a website that is publicly accessible, for example in a Github Gist.

Run Sync Gateway

Run Sync Gateway again, this time using Couchbase Server as a backing store.

Replace http://yourserver.co/yourconfig.json with the URL where you’ve uploaded your JSON configuration from the previous step.

gce:~$ sudo docker run -d --name sg -p 4984:4984 -p 4985:4985 tleyden5iwx/couchbase-sync-gateway sync_gateway "http://yourserver.co/yourconfig.json"

This will return a container id, eg 9ffb83fd1f. Again, check the logs to make sure there are no serious errors:

gce:~$ sudo docker logs 9ffb83fd1f

You should see something along the lines of:

... 
02:23:58.913260 Starting server on :4984 ...

with no errors.

Verify it’s working

Save a document via curl

The easiest way to add a document is via the Admin port, since there is no authentication to worry about. Since we haven’t added a firewall rule to expose the admin port (4985), (and doing so without tight filtering would be a major security hole), the following command to create a new document must be run on the GCE instance.

gce:~$ curl -H "Content-Type: application/json" -d '{"such":"json"}' http://localhost:4985/sync_gateway/

If it worked, you should see a response like:

{"id":"3cbfbe43e76b7eb5c4c221a78b2cf0cc","ok":true,"rev":"1-cd809becc169215072fd567eebd8b8de"}

View document on Couchbase Server

To verify the document was successfully stored on Couchbase Server, you’ll need to login to the Couchbase Server Web Admin UI. There are instructions here on how to do that.

From there, navigate to Data Buckets / default / Documents, and you should see:

screenshot

Click on the document that has a UUID (eg, “29f8d7..” in the screenshot above), and you should see the document’s contents:

screenshot

The _sync metadata field is used internally by the Sync Gateway and can be ignored. The actual doc contents are towards the end of the file: .."such":"json"}

Running a Couchbase Cluster on Google Compute Engine

The easiest way to run a Couchbase cluster on Google Compute Engine is to run all of the nodes in Docker containers.

Create GCE instance and ssh in

Follow the instructions on Running Docker on Google Compute Engine.

At this point you should be ssh’d into your GCE instance.

Increase max number of files limit

If you try to run Couchbase Server at this point, you will get this warning because the file ulimit is too low.

Here’s how to fix it:

  • Edit /etc/default/docker
  • Add a new line in the file with:
ulimit -n 262144
  • Restart the GCE instance in the GCE web admin by going to Compute Engine / VM Instances / and hitting the “Reboot” button.

Note: in theory it should be possible to just restart docker via sudo service docker restart; however, this didn’t work for me when I tried it, so I ended up restarting the whole GCE instance.

Start Couchbase Server

gce:~$ sudo docker run -d --name cb1 -p 8091:8091 -p 8092:8092 -p 11210:11210 -p 11211:11211 ncolomer/couchbase

Verify it’s running

Find the Docker instance IP address

On the GCE instance, run:

gce:~$ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' cb1

This should return an ip address, eg 172.17.0.2

Set it as an environment variable so we can use it in later steps:

gce:~$ export CB1_IP=172.17.0.2

Run couchbase-cli

To verify that couchbase server is running, use the couchbase-cli to ask for server info:

gce:~$ sudo docker run --rm ncolomer/couchbase couchbase-cli server-info -c ${CB1_IP} -u Administrator -p couchbase

If everything is working correctly, this should return a json response, eg:

{
  "availableStorage": {
    "hdd": [
      {
        "path": "/",
  ...
}

Start a 3-node cluster

On the GCE instance, run the following commands:

gce:~$ sudo docker run -d --name cb2 ncolomer/couchbase couchbase-start ${CB1_IP}
gce:~$ sudo docker run -d --name cb3 ncolomer/couchbase couchbase-start ${CB1_IP}

The nodes cb2 and cb3 will automatically join the cluster via cb1. The cluster needs a rebalance to be fully operational. To do so, run the following command:

gce:~$ sudo docker run --rm ncolomer/couchbase couchbase-cli rebalance -c ${CB1_IP} -u Administrator -p couchbase

Connect to admin web UI

The easiest way to manage a Couchbase Server cluster is via the built-in Web Admin UI.

In order to access it, we will need to make some network changes.

Expose port 8091 via firewall rule for your machine

Go to whatismyip.com or equivalent, and find your ip address. Eg, 67.161.66.7

On your workstation with the gcloud tool installed, run:

$ gcloud compute firewall-rules create cb-8091 --allow tcp:8091 --source-ranges 67.161.66.7/32

This will allow your machine, as well any other machine behind your internet router, to connect to the Couchbase Web UI running on GCE.

To increase security, you should use ipv6 and pass your workstation’s ipv6 hostname in the --source-ranges parameter.

Find out external ip address of instance

On your workstation with the gcloud tool installed, run:

$ gcloud compute instances list
name     status  zone          machineType internalIP   externalIP
couchbse RUNNING us-central1-a f1-micro    10.240.74.44 142.222.178.49

Your external ip is listed under the externalIP column, eg 142.222.178.49 in this example.

Go to admin in web browser

Go to http://142.222.178.49:8091 into your web browser (replacing w/ your external ip)

You should see a screen like this:

screenshot

Login with the default credentials:

  • Username: Administrator
  • Password: couchbase

And you should see the Web Admin dashboard:

screenshot

Increase default bucket size

The default bucket size is set to a very low number (128M in my case). To increase this:

  • In Web Admin UI, go to Data Buckets / Default / Edit
  • Change Per Node RAM Quota to 1024 MB
  • Hit “Save” button


Configure Emacs as a Go Editor From Scratch Part 2

This is a continuation of Part 1, so if you haven’t read that already, you should do so now.

goimports

The idea of goimports is that every time you save a file, it will automatically update all of your imports, so you don’t have to. This can save a lot of time. Kudos to @bradfitz for taking the time to build this nifty tool.

Since this project is hosted on Google Code’s mercurial repository, if you don’t have mercurial installed already, you’ll first need to install it with:

$ brew install hg

Next, go get goimports with:

$ go get code.google.com/p/go.tools/cmd/goimports

Continuing on previous .emacs in Part 1, update your .emacs to:

(defun my-go-mode-hook ()
  ; Use goimports instead of go-fmt
  (setq gofmt-command "goimports")
  ; Call Gofmt before saving
  (add-hook 'before-save-hook 'gofmt-before-save)
  ; Customize compile command to run go build
  (if (not (string-match "go" compile-command))
      (set (make-local-variable 'compile-command)
           "go build -v && go test -v && go vet"))
  ; Godef jump key binding
  (local-set-key (kbd "M-.") 'godef-jump))
(add-hook 'go-mode-hook 'my-go-mode-hook)

Restart emacs to force it to reload the configuration.

Testing out goimports

  • Open an existing .go file that contains imports
  • Remove one or more of the imports
  • Save the file

After you save the file, it should re-add the imports. Yay!

Basically any time you add or remove code that requires a different set of imports, saving the file will cause it to re-write the file with the correct imports.

The Go Oracle

The Go Oracle will blow your mind! It can do things like find all the callers of a given function/method. It can also show you all the functions that read or write from a given channel. In short, it rocks.

Here’s what you need to do in order to wield this powerful tool from within Emacs.

Go get oracle

go get code.google.com/p/go.tools/cmd/oracle

Move oracle binary so Emacs can find it

sudo mv $GOPATH/bin/oracle $GOROOT/bin/

Update .emacs

; Go Oracle
(load-file "$GOPATH/src/code.google.com/p/go.tools/cmd/oracle/oracle.el")

(defun my-go-mode-hook ()
  ; Use goimports instead of go-fmt
  (setq gofmt-command "goimports")
  ; Call Gofmt before saving
  (add-hook 'before-save-hook 'gofmt-before-save)
  ; Customize compile command to run go build
  (if (not (string-match "go" compile-command))
      (set (make-local-variable 'compile-command)
           "go build -v && go test -v && go vet"))
  ; Godef jump key binding
  (local-set-key (kbd "M-.") 'godef-jump))
  ; Go Oracle
  (go-oracle-mode)
(add-hook 'go-mode-hook 'my-go-mode-hook)

Restart Emacs to make these changes take effect.

Get a test package to play with

This package works with go-oracle (I tested it out while writing this blog post), so you should use it to give Go Oracle a spin:

go get github.com/tleyden/checkers-bot-minimax

Set the oracle analysis scope

From within emacs, open $GOPATH/src/github.com/tleyden/checkers-bot-minimax/thinker.go

You need to tell Go Oracle the main package scope under which you want it to operate:

M-x go-oracle-set-scope

it will prompt you with:

Go oracle scope:

and you should enter:

github.com/tleyden/checkers-bot-minimax and hit Enter.

Nothing will appear to happen, but Go Oracle is now ready to show its magic. (Note: it will not autocomplete packages in this dialog, which is mildly annoying. Make sure to spell them correctly.)

Important: When you call go-oracle-set-scope, you always need to give it a main package. This is something that will probably frequently trip you up while using Go Oracle.

Use oracle to find the callers of a method

You should still have the $GOPATH/src/github.com/tleyden/checkers-bot-minimax/thinker.go file open within emacs.

Position the cursor on the “T” in the Think method (line 13 of thinker.go):

screenshot

And then run

M-x go-oracle-callers

Emacs should open a new buffer on the right hand side with all of the places where the Think method is called. In this case, there is only one place in the code that calls it:

screenshot

To go to the call site, position your cursor on the red underscore to the left of “dynamic method call” and hit Enter. It should take you to line 240 in gamecontroller.go:

screenshot

Note that it actually crossed package boundaries, since the called function (Think) was in the main package, while the call site was in the checkersbot package.

If you got this far, you are up and running with The Go Oracle on Emacs!

Now you should try it with one of your own packages.

This is just scratching the surface — to get more information on how to use Go Oracle, see go oracle: user manual.

Configure Emacs as a Go Editor From Scratch

This explains the steps to get a productive Emacs environment for Go programming on OSX, starting from scratch.

Install Emacs

I recommend using the emacs from emacsformacosx.com.

It has a GUI installer so I won’t say much more about it.

Install Go

export GOROOT=/usr/local/go
export GOPATH=~/Development/gocode
export PATH=$PATH:$GOROOT/bin

Configure go-mode

Go-mode is an Emacs major mode for editing Go code. An absolute must for anyone writing Go w/ Emacs.

The following is a brief summary of Dominik Honnef’s instructions

  • mkdir -p ~/Misc/emacs && cd ~/Misc/emacs
  • git clone git@github.com:dominikh/go-mode.el.git
  • From within Emacs, run M-x update-file-autoloads, point it at the go-mode.el file in the cloned directory.
  • Emacs will prompt you for a result file, and you should enter go-mode-load.el
  • Add these two lines to your ~/.emacs
(add-to-list 'load-path "~/Misc/emacs/go-mode.el/")
(require 'go-mode-load)

Restart Emacs and open a .go file; you should see the mode as “Go” rather than “Fundamental”.

For a full description of what go-mode can do for you, see Dominik Honnef’s blog, but one really useful thing to be aware of is that you can quickly import packages via C-c C-a

Update Emacs config for godoc

It’s really useful to be able to pull up 3rd party or standard library docs from within Emacs using the godoc tool.

Unfortunately, it was necessary to duplicate the $PATH and $GOPATH environment variables in the .emacs file, so that the GUI Emacs app can see it. @eentzel tweeted me a blog post that explains how to deal with this, and I will update this blog post to reflect that at some point.

NOTE: you will need to modify the snippet below to reflect the $PATH and $GOPATH variables, don’t just blindly copy and paste these.

  • Add your $PATH and $GOPATH to your ~/.emacs
(setenv "PATH" "/Users/tleyden/.rbenv/shims:/Users/tleyden/.rbenv/shims:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/go/bin")
(setenv "GOPATH" "/Users/tleyden/Development/gocode")

After doing this step, you should be able to run M-x godoc and it should be able to autocomplete paths of packages. (of course, you may want to go get some packages first if you don’t have any)

Automatically call gofmt on save

gofmt reformats code into the One True Go Style Coding Standard. You’ll want to call it every time you save a file.

Add these to your ~/.emacs:

(setq exec-path (cons "/usr/local/go/bin" exec-path))
(add-to-list 'exec-path "/Users/tleyden/Development/gocode/bin")
(add-hook 'before-save-hook 'gofmt-before-save)

After this step, whenever you save a Go file, it will automatically reformat the file with gofmt.

Godef

Godef is essential: it lets you quickly jump around the code, as you might be used to with a full featured IDE.

From what I can tell, installing go-mode seems to automatically install godef.

To verify that godef is indeed installed:

  • Putting the cursor over a method name
  • Try doing M-x godef-jump to jump into the method, and M-* to go back.

In order to add godef key bindings, add these to your ~/.emacs:

(defun my-go-mode-hook ()
  ; Call Gofmt before saving                                                    
  (add-hook 'before-save-hook 'gofmt-before-save)
  ; Godef jump key binding                                                      
  (local-set-key (kbd "M-.") 'godef-jump))
(add-hook 'go-mode-hook 'my-go-mode-hook)

and remove your previous call to (add-hook 'before-save-hook 'gofmt-before-save) since it’s now redundant.

Now you can jump into code with M-. and jump back with M-*

Autocomplete

The following is a brief summary of the emacs autocomplete manual

(add-to-list 'load-path "/Users/tleyden/.emacs.d/")
(require 'auto-complete-config)
(add-to-list 'ac-dictionary-directories "/Users/tleyden/.emacs.d//ac-dict")
(ac-config-default)

To see any effect, we need to install gocode in the next step.

Gocode: Go aware Autocomplete

The following is a brief summary of the gocode README

  • go get -u -v github.com/nsf/gocode
  • cp /Users/tleyden/Development/gocode/src/github.com/nsf/gocode/emacs/go-autocomplete.el ~/.emacs.d/
  • Add the following to your ~/.emacs
(require 'go-autocomplete)
(require 'auto-complete-config)

At this point, after you restart emacs, when you start typing something, you should see a popup menu with choices, like this screenshot.

Customize compile command to run go build

It’s convenient to be able to run M-x compile to compile and test your Go code from within emacs.

To do that, edit your ~/.emacs and replace your go-mode hook with:

(defun my-go-mode-hook ()
  ; Call Gofmt before saving
  (add-hook 'before-save-hook 'gofmt-before-save)
  ; Customize compile command to run go build
  (if (not (string-match "go" compile-command))
      (set (make-local-variable 'compile-command)
           "go build -v && go test -v && go vet"))
  ; Godef jump key binding
  (local-set-key (kbd "M-.") 'godef-jump))
(add-hook 'go-mode-hook 'my-go-mode-hook)

After that, restart emacs, and when you type M-x compile, it should try to execute go build -v && go test -v && go vet instead of the default behavior.

Power tip: you can jump straight to each compile error by running C-x `. Each time you do it, it will jump to the next error.

Is this too easy for you?

If you’re yawning and you already know all this stuff, or you’re ready to take it to the next level, check out 5 minutes of go in emacs

(PS: thanks @dlsspy for taking the time to teach me the Emacs wrestling techniques needed to get this far.)

Continue to Part 2

go-imports and go-oracle are covered in Part 2

What Is Couchbase Mobile and Why Should You Care?

Couchbase Mobile just announced its 1.0 release today.

What is Couchbase Mobile?

  • Couchbase Lite is an open source iOS/Android NoSQL DB with built-in sync capability.
  • Couchbase Mobile refers to the “full stack” solution, which includes the (also open source) server components that Couchbase Lite uses for sync.

To give a deeper look at what problem Couchbase Mobile is meant to solve, let me tell you the story of how I came to discover Couchbase Lite as a developer. In my previous startup, we built a mobile CRM app for sales associates.

For the very first pilot release of the app, the initial architecture was:

screenshot

It was very simple, and the server was almost the Single Point of Truth, except for our JSON caching layer, which had a very short expiry time before it would refetch from the server. The biggest downside to this architecture was that it only worked well when the device had a fast connection to the internet.

But there was another problem: getting updates to sync across devices in a timely manner. When sales associate #1 would update a customer, sales associate #2 wouldn’t see the change because:

  • How does the app for sales associate #2 know it needs to “re-sync” the data?
  • How will the app know that something changed on the backend that should cause it to invalidate that locally cached data?

We realized that the data sync between the devices was going to be a huge issue going forward, and so we decided to change our architecture to something like this:

screenshot

So the app would be displaying what’s stored in the Core Data datastore, and we’d build a sync engine component that would shuttle data bidirectionally between Core Data and the backend server.

That seemed like a fine idea on paper, except that I refused to build it. I knew it would take way too long to build, and once it was built it probably would entail endless debugging and tuning.

Instead, after some intense debate we embarked on a furious sprint to convert everything over to Couchbase Lite iOS. We ended up with an architecture like this:

screenshot

It was similar in spirit to our original plans, except we didn’t have to build any of the hard stuff — the storage engine and the sync was already taken care of for us by Couchbase Lite.

(note: there were also components that listened for changes to the backend server database and fired off emails and push notifications, but I’m not showing them here)

After the conversion…

On the upside

  • Any updates to customer data would quickly sync across all devices.
  • Our app still worked even when the device was completely offline.
  • Our app was orders of magnitude faster in “barely connected” scenarios, because Couchbase Lite takes the network out of the critical path.
  • Our data was now “document oriented”, and so we could worry less about rolling out schema changes while having several different versions of our app out in the wild.

On the downside

  • We ran into a few bizarre situations where a client app would regurgitate a ton of unwanted data back into the system after we thought we’d removed it. To be fair, that was our fault, but I mention it because Couchbase Lite can throw you some curve balls if you aren’t paying attention.
  • Certain things were awkward. For example, for our initial login experience, we had to sync the data before the sales associate could log in. We ended up reworking that to have the login go directly against the server, which meant that logging in required the user to be online.
  • When things went wrong, they were a bit complicated to debug. (But because Couchbase Lite is open source, we could diagnose and fix bugs ourselves, which was a huge win.)

So what can Couchbase Lite do for you?

Sync Engine included, so you don’t have to build one

If I had to sum up one quick elevator pitch of Couchbase Lite, it would be:

If you find that you’re building a “sync engine” to sync data from your app to other instances of your app and/or the cloud, then you should probably be building it on top of Couchbase Lite instead of going down that rabbit hole — since you may never make it back out.

Your app now works well in offline or occasionally connected scenarios

This is something that users expect your app to handle. For example, if I’m on the BART going from SF to Oakland and have no signal, I should still be able to read my tweets, and even send new tweets that will be queued to sync once the device comes back online.

If your app is based on Couchbase Lite, you essentially get these features for free.

  • When you load tweets, they are loaded from the local Couchbase Lite store, without any need to hit the server.
  • When you create a new tweet, you just save it to Couchbase Lite, and let it handle the heavy lifting of getting that pushed up to the server once the device is back online.
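
The “save locally, sync later” pattern those two bullets describe can be sketched conceptually in Go. This is not the Couchbase Lite API — every type and function name here is a hypothetical illustration of the offline-first idea:

```go
package main

import "fmt"

// Doc is a hypothetical local document.
type Doc struct {
	ID   string
	Body string
}

// LocalStore sketches an offline-first datastore: writes always
// succeed locally and are queued for later replication.
type LocalStore struct {
	docs    map[string]Doc
	pending []string // IDs awaiting push to the server
}

func NewLocalStore() *LocalStore {
	return &LocalStore{docs: make(map[string]Doc)}
}

// Save writes to the local store immediately, regardless of connectivity.
func (s *LocalStore) Save(d Doc) {
	s.docs[d.ID] = d
	s.pending = append(s.pending, d.ID)
}

// Sync pushes queued documents once the device is back online,
// returning how many were pushed; offline, the queue stays intact.
func (s *LocalStore) Sync(online bool) int {
	if !online {
		return 0
	}
	pushed := len(s.pending)
	s.pending = nil // pretend the server accepted everything
	return pushed
}

func main() {
	store := NewLocalStore()
	store.Save(Doc{ID: "tweet-1", Body: "posted from the BART"})
	fmt.Println(store.Sync(false)) // offline: nothing pushed
	fmt.Println(store.Sync(true))  // back online: queued write goes up
}
```

The point of the sketch is that the app only ever talks to the local store; the replication step is decoupled and can happen whenever connectivity returns.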

Your data model is now Document Oriented

This is a double-edged sword, and to be perfectly honest, a Document Oriented approach is not always the ideal data model for every application. However, for some applications (like CRM), it’s a much more natural fit than the relational model.

And you’ll never have to worry about getting yourself stuck in Core Data schema migration hell.

What’s the dark side of Couchbase Lite?

Queries can be faster, but they have certain limitations

With SQL, you can run arbitrary queries, regardless of whether an index exists.

Couchbase Lite cannot be queried with SQL. Instead you must define Views, which are essentially indexes, and run queries on those views. Views are extremely fast and efficient, but if you don’t have a view, you can’t run a query, period.

For people who are used to SQL, it takes some time to wrap your head around defining lower-level map/reduce views.
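
To make the map/reduce view idea concrete, here is a toy sketch in Go of what a view does conceptually — a map function runs over every document, emits key/value pairs, and queries are lookups against the resulting sorted rows. This illustrates the indexing model only; it is not Couchbase Lite’s actual API:

```go
package main

import (
	"fmt"
	"sort"
)

// Row is one entry in a view index: a (key, value) pair
// emitted by the map function.
type Row struct {
	Key, Value string
}

// mapFunc is a hypothetical map function: for each customer
// document, emit the city as the key so we can query by city.
func mapFunc(doc map[string]string, emit func(key, value string)) {
	if city, ok := doc["city"]; ok {
		emit(city, doc["name"])
	}
}

// buildIndex runs the map function over all documents; the rows,
// kept sorted by key, are the "view" that queries run against.
func buildIndex(docs []map[string]string) []Row {
	var rows []Row
	for _, d := range docs {
		mapFunc(d, func(k, v string) { rows = append(rows, Row{k, v}) })
	}
	sort.SliceStable(rows, func(i, j int) bool { return rows[i].Key < rows[j].Key })
	return rows
}

// queryKey is a lookup against the precomputed index — fast, but
// only possible for keys the map function chose to emit.
func queryKey(index []Row, key string) []string {
	var out []string
	for _, r := range index {
		if r.Key == key {
			out = append(out, r.Value)
		}
	}
	return out
}

func main() {
	docs := []map[string]string{
		{"name": "Acme", "city": "Oakland"},
		{"name": "Globex", "city": "SF"},
		{"name": "Initech", "city": "Oakland"},
	}
	index := buildIndex(docs)
	fmt.Println(queryKey(index, "Oakland")) // [Acme Initech]
}
```

Notice that you can only ever query by the key the map function emitted — if you later need to query customers by owner instead of city, that’s a new view.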

Complex queries can get downright awkward

Views are powerful, but they have their limitations, and if your query is complex enough, you may end up needing to write multiple views and coalesce/sort the data in memory.
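
For example, when a single query needs rows from two different views, the coalescing falls to your application code, roughly like this (a conceptual Go sketch with made-up view results, not real Couchbase Lite code):

```go
package main

import (
	"fmt"
	"sort"
)

// With no SQL JOIN or ORDER BY available across views, a query that
// spans two view results must be merged and sorted in memory by the app.
func mergeAndSort(a, b []string) []string {
	merged := append(append([]string{}, a...), b...)
	sort.Strings(merged)
	return merged
}

func main() {
	byCity := []string{"Initech", "Acme"}  // rows from a hypothetical "by city" view
	byOwner := []string{"Globex", "Hooli"} // rows from a hypothetical "by owner" view
	fmt.Println(mergeAndSort(byCity, byOwner)) // [Acme Globex Hooli Initech]
}
```

For small result sets this is fine; the awkwardness shows up when the merged sets get large or the combining logic gets more involved than a simple sort.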

It’s not a black box, but it is complicated.

The replication code in Couchbase Lite is complicated. I know, because I’ve spent the better part of the last year staring at it.

As an app developer, you are trusting that the replication will work as you expect, and that it will be performant and easy on the battery.

The good news is that it’s 100% open source under the Apache 2 license. So you can debug into it, send issues and pull requests to our GitHub repo, and even maintain your own fork if needed.