Seven Story Rabbit Hole

Sometimes awesome things happen in deep rabbit holes. Or not.


Running Couchbase Sync Gateway on Google Compute Engine

First, a quick refresher on what Couchbase Sync Gateway actually is.

So here’s a birds-eye-view of the Couchbase Mobile architecture:

diagram

Sync Gateway allows Couchbase Lite mobile apps to sync data between each other and the Couchbase Server running on the backend.

This blog post will walk you through how to run Sync Gateway in a Docker container on Google Compute Engine.

Create GCE instance and ssh in

Follow the instructions on Running Docker on Google Compute Engine.

At this point, you should be ssh'd into your GCE instance.

Create a configuration JSON

Here’s an example JSON configuration for Sync Gateway which uses walrus as its backing store, rather than Couchbase Server. Later we will swap in Couchbase Server as the backing store.
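In case the gist linked above changes or disappears, a minimal walrus-backed config looks roughly like this (the database name, bucket, and data path are inferred from the Sync Gateway log output shown later in this post):

```json
{
  "log": ["REST"],
  "databases": {
    "sync_gateway": {
      "server": "walrus:/opt/sync_gateway/data",
      "bucket": "sync_gateway"
    }
  }
}
```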

Run Sync Gateway docker container

```shell
gce:~$ sudo docker run -d -name sg -p 4984:4984 -p 4985:4985 tleyden5iwx/couchbase-sync-gateway sync_gateway "https://gist.githubusercontent.com/tleyden/d97d985eb1e0725e858e/raw"
```

This will return a container id, eg 8ffb83fd1f.

Check the logs to make sure there are no serious errors:

```shell
gce:~$ sudo docker logs 8ffb83fd1f
```

You should see something along the lines of:

```
02:23:58.905587 Enabling logging: [REST]
02:23:58.905818 ==== Couchbase Sync Gateway/1.00 (unofficial) ====
02:23:58.905863 Opening db /sync_gateway as bucket "sync_gateway", pool "default", server <walrus:/opt/sync_gateway/data>
02:23:58.905964 Opening Walrus database sync_gateway on <walrus:/opt/sync_gateway/data>
02:23:58.909659 Starting admin server on :4985
02:23:58.913260 Starting server on :4984 ...
```

Expose API port 4984 via Firewall rule

On your workstation with the gcloud tool installed, run:

```shell
$ gcloud compute firewalls create sg-4984 --allow tcp:4984
```

Verify that it’s running

Find out external ip address of instance

On your workstation with the gcloud tool installed, run:

```shell
$ gcloud compute instances list
name     status  zone          machineType internalIP   externalIP
couchbse RUNNING us-central1-a f1-micro    10.240.74.44 142.222.178.49
```

Your external ip is listed under the externalIP column, eg 142.222.178.49 in this example.

Run curl request

On your workstation, replace the ip below with your own ip, and run:

```shell
$ curl http://142.222.178.49:4984
```

You should get a response like:

```json
{"couchdb":"Welcome","vendor":{"name":"Couchbase Sync Gateway","version":1},"version":"Couchbase Sync Gateway/1.00 (unofficial)"}
```

Re-run it with Couchbase Server backing store

OK, so we’ve gotten it working with walrus. But have you looked at the walrus website lately? One click and it’s pretty obvious that this thing is not meant to be a scalable, production-ready backend, nor has it ever claimed to be.

Let’s dump walrus for now and use Couchbase Server from this point onwards.

Start Couchbase Server

Before moving on, you will need to go through the instructions in Running Couchbase Server on GCE in order to get a Couchbase Server instance running.

Stop Sync Gateway

Run this command to stop the Sync Gateway container and completely remove it, using the same container id you used earlier:

```shell
gce:~$ sudo docker stop 8ffb83fd1f && sudo docker rm 8ffb83fd1f
```

Update config

Copy this example JSON configuration, which expects a Couchbase Server running on http://172.17.0.2:8091, and update it with the IP address of the docker instance where your Couchbase Server is running. To get this IP address, follow the instructions in the “Find the Docker instance IP address” section below.
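If you’d rather not fetch the gist, a Couchbase-backed config might look roughly like this (the server URL is the example address mentioned above, and the default bucket matches the bucket browsed later in the Web Admin UI):

```json
{
  "log": ["REST"],
  "databases": {
    "sync_gateway": {
      "server": "http://172.17.0.2:8091",
      "bucket": "default"
    }
  }
}
```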

Now upload your modified JSON configuration to a website that is publicly accessible, for example in a Github Gist.

Run Sync Gateway

Run Sync Gateway again, this time using Couchbase Server as the backing store.

Replace http://yourserver.co/yourconfig.json with the URL where you’ve uploaded your JSON configuration from the previous step.

```shell
gce:~$ sudo docker run -d -name sg -p 4984:4984 -p 4985:4985 tleyden5iwx/couchbase-sync-gateway sync_gateway "http://yourserver.co/yourconfig.json"
```

This will return a container id, eg 9ffb83fd1f. Again, check the logs to make sure there are no serious errors:

```shell
gce:~$ sudo docker logs 9ffb83fd1f
```

You should see something along the lines of:

```
... 
02:23:58.913260 Starting server on :4984 ...
```

with no errors.

Verify it’s working

Save a document via curl

The easiest way to add a document is via the Admin port, since there is no authentication to worry about. We haven’t added a firewall rule to expose the admin port (4985), and doing so without tight filtering would be a major security hole, so the following command to create a new document must be run on the GCE instance itself.

```shell
gce:~$ curl -H "Content-Type: application/json" -d '{"such":"json"}' http://localhost:4985/sync_gateway/
```

If it worked, you should see a response like:

```json
{"id":"3cbfbe43e76b7eb5c4c221a78b2cf0cc","ok":true,"rev":"1-cd809becc169215072fd567eebd8b8de"}
```

View document on Couchbase Server

To verify the document was successfully stored on Couchbase Server, you’ll need to login to the Couchbase Server Web Admin UI. There are instructions here on how to do that.

From there, navigate to Data Buckets / default / Documents, and you should see:

screenshot

Click on the document that has a UUID (eg, “29f8d7..” in the screenshot above), and you should see the document’s contents:

screenshot

The _sync metadata field is used internally by the Sync Gateway and can be ignored. The actual doc contents are towards the end of the file: .."such":"json"}
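In other words, the document stored in Couchbase Server looks roughly like this (the _sync contents are abbreviated here, since they are internal bookkeeping for Sync Gateway; the rev shown is the one returned by the curl request above):

```json
{
  "_sync": { "rev": "1-cd809becc169215072fd567eebd8b8de" },
  "such": "json"
}
```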

Running a Couchbase Cluster on Google Compute Engine

The easiest way to run a Couchbase cluster on Google Compute Engine is to run all of the nodes in Docker containers.

Create GCE instance and ssh in

Follow the instructions on Running Docker on Google Compute Engine.

At this point, you should be ssh'd into your GCE instance.

Increase max number of files limit

If you try to run Couchbase Server at this point, you will get a warning because the file ulimit is too low.

Here’s how to fix it:

  • Edit /etc/default/docker
  • Add a new line in the file with:
```
ulimit -n 262144
```
  • Restart the GCE instance in the GCE web admin by going to Compute Engine / VM Instances / and hitting the “Reboot” button.

Note: in theory it should be possible to just restart docker via sudo service docker restart; however, this didn’t work for me when I tried it, so I ended up restarting the whole GCE instance.

Start Couchbase Server

```shell
gce:~$ sudo docker run -d -name cb1 -p 8091:8091 -p 8092:8092 -p 11210:11210 -p 11211:11211 ncolomer/couchbase
```

Verify it’s running

Find the Docker instance IP address

On the GCE instance, run:

```shell
gce:~$ sudo docker inspect -format '{{ .NetworkSettings.IPAddress }}' cb1
```

This should return an IP address, eg 172.17.0.2.

Set it as an environment variable so we can use it in later steps:

```shell
gce:~$ export CB1_IP=172.17.0.2
```

Run couchbase-cli

To verify that Couchbase Server is running, use couchbase-cli to ask for server info:

```shell
gce:~$ sudo docker run -rm ncolomer/couchbase couchbase-cli server-info -c ${CB1_IP} -u Administrator -p couchbase
```

If everything is working correctly, this should return a JSON response, eg:

```json
{
  "availableStorage": {
    "hdd": [
      {
        "path": "/",
  ...
}
```

Start a 3-node cluster

On the GCE instance, run the following commands:

```shell
gce:~$ sudo docker run -d -name cb2 ncolomer/couchbase couchbase-start ${CB1_IP}
gce:~$ sudo docker run -d -name cb3 ncolomer/couchbase couchbase-start ${CB1_IP}
```

The nodes cb2 and cb3 will automatically join the cluster via cb1. The cluster needs a rebalance to be fully operational. To do so, run the following command:

```shell
gce:~$ sudo docker run -rm ncolomer/couchbase couchbase-cli rebalance -c ${CB1_IP} -u Administrator -p couchbase
```

Connect to admin web UI

The easiest way to manage a Couchbase Server cluster is via the built-in Web Admin UI.

In order to access it, we will need to make some network changes.

Expose port 8091 via firewall rule for your machine

Go to whatismyip.com or equivalent, and find your ip address. Eg, 67.161.66.7

On your workstation with the gcloud tool installed, run:

```shell
$ gcloud compute firewalls create cb-8091 --allow tcp:8091 --source-ranges 67.161.66.7/32
```

This will allow your machine, as well as any other machine behind your internet router, to connect to the Couchbase Web UI running on GCE.

To increase security, you should use ipv6 and pass your workstation’s ipv6 address in the --source-ranges parameter.

Find out external ip address of instance

On your workstation with the gcloud tool installed, run:

```shell
$ gcloud compute instances list
name     status  zone          machineType internalIP   externalIP
couchbse RUNNING us-central1-a f1-micro    10.240.74.44 142.222.178.49
```

Your external ip is listed under the externalIP column, eg 142.222.178.49 in this example.

Go to admin in web browser

Go to http://142.222.178.49:8091 in your web browser (replacing the ip with your external ip).

You should see a screen like this:

screenshot

Login with the default credentials:

  • Username: Administrator
  • Password: couchbase

And you should see the Web Admin dashboard:

screenshot

Increase default bucket size

The default bucket’s RAM quota is set to a very low number (128M in my case). To increase this:

  • In Web Admin UI, go to Data Buckets / Default / Edit
  • Change Per Node RAM Quota to 1024 MB
  • Hit “Save” button


Configure Emacs as a Go Editor From Scratch Part 2

This is a continuation of Part 1, so if you haven’t read that already, you should do so now.

goimports

The idea of goimports is that every time you save a file, it will automatically update all of your imports, so you don’t have to. This can save a lot of time. Kudos to @bradfitz for taking the time to build this nifty tool.

Since this project is hosted on Google Code’s mercurial repository, if you don’t have mercurial installed already, you’ll first need to install it with:

```shell
$ brew install hg
```

Next, go get goimports with:

```shell
$ go get code.google.com/p/go.tools/cmd/goimports
```

Continuing from the previous .emacs in Part 1, update your .emacs to:

```elisp
(defun my-go-mode-hook ()
  ; Use goimports instead of go-fmt
  (setq gofmt-command "goimports")
  ; Call Gofmt before saving
  (add-hook 'before-save-hook 'gofmt-before-save)
  ; Customize compile command to run go build
  (if (not (string-match "go" compile-command))
      (set (make-local-variable 'compile-command)
           "go build -v && go test -v && go vet"))
  ; Godef jump key binding
  (local-set-key (kbd "M-.") 'godef-jump))
(add-hook 'go-mode-hook 'my-go-mode-hook)
```

Restart emacs to force it to reload the configuration.

Testing out goimports

  • Open an existing .go file that contains imports
  • Remove one or more of the imports
  • Save the file

After you save the file, it should re-add the imports. Yay!

Basically any time you add or remove code that requires a different set of imports, saving the file will cause it to re-write the file with the correct imports.

The Go Oracle

The Go Oracle will blow your mind! It can do things like find all the callers of a given function/method. It can also show you all the functions that read or write from a given channel. In short, it rocks.

Here’s what you need to do in order to wield this powerful tool from within Emacs.

Go get oracle

```shell
go get code.google.com/p/go.tools/cmd/oracle
```

Move oracle binary so Emacs can find it

```shell
sudo mv $GOPATH/bin/oracle $GOROOT/bin/
```

Update .emacs

```elisp
; Go Oracle
(load-file "$GOPATH/src/code.google.com/p/go.tools/cmd/oracle/oracle.el")

(defun my-go-mode-hook ()
  ; Use goimports instead of go-fmt
  (setq gofmt-command "goimports")
  ; Call Gofmt before saving
  (add-hook 'before-save-hook 'gofmt-before-save)
  ; Customize compile command to run go build
  (if (not (string-match "go" compile-command))
      (set (make-local-variable 'compile-command)
           "go build -v && go test -v && go vet"))
  ; Godef jump key binding
  (local-set-key (kbd "M-.") 'godef-jump)
  ; Go Oracle
  (go-oracle-mode))
(add-hook 'go-mode-hook 'my-go-mode-hook)
```

Restart Emacs to make these changes take effect.

Get a test package to play with

This package works with go-oracle (I tested it out while writing this blog post), so you should use it to give Go Oracle a spin:

```shell
go get github.com/tleyden/checkers-bot-minimax
```

Set the oracle analysis scope

From within emacs, open $GOPATH/src/github.com/tleyden/checkers-bot-minimax/thinker.go

You need to tell Go Oracle the main package scope under which you want it to operate:

M-x go-oracle-set-scope

it will prompt you with:

Go oracle scope:

and you should enter github.com/tleyden/checkers-bot-minimax, then hit Enter.

Nothing will appear to happen, but Go Oracle is now ready to show its magic. (Note that it will not autocomplete package names in this dialog, which is mildly annoying, so make sure to spell them correctly.)

Important: When you call go-oracle-set-scope, you always need to give it a main package. This is something that will probably frequently trip you up while using Go Oracle.

Use oracle to find the callers of a method

You should still have the $GOPATH/src/github.com/tleyden/checkers-bot-minimax/thinker.go file open within emacs.

Position the cursor on the “T” in the Think method (line 13 of thinker.go):

screenshot

And then run

```
M-x go-oracle-callers
```

Emacs should open a new buffer on the right hand side with all of the places where the Think method is called. In this case, there is only one place in the code that calls it:

screenshot

To go to the call site, position your cursor on the red underscore to the left of “dynamic method call” and hit Enter. It should take you to line 240 in gamecontroller.go:

screenshot

Note that it actually crossed package boundaries, since the called function (Think) was in the main package, while the call site was in the checkersbot package.

If you got this far, you are up and running with The Go Oracle on Emacs!

Now you should try it with one of your own packages.

This is just scratching the surface — to get more information on how to use Go Oracle, see go oracle: user manual.

Configure Emacs as a Go Editor From Scratch

This explains the steps to get a productive Emacs environment for Go programming on OSX, starting from scratch.

Install Emacs

I recommend using the emacs from emacsformacosx.com.

It has a GUI installer so I won’t say much more about it.

Install Go

```shell
export GOROOT=/usr/local/go
export GOPATH=~/Development/gocode
export PATH=$PATH:$GOROOT/bin
```

Configure go-mode

Go-mode is an Emacs major mode for editing Go code. An absolute must for anyone writing Go w/ Emacs.

The following is a brief summary of Dominik Honnef’s instructions

  • mkdir -p ~/Misc/emacs && cd ~/Misc/emacs
  • git clone git@github.com:dominikh/go-mode.el.git
  • From within Emacs, run M-x update-file-autoloads, point it at the go-mode.el file in the cloned directory.
  • Emacs will prompt you for a result file, and you should enter go-mode-load.el
  • Add these two lines to your ~/.emacs
```elisp
(add-to-list 'load-path "~/Misc/emacs/go-mode.el/")
(require 'go-mode-load)
```

Restart Emacs and open a .go file; you should see the mode shown as “Go” rather than “Fundamental”.

For a full description of what go-mode can do for you, see Dominik Honnef’s blog, but one really useful thing to be aware of is that you can quickly import packages via C-c C-a.

Update Emacs config for godoc

It’s really useful to be able to pull up 3rd party or standard library docs from within Emacs using the godoc tool.

Unfortunately, it was necessary to duplicate the $PATH and $GOPATH environment variables in the .emacs file so that the GUI Emacs app can see them. @eentzel tweeted me a blog post that explains how to deal with this, and I will update this blog post to reflect that at some point.

NOTE: you will need to modify the snippet below to reflect your own $PATH and $GOPATH variables; don’t just blindly copy and paste these.

  • Add your $PATH and $GOPATH to your ~/.emacs
```elisp
(setenv "PATH" "/Users/tleyden/.rbenv/shims:/Users/tleyden/.rbenv/shims:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/local/go/bin")
(setenv "GOPATH" "/Users/tleyden/Development/gocode")
```

After doing this step, you should be able to run M-x godoc and have it autocomplete the paths of packages. (Of course, you may want to go get some packages first if you don’t have any.)

Automatically call gofmt on save

gofmt reformats code into the One True Go Style Coding Standard. You’ll want to call it every time you save a file.

Add these to your ~/.emacs:

```elisp
(setq exec-path (cons "/usr/local/go/bin" exec-path))
(add-to-list 'exec-path "/Users/tleyden/Development/gocode/bin")
(add-hook 'before-save-hook 'gofmt-before-save)
```

After this step, whenever you save a Go file, it will automatically reformat the file with gofmt.

Godef

Godef is essential: it lets you quickly jump around the code, as you might be used to with a full featured IDE.

From what I can tell, installing go-mode seems to automatically install godef.

To verify that godef is indeed installed:

  • Put the cursor over a method name
  • Try M-x godef-jump to jump into the method, and M-* to go back.

In order to add godef key bindings, add these to your ~/.emacs:

```elisp
(defun my-go-mode-hook ()
  ; Call Gofmt before saving                                                    
  (add-hook 'before-save-hook 'gofmt-before-save)
  ; Godef jump key binding                                                      
  (local-set-key (kbd "M-.") 'godef-jump))
(add-hook 'go-mode-hook 'my-go-mode-hook)
```

and remove your previous call to (add-hook 'before-save-hook 'gofmt-before-save), since it’s now redundant.

Now you can jump into code with M-. and jump back with M-*.

Autocomplete

The following is a brief summary of the emacs autocomplete manual. Add these lines to your ~/.emacs:

```elisp
(add-to-list 'load-path "/Users/tleyden/.emacs.d/")
(require 'auto-complete-config)
(add-to-list 'ac-dictionary-directories "/Users/tleyden/.emacs.d/ac-dict")
(ac-config-default)
```

To see any effect, we need to install gocode in the next step.

Gocode: Go aware Autocomplete

The following is a brief summary of the gocode README

  • go get -u -v github.com/nsf/gocode
  • cp /Users/tleyden/Development/gocode/src/github.com/nsf/gocode/emacs/go-autocomplete.el ~/.emacs.d/
  • Add the following to your ~/.emacs
```elisp
(require 'go-autocomplete)
(require 'auto-complete-config)
```

At this point, after you restart emacs, when you start typing something, you should see a popup menu with choices, like this screenshot.

Customize compile command to run go build

It’s convenient to be able to run M-x compile to compile and test your Go code from within emacs.

To do that, edit your ~/.emacs and replace your go-mode hook with:

```elisp
(defun my-go-mode-hook ()
  ; Call Gofmt before saving
  (add-hook 'before-save-hook 'gofmt-before-save)
  ; Customize compile command to run go build
  (if (not (string-match "go" compile-command))
      (set (make-local-variable 'compile-command)
           "go build -v && go test -v && go vet"))
  ; Godef jump key binding
  (local-set-key (kbd "M-.") 'godef-jump))
(add-hook 'go-mode-hook 'my-go-mode-hook)
```

After that, restart emacs, and when you type M-x compile, it should try to execute go build -v && go test -v && go vet instead of the default behavior.

Power tip: you can jump straight to each compile error by running C-x `. Each time you do it, it will jump to the next error.

Is this too easy for you?

If you’re yawning and you already know all this stuff, or you’re ready to take it to the next level, check out 5 minutes of go in emacs

(PS: thanks @dlsspy for taking the time to teach me the Emacs wrestling techniques needed to get this far.)

Continue to Part 2

go-imports and go-oracle are covered in Part 2

What Is Couchbase Mobile and Why Should You Care?

Couchbase Mobile just announced its 1.0 release today.

What is Couchbase Mobile?

  • Couchbase Lite is an open source iOS/Android NoSQL DB with built-in sync capability.
  • Couchbase Mobile refers to the “full stack” solution, which includes the (also open source) server components that Couchbase Lite uses for sync.

To give a deeper look at what problem Couchbase Mobile is meant to solve, let me tell you the story of how I came to discover Couchbase Lite as a developer. In my previous startup, we built a mobile CRM app for sales associates.

For the very first pilot release of the app, the initial architecture was:

screenshot

It was very simple, and the server was almost the Single Point of Truth, except for our JSON caching layer, which had a very short expiry time before it would refetch from the server. The biggest downside to this architecture was that it only worked well when the device had a fast connection to the internet.

But there was another problem: getting updates to sync across devices in a timely manner. When sales associate #1 would update a customer, sales associate #2 wouldn’t see the change because:

  • How does the app for sales associate #2 know it needs to “re-sync” the data?
  • How will the app know that something changed on the backend that should cause it to invalidate that locally cached data?

We realized that the data sync between the devices was going to be a huge issue going forward, and so we decided to change our architecture to something like this:

screenshot

So the app would be displaying what’s stored in the Core Data datastore, and we’d build a sync engine component that would shuttle data bidirectionally between Core Data and the backend server.

That seemed like a fine idea on paper, except that I refused to build it. I knew it would take way too long to build, and once it was built it probably would entail endless debugging and tuning.

Instead, after some intense debate we embarked on a furious sprint to convert everything over to Couchbase Lite iOS. We ended up with an architecture like this:

screenshot

It was similar in spirit to our original plans, except we didn’t have to build any of the hard stuff — the storage engine and the sync was already taken care of for us by Couchbase Lite.

(note: there were also components that listened for changes to the backend server database and fired off emails and push notifications, but I’m not showing them here)

After the conversion ..

On the upside

  • Any updates to customer data would quickly sync across all devices.
  • Our app still worked even when the device was completely offline.
  • Our app was orders of magnitude faster in “barely connected” scenarios, because Couchbase Lite takes the network out of the critical path.
  • Our data was now “document oriented”, and so we could worry less about rolling out schema changes while having several different versions of our app out in the wild.

On the downside

  • We ran into a few bizarre situations where a client app would regurgitate a ton of unwanted data back into the system after we’d thought we’d removed it. To be fair, that was our fault, but I mention it because Couchbase Lite can throw you some curve balls if you aren’t paying attention.
  • Certain things were awkward. For example for our initial login experience, we had to sync the data before the sales associate could login. We ended up re-working that to have the login go directly against the server, which meant that logging in required the user to be online.
  • When things went wrong, they were a bit complicated to debug. (but because Couchbase Lite is Open Source, we could diagnose and fix bugs ourselves, which was a huge win.)

So what can Couchbase Lite do for you?

Sync Engine included, so you don’t have to build one

If I had to sum up one quick elevator pitch of Couchbase Lite, it would be:

If you find that you’re building a “sync engine” to sync data from your app to other instances of your app and/or the cloud, then you should probably be building it on top of Couchbase Lite instead of going down that rabbit hole — since you may never make it back out.

Your app now works well in offline or occasionally connected scenarios

This is something that users expect your app to handle. For example, if I’m on the BART going from SF –> Oakland and have no signal, I should still be able to read my tweets, and even send new tweets that will be queued to sync once the device comes back online.

If your app is based on Couchbase Lite, you essentially get these features for free.

  • When you load tweets, it is loaded from the local Couchbase Lite store, without any need to hit the server.
  • When you create a new tweet, you just save it to Couchbase Lite, and let it handle the heavy lifting of getting that pushed up to the server once the device is back online.

Your data model is now Document Oriented

This is a double-edged sword, and to be perfectly honest a Document Oriented approach is not always the ideal data model for every application. However, for some applications (like CRM), it’s a much more natural fit than the relational model.

And you’ll never have to worry about getting yourself stuck in Core Data schema migration hell.

What’s the dark side of Couchbase Lite?

Queries can be faster, but they have certain limitations

With SQL, you can run arbitrary queries, regardless of whether an index exists.

Couchbase Lite cannot be queried with SQL. Instead you must define Views, which are essentially indexes, and run queries on those views. Views are extremely fast and efficient, but if you don’t have a view, you can’t run a query, period.

For people who are used to SQL, defining lower level map/reduce views takes some time to wrap your head around.

Complex queries can get downright awkward

Views are powerful, but they have their limitations, and if your query is complex enough, you may end up needing to write multiple views and coalescing/sorting the data in memory.

It’s not a black box, but it is complicated.

The replication code in Couchbase Lite is complicated. I know, because I’ve spent a better part of the last year staring at it.

As an app developer, you are putting your trust that the replication will work as you would expect and that it will be performant and easy on the battery.

The good news is that it’s 100% open source under the Apache 2 license. So you can debug into it, send issues and pull requests to our github repo, and even maintain your own fork if needed.

A Successful Git Branching Model With Enterprise Support

This further extends A Slight Tweak on a Successful Git Branching Model with the addition of the concept of support branches.

diagram

Release Branches

  • When completed, the release branch would be merged into both the master and stable branches, and the commit on the stable branch would be tagged with a release tag (eg, 1.0.0).

  • The release branch would be discarded after being merged back into master and stable.

  • Release branches would be named “release/xxx”, where xxx is the target release tag for that release. Eg, “release/1.0.0”.

  • Release branches should only have bugfixes related to the release being committed to them. All other changes should be on feature branches or the master branch, isolated from the release process.

  • Release branches help avoid making developers “double-commit” bugfixes related to a release to both the release branch and the master branch: because the release branch will be merged into master at the time of release, they only need to commit the fix to the release branch.

  • Release branches should be periodically merged back into the master branch if they run longer than normal (eg, if it was expected to last 3 weeks and ended up lasting 8 weeks), rather than waiting until the time of release. This will reduce the chance of having major merge conflicts trying to merge back into master.

  • When a release is ready to be tagged, if the release branch does not easily merge into master, it is up to the dev lead on that team to handle the merge (not the build engineer). In this case, the build engineer should not be blocked, because the merge into stable will be a fast-forward merge, and so the release can proceed despite not having been merged into master yet.
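The merge-and-tag sequence described above can be sketched with plain git commands in a throwaway repository (the branch and tag names here are illustrative, not the actual project history):

```shell
# Sketch of the release flow; run in a scratch directory.
set -e
cd "$(mktemp -d)"
git init -q -b master repo && cd repo
git config user.email "dev@example.com" && git config user.name "dev"
git commit -q --allow-empty -m "mainline development"
git branch stable                                  # stable tracks released code
git checkout -q -b release/1.0.0 master            # cut the release branch
git commit -q --allow-empty -m "release-only bugfix"
git checkout -q master
git merge -q --no-ff -m "merge release/1.0.0" release/1.0.0
git checkout -q stable
git merge -q release/1.0.0                         # fast-forward stable to the release
git tag 1.0.0                                      # the release tag lands on stable
git branch -d release/1.0.0                        # discard the release branch
git log --oneline --decorate -1                    # inspect: stable head carries the tag
```

Note that the merge into stable is the fast-forward mentioned in the last bullet, while the merge into master is a regular (possibly conflicting) merge.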

Support Branches

  • Support branches would be created “on demand” when requested by customers who are stuck on legacy releases and are not able to move forward to current releases, but need security and other bug fixes.

  • Support branches should be avoided if possible, by encouraging customers to move to the current release, because they create extra work for the entire team.

  • Support branches would follow a similar naming scheme and would be named “support/xxx”, where xxx is the release tag from which they were branched off of. Eg, “support/1.0.1”.

  • Support branches are essentially dead-end branches, since their changes would be unlikely to ever need merging back into master (or stable): the support branch contains “ancient code”, and most likely those fixes would already have been integrated into the codebase.

  • If a customer is on the current release, then there is no need to create a support branch for their required fix. Instead, a hotfix branch should be used and a new release tag should be created.

Hotfix Branches

  • Hotfix branches would branch off of the stable branch, and be used for minor post-release bugfixes.

  • Hotfix branches would be named “hotfix/xxx”, where xxx might typically be an issue id. Once their changes have been merged into master and stable, they should be deleted.

  • Hotfix branches are expected to undergo less QA compared to release branches, and therefore are expected to contain minimum changes to fix showstopper bugs in a release. The changes should not include refactoring or any other risky changes.

  • If it’s being branched off the master branch, it’s not a hotfix branch. Hotfixes are only branched off the stable branch.

  • Hotfix branches should be verified on the CI server using the automated QA suite before being considered complete.

  • After being accepted by QA, hotfix branches are merged back into master and stable, and the latest commit on stable is tagged with a release tag. (eg, 1.0.1)

  • Similar to release branches, if hotfixes do not easily merge back into master, the build engineer would assign the dev lead the responsibility for completing the merge, but this should not block the release. However since hotfix branches are so short-lived, this is very unlikely to happen.
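The hotfix flow differs from the release flow mainly in where the branch is cut; roughly (branch, tag, and issue names are illustrative):

```shell
# Sketch of the hotfix flow; run in a scratch directory.
set -e
cd "$(mktemp -d)"
git init -q -b master repo && cd repo
git config user.email "dev@example.com" && git config user.name "dev"
git commit -q --allow-empty -m "1.0.0 release"
git branch stable && git tag 1.0.0                 # stable + tag mark the release
git checkout -q -b hotfix/issue-123 stable         # hotfixes branch off stable
git commit -q --allow-empty -m "fix showstopper bug"
git checkout -q master
git merge -q --no-ff -m "merge hotfix/issue-123" hotfix/issue-123
git checkout -q stable
git merge -q hotfix/issue-123                      # fast-forward stable
git tag 1.0.1                                      # new release tag on stable
git branch -d hotfix/issue-123                     # delete the merged hotfix branch
```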

Stable Branch

  • The stable branch would represent the “released” mainline development.

  • The latest commit on stable should always correspond to the latest release tag.

  • All release tags should be made against commits on stable, except for those on legacy support branches.

  • Developers who wanted to subscribe to the latest released code would follow the stable branch.

Master Branch

  • The master branch would represent the “as-yet-unreleased” mainline development.

Feature Branches

  • All non-trivial changes should be done on feature branches and undergo code review before being merged into the master branch.

A Slight Tweak on a Successful Git Branching Model

A great set of “best practices” for juggling git branches in a release cycle is A successful git branching model. It is also accompanied by a tool called git flow that makes it easy to implement in practice.

It does however have one major issue: many people use a different naming scheme.

Instead of:

  • Master – the latest “stable” branch
  • Development – bleeding edge development branch

a slightly more common naming pattern is:

  • Master – bleeding edge development branch
  • Stable – the latest “stable” branch

To that end, I’ve tweaked the original diagram accordingly.

diagram

Every branch solves a problem

The natural reaction of most people seeing this diagram is dude, that’s way too many branches, this is way more complicated than it needs to be. Actually, I’d argue that it’s minimally complex to solve the problems that these branches are designed to solve. In other words, each type of branch justifies its existence by the problem it’s designed to solve.

From the perspective of a library developer (in my case, Couchbase Lite for Android), here are the problems each branch is intended to solve.

Permanent and External Facing Branches

These branches are permanent, in the sense that they have a continuous lifetime. They are also external facing, and consumers of the library would typically depend on one or both of these branches.

Master

  • A home for the latest “feature complete” / reviewed code.
  • Anyone internal or external who wants to stay up to date with latest project developments needs this branch.

Stable

  • A home for the code that corresponds to the latest tagged release.
  • This branch would be the basis for post-release bugfixes, eg an external developer who finds a bug in the latest release would send a pull request against this branch.
  • Developers depending on this code directly in their app would likely point to this branch on their own stable branch.

Ephemeral and Internal Only Branches

These branches are ephemeral in nature, and are thrown away once they are no longer useful. Developers consuming the library would typically ignore these.

FeatureX

  • A place for in-progress features to live without destabilizing the master branch. There can be many of these.

Hotfix

  • An in-progress fix goes here when it is destined to be merged back into the latest stable release, rather than shipped as part of a new release branched off of master.
  • While this hotfix is in progress, since these commits are not part of a tagged release, they cannot go on stable (yet); otherwise it would violate the stable branch contract, which says that stable reflects the latest tagged release.
  • They could go directly on a “local stable” branch that exists only on the developer’s workstation, but that prevents sharing the branch or running CI unit tests against it, so it’s not a good solution.
  • NOTE: when the hotfix is merged, a new release is tagged simultaneously with the merge to stable, so the stable branch contract stays satisfied.

Release

  • During the QA release process, there needs to be a place to run QA and apply bugfixes while isolated from destabilizing changes such as merging feature branches.
  • The release branch allows feature branch merging to continue concurrently on the master branch, which is especially crucial if the release gets delayed.

Playing With Go and OpenGL

Get this spinning gopher working with Go and OpenGL on OSX Mavericks via gogl

Steps

$ brew install glfw2
$ go get -u -v github.com/tleyden/gogl
$ go get -u -v github.com/go-gl/glfw
$ cd $GOPATH/src/github.com/tleyden/gogl
$ make bindings
$ go get -u -v github.com/chsc/gogl
$ cd $GOPATH/src/github.com/tleyden/gogl/examples/gopher
$ go run gopher.go

NOTE: if pull request #37 has already been merged into gogl, then replace github.com/tleyden/gogl with github.com/chsc/gogl in the steps above.

Why Use Go When You Can Just Use C++?

I recently saw this on a mailing list:

What are the advantages of Go against C++ or other languages?

Go has sane concurrency.

In C++ the main tool for dealing with concurrency is pthreads, but making threaded programming correct is extremely difficult and error-prone. Trying to make it performant by minimizing locking makes it even more challenging.

Go, OTOH, has a concept of goroutines and channels for concurrency. The idea has its roots in CSP (communicating sequential processes), and is not unlike Erlang’s style of having processes that communicate by message passing.

In Go, instead of having threads communicate by sharing memory (and locking), goroutines share memory by communicating over channels. Eg, concurrent goroutines communicate over channels, and each goroutine’s internal state is private to that goroutine. Also, concurrency constructs like goroutines and channels are built right into the Go language, which affords many advantages over languages that have had concurrency slapped on as an afterthought.
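
To make that concrete, here’s a minimal sketch of the style described above (my example, not from the original mailing list thread): a worker goroutine owns its accumulator, and callers interact with it only by sending and receiving on channels, with no locks.

```go
package main

import "fmt"

// sumWorker owns its accumulator; no other goroutine can touch it.
// Input arrives on in, and the final total is communicated back on out.
func sumWorker(in <-chan int, out chan<- int) {
	total := 0 // private state, never shared
	for n := range in {
		total += n
	}
	out <- total // communicate the result instead of sharing memory
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go sumWorker(in, out)

	for i := 1; i <= 10; i++ {
		in <- i
	}
	close(in)          // signal that no more input is coming
	fmt.Println(<-out) // prints 55
}
```

Note that there is no mutex anywhere: the channel operations provide both the communication and the synchronization.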

Other Strengths

  • No unsafe pointer arithmetic.
  • Array bounds checking
  • Write once run anywhere
  • Closures (eg, lambdas)
  • Functions are first class objects
  • Multiple return values
  • Does not have IFDEFs, so no IFDEF hell and unmaintainable code.
  • Compiles blazingly fast
  • Gofmt – All code is uniformly formatted, making codebases much easier to read. (a la Python)
  • Garbage collection

Weaknesses

  • Lack of generics
  • Not quite as fast as C/C++ (partly due to GC overhead)
  • Integration with existing native code is a bit limited (you can’t build libraries in Go or link Go code into a C/C++ executable)
  • IDE support is limited compared to C/C++/Obj-C/Java
  • Doesn’t work with regular debugging tools because it doesn’t use normal calling conventions.

A Stubbed Out Gradle Multi-project Hierarchy Using Git Submodules

I’m about to split up the couchbase-lite android library into two parts: a pure-java library and an android-specific library. The android library will depend on the pure-java library.

Before going down this project refactoring rabbit hole for the real project, I decided to stub out a complete project hierarchy to validate that it was going to actually work. (I’m calling this project hierarchy “stubworld” for lack of a sillier name)

There are five projects in total; here are the two top-level projects:

https://github.com/tleyden/stubworld-app-android (an Android app)

https://github.com/tleyden/stubworld-app-cmdline (a pure java Command Line application)

The projects are all self-contained gradle projects and can be imported in either Android Studio or IntelliJ CE 13, and the dependencies on lower-level projects are done via git submodules (and even git sub-submodules, gulp). In either top-level project you can easily debug and edit the code in the lower-level projects, since they use source-level dependencies.

The biggest sticking point I ran into was that initially I tried to include a dependency via:

compile project(':libraries:stubworld-corelib-java')

which totally broke when trying to embed this project into another parent project. The fix was to change it to use a relative path by removing the leading colon:

compile project('libraries:stubworld-corelib-java')

I’m posting this in case it’s useful to anyone who needs to do something similar, or if any Gradle / Android Studio Jedi masters have any feedback like “why did you do X, when you just could have done Y?”