Seven Story Rabbit Hole

Sometimes awesome things happen in deep rabbit holes. Or not.


Running Couchbase Server Under Joyent Triton

Joyent has recently announced their new Triton Docker container hosting service. There are several advantages of running Docker containers on Triton over a more traditional cloud hosting platform:

  • Better performance, since there is no hardware-level virtualization overhead. Your containers run on bare metal.

  • Simplified networking between containers. Each container gets its own private (and optionally public) IP address.

  • Hosts are abstracted away — you just deploy into the “container cloud”, and don’t care which host your container is running on.

For more details, check out Bryan Cantrill’s talk about Docker and the Future of Containers in Production.

Let’s give it a spin with a “hello world” container, and then with a cluster of Couchbase servers.

Sign up for a Joyent account

Follow the signup instructions on the Joyent website.

You will also need to add your SSH key to your account.
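If you don’t already have an SSH key pair, you can generate one first and then paste the public half into your account settings in the Joyent web UI. This is just a generic sketch (adjust the key type and path to taste):

$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub    # copy this output into the Joyent web UI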

Install or Upgrade Docker

If you don’t have Docker installed already and you are on Ubuntu, run:

$ curl -sSL https://get.docker.com/ | sh

See install Docker on Ubuntu for more details.

Upgrade Docker client to 1.4.1 or later

Check your version of Docker with:

$ docker --version
Docker version 1.0.1, build 990021a

If you are on a version before 1.4.1 (like I was), you can upgrade Docker via the boot2docker installers.

Joyent + Docker setup

Get the sdc-docker repo (sdc == Smart Data Center):

$ git clone https://github.com/joyent/sdc-docker.git

Perform setup via:

$ cd sdc-docker
$ ./tools/sdc-docker-setup.sh -k 165.225.168.22 $ACCOUNT ~/.ssh/$PRIVATE_KEY_FILE

Replace values as follows (a filled-in example is shown after this list):

  • $ACCOUNT: your Joyent username. You can find it by logging into the Joyent web UI, opening the Account menu from the pulldown in the top-right corner, and using the value of the Username field.
  • $PRIVATE_KEY_FILE: the name of the file where your private key is stored, typically id_rsa
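For example, if your Joyent username were jdoe and your private key lived at ~/.ssh/id_rsa, the invocation would look like this (illustrative values only):

$ ./tools/sdc-docker-setup.sh -k 165.225.168.22 jdoe ~/.ssh/id_rsa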

Run the command and you should see the following output:

Setting up Docker client for SDC using:
    CloudAPI:        https://165.225.168.22
    Account:         <your username>
    Key:             /home/ubuntu/.ssh/id_rsa

[..snip..]

Wrote certificate files to /home/ubuntu/.sdc/docker/<username>

Docker service endpoint is: tcp://<generated ip>:2376

* * *
Success. Set your environment as follows:

    export DOCKER_CERT_PATH=/home/ubuntu/.sdc/docker/<username>
    export DOCKER_HOST=tcp://<generated-ip>:2376
    alias docker="docker --tls"

Then you should be able to run 'docker info' and see your account
name 'SDCAccount: <username>' in the output.

Export environment variables

As the output above suggests, copy and paste the commands from the output. Here’s an example of what that will look like (but you should copy and paste from your command output, not the snippet below):

$ export DOCKER_CERT_PATH=/home/ubuntu/.sdc/docker/<username>
$ export DOCKER_HOST=tcp://<generated-ip>:2376
$ alias docker="docker --tls"

Docker Hello World

Let’s spin up an Ubuntu Docker image that says hello world.

Remember you’re running the Docker client on your workstation, not in the cloud. Here’s an overview of what’s going to happen:

diagram

To start the Docker container:

$ docker run --rm ubuntu:14.04 echo "Hello Docker World, from Joyent"

You should see the following output:

Unable to find image 'ubuntu:14.04' locally
Pulling repository library/ubuntu
...
Hello Docker World, from Joyent

Also, since the --rm flag was passed, the container will have been removed after exiting. You can verify this by running docker ps -a. This is important because stopped containers incur charges on Joyent.

Congratulations! You’ve gotten a “hello world” Docker container running on Joyent.

Run Couchbase Server containers

Now it’s time to run Couchbase Server.

To kick off three Couchbase Server containers, run:

$ for i in `seq 1 3`; do \
      echo "Starting container $i"; \
      export container_$i=$(docker run --name couchbase-server-$i -d -P couchbase/server); \
  done

To confirm the containers are up, run:

$ docker ps

and you should see:

CONTAINER ID        IMAGE                                       COMMAND             CREATED             STATUS              PORTS               NAMES
5bea8901814c        couchbase/server   "couchbase-start"   3 minutes ago       Up 2 minutes                            couchbase-server-1
bef1f2f32726        couchbase/server   "couchbase-start"   2 minutes ago       Up 2 minutes                            couchbase-server-2
6f4e2a1e8e63        couchbase/server   "couchbase-start"   2 minutes ago       Up About a minute                       couchbase-server-3

At this point you will have environment variables defined with the container ids of each container. You can check this by running:

$ echo $container_1 && echo $container_2 && echo $container_3
21264e44d66b4004b4828b7ae408979e7f71924aadab435aa9de662024a37b0e
ff9fb4db7b304e769f694802e6a072656825aa2059604ba4ab4d579bd2e5d18d
0c6f8ca2951448e497d7e12026dcae4aeaf990ec51e047cf9d8b2cbdd9bd7668

Get public ip addresses of the containers

Each container will have two IP addresses assigned:

  • A public IP, accessible from anywhere
  • A private IP, only accessible from containers/machines in your Joyent account

To get the public IP, we can use the Docker client. (To get the private IP, you need to use the Joyent SmartDataCenter tools, which are described below.)

$ container_1_ip=`docker inspect $container_1 | grep -i IPAddress | awk -F: '{print $2}' |  grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`
$ container_2_ip=`docker inspect $container_2 | grep -i IPAddress | awk -F: '{print $2}' |  grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`
$ container_3_ip=`docker inspect $container_3 | grep -i IPAddress | awk -F: '{print $2}' |  grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b"`

You will now have the public IP addresses of each container defined in environment variables. You can check that it worked via:

$ echo $container_1_ip && echo $container_2_ip && echo $container_3_ip
165.225.185.11
165.225.185.12
165.225.185.13

Connect to Couchbase Web UI

Open your browser to $container_1_ip:8091 and you should see:

Couchbase Welcome Screen

At this point, it’s possible to set up the cluster by going to each Couchbase node’s Web UI and following the Setup Wizard. However, in case you want to automate this in the future, let’s do it from the command line instead.

Setup first Couchbase node

Let’s arbitrarily pick container_1 as the first node in the cluster. This node is special in the sense that other nodes will join it.

The following command will:

  • Set the Administrator’s username and password to Administrator / password (you should change this)
  • Set the cluster RAM size to 600 MB

Note: the -u admin -p password should be left as-is, since that is just passing in the default admin name and password for auth purposes.

$ docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
cluster-init -c $container_1_ip \
--cluster-init-username=Administrator \
--cluster-init-password=password \
--cluster-init-ramsize=600 \
-u admin -p password

You should see a response like:

SUCCESS: init 165.225.185.11

Create a default bucket

A bucket is equivalent to a database in typical RDBMS systems.

$ docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
bucket-create -c $container_1_ip:8091 \
--bucket=default \
--bucket-type=couchbase \
--bucket-port=11211 \
--bucket-ramsize=600 \
--bucket-replica=1 \
-u Administrator -p password

You should see:

SUCCESS: bucket-create

Add 2nd Couchbase node

Add the second Couchbase node with this command:

$ docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
server-add -c $container_1_ip \
-u Administrator -p password \
--server-add $container_2_ip \
--server-add-username Administrator \
--server-add-password password 

You should see:

SUCCESS: server-add 165.225.185.12:8091

To verify it was added, run:

$ docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
server-list -c $container_1_ip \
-u Administrator -p password

which should return the list of Couchbase Server nodes that are now part of the cluster:

ns_1@165.225.185.11 165.225.185.11:8091 healthy active
ns_1@165.225.185.12 165.225.185.12:8091 healthy inactiveAdded

Add 3rd Couchbase node and rebalance

In this step we will:

  • Add the 3rd Couchbase node
  • Trigger a “rebalance”, which distributes the (empty) bucket’s data across the cluster

$ docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
rebalance -c $container_1_ip \
-u Administrator -p password \
--server-add $container_3_ip \
--server-add-username Administrator \
--server-add-password password 

You should see:

INFO: rebalancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
SUCCESS: rebalanced cluster
close failed in file object destructor:
Error in sys.excepthook:

Original exception was:

If you see SUCCESS, then it worked. (I’m not sure why the “close failed in file ..” error is happening, but so far it appears that it can be safely ignored.)

Login to Web UI

Open your browser to $container_1_ip:8091 and you should see:

Couchbase Login Screen

Login with:

  • Username: Administrator
  • Password: password

And you should see:

Couchbase Nodes

Congratulations! You have a Couchbase Server cluster up and running on Joyent Triton.

Teardown

To stop and remove your Couchbase server containers, run:

$ docker stop $container_1 $container_2 $container_3
$ docker rm $container_1 $container_2 $container_3

To double check that you no longer have any containers running or in the stopped state, run docker ps -a and you should see an empty list.
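For example, a quick check (with no containers left, docker ps -a prints only the column headers):

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES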

Installing the SDC tools (optional)

Installing the Joyent Smart Data Center (SDC) tools will allow you to gain more visibility into your container cluster, for example being able to view the internal IP of each container.

Here’s how to install the sdc-tools suite.

Install smartdc

First install NodeJS + NPM

Install smartdc:

npm install -g smartdc

Configure environment variables

$ export SDC_URL=https://us-east-3b.api.joyent.com
$ export SDC_ACCOUNT=<ACCOUNT>
$ export SDC_KEY_ID=$(ssh-keygen -l -f $HOME/.ssh/id_rsa.pub | awk '{print $2}')

Replace values as follows:

  • ACCOUNT: your Joyent username. You can find it by logging into the Joyent web UI, opening the Account menu from the pulldown in the top-right corner, and using the value of the Username field.

List machines

Run sdc-listmachines to list all the containers running under your Joyent account. Your output should look something like this:

$ sdc-listmachines
[
{
    "id": "0c6f8ca2-9514-48e4-97d7-e12026dcae4a",
    "name": "couchbase-server-3",
    "type": "smartmachine",
    "state": "running",
    "image": "335a8046-0749-1174-5666-6f084472b5ef",
    "ips": [
      "192.168.128.32",
      "165.225.185.13"
    ],
    "memory": 1024,
    "disk": 25600,
    "metadata": {},
    "tags": {},
    "created": "2015-03-26T14:50:31.196Z",
    "updated": "2015-03-26T14:50:45.000Z",
    "networks": [
      "7cfe29d4-e313-4c3b-a967-a28ea34342e9",
      "178967cb-8d11-4f53-8434-9c91ff819a0d"
    ],
    "dataset": "335a8046-0749-1174-5666-6f084472b5ef",
    "primaryIp": "165.225.185.13",
    "firewall_enabled": false,
    "compute_node": "44454c4c-4400-1046-8050-b5c04f383432",
    "package": "t4-standard-1G"
  },
]

Find private IP of an individual machine

$ sdc-getmachine <machine_id> | json -aH ips | json -aH | egrep "10\.|192\."

References

Setting Up Octopress Under Docker

I got a new computer last week. It’s the latest MacBook retina, and I needed to refresh because I wanted a bigger SSD drive (and after having an SSD drive, I’ll never go back).

Anyway, I’m trying to get my Octopress blog going again, and oh my God, what a nightmare. Octopress was working beautifully for me for years, and then all of a sudden I am at the edge of Ruby Dependency Hell staring at an Octopress giving me eight fingers.

With the help of Docker, I’ve managed to tame this eight-legged beast, barely.

Run Docker

See Installing Docker for instructions.

This blog post assumes you already have an Octopress git repo. If you are starting from scratch, then check out Octopress Setup Part I to become even more confused.

Install Octopress Docker image

$ docker run -ti tleyden5iwx/octopress /bin/bash

After this point, the rest of the instructions assume that you are executing commands from inside the Docker Container.

Delete Octopress dir + clone your Octopress repo

The Docker container will contain an Octopress directory, but it’s not needed.

From within the container:

$ cd /root
$ rm -rf octopress/
$ git clone https://github.com/your-github-username/your-github-username.github.io.git octopress
$ cd octopress/

Now, switch to the source branch (which contains the content)

$ git checkout source

Re-install dependencies:

$ bundle install

Prevent ASCII encoding errors:

$ export LC_ALL=C.UTF-8

Clone deploy directory

$ git clone https://github.com/your-github-username/your-github-username.github.io.git _deploy

Rake preview

As a smoke test, run:

$ bundle exec rake preview

NOTE: I have no idea why bundle exec is required here; I just used this in response to a previous error message and its accompanying suggestion.

If this gives no errors, that’s a good sign.

Create a new blog post

$ bundle exec rake new_post["Setting up Octopress under Docker"]

It will tell you the path to the blog post. Now open the file in your favorite editor and add content.

Push to Source branch

The source branch has the source markdown content. It’s actually the most important thing to preserve, because the HTML can always be regenerated from it.

$ git push origin source

Deploy to Master branch

The master branch contains the rendered HTML content. Here’s how to push it up to your GitHub Pages repo (remember, in an earlier step you cloned your GitHub Pages repo at https://github.com/your-github-username/your-github-username.github.io.git).

$ bundle exec rake generate && bundle exec rake deploy

After the above command, the changes should be visible on your GitHub Pages blog (e.g., your-username.github.io).
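As an optional check from the command line (substitute your own GitHub Pages hostname; the exact response headers may vary), you can confirm the site is being served:

$ curl -sI http://your-username.github.io | head -1
HTTP/1.1 200 OK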

Common errors

If you get:

YAML Exception reading 2014-04-09-a-successful-git-branching-model-with-enterprise-support.markdown: invalid byte sequence in US-ASCII

Run:

$ export LC_ALL=C.UTF-8

References

Test Coverage for Go With drone.io and coveralls.io

This will walk you through setting up a test coverage report on coveralls.io which will be updated every time a new build happens on drone.io (a continuous integration server similar to Travis CI).

I’m going to use the couchbaselabs/sg-replicate repo as an example, since it currently does not have any test coverage statistics. The goal is to end up with a badge in the README that points to a test coverage report hosted on coveralls.io.

Clone the repo

$ git clone https://github.com/couchbaselabs/sg-replicate.git
$ cd sg-replicate

Test coverage command line stats

$ go test -cover
go tool: no such tool "cover"; to install:
  go get golang.org/x/tools/cmd/cover

Try again:

$ go get golang.org/x/tools/cmd/cover && go test -cover
PASS
coverage: 69.4% of statements
ok    github.com/couchbaselabs/sg-replicate   0.156s

Ouch, 69.4% is barely a C- (if you round up!).

Coverage breakdown

Text report:

$ go test -coverprofile=coverage.out 
$ go tool cover -func=coverage.out
github.com/couchbaselabs/sg-replicate/attachment.go:15:           NewAttachment           84.6%
github.com/couchbaselabs/sg-replicate/changes_feed_parameters.go:20:  NewChangesFeedParams        100.0%
github.com/couchbaselabs/sg-replicate/changes_feed_parameters.go:30:  FeedType            100.0%
github.com/couchbaselabs/sg-replicate/changes_feed_parameters.go:34:  Limit               100.0%
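If you want to see which functions are dragging the number down, you can sort that same report by the coverage column. This is just an optional one-liner that assumes the default go tool cover output format:

$ go tool cover -func=coverage.out | sort -k3 -n | head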

HTML report:

$ go test -coverprofile=coverage.out 
$ go tool cover -html=coverage.out

This should open up the following report in your default browser:

html report

Coveralls.io setup

  • Login to coveralls.io
  • Create a new repo
  • Get the repo token from the SET UP COVERALLS section

At this point, your empty coveralls repo will look something like this:

empty coveralls repo

Configure Drone.io + Goveralls

If you have not already done so, setup a drone.io build for your repo.

On the drone.io Settings page, make the following changes:

Environment Variables

In the Environment Variables section of the web ui, add:

COVERALLS_TOKEN=<coveralls_repo_token>

Commands

In the commands section, you can replace your existing go test call with:

go get github.com/axw/gocov/gocov
go get github.com/mattn/goveralls
goveralls -service drone.io -repotoken $COVERALLS_TOKEN

Here’s what it should look like:

drone io ui

Kick off a build

Go to the drone.io project page for your repo, and hit Build Now.

At the bottom of the build output, you should see:

Job #1.1
https://coveralls.io/jobs/5189501

If you follow the link, you should see something like:

coveralls report

Looks like we just went from a C- to a B! I have no idea why the coverage improved, but I’ll take it.

Add a badge, call it a day

On the coveralls.io project page for your repo, you should see a button near the top called Badge URLS. Click and copy/paste the markdown, which should look something like this:

[![Coverage Status](https://coveralls.io/repos/couchbaselabs/sg-replicate/badge.svg?branch=master)](https://coveralls.io/r/couchbaselabs/sg-replicate?branch=master)

And add it to your project’s README.

badges

References

Nginx Proxy for Sync Gateway Using Confd

This will walk you through setting up Sync Gateway behind nginx. The nginx conf will be auto-generated based on Sync Gateway status.

Launch CoreOS instances on EC2

Recommended values:

  • ClusterSize: 3 nodes (default)
  • Discovery URL: as it says, you need to grab a new token from https://discovery.etcd.io/new and paste it in the box.
  • KeyPair: the name of the AWS keypair you want to use. If you haven’t already, you’ll want to upload your local ssh key into AWS and create a named keypair.

Wait until instances are up

screenshot

ssh into a CoreOS instance

Go to the AWS console under EC2 instances and find the public ip of one of your newly launched CoreOS instances.

screenshot

Choose any one of them (it doesn’t matter which), and ssh into it as the core user with the cert provided in the previous step:

$ ssh -i aws.cer -A core@ec2-54-83-80-161.compute-1.amazonaws.com

Spin up Sync Gateway containers

$ etcdctl set /couchbase.com/enable-code-refresh true
$ sudo docker run --net=host tleyden5iwx/couchbase-cluster-go update-wrapper sync-gw-cluster launch-sgw --num-nodes=2 --config-url=http://git.io/hFwa --in-memory-db

Verify etcd entries

$ etcdctl ls --recursive /
...
/couchbase.com/sync-gw-node-state
/couchbase.com/sync-gw-node-state/10.169.70.114
/couchbase.com/sync-gw-node-state/10.231.220.110

Create data volume container

$ wget https://raw.githubusercontent.com/lordelph/confd-demo/master/confdata.service
$ fleetctl start confdata.service

Launch sync-gateway-nginx-confd.service

$ wget https://raw.githubusercontent.com/lordelph/confd-demo/master/confd.service
$ sed -i -e 's/lordelph\/confd-demo/tleyden5iwx\/sync-gateway-nginx-confd/' confd.service
$ fleetctl start confd.service

Launch nginx service

$ wget https://raw.githubusercontent.com/lordelph/confd-demo/master/nginx.service
$ fleetctl start nginx.service

Verify

Try a basic http get.

$ nginx_ip=`fleetctl list-units | grep -i nginx | awk '{print $2}' | awk -F/ '{print $2}'`
$ curl $nginx_ip
{"couchdb":"Welcome","vendor":{"name":"Couchbase Sync Gateway","version":1},"version":"Couchbase Sync Gateway/master(a47a17f)"}

Add the -v flag to see which Sync Gateway server is servicing the request:

$ curl -v $nginx_ip
...
X-Handler: 10.231.220.110:4984
...

If you repeat that a few more times, you should see different IP addresses for the handler.
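To see the rotation without eyeballing repeated curl output, a small loop like the following (an optional sketch) prints just the handler header each time; you should see the two Sync Gateway IPs alternate:

$ for i in `seq 1 6`; do curl -sv $nginx_ip 2>&1 | grep -i x-handler; done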

Take a sync gateway out of rotation

$ fleetctl stop sync_gw_node@1.service sync_gw_sidekick@1.service

Now try hitting nginx again, and you should not see the Sync Gateway node that you just shut down as a handler.

$ curl -v $nginx_ip
...
X-Handler: 10.231.220.114:4984
...

Put sync gateway back into rotation

$ fleetctl start sync_gw_node@1.service sync_gw_sidekick@1.service

Now try hitting nginx again, and you should again see the Sync Gateway node that you just restarted as a handler.

$ curl -v $nginx_ip
...
X-Handler: 10.231.220.110:4984
...

References

Graphing Time Series Data With Seriesly and Cubism

This will walk you through the basics of putting data into seriesly and visualizing it with cubism.

You will end up with this in your browser:

screenshot

Install seriesly

go get -u -v -t github.com/dustin/seriesly

Run seriesly

seriesly -flushDelay=1s -root=/tmp/seriesly-data

and leave it running in the background.

Create a db

In another shell:

curl -X PUT http://localhost:3133/testdb

Write docs to db

This script will write JSON docs with random values for the purpose of visualization.

Copy the following to add_seriesly_docs.rb:

#!/usr/bin/env ruby

6000.times do |count|
  randomNumber = rand() # random number between 0 and 1
  cmd = "curl -X POST -d '{\"index\":#{randomNumber}}' http://localhost:3133/testdb"
  puts cmd
  system(cmd)
  system("sleep 1")
end

and then run it

chmod +x add_seriesly_docs.rb && ./add_seriesly_docs.rb

and let it continue running in the background.

Create a webserver

Create a directory:

mkdir /tmp/seriesly-http/
cd /tmp/seriesly-http/

Create fileserver.go:

package main
import "net/http"
func main() {
        panic(http.ListenAndServe(":8080", http.FileServer(http.Dir("/tmp/seriesly-http/"))))
}

Run webserver:

go run fileserver.go

Download seriesly.html file

This is a file I wrote which uses seriesly as a metric data source for cubism.

It’s a quick hack, since I couldn’t manage to get seriesism.js working.

cd /tmp/seriesly-http/
wget https://gist.githubusercontent.com/tleyden/ec0c9be5786e0c0bd9ba/raw/1c08ea13b8ce46e08a49df19ad44c8e6a0ade896/seriesly.html

Open seriesly.html

In your browser, point to http://localhost:8080/seriesly.html

At this point, you should see the screenshot at the beginning of the blog post.

References

Running a Walrus-backed Sync Gateway on AWS

Follow the steps below to create a Sync Gateway instance running under AWS with the following architecture:

architecture diagram

It will be using the Walrus in-memory database, and so it is only useful for light testing. Walrus does have the ability to snapshot its memory contents to a file, so your data can persist across restarts.

Warning: don’t run this in production! If you want something that is closer to production ready, check out Running a Sync Gateway Cluster Under CoreOS on AWS instead.

Launch EC2 instance

Go to the Cloudformation Wizard

Recommended values:

  • ClusterSize: 1 node
  • Discovery URL: as it says, you need to grab a new token from https://discovery.etcd.io/new and paste it in the box.
  • KeyPair: the name of the AWS keypair you want to use.

For the keypair that you use, your local ssh client will need to have the private key side of that keypair.

Wait until instances are up

Hit the “reload” button until it goes to the CREATE_COMPLETE state.

screenshot

Find ip of instance

Go to the AWS console under the “EC2 instances” section and find the public ip of one of your newly launched CoreOS instances.

Choose any one of them (it doesn’t matter which), and look for the Public IP. Copy that value onto your clipboard.

SSH into instance

ssh into it as the core user with the cert provided in the previous step:

$ ssh -A core@ec2-54-83-80-161.compute-1.amazonaws.com

Create a volume directory

After you ssh into your instance, create a volume directory so that the data persists across different container instances.

$ sudo mkdir -p /opt/sync_gateway/data
$ sudo chown -R core:core /opt/sync_gateway/data

Launch Sync Gateway

$ SYNC_GW_CONFIG=https://gist.githubusercontent.com/tleyden/368f01218baf4e760267/raw/a65be036bc3855d5ab4e73b849f4caa1dc7d390f/config.json
$ sudo docker run -d --name sync_gw --net=host -v /opt/sync_gateway/data:/opt/sync_gateway/data tleyden5iwx/sync-gateway-coreos sync-gw-start -c master -g $SYNC_GW_CONFIG

You should see the following output:

Unable to find image 'tleyden5iwx/sync-gateway-coreos' locally
Pulling repository tleyden5iwx/sync-gateway-coreos
daa0c81d9745: Download complete
......
Status: Downloaded newer image for tleyden5iwx/sync-gateway-coreos:latest
d22035060882a2071c3e0a556ae5db5041f84e3004d67fb11355b6d8a7bf40b8
$ 

Congratulations! You now have a Sync Gateway running.

It might feel underwhelming, because nothing appears to be happening, but Sync Gateway is actually running in the background. To verify that, run:

$ sudo docker ps
CONTAINER ID        IMAGE                                    COMMAND                CREATED              STATUS              PORTS               NAMES
d22035060882        tleyden5iwx/sync-gateway-coreos:latest   "sync-gw-start -c ma   About a minute ago   Up About a minute                       sync_gw

View logs

$ CONTAINER_ID=$(sudo docker ps | grep -iv container | awk '{ print $1 }')
$ sudo docker logs --follow ${CONTAINER_ID}

Verify Sync Gateway

Assuming your public ip is 54.81.228.221, paste http://54.81.228.221:4984 into your web browser and you should see:

{
    "couchdb":"Welcome",
    "vendor":{
        "name":"Couchbase Sync Gateway",
        "version":1
    },
    "version":"Couchbase Sync Gateway/master(6356065)"
}

To make sure the database was configured correctly, change the url to http://54.81.228.221:4984/db, and you should see:

{
    "db_name":"db",
    .. etc ..
}

Try out document API via curl

Create a new document

$ curl -H 'Content-Type: application/json' -X POST -d '{"hello":"sync gateway"}' http://54.81.228.221:4984/db/

This will return the following JSON:

{
    "id":"f1c8c5f8de22a09544b97fcc20fce316",
    "ok":true,
    "rev":"1-016b8855d6faf2d703a8b35a44cd4a40"
}

View the document

Using the doc id returned above:

$ curl http://54.81.228.221:4984/db/f1c8c5f8de22a09544b97fcc20fce316

You should see:

{
    "_id":"f1c8c5f8de22a09544b97fcc20fce316",
    "_rev":"1-016b8855d6faf2d703a8b35a44cd4a40",
    "hello":"sync gateway"
}

Check out the Sync Gateway REST API docs for full documentation on the available REST calls you can make.
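As one more example (a sketch based on the CouchDB-style revision handling that Sync Gateway uses; substitute your own doc id and rev), you can update the document by doing a PUT that includes the current _rev:

$ curl -H 'Content-Type: application/json' -X PUT \
    -d '{"hello":"sync gateway, updated", "_rev":"1-016b8855d6faf2d703a8b35a44cd4a40"}' \
    http://54.81.228.221:4984/db/f1c8c5f8de22a09544b97fcc20fce316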

Restart Sync Gateway with new config

If you need to change your sync gateway config, follow the instructions below.

Stop and remove existing container

Find the container id via sudo docker ps as shown above, and run this command with your own container id:

$ CONTAINER_ID=$(sudo docker ps | grep -iv container | awk '{ print $1 }')
$ sudo docker stop ${CONTAINER_ID} && sudo docker rm ${CONTAINER_ID}

Update sync gateway config

You can take this sample config and customize it to your needs, and then upload it somewhere on the web.

Make sure you keep the server field as "walrus:data", since that tells Sync Gateway to use walrus and to store the data in the /opt/sync_gateway/data directory.
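For reference, here is a minimal sketch of what such a config might look like (the database name and user settings below are illustrative; the "walrus:data" server value is the important part for this setup). You could write it out locally like this and then host the file anywhere reachable over HTTP:

$ cat > config.json <<'EOF'
{
  "log": ["*"],
  "databases": {
    "db": {
      "server": "walrus:data",
      "users": { "GUEST": { "disabled": false, "admin_channels": ["*"] } }
    }
  }
}
EOF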

Start container with new config

$ SYNC_GW_CONFIG=https://yourserver.com/yourconfig.json
$ sudo docker run --name sync_gw --net=host -v /opt/sync_gateway/data:/opt/sync_gateway/data tleyden5iwx/sync-gateway-coreos sync-gw-start -c master -g $SYNC_GW_CONFIG

After it starts up, your sync gateway will be running with the new config.

Next step: try out the GrocerySync app

Choose the GrocerySync app for your platform:

and point the sync URL at your server instead of the default. Now you should be able to sync data through your own Sync Gateway.

If you are on Phonegap, check our sample apps listing which has a link to the TodoLite-Phonegap app.

Up and Running With Couchbase Lite Phonegap Android on OSX

This will walk you through the steps to install the TodoLite-Phonegap sample app that uses Couchbase Lite Android. After you’re finished, you’ll end up with this app.

Install Homebrew

Install Android Studio

Install Phonegap

Install Node.js

Phonegap is installed with the Node Package Manager (npm), so we need to get Node.js first.

brew install node

Install Phonegap

$ sudo npm install -g phonegap

You should see this output

Check your version with:

$ phonegap -v
4.1.2-0.22.9

Install Ant

$ brew install ant

Check your Ant version with:

$ ant -version
Apache Ant(TM) version 1.9.4 compiled on April 29 2014

Note: according to Stack Overflow, you may have to install Xcode and the Command Line Tools for this to work.

Create new Phonegap App

$ phonegap create todo-lite com.couchbase.TodoLite TodoLite

You should see the following output:

Creating a new cordova project with name "TodoLite" and id "com.couchbase.TodoLite" at location "/Users/tleyden/Development/todo-lite"
Using custom www assets from https://github.com/phonegap/phonegap-app-hello-world/archive/master.tar.gz
Downloading com.phonegap.hello-world library for www...
Download complete

cd into the newly created directory:

$ cd todo-lite

Add the Couchbase Lite plugin

$ phonegap local plugin add https://github.com/couchbaselabs/Couchbase-Lite-PhoneGap-Plugin.git

You should see the following output:

[warning] The command `phonegap local <command>` has been DEPRECATED.
[warning] The command has been delegated to `phonegap <command>`.
[warning] The command `phonegap local <command>` will soon be removed.
Fetching plugin "https://github.com/couchbaselabs/Couchbase-Lite-PhoneGap-Plugin.git" via git clone

Add additional plugins required by TodoLite-Phonegap

$ phonegap local plugin add https://git-wip-us.apache.org/repos/asf/cordova-plugin-camera.git
$ phonegap local plugin add https://github.com/apache/cordova-plugin-inappbrowser.git 
$ phonegap local plugin add https://git-wip-us.apache.org/repos/asf/cordova-plugin-network-information.git

Clone the example app source code

$ rm -rf www
$ git clone https://github.com/couchbaselabs/TodoLite-PhoneGap.git www

Verify ANDROID_HOME environment variable

If you don’t already have it set, you will need to set your ANDROID_HOME environment variable:

$ export ANDROID_HOME="/Applications/Android Studio.app/sdk"
$ export PATH=$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools

Run app

$ phonegap run android

You should see the following output:

[phonegap] executing 'cordova platform add android'...
[phonegap] completed 'cordova platform add android'
[phonegap] executing 'cordova run android'...
[phonegap] completed 'cordova run android'

Verify app

TodoLite-Phonegap should launch on the emulator and look like this:

screenshot

Facebook login

Hit the happy face in the top right, and it will prompt you to login via Facebook.

Screenshot

View data

After logging in, it will sync any data for your user stored on the Couchbase Mobile demo cluster.

For example, if you’ve previously used TodoLite-iOS or TodoLite-Android, your data should appear here.

screenshot

Test Sync via single device

  • Login with Facebook as described above
  • Add a new Todo List
  • Add an item to your Todo List
  • Uninstall the app (one way to do this from the command line is sketched after this list)
  • Re-install the app by running phonegap run android again
  • Login with Facebook
  • Your Todo List and item added above should now appear
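If you’d rather uninstall from the command line than through the launcher, something like this should work, assuming the app id used when the project was created above:

$ adb uninstall com.couchbase.TodoLite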

Test Sync via 2 apps

Note: you could also set up two emulators and run the apps separately.

Appendix A: using a more recent build of the Phonegap Plugin

Reset state

$ cd .. 
$ rm -rf todo-lite

Create another phonegap app

$ phonegap create todo-lite com.couchbase.TodoLite TodoLite
$ cd todo-lite

Download zip file

$ mkdir Couchbase-Lite-PhoneGap-Plugin && cd Couchbase-Lite-PhoneGap-Plugin
$ wget http://cbfs-ext.hq.couchbase.com/builds/Couchbase-Lite-PhoneGap-Plugin_1.0.4-41.zip
$ unzip Couchbase-Lite-PhoneGap-Plugin_1.0.4-41.zip

Add local plugin

$ phonegap local plugin add Couchbase-Lite-PhoneGap-Plugin

You should see output:

[warning] The command phonegap local <command> has been DEPRECATED.
[warning] The command has been delegated to phonegap <command>.
[warning] The command phonegap local <command> will soon be removed.

Now just follow the rest of the steps above ..

References

Getting Started With Go and Protocol Buffers

I found the official docs on using Google Protocol Buffers from Go a bit confusing, and couldn’t find any other clearly written blog posts on the subject, so I figured I’d write my own.

This will walk you through the following:

  • Install golang/protobuf and required dependencies
  • Generating Go wrappers for a test protocol buffer definition
  • Using those Go wrappers to marshal and unmarshal an object

Install protoc binary

Since the protocol buffer compiler protoc is required later, we must install it.

Ubuntu 14.04

If you want to use an older version (v2.5), simply do:

$ apt-get install protobuf-compiler

Otherwise if you want the latest version (v2.6):

$ apt-get install build-essential
$ wget https://protobuf.googlecode.com/svn/rc/protobuf-2.6.0.tar.gz
$ tar xvfz protobuf-2.6.0.tar.gz
$ cd protobuf-2.6.0
$ ./configure && make install

OSX

$ brew install protobuf

Install Go Protobuf library

This assumes you have Go 1.2 or later already installed, and your $GOPATH variable set.
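A quick way to sanity-check both prerequisites (optional):

$ go version
$ echo $GOPATH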

In order to generate Go wrappers, we need to install the following:

$ go get -u -v github.com/golang/protobuf/proto
$ go get -u -v github.com/golang/protobuf/protoc-gen-go

Download a test .proto file

In order to generate wrappers, we need a .proto file with object definitions.

This one is a slightly modified version of the one from the official docs.

$ wget https://gist.githubusercontent.com/tleyden/95de4bfe34321c79e91b/raw/f8696fe0f1462f377d6bd13c5f20cccfa182578a/test.proto

Generate Go wrappers

$ protoc --go_out=. *.proto

You should end up with a new file generated: test.pb.go

Marshalling and unmarshalling an object

Open a new file main.go in emacs or your favorite editor, and paste the following:

package main

import (
  "log"

  "github.com/golang/protobuf/proto"
)

func main() {

  test := &Test{
      Label: proto.String("hello"),
      Type:  proto.Int32(17),
      Optionalgroup: &Test_OptionalGroup{
          RequiredField: proto.String("good bye"),
      },
  }
  data, err := proto.Marshal(test)
  if err != nil {
      log.Fatal("marshaling error: ", err)
  }
  newTest := &Test{}
  err = proto.Unmarshal(data, newTest)
  if err != nil {
      log.Fatal("unmarshaling error: ", err)
  }
  // Now test and newTest contain the same data.
  if test.GetLabel() != newTest.GetLabel() {
      log.Fatalf("data mismatch %q != %q", test.GetLabel(), newTest.GetLabel())
  }

  log.Printf("Unmarshalled to: %+v", newTest)

}

Explanation:

  • The Test struct literal creates a new object suitable for protobuf marshalling and populates its fields. Note that using proto.String(..) / proto.Int32(..) isn’t strictly required; they are just convenience wrappers to get string / int32 pointers.
  • proto.Marshal marshals the object to a byte array.
  • proto.Unmarshal unmarshals the previously marshalled byte array into a new, empty Test object.
  • The final GetLabel() comparison verifies that the “label” field made the marshal/unmarshal round trip safely.

Run it via:

$ go run main.go test.pb.go

and you should see the output:

Unmarshalled to: label:"hello" type:17 OptionalGroup{RequiredField:"good bye" }  

Congratulations! You are now using protocol buffers from Go.

References

Running a CBFS Cluster on CoreOS

This will walk you through getting a cbfs cluster up and running.

What is CBFS?

cbfs is a distributed filesystem on top of Couchbase Server, not unlike Mongo’s GridFS or Riak’s CS.

Here’s a typical deployment architecture:

cbfs overview

Although not shown, all cbfs daemons can communicate with all Couchbase Server instances.

It is not required to run cbfs on the same machine as Couchbase Server, but it is meant to be run in the same data center as Couchbase Server.

If you want a deeper understanding of how cbfs works, check the cbfs presentation or this blog post.

Kick off a Couchbase Cluster

cbfs depends on having a Couchbase cluster running.

Follow all of the steps in Running Couchbase Cluster Under CoreOS on AWS to kick off a 3 node Couchbase cluster.

Add security groups

A few ports will need to be opened up for cbfs.

Go to the AWS console and edit the Couchbase-CoreOS-CoreOSSecurityGroup-xxxx security group and add the following rules:

Type             Protocol  Port Range Source  
----             --------  ---------- ------
Custom TCP Rule  TCP       8484       Custom IP: sg-6e5a0d04 (copy and paste from port 4001 rule)
Custom TCP Rule  TCP       8423       Custom IP: sg-6e5a0d04 

At this point your security group should look like this:

security group

Create a new bucket for cbfs

Open Couchbase Server Admin UI

In the AWS EC2 console, find the public IP of one of the instances (it doesn’t matter which)

In your browser, go to http://<public_ip>:8091/

Create Bucket

Go to Data Buckets / Create New Bucket

Enter cbfs for the name of the bucket.

Leave all other settings as default.

create bucket
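Alternatively, if you’d rather script this step, the same couchbase-cli bucket-create approach used earlier in this post should work here too. This is only a sketch; adjust the node IP, credentials, and RAM quota to match your cluster:

$ sudo docker run --rm --entrypoint=/opt/couchbase/bin/couchbase-cli couchbase/server \
bucket-create -c <public_ip>:8091 \
--bucket=cbfs \
--bucket-type=couchbase \
--bucket-ramsize=512 \
--bucket-replica=1 \
-u Administrator -p password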

ssh in

In the AWS EC2 console, find the public IP of one of the instances (it doesn’t matter which)

ssh into one of the machines:

$ ssh -A core@<public_ip>

Run cbfs

Create a volume dir

Since the filesystem of a docker container is not meant for high-throughput I/O, a volume should be used for cbfs.

Create a directory on the host OS (i.e., on the CoreOS instance):

$ sudo mkdir -p /var/lib/cbfs/data
$ sudo chown -R core:core /var/lib/cbfs

This will be mounted by the docker container in the next step.

Generate fleet unit files

$ wget https://gist.githubusercontent.com/tleyden/d70161c3827cb8b788a8/raw/8f6c81f0095b0007565e9b205e90afb132552060/cbfs_node.service.template
$ for i in `seq 1 3`; do cp cbfs_node.service.template cbfs_node.$i.service; done

Start cbfs on all cluster nodes

$ fleetctl start cbfs_node.*.service

Run fleetctl list-units to list the units running in your cluster. You should have the following:

$ fleetctl list-units
UNIT                                            MACHINE                         ACTIVE    SUB
cbfs_node.1.service                             6ecff20c.../10.51.177.81        active    running
cbfs_node.2.service                             b8eb6653.../10.79.155.153       active    running
cbfs_node.3.service                             02d48afd.../10.186.172.24       active    running
couchbase_bootstrap_node.service                02d48afd.../10.186.172.24       active    running
couchbase_bootstrap_node_announce.service       02d48afd.../10.186.172.24       active    running
couchbase_node.1.service                        6ecff20c.../10.51.177.81        active    running
couchbase_node.2.service                        b8eb6653.../10.79.155.153       active    running

View cbfs output

$ fleetctl journal cbfs_node.1.service
2014/11/14 23:18:58 Connecting to couchbase bucket cbfs at http://10.51.177.81:8091/
2014/11/14 23:18:58 Error checking view version: MCResponse status=KEY_ENOENT, opcode=GET, opaque=0, msg: Not found
2014/11/14 23:18:58 Installing new version of views (old version=0)
2014/11/14 23:18:58 Listening to web requests on :8484 as server 10.51.177.81
2014/11/14 23:18:58 Error removing 10.51.177.81's task list: MCResponse status=KEY_ENOENT, opcode=DELETE, opaque=0, msg: Not found
2014/11/14 23:19:05 Error updating space used: Expected 1 result, got []

Run cbfs client

Run a bash shell in a docker container that has cbfsclient pre-installed:

$ sudo docker run -ti --net=host tleyden5iwx/cbfs /bin/bash

Upload a file

From within the docker container launched in the previous step:

# echo "foo" > foo
# ip=$(hostname -i | tr -d ' ')
# cbfsclient http://$ip:8484/ upload foo /foo

There should be no errors. If you run fleetctl journal cbfs_node.1.service again on the CoreOS instance, you should see log messages like:

2014/11/14 21:51:43 Recorded myself as an owner of e242ed3bffccdf271b7fbaf34ed72d089537b42f: result=success

List directory

# cbfsclient http://$ip:8484/ ls /
foo

It should list the foo file we uploaded earlier.

Congratulations! You now have cbfs up and running.

References