


Happy Batman Day from Project: Rooftop!


Hey there, Batfans! It’s Batman Day, and we here at Project: Rooftop couldn’t be happier for this chance to celebrate three quarters of a century of the Dark Knight Detective! So here are some of our favorite Batman redesigns from the last eight years of talking about superhero costuming here at P:R. -Dean

Jon McNally

Joe Quinones

Jemma Salume

Darren Rawlings

Dean Trippe

Ron Salas

Anjin Anhut

Ming Doyle

Daniel Krall

Joel Carroll

New Jersey Monthly Names Best Restaurants of 2014


[photo source: Cafe Panache]

NJ Monthly magazine has revealed its annual list of the 25 best restaurants the state has to offer for 2014. This year two Bergen County establishments made the list: Cafe Panache in Ramsey (130 East Main Street; Website), which just celebrated its 30th year in business, and the Saddle River Inn (Website).

A number of other Northern NJ notables also made the list: Fascino (Montclair), Maritime Parc (Jersey City) and Latour (Hamburg). The full list of winning restaurants can be viewed on NJ Monthly’s website.

The post New Jersey Monthly Names Best Restaurants of 2014 appeared first on Boozy Burbs.



LOPSA-NJ is looking for people to speak at their monthly meetings from October 2014 to June 2015.


Would you like to speak at LOPSA-NJ? We’d love to have you come and speak! Meeting sizes vary from 15-55 people and are usually held in Lawrence, New Jersey. Each meeting begins with 30 minutes of “social” time, followed by the speaker at 7:30 PM on the first Thursday of the month.

We have had very good success with speakers that were from out of state that happened to be in or near New Jersey and free on the first Thursday night of the month. Keep us in mind if you plan on being in New York City or New Jersey any time soon and could spare an evening to chat with us.

Fill out the Speakers Form to let us know you want to talk at LOPSA-NJ:

We also try to have a speaker available for emergencies, for when our assigned speaker can’t make it due to unforeseen circumstances. So if you have a talk that you have given before and can fill in on short notice, please fill out the above form and let us know that you can be a fill-in as well.
Thanks again and I hope your summer is going well.






Ten Docker Tips and Tricks That Will Make You Sing A Whale Song of Joy



As a Solutions Engineer at Docker Inc., I’ve been able to accumulate all sorts of good Docker tips and tricks. The sheer quantity of information available in the community is pretty overwhelming, and there are a lot of good tips and tricks that can make your workflow easier (or are just for fun) which could easily get passed over.

Once you’ve mastered the basics, the creative possibilities are pretty endless. This “Cambrian Explosion” of creativity that Docker is provoking is extremely exciting.

So I’m going to share ten of my favorite tips and tricks with you. Ready?

The 10 tips and tricks are:

  1. Run it on a VPS for extra speed
  2. Bind mount the docker socket on docker run
  3. Use containers as highly disposable dev environments
  4. bash is your friend
  5. Insta-nyan
  6. Edit /etc/hosts with the boot2docker IP address on OSX
  7. docker inspect -f voodoo
  8. Super easy terminals in-browser with wetty
  9. nsenter
  10. #docker

Alright, let’s do this!

Run it on a VPS for extra speed

This one’s pretty straightforward. If, like me, your home internet’s bandwidth is pretty lacking, you can run Docker on Digital Ocean or Linode and get much better bandwidth on pulls and pushes. I get around 50mbps download with Comcast; on my Linode, my speed tests run an order of magnitude faster than that.

So if you have the need for speed, consider investing in a VPS for your own personal Docker playground.  This is a lifesaver if you’re on, say, coffee shop WiFi or anywhere else that the connection is less than ideal.

Bind mount the docker socket on docker run

What if you want to do Docker-ey things inside of a container but you don’t want to go full Docker in Docker (dind) and run in --privileged mode? Well, you can use a base image that has the Docker client installed and bind-mount your Docker socket with -v.

docker run -it -v /var/run/docker.sock:/var/run/docker.sock nathanleclaire/devbox

Now you can send docker commands to the same instance of the docker daemon you are using on the host – inside your container!

This is really fun because it gives you all the advantages of being able to mess around with Docker containers on the host, with the flexibility and ephemerality of containers. Which leads into my next tip….

Use containers as highly disposable dev environments

How many times have you needed to quickly isolate an issue to see if it was related to certain factors in particular, and nothing else? Or just wanted to pop onto a new branch, make some changes and experiment a little with what you have running/installed in your environment, without accidentally screwing something up big time?

Docker allows you to do this in a portable way.

Simply create a Dockerfile that defines your ideal development environment on the CLI (including ack, autojump, Go, etc. if you like those – whatever you need) and kick up a new instance of that image whenever you want to pop into a totally new box and try some stuff out. For instance, here’s Docker founder Solomon Hykes’s dev box.

FROM ubuntu:14.04

RUN apt-get update -y
RUN apt-get install -y mercurial
RUN apt-get install -y git
RUN apt-get install -y python
RUN apt-get install -y curl
RUN apt-get install -y vim
RUN apt-get install -y strace
RUN apt-get install -y diffstat
RUN apt-get install -y pkg-config
RUN apt-get install -y cmake
RUN apt-get install -y build-essential
RUN apt-get install -y tcpdump
RUN apt-get install -y screen
# Install go
RUN curl | tar -C /usr/local -zx

ENV GOROOT /usr/local/go
ENV PATH /usr/local/go/bin:$PATH
# Setup home environment
RUN useradd dev
RUN mkdir /home/dev && chown -R dev: /home/dev
RUN mkdir -p /home/dev/go /home/dev/bin /home/dev/lib /home/dev/include
ENV PATH /home/dev/bin:$PATH
ENV PKG_CONFIG_PATH /home/dev/lib/pkgconfig
ENV LD_LIBRARY_PATH /home/dev/lib
ENV GOPATH /home/dev/go:$GOPATH

RUN go
# Create a shared data volume
# We need to create an empty file, otherwise the volume will
# belong to root.
# This is probably a Docker bug.
RUN mkdir /var/shared/
RUN touch /var/shared/placeholder
RUN chown -R dev:dev /var/shared
VOLUME /var/shared
WORKDIR /home/dev
ENV HOME /home/dev
ADD vimrc /home/dev/.vimrc
ADD vim /home/dev/.vim
ADD bash_profile /home/dev/.bash_profile
ADD gitconfig /home/dev/.gitconfig

# Link in shared parts of the home directory
RUN ln -s /var/shared/.ssh
RUN ln -s /var/shared/.bash_history
RUN ln -s /var/shared/.maintainercfg
RUN chown -R dev: /home/dev
USER dev

This set-up is especially deadly if you use vim/emacs as your editor ;) You can use /bin/bash as your CMD and docker run -it my/devbox right into a shell.

You can also bind-mount the Docker client binary and socket (as mentioned above) when you run the container, to get access to the host’s Docker daemon from inside – which allows for all sorts of container antics! Similarly, you can easily bootstrap a development environment on a new computer this way. Just install Docker and download your dev box image.

bash is your friend

Or, more broadly, “the shell is your friend”.

Just as many of you probably have aliases for git to save keystrokes, you’ll likely want to create little shortcuts for yourself if you start to use Docker heavily. Just add these to your ~/.bashrc or equivalent and off you go.

There are some obvious ones:

alias drm="docker rm"
alias dps="docker ps" 

I will add one of these whenever I find myself typing the same command over and over.  Automation for the win!

You can also mix and match in all kinds of fun ways. For instance, you can do

$ drm -f $(dps -aq) 

to remove all containers (including those that are running). Or:

function da () {
    docker start $1 && docker attach $1
}

to start a stopped container and attach to it.

I created a fun one to enable my rapid-bash-container-prompt habit mentioned in the previous tip:

function newbox () {
    docker run -it --name $1 \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -e BOX_NAME=$1 nathanleclaire/devbox
}


Insta-nyan

Pretty simple. You want nyan-cat in your terminal, you have Docker, and you need only one command to activate the goodness.

docker run -it supertest2014/nyan

Edit /etc/hosts with the boot2docker IP on OSX

The newest (read: BEST) version of boot2docker includes a host-only network where you can access ports exposed by containers using the boot2docker virtual machine’s IP address. The boot2docker ip command makes access to this value easy. However, I find this address a little hard to remember and cumbersome to type, so I add an entry to my /etc/hosts file for easy access as boot2docker:port when I’m running applications that expose ports with Docker. It’s handy, give it a shot!
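A sketch of the idea (the alias dockerhost and the fallback IP are my own inventions for illustration; on a real machine you would append the line with sudo):

```shell
# Get the boot2docker VM's address; fall back to a made-up example IP
# so this sketch still runs on a machine without boot2docker installed.
B2D_IP=$(boot2docker ip 2>/dev/null || true)
B2D_IP=${B2D_IP:-}

# The /etc/hosts line we want; "dockerhost" is an arbitrary alias.
ENTRY="$B2D_IP dockerhost"
echo "$ENTRY"

# To actually install it (requires sudo):
# echo "$ENTRY" | sudo tee -a /etc/hosts
```

After that, an exposed container port is reachable as e.g. dockerhost:8080.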

Note: Do remember that it is possible for the boot2docker VM’s IP address to change, so make sure to check for that if you are encountering network issues using this shortcut. If you are not doing something that would mess with your network configuration (setting up and tearing down multiple virtual machines including boot2docker’s, etc.), though, you will likely not encounter this issue.

While you’re at it you should probably tweet @SvenDowideit and thank him for his work on boot2docker, since he is an absolute champ for delivering, maintaining, and documenting it.

docker inspect -f voodoo

You can do all sorts of awesome, flexible things with the docker inspect command’s -f (or --format) flag if you’re willing to learn a little bit about Go templates.

Normally docker inspect $ID outputs a big JSON dump, but you can access individual properties with templating like:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' $ID 

The argument to -f is a Go (the language that Docker is written in) template. If you try something like:

$ docker inspect -f '{{ .NetworkSettings }}' $ID
map[Bridge:docker0 Gateway:<nil>Ports:map[5000/tcp:[map[HostIp:]]]] 

You will not get JSON since Go will actually just dump the data type that Docker is marshalling into JSON for the output you see without -f. But you can do:

$ docker inspect -f '{{ json .NetworkSettings }}' $ID

To get JSON! And to prettify it, you can pipe it into a Python builtin:

$ docker inspect -f '{{ json .NetworkSettings }}' $ID | python -mjson.tool
{
    "Bridge": "docker0",
    "Gateway": "",
    "IPAddress": "",
    "IPPrefixLen": 16,
    "PortMapping": null,
    "Ports": {
        "5000/tcp": [
            {
                "HostIp": "",
                "HostPort": "5000"
            }
        ]
    }
}
You can also do other fun tricks, like accessing object properties which have non-alphanumeric keys. It helps to know some Golang, of course.

docker inspect -f '{{ index .Volumes "/host/path" }}' $ID 

This is a very powerful tool for quickly extracting information about your running containers, and is extremely helpful for troubleshooting because it provides a ton of detail.

Super easy terminals in-browser with wetty

I really foresee people making extremely FUN web applications with this kind of functionality. You can spin up a container which is running an instance of wetty (a JavaScript-powered in-browser terminal emulator).

Try it for yourself with:

docker run -p 3000:3000 -d nathanleclaire/wetty

Wetty only works in Chrome, unfortunately, but there are other JavaScript terminal emulators begging to be Dockerized, and if you are using it for a presentation or something (imagine embedding interactive CLI snapshots in your Reveal.js slideshow – nice) you control the browser anyway. Now you can embed isolated terminal applications in web applications wherever you want, and you control the environment in which they execute with an excruciating amount of detail. No pollution from host to container, and vice versa.

  • The creative possibilities of this are just mind-boggling to me. I REALLY want to see someone make a version of TypeRacer where you compete with other contestants in real time to type code into vim or emacs as quickly as possible. That would be pure awesome. Or a real-time coding challenge where your code competes with other code in an arena for dominance a la Core Wars.


nsenter

Docker engineer Jérôme Petazzoni


Jérôme wrote an opinionated article a few weeks ago that shook things up a bit. In it, he argues that you should not need to run sshd (the daemon for getting a remote terminal prompt) in your containers and, in fact, that if you are doing so you are violating the Docker philosophy (one concern per container). It’s a good read, and he mentions nsenter as a fun trick to get a prompt inside of containers which have already been initialized with a process.

See here or here to learn how to do it.
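As a rough sketch of what those guides describe (the container name is a placeholder, and nsenter must be installed on the host):

```shell
# Placeholder: substitute the name or ID of one of your running containers.
CID=my-container

# Ask Docker for the PID of the container's init process; the fallback
# just keeps this sketch harmless to run on a machine without Docker.
PID=$(docker inspect -f '{{ .State.Pid }}' "$CID" 2>/dev/null || true)

# Enter the container's namespaces and start a shell (needs root):
test -n "$PID" && sudo nsenter --target "$PID" --mount --uts --ipc --net --pid -- /bin/bash
```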


#docker

I’m not talking about the hashtag!! I’m talking about the channel on Freenode on IRC. It’s hands-down the best place to meet with fellow Dockers online, ask questions (all levels welcome!), and seek truly excellent expertise. At any given time there are about 1,000 people or more sitting in, and it’s a great community as well as a resource. Seriously, if you’ve never tried it before, go check it out. I know IRC can be scary if you’re not accustomed to using it, but the effort of setting it up and learning to use it a bit will pay huge dividends for you in terms of knowledge gleaned. I guarantee it. So if you haven’t come to hang out with us on IRC yet, do it!

To join:

  1. Download an IRC Client such as LimeChat
  2. Connect to the network
  3. Join the #docker channel




That’s all for now, folks. Tweet at us @docker and tell us your favorite Docker tips and tricks!

Docker Service Discovery


In a previous post, I showed a way to create an automated nginx reverse proxy for docker containers running on the same host. That setup works fine for front-end web apps, but is not ideal for backend services since they are typically spread across multiple hosts.

This post describes a solution to the backend service problem using service discovery for docker containers.

The architecture we’ll build is modelled after SmartStack, but uses etcd instead of Zookeeper, and two docker containers running docker-gen and haproxy instead of nerve and synapse.

How It Works

Docker Service Discovery

Similar to SmartStack, we have components to serve as a registry (etcd), a registration side-kick process (docker-register), a discovery side-kick process (docker-discover), some backend services (whoami) and finally a consumer (ubuntu/curl).

The registration and discovery components work as appliances alongside the application containers, so there is no embedded registration or discovery code in the backend or consumer containers. They just listen on ports or connect to other local ports.

Service Registry - Etcd

Before anything can be registered, we need some place to track registration entries (i.e. IP and ports of services). We’re using etcd because it has a simple programming model for service registration and supports TTLs for keys and directories.

Usually, you would run 3 or 5 etcd nodes but I’m just using one to keep things simple.

There is no reason why we could not use Consul or any other storage option that supports TTL expiration.

To start etcd:

docker run -d --name etcd -p 4001:4001 -p 7001:7001 coreos/etcd
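You can poke at the TTL mechanism by hand with etcd’s HTTP keys API (the key path and value below are made up for illustration):

```shell
# Write a fake backend entry that expires after 30 seconds unless
# something refreshes it -- the same trick the registration side-kick
# relies on to drop dead services automatically.
curl -L -X PUT "http://localhost:4001/v2/keys/backends/whoami/demo" \
     -d value="" -d ttl=30 \
  || echo "no etcd listening on :4001"
```

Thirty seconds after the last refresh, the key (and thus the stale registration) simply disappears.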

Service Registration - docker-register

Registering service containers is handled by the jwilder/docker-register container. This container registers other containers running on the same host in etcd. Containers we want registered must expose a port. Containers running the same image on different hosts are grouped together in etcd and will form a load-balanced cluster. How containers are grouped is somewhat arbitrary, and I’ve chosen the container image name for this walkthrough. In a real deployment, you would likely want to group things by environment, service version, or other meta-data.

(The current implementation supports only one port per container and assumes it is TCP. There is no reason why multiple ports and types could not be supported, as well as different grouping attributes.)

docker-register uses docker-gen along with a python script as a template. It dynamically generates a script that, when run, will register each container’s IP and PORT under a /backends directory.

docker-gen takes care of monitoring docker events and calling the generated script on an interval to ensure TTLs are kept up to date. If docker-register is stopped, the registrations expire.

To start docker-register, we need to pass in the host’s external IP, where other hosts can reach its containers, as well as the address of your etcd host. docker-gen requires access to the docker daemon in order to call its API, so we bind mount the docker unix socket into the container as well.

HOST_IP=$(hostname --all-ip-addresses | awk '{print $1}')
docker run --name docker-register -d -e HOST_IP=$HOST_IP -e ETCD_HOST=$ETCD_HOST -v /var/run/docker.sock:/var/run/docker.sock -t jwilder/docker-register

Service Discovery - docker-discover

Discovering services is handled by the jwilder/docker-discover container. docker-discover polls etcd periodically and generates an haproxy config with listeners for each type of registered service.

For example, containers running jwilder/whoami are registered under /backends/whoami/<id> and are exposed on host port 8080.

Other containers that need to call the jwilder/whoami service, can send requests to docker bridge IP:8080 or host IP:8080.

If any of the backend services goes down, haproxy health checks remove it from the pool and will retry the request on a healthy host. This ensures that backend services can be started and stopped as needed, and handles inconsistencies in the registration information, while ensuring minimal client impact.
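For illustration, the haproxy config that docker-discover generates might look roughly like this (the service name, ports, backend IPs and check intervals here are all made up, not docker-discover’s actual output):

```
listen whoami :8080
    mode http
    option httpchk
    balance roundrobin
    server whoami_1 check inter 2000 fall 3
    server whoami_2 check inter 2000 fall 3
```

With `check` enabled, haproxy probes each backend on an interval and ejects unhealthy ones from the rotation automatically.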

Finally, stats can be viewed by accessing port 1936 on the docker-discover container.

To run docker-discover:

docker run -d --net host --name docker-discover -e ETCD_HOST=$ETCD_HOST -p -t jwilder/docker-discover

We’re using --net host so that the container uses the host’s network stack. When this container binds port 8080, it’s actually binding on the host’s network. This simplifies the proxy setup.

AWS Demo

We’ll run the full thing on four AWS hosts: an etcd host, a client host and two backend hosts. The backend service is a simple Golang HTTP server that returns its hostname (container ID).

Etcd Host

First we start our etcd registry:

$ hostname --all-ip-addresses | awk '{print $1}'

$ docker run -d --name etcd -p 4001:4001 -p 7001:7001 coreos/etcd

Our etcd address is the IP printed above; we’ll use that on the other hosts. If we were running this in a live environment, we could assign an EIP and DNS address to it to make it easier to configure.

Backend Hosts

Next, we start the services and docker-register on each host. The service is configured to listen on port 8080 in the container and we let docker publish it on a random host port.

On backend host 1:

$ docker run -d -p 8080:8080 --name whoami -t jwilder/whoami
$ docker run --name docker-register -d -e HOST_IP=$(hostname --all-ip-addresses | awk '{print $1}') -e ETCD_HOST= -v /var/run/docker.sock:/var/run/docker.sock -t jwilder/docker-register

$ docker ps
CONTAINER ID        IMAGE                            COMMAND                CREATED             STATUS              PORTS                     NAMES
736ab83847bb        jwilder/whoami:latest            /app/http              48 seconds ago      Up 47 seconds>8080/tcp   whoami
77a49f732797        jwilder/docker-register:latest   "/bin/sh -c 'docker-   28 minutes ago      Up 28 minutes                                 docker-register

On backend host 2:

$ docker run -d -p 8080:8080 --name whoami -t jwilder/whoami
$ docker run --name docker-register -d -e HOST_IP=$(hostname --all-ip-addresses | awk '{print $1}') -e ETCD_HOST= -v /var/run/docker.sock:/var/run/docker.sock -t jwilder/docker-register

$ docker ps
CONTAINER ID        IMAGE                            COMMAND                CREATED             STATUS              PORTS                     NAMES
4eb0498e5207        jwilder/whoami:latest            /app/http              59 seconds ago      Up 58 seconds>8080/tcp   whoami
832e77c83591        jwilder/docker-register:latest   "/bin/sh -c 'docker-   34 minutes ago      Up 34 minutes                                 docker-register

Client Host

On the client host, we need to start docker-discover and a client container. For the client container, I’m using Ubuntu Trusty and will make some curl requests.

First start docker-discover:

$ docker run -d --net host --name docker-discover -e ETCD_HOST= -p -t jwilder/docker-discover

Then, start a sample client container and pass in a HOST_IP. We’re using the eth0 address but could also use docker0 IP. We’re passing this in as an environment variable since it is configuration that will vary between deploys.

$ docker run -e HOST_IP=$(hostname --all-ip-addresses | awk '{print $1}') -i -t ubuntu:14.04 /bin/bash
$ root@2af5f52de069:/# apt-get update && apt-get -y install curl

Then, make some requests to the whoami service port 8080 to see them load-balanced.

$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 4eb0498e5207
$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 736ab83847bb
$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 4eb0498e5207
$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 736ab83847bb

We can start some more instances on the backends:

$ docker run -d -p :8080 --name whoami-2 -t jwilder/whoami
$ docker run -d -p :8080 --name whoami-3 -t jwilder/whoami

$ docker ps
CONTAINER ID        IMAGE                            COMMAND                CREATED             STATUS              PORTS                     NAMES
5d5c12c96192        jwilder/whoami:latest            /app/http              3 seconds ago       Up 1 seconds>8080/tcp   whoami-2
bb2a408b8ec5        jwilder/whoami:latest            /app/http              21 seconds ago      Up 20 seconds>8080/tcp   whoami-3
4eb0498e5207        jwilder/whoami:latest            /app/http              2 minutes ago       Up 2 minutes>8080/tcp   whoami
832e77c83591        jwilder/docker-register:latest   "/bin/sh -c 'docker-   36 minutes ago      Up 36 minutes                                 docker-register

And make some requests again on the client hosts:

$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 736ab83847bb
$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 4eb0498e5207
$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm bb2a408b8ec5
$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 5d5c12c96192
$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 736ab83847bb

Finally, we can shut down some containers and the routes will be updated. This kills everything on backend 2.

$ docker kill 5d5c12c96192 bb2a408b8ec5 4eb0498e5207

$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 736ab83847bb
$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 67c3cccbb8ba
$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 736ab83847bb
$ root@2af5f52de069:/# curl $HOST_IP:8080
I'm 67c3cccbb8ba

If we wanted to see how haproxy is balancing traffic or monitor for errors, we can access the client’s host port 1936 in a web browser.

Wrapping Up

While there are a lot of different ways to implement service discovery, SmartStack’s sidekick style of registration and proxying keeps application code simple and easy to integrate in a distributed environment and really fits well with Docker containers.

Similarly, Docker’s event and container APIs facilitate service registration and discovery with registration services such as etcd.

The code for docker-register and docker-discover is on github. While both are functional, there is a lot that can be improved. Please feel free to submit or suggest improvements.



A small step for interactivity


Alberto links to a nice Propublica chart on average annual spend per dialysis patient on ambulances by state. (link to chart and article)


It’s a nice small-multiples setup with two tabs, one showing the states in order of descending spend and the other, alphabetical.

In the article itself, they excerpt the top of the chart containing the states that have suspiciously high per-patient spend.

Several types of comparisons are facilitated: comparison over time within each state, comparison of each state against the national average, comparison of trend across states, and comparison of state to state given the year.

The first comparison is simple as it happens inside each chart component.

The second type of comparison is enabled by the orange line being replicated on every component. (I’d have removed the columns from the first component as it is both redundant and potentially confusing, although I suspect that the designer may need it for technical reasons.)

The third type of comparison is also relatively easy. Just look at the shape of the columns from one component to the next.

The fourth type of comparison is where the challenge lies for any small-multiples construction. This is also the secret of this chart. If you mouse over any year on any component, every component highlights that particular year’s data so that one can easily make state-by-state comparisons. Like this for 2008:


You see that every chart now shows 2008 on the horizontal axis and the data label is the amount for 2008. The respective columns are given a different color. Of course, if this is the most important comparison, then the dimensions should be switched around so that this particular set of comparisons occurs within a chart component—but obviously, this is a minor comparison so it gets minor billing.


I love to see this type of thoughtfulness! This is an example of using interactivity in a smart way, to enhance the user experience.

The Boston subway charts I featured before also introduce interactivity in a smart way. Make sure you read that post.

Also, I have a few comments about the data analysis on the sister blog.

Metrics in a home


When Peter asked me the other day whether I could recommend some stack/platform/whatever for storing/viewing metrics, my first answer was Graphite, but it then quickly occurred to me that I myself never really got around to finishing that project. A Graphite installation is a bit messy, and I don’t find it trivial to support.

I had previously stumbled over InfluxDB, and I liked the look of it because

  • it’s a single binary
  • it has okay-ish documentation
  • it worked for me out of the box

Reasons enough for me to have a closer look.

I could write whole books about what I don’t know about InfluxDB, and I have no idea whatsoever how it’ll fare up to really heavy 10,000-server-type usage, but I don’t really care: my use-case is to get a decent time-series database set up for a SOHO environment (storing a few sensor datapoints, electricity consumption, and things like that), in addition to being able to easily render some nice-looking graphs, and all that without spending hours setting it up and maintaining it. (Also, my experience with Xively, the artist previously known as Cosm, the artist previously known as Pachube, was only okay-ish (sometimes), and it’s in a cloud, but I digress.)

Whatever it is I use should be easy. Period. It must have a decent-enough API, be supported by Python, and I want to be up and running (and hopefully not look back) within an hour or two. Maximum three.

InfluxDB seems to be a good match. It is based on LevelDB (I know somebody I admire who’d tear my head off for even using that word…), and it runs on all relevant platforms I can think of. It has a built-in administration panel which is good enough, API support with HTTP Basic authentication (or parametrized auth), and it’s extensible: in addition to an HTTP API, I can configure InfluxDB for Graphite carbon support, which means I can submit data via a number of different methods, and adding further listeners is possible and looks pretty straightforward for somebody in the know of Go. (I can immediately think of a listener I’d like to see; talk to me, do it, and make me happy. :-)

InfluxDB can also query Graphite data if that’s your cuppa and/or if you already have a big dataset you want to use for it.

First off, the Carbon support made me add a carbon plugin to mqttwarn, and I used that to feed a set of test data into InfluxDB and then glance at it via the built-in Web thing that InfluxDB has.
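Feeding data over the carbon protocol needs no library at all: it is just metric value timestamp lines over TCP (port 2003 by default). A minimal sketch, with a made-up metric name and host:

```python
import socket
import time

def carbon_line(metric, value, timestamp=None):
    """Format one datapoint in Carbon's plaintext protocol."""
    if timestamp is None:
        timestamp = int(time.time())
    return "%s %s %d\n" % (metric, value, timestamp)

def send_datapoint(metric, value, host="localhost", port=2003):
    """Ship a single datapoint to a carbon-compatible listener,
    e.g. InfluxDB with its graphite input enabled."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(carbon_line(metric, value).encode("ascii"))

# Example (needs a listener on localhost:2003):
# send_datapoint("c.youless", 123.0)
```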

Using the InfluxDB API, I can then, say, retrieve values via HTTP:

[
    {
        "name": "c.youless", 
        "columns": [ ... ],
        "points": [ ... ]
    }
]

InfluxDB’s Web console is useful but a bit rudimentary. Enter Grafana, which is based on, or at least heavily inspired by, Elasticsearch’s Kibana.

I set that up, and the bit that took longest in this whole exercise was to find the right button to click in order to have Grafana read from InfluxDB. (I had already configured it to do so, I just couldn’t find the data source button, so to speak.)

The result is rather nice, I find, even though I haven’t been able to switch to a white background…

Whilst possibly not as flexible as Graphite, I’m certainly going to install this pair of utilities at home, and let it run for a while; it appears to be just what this doctor ordered for himself.

DockerCon video: Building the best Open Source Community in the World


In this session, Adam Jacob from Chef shares his experience and thoughts on how to build great open source communities.

Learn More

Docker Events and Meetup

Try Docker and stay up-to-date

Doing trial and error with chef


by DevopsMonkey



Machinalis: Embedding Interactive Charts on an IPython Notebook - Part 1



In this three-part post we’ll show you how easy it is to integrate D3.js, Chart.js and HighCharts charts into an IPython notebook, and how to make them interactive using HTML widgets.

IPython Notebook

This post is also available as an IPython Notebook on


The only requirement to run the examples is IPython Notebook version 2.0 or greater. All the modules that we reference are either in the standard Python distribution, or are dependencies of IPython.

About Pandas

Although Pandas is not strictly necessary to accomplish what we do in the examples, it is such a popular data analysis tool that we wanted to use it anyway. We recommend that you read the 10 Minutes to Pandas tutorial to get an idea of what it can do, or buy Python for Data Analysis for an in-depth guide to data analysis using Python, Pandas and NumPy.

About the Data

All the data that we use in the examples are taken from the United States Census Bureau site. We’re going to use 2012 population estimates and we’re going to plot the sex and age groups by the state, region and division.

Population by State

We’re going to build a Pandas DataFrame from the dataset of Incorporated Places and Minor Civil Divisions. We could have just grabbed the estimates for the states, but also wanted to show you how easy it is to work with data using Pandas. First, we fetch the data using urlopen and we parse the response as CSV using Pandas’ read_csv function:

sub_est_2012_df = pd.read_csv(
    urlopen(''),
    dtype={'STATE': 'str', 'COUNTY': 'str', 'PLACE': 'str'}
)

The resulting data frame has a lot of information that we don’t need and can be discarded. According to the file layout description, the data is summarized at the nation, state, county and place levels according to the SUMLEV column. Since we’re only interested in the population of each state, we could just filter the rows with SUMLEV ‘40’, but we wanted to show you the aggregation features of Pandas’ DataFrames, so instead we’ll take the data summarized at the county level (SUMLEV ‘50’), group it by state, and sum the population estimates.

sub_est_2012_df_by_county = sub_est_2012_df[sub_est_2012_df.SUMLEV == 50]
sub_est_2012_df_by_state = sub_est_2012_df_by_county.groupby(['STATE']).sum()

# Alternatively we could have just taken the summary rows for the states

# sub_est_2012_df_by_state = sub_est_2012_df[sub_est_2012_df.SUMLEV == 40]
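To see why the two routes agree, here is a toy frame with hypothetical numbers (the column names mirror the census file) in which summing the county-level rows per state reproduces the state-level summary rows:

```python
import pandas as pd

# Miniature version of the census file: SUMLEV 40 = state summary row,
# SUMLEV 50 = one row per county (numbers are made up)
df = pd.DataFrame({
    'SUMLEV': [40, 50, 50, 40, 50],
    'STATE': ['06', '06', '06', '48', '48'],
    'POPESTIMATE2012': [15, 10, 5, 8, 8],
})

# Aggregate the county rows up to the state level
by_county = df[df.SUMLEV == 50]
by_state = by_county.groupby('STATE').sum()

# The pre-computed state summary rows carry the same totals
summary = df[df.SUMLEV == 40].set_index('STATE')
assert (by_state['POPESTIMATE2012'] == summary['POPESTIMATE2012']).all()
```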

If you look at the table, you’ll see that the states are referenced by their ANSI codes. We can augment the table with the state names and abbreviations by merging it with another resource from the Geography section of the US Census Bureau site. We use Pandas’ read_csv function again, making sure to use the pipe character (|) as the separator.

# Taken from

state = pd.read_csv(urlopen(''), sep='|', dtype={'STATE': 'str'})
state.drop([...], inplace=True, axis=1)  # column list elided
sub_est_2012_df_by_state = pd.merge(sub_est_2012_df_by_state, state, left_index=True, right_on='STATE')
sub_est_2012_df_by_state.drop([...], inplace=True, axis=1)  # column list elided
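The left_index=True / right_on='STATE' combination deserves a quick illustration: the grouped frame is indexed by the ANSI code, while the lookup table carries the code as an ordinary column. A toy sketch with made-up numbers:

```python
import pandas as pd

# Grouped populations, indexed by ANSI state code
pops = pd.DataFrame({'POPESTIMATE2012': [15, 8]}, index=['06', '48'])

# Lookup table with the code as a regular column
names = pd.DataFrame({'STATE': ['06', '48'],
                      'STATE_NAME': ['California', 'Texas']})

# left_index=True matches pops' index against names' STATE column
merged = pd.merge(pops, names, left_index=True, right_on='STATE')
print(merged[['STATE_NAME', 'POPESTIMATE2012']].to_string(index=False))
```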

We’re also interested in plotting the information about the age and sex of the people, and for that we can use the Annual Estimates of the Civilian Population by Single Year of Age and Sex.

# Taken from

sc_est2012_agesex_civ_df = pd.read_csv(
    urlopen(''),
    dtype={'SUMLEV': 'str'}
)

Once again, the table is summarized at many levels, but we’re only interested in the information at the state level, so we filter out the unnecessary rows. We also do a bit of processing on the STATE column so it can be used to merge with the state DataFrame.

sc_est2012_agesex_civ_df_sumlev040 = sc_est2012_agesex_civ_df[
    (sc_est2012_agesex_civ_df.SUMLEV == '040') &
    (sc_est2012_agesex_civ_df.SEX != 0) &
    (sc_est2012_agesex_civ_df.AGE != 999)
]
sc_est2012_agesex_civ_df_sumlev040.drop(
    ['SUMLEV', 'NAME', 'ESTBASE2010_CIV', 'POPEST2010_CIV', 'POPEST2011_CIV'],
    inplace=True, axis=1
)
sc_est2012_agesex_civ_df_sumlev040['STATE'] = sc_est2012_agesex_civ_df_sumlev040['STATE'].apply(lambda x: '%02d' % (x,))
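The '%02d' format simply zero-pads the numeric codes so they compare equal to the two-character string codes used by the other tables:

```python
# Integer ANSI codes become two-character strings: 6 -> '06'
codes = [6, 48]
padded = ['%02d' % (c,) for c in codes]
assert padded == ['06', '48']
```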

What we need to do is group the rows by state, region, division and sex, and sum across all ages. Afterwards, we augment the result with the names and abbreviations of the states.

sc_est2012_sex = sc_est2012_agesex_civ_df_sumlev040.groupby(['STATE', 'REGION', 'DIVISION', 'SEX'], as_index=False)[['POPEST2012_CIV']].sum()
sc_est2012_sex = pd.merge(sc_est2012_sex, state, left_on='STATE', right_on='STATE')

For the age information, we group by state, region, division and age, and sum across all sexes. If you look at the result, you’ll notice that there’s a row for each year of age. That granularity is useful for analysis, but it can be problematic to plot, so we group the rows into age buckets of 20 years. Once again, we add the state information at the end.

sc_est2012_age = sc_est2012_agesex_civ_df_sumlev040.groupby(['STATE', 'REGION', 'DIVISION', 'AGE'], as_index=False)[['POPEST2012_CIV']].sum()
age_buckets = pd.cut(sc_est2012_age.AGE, range(0,100,20))
sc_est2012_age = sc_est2012_age.groupby(['STATE', 'REGION', 'DIVISION', age_buckets], as_index=False)['POPEST2012_CIV'].sum()
sc_est2012_age = pd.merge(sc_est2012_age, state, left_on='STATE', right_on='STATE')
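pd.cut does the bucketing here; on a toy list of ages, each value is assigned to one of the 20-year intervals:

```python
import pandas as pd

ages = [5, 25, 45, 70]
buckets = pd.cut(ages, bins=range(0, 100, 20))

# The intervals are (0, 20], (20, 40], (40, 60] and (60, 80];
# codes gives the index of each age's bucket
assert list(buckets.codes) == [0, 1, 2, 3]
```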

We also need information about regions and divisions, but since the dataset is small, we’ll build the dictionaries by hand.

region_codes = {
    0: 'United States Total',
    1: 'Northeast',
    2: 'Midwest',
    3: 'South',
    4: 'West'
}

division_codes = {
    0: 'United States Total',
    1: 'New England',
    2: 'Middle Atlantic',
    3: 'East North Central',
    4: 'West North Central',
    5: 'South Atlantic',
    6: 'East South Central',
    7: 'West South Central',
    8: 'Mountain',
    9: 'Pacific'
}
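These lookup tables can then be applied with Series.map to turn numeric codes into labels; a toy example (the REGION_NAME column name is just illustrative):

```python
import pandas as pd

region_codes = {0: 'United States Total', 1: 'Northeast',
                2: 'Midwest', 3: 'South', 4: 'West'}

# Toy frame with numeric region codes
df = pd.DataFrame({'REGION': [1, 3, 4]})
df['REGION_NAME'] = df['REGION'].map(region_codes)
assert list(df['REGION_NAME']) == ['Northeast', 'South', 'West']
```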

Part 1 - Embedding D3.js

D3.js is an incredibly flexible JavaScript chart library. Although it is primarily used to plot data, it can be used to draw arbitrary graphics and animations.

Let’s build a column chart of the five most populated states in the USA. IPython Notebooks are regular web pages, so in order to use any JavaScript library in one we need to load the necessary requirements. IPython Notebook uses RequireJS to load its own dependencies, so we can make use of it, via the %%javascript cell magic, to load external ones.

In all the examples of this notebook we’ll load the libraries from a CDN, so to declare the requirement of D3.js we do:

require.config({
    paths: {
        d3: '//'
    }
});

Now we’ll make use of the display function and HTML from the IPython Notebook API to render HTML content within the notebook itself. We declare styles to change the look and feel of the plots, and we define a new div with id "chart_d3" that the library is going to use as the target of the plot.

.bar {
 fill: steelblue;
}
.bar:hover {
 fill: brown;
}
.axis {
 font: 10px sans-serif;
}
.axis path,
.axis line {
 fill: none;
 stroke: #000;
}
.x.axis path {
 display: none;
}
<div id="chart_d3"></div>

Next, we define the sub_est_2012_df_by_state_template template with the JavaScript code that renders the chart. Notice that we iterate over the “data” parameter to populate the “data” variable in JavaScript. Afterwards, we use the display function once again to force the execution of the JavaScript code, which renders the chart on the target div.

display(Javascript(sub_est_2012_df_by_state_template.substitute(
    data=sub_est_2012_df_by_state.sort(['POPESTIMATE2012'], ascending=False)[:5].itertuples())))
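The template-substitution pattern can be sketched with the stdlib’s string.Template; the template and field names below are illustrative, not the post’s actual code:

```python
from string import Template

# A stripped-down JS template: $data is filled with rows built in Python
template = Template('var data = [$data];')

rows = [('California', 38), ('Texas', 26)]
data = ', '.join('{name: "%s", pop: %d}' % row for row in rows)

js = template.substitute(data=data)
assert js == ('var data = [{name: "California", pop: 38}, '
              '{name: "Texas", pop: 26}];')
```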

The chart shows that California, Texas, New York, Florida and Illinois are the most populated states. What about the other states? Let’s build an interactive chart that lets us show whichever states we choose. IPython Notebook provides widgets that allow us to get information from the user in an intuitive manner. Sadly, at the time of this writing there’s no widget to select multiple items from a list, but IPython is easily extensible, so we built our own and named it MultipleSelectWidget.

We’re going to use IPython’s interact function to display the widgets and execute the callback function display_chart_d3, which is responsible for drawing the chart. As we mentioned before, D3 requires a target element to draw the chart, so we use an HTMLWidget to make sure the div is properly rendered before the callback is executed.

values = {
    record['STUSAB']: "{0} - {1}".format(record['STUSAB'], record['STATE_NAME'])
    for record in state[['STUSAB', 'STATE_NAME']].sort('STUSAB').to_dict(outtype='records')
}
i = interact(
    display_chart_d3,
    states=MultipleSelectWidget(values=values, value=['CA', 'NY']),  # reconstructed; some arguments were elided
    div=widgets.HTMLWidget(value='<div id="chart_d3_interactive"></div>')
)

We’ve also added a show_javascript checkbox to display the generated code in a pop-up.


Although D3 is capable of creating incredible charts, it has a steep learning curve and can be overkill if all you want is a simple chart. Let’s explore simpler alternatives.

Coming up: in parts 2 and 3 we’ll explore alternative solutions that are simpler, but still good-looking.



by Sensej



Hot Diggity Grill: Hawthorne, NJ: Great burgers!


I was driving through Paterson today, and for some reason I started wondering how Smashburger serves their burgers so darned hot. They really are hot, and stay hot. Like magic. I’m a big fan of Smashburger. It was close to…



Roses are...