How to use a Docker container like a fancy VM

Before we start

I do not generally recommend this approach. It can, however, be a valid one in some particular cases. For example, it can be useful if you want to move an existing server setup with multiple applications running on the server over to Docker. It is then sometimes easier not to go “Docker all the way” at first, but to keep the new dockerized setup pretty close to what you had running on your server in the first place. It may also be easier to sell the idea of containerizing things this way if you have conservative teammates. You can then move to a multi-container setup at a later point in time. These are also the reasons why I implemented “Docker as a VM” myself.

You need an init system

In early 2015, when I implemented this, the platform for the Docker image was a given: Ubuntu 14.04 LTS. Ubuntu used upstart as its init system at the time, which is a decent choice for what I expect from an init system, but it didn’t work well inside Docker. Our old setup used supervisor for our own services, but I wanted something more flexible, with a better solution for logging and for rotating logs without requiring logrotate in the container. Pretty much the only choice that meets these requirements is runit. This is also what most other people who run multiple services in containers use, despite the fact that the official Docker documentation talks about supervisor.
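To illustrate what that looks like in practice, here is a minimal sketch of a runit service definition inside a container. The service name myapp and all paths are made up for the example; on Ubuntu 14.04 the runit package provides runsvdir, which the container would run as its main process.

```shell
#!/bin/sh
# Hypothetical /etc/service/myapp/run script.
# runsvdir scans /etc/service/, starts one supervised process per
# subdirectory, and restarts it when it dies. The container's CMD or
# entrypoint would then be something like: exec runsvdir -P /etc/service
exec /path/to/my/service 2>&1
```

Redirecting stderr into stdout at the end keeps both streams in one place for the logging setup described below.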

You need to pass configuration into the container

runit has a neat little tool called chpst, which stands for “change process state”, that you’ll use as a wrapper for your services if you use runit. It can change users, set resource limits and, most importantly for us, set environment variables. The -e parameter takes a single argument, a directory. For each file FOO in this directory with the content BAR, the environment variable FOO=BAR is passed on to the service.
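To make that convention concrete, here is a small shell sketch that emulates what chpst -e effectively does with such a directory. This is an illustration of the FOO=BAR semantics, not how chpst itself is implemented.

```shell
# Build a throwaway envdir: one file per variable, where the
# file name is the variable name and the file content is the value.
envdir=$(mktemp -d)
printf 'BAR'  > "$envdir/FOO"
printf '8080' > "$envdir/PORT"

# What `chpst -e $envdir cmd` effectively does before exec'ing cmd:
for f in "$envdir"/*; do
  export "$(basename "$f")=$(cat "$f")"
done

echo "FOO=$FOO PORT=$PORT"
rm -rf "$envdir"
```

Running this prints FOO=BAR PORT=8080, exactly the environment the wrapped service would see.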

What I did was create this directory on the host and mount it as volume into the container:

$ docker run -v /etc/envdir:/etc/envdir ...

and in the container then use

$ chpst -e /etc/envdir ... /path/to/my/service

in the runit definitions. Creating the directory on the host had the additional advantage that I could also run runit on the host and reuse the same environment variables via chpst on the host.
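Put together, a runit run script then combines chpst with the mounted envdir. A hypothetical example (the service path and the appuser account are assumptions for the sketch):

```shell
#!/bin/sh
# Hypothetical /etc/service/myapp/run: read FOO=BAR style variables
# from the mounted /etc/envdir, drop privileges to appuser, then
# exec the actual service.
exec chpst -e /etc/envdir -u appuser /path/to/my/service 2>&1
```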

You need logging and log shipping

runit has a nicely integrated logging system with built-in log rotation that requires no change to the service itself, i.e. there is no signal to be sent to the service. The only requirement is that the service logs to stdout and/or stderr, like any good 12 factor app.
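In runit this is handled by a log/ subdirectory next to the service’s run script: runit pipes the service’s stdout into the log service’s stdin, where svlogd writes and rotates the files. A minimal sketch, assuming the myapp naming from above and an existing /var/log/services/myapp directory:

```shell
#!/bin/sh
# Hypothetical /etc/service/myapp/log/run: svlogd reads the service's
# output from stdin, writes it into the given directory and rotates
# the files on its own (-tt prepends UTC timestamps). Rotation size
# and file count can be tuned via a `config` file in that directory.
exec svlogd -tt /var/log/services/myapp
```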

I also ran logstash as a log shipper in the container. All logs were created in a directory /var/log/services/ and logstash would tail these and ship them. This directory was originally located in the container, which turned out to be not such a great idea: one constant of our logstash experience was that sometimes it would just hang and not die properly so it could be restarted. Then the logs would not be shipped, which is a bit of a problem if your accounting data (i.e. what you will bill your customers by if you provide SaaS) are part of the log stream.

Eventually I mounted /var/log/services/ from the host as well, so the logs would survive a new deployment of the container.
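With both directories mounted from the host, the invocation ends up looking roughly like this:

```shell
$ docker run -v /etc/envdir:/etc/envdir \
             -v /var/log/services:/var/log/services \
             ...
```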

You need a way to deploy your containers

This is a tricky one. We used traditional configuration management, in our case Salt, to deploy the containers, plus hand-crafted scripts to pull new image versions and (re)start the containers.
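The core of such a hand-crafted deployment script boils down to something like the following sketch. The container name myapp, the volume mounts and the image argument are assumptions for the example, not our actual scripts.

```shell
#!/bin/sh
# Sketch of a pull-and-restart deployment step for a single container.
# $1 is the image reference, e.g. registry.example.com/myapp:1.2.3
deploy() {
  image=$1
  docker pull "$image"                    # fetch the new image version
  docker stop myapp 2>/dev/null || true   # stop and remove the old
  docker rm   myapp 2>/dev/null || true   # container, if any
  docker run -d --name myapp \
    -v /etc/envdir:/etc/envdir \
    -v /var/log/services:/var/log/services \
    "$image"
}
```

A configuration management tool like Salt would then template this per host and call it with the desired image version.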