
When I wrote a recent post on Capistrano, I didn’t imagine that I would ever need to mention it again, as it is a relic of early app-deployment history. What I didn’t realize was that Capistrano was written by engineers working for 37Signals on their main product, Basecamp. This is David Heinemeier Hansson’s company.
DHH (he is well known enough to go by his initials) declared last year that he had left the cloud, for purely economic reasons. If you have the nous to run software on your own managed racks (as everybody had to do before the cloud), it can clearly be cheaper than using, say, Amazon AWS, especially if you have fixed needs. Obviously, when they were enticing people onto their platforms, cloud providers looked a better proposition than they did when the prices went up later on.
Amazon’s highly innovative approach to service offerings can still be a good reason to stay on the cloud. Beyond that, every organization has to do the math for itself; the cloud is not the right fit for many use cases. It has to be said, though, that while hardware has gotten cheaper, DHH is a very specific type of tech-first leader.
The rest of this post takes a look at the Capistrano replacement, Kamal. It is basically Capistrano for containers via Docker. It is, if you like, a simpler alternative to Kubernetes or Docker Swarm. Kamal offers “zero-downtime deploys, rolling restarts, asset bridging, remote builds and everything else you need to deploy and manage your web app in production with Docker.” So it deploys things via SSH commands, but it has an eye on being as agnostic as possible about the deployment target.
Docker Refresh
Just as a quick memory refresh, Docker uses Dockerfiles to build images, and those images run as containers, in which your application (or part of it) runs in an isolated fashion:

Building Docker images
Here is an example Dockerfile:
# Use the official Ubuntu 18.04 as base
FROM ubuntu:18.04

# Install nginx and curl
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y nginx curl && \
    rm -rf /var/lib/apt/lists/*
So this Dockerfile starts from a base image of a known Ubuntu release, runs the package updates and upgrades, then installs nginx and curl, before cleaning up the apt cache to keep the image small.
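To turn a Dockerfile like that into a running container, the usual round trip looks like this (the image name my-user/my-app and the port mapping are illustrative):

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it so it can later be pushed to a registry.
docker build -t my-user/my-app .

# Start a container from the image, mapping host port 8080 to
# port 80 inside. Since the Dockerfile defines no CMD, we run
# nginx in the foreground explicitly.
docker run -d --name my-app -p 8080:80 my-user/my-app nginx -g "daemon off;"
```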
One other thing we will likely need to remember: Docker Hub is the default public registry for container images. If I log into hub.docker.com, I can still see some of my old images, much like repositories on GitHub.
Kamal (yes, yet another vaguely maritime name) uses Ruby, which is the in-house language at 37Signals, and something I still dabble in occasionally. More to the point, one of my first posts here was about Sinatra, and you can use that to help knock up a Ruby environment.
After firing up Warp on my Mac, I’ll just check the version of my inbuilt ruby:
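That check is just the following (the exact version reported will depend on your machine):

```shell
# Report the version of the Ruby the shell picks up,
# and where it lives on the path.
ruby -v
which ruby
```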
I can then install the kamal gem:
> gem install kamal
And then initialize it:
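The command here is kamal init, which scaffolds the configuration files we look at below:

```shell
# Scaffold Kamal in the current project: this creates
# config/deploy.yml and a .env file for secrets.
kamal init
```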
We don’t have anything to deploy, or anywhere to deploy it to, so we are just going to look at how Kamal sees the world. But this is from 37Signals, so you can imagine them deploying a Rails application. Hence there are references to databases, load balancers, etc.
The deploy.yml holds the destinations of things, and the .env file holds “secrets” that we probably would not check into source control. So the .env file gets added by name to the various ignore files (such as .gitignore).
Let’s look at that created deploy file first. The simple organizational hierarchy is easy to read in this yaml template, and we’ll check what types of things it needs:
> cat config/deploy.yml
# Name of your application. Used to uniquely configure containers.
service: my-app

# Name of the container image.
image: user/my-app

# Deploy to these servers.
servers:
  - 192.168.0.1

# Credentials for your image host.
registry:
  # Specify the registry server, if you're not using Docker Hub
  # server: registry.digitalocean.com / ghcr.io / ...
  username: my-user
  # Always use an access token rather than real password when possible.
  password:
    - KAMAL_REGISTRY_PASSWORD

# Inject ENV variables into containers (secrets come from .env).
# Remember to run `kamal env push` after making changes!
# env:
#   clear:
#     DB_HOST: 192.168.0.2
#   secret:
#     - RAILS_MASTER_KEY
So you will have a destination for your servers, and the name of the image to deploy. The image probably comes from Docker Hub, which is the “image host,” so your credentials need to be stored. Note that the env variables get injected into the containers, either as secrets (kept out of the clear) or in plain text.
And a little further down the deploy.yml we see more example sections:
# Use accessory services (secrets come from .env).
# accessories:
#   db:
#     image: mysql:8.0
#     host: 192.168.0.2
#     port: 3306
#     env:
#       clear:
#         MYSQL_ROOT_HOST: '%'
#       secret:
#         - MYSQL_ROOT_PASSWORD
#     files:
#       - config/mysql/production.cnf:/etc/mysql/my.cnf
#       - db/production.sql:/docker-entrypoint-initdb.d/setup.sql
#     directories:
#       - data:/var/lib/mysql
#   redis:
#     image: redis:7.0
#     host: 192.168.0.2
#     port: 6379
#     directories:
#       - data:/data
The term “accessory service” refers to long-lived dependent services, such as databases; these define their own images and hosts. There are additional setup sections for the reverse proxy Traefik, for example.
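For illustration, the commented-out Traefik section in the generated file looks roughly like this (the exact keys vary between Kamal versions, so check your own generated deploy.yml):

```yaml
# Configure custom arguments for Traefik (illustrative)
# traefik:
#   args:
#     accesslog: true
#     accesslog.format: json
```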
The .env file is where you would put the appropriate “secrets”:
> cat .env
KAMAL_REGISTRY_PASSWORD=change-this
RAILS_MASTER_KEY=another-env
These files can also refer out to 1Password or other centralized secret stores. The above would be missing the MySQL password, if we were to go ahead with a database. If you fiddle with these values, they need to be explicitly “pushed” into the system with kamal env push. Indeed, this needs to happen before deployment. In a DevOps environment, not every engineer should have access to the file, but everyone needs to know how it fits in.
Then we use kamal setup to fire the system off. As expected, if I go and do this now, I will promptly be told there is nothing to talk to:
So how would Kamal proceed with everything specified and available servers?
After connecting to the servers, it would install Docker and curl if necessary. It would then log into the image registry, build the image locally and push it to the registry. The target servers would then pull the image. After pushing the env variables, it would start a new container with the current version of the app and stop the old one.
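That sequence can be sketched as the equivalent manual commands (purely illustrative; the registry, user and addresses here are placeholders, and Kamal also wires in health checks and Traefik routing for you):

```shell
# Roughly what `kamal setup` automates, step by step (illustrative):
docker login registry.example.com          # log into the image host
docker build -t my-user/my-app:latest .    # build the image locally
docker push my-user/my-app:latest          # push it to the registry

# Then, over SSH on each target server:
ssh root@192.168.0.1 "docker pull my-user/my-app:latest"
ssh root@192.168.0.1 "docker run -d --name my-app my-user/my-app:latest"
# ...finally stopping the container running the previous version.
```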
If you make a change to the app, then after the initial setup, kamal deploy will update your system. Subsequently, you can use kamal redeploy, which skips steps like the registry login and is hence quicker. This sets the normal workflow. By keeping a few old warm container images around, you can also quickly kamal rollback to a valid image target. From here, DevOps engineers can recognize the familiar patterns.
By providing this tool to the community, 37Signals is signposting a way to physically exit the cloud, as well as a method to change providers easily. They are also moving away from the relative complexity of Kubernetes. When considering your computing strategy, it is nice to know there is a worked example of both the economics and the technical methods of exit to refer to, if that is your direction of travel.
The post How to Exit the Complexity of Kubernetes with Kamal appeared first on The New Stack.