On Infrastructure: Terraform

15 Sep 2021

Cloud infrastructure 

Over the past couple of years I’ve been working on a particular side project. I’d had the idea kicking around for a while, and planned on releasing it to the public (at least in a limited form) last year, but then 2020 happened, and we’re still in the middle of all that in 2021, so there have been a few delays. I’m still a way off actually releasing anything, but now is a good time to talk about some of the infrastructure work, as I think it’s worth sharing.

So, context: this is a small, scrappy project, with any costs coming straight out of my own pocket, so no AWS for me (well, Route53, but that’s TBH very reasonably priced vs. say EC2). I’ve also spent a fair amount of time dealing with a long list of devops tools, and I know which bits I really didn’t like. Partly because of that, and partly because it’s more fun that way, I’ve been writing a lot of things myself, which is why I’ve got something to talk about here.

I’ve just taken a snapshot of the current state of things and open-sourced it to aid discussion here. Note that a) I’ve made a variety of edits to that snapshot to remove the actual app, so it’s probably a little broken, and b) it’s a lump of code, not a project I plan to support in the future. It also represents a tipping point: I’ve realised that Kubernetes, although appealing, isn’t suitable for the sort of thing I’m doing here, but this snapshot still contains all the Kubernetes work.

Alright, let’s get to the interesting bits: the actual repository. Let’s start with the terraform folder. Note the little terraform shell script in there. You’ll see this pattern a few times in the repo, always because I wanted to pin a tool to a particular version without having to limit my entire system to that version: the script downloads the tool the first time it’s run, dumps it in a .downloads folder, and then runs it from there.
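For flavour, the pattern looks roughly like this — a sketch, not the repo’s actual script; the version number, download URL, and paths are placeholders I’ve assumed:

```shell
#!/bin/sh
# Sketch of the wrapper pattern: pin terraform to a single version,
# download it on first use into .downloads, then run from there.
set -eu

VERSION="1.0.6"                     # assumed; the real script pins its own
DIR="$(dirname "$0")/.downloads"
BIN="$DIR/terraform-$VERSION"

if [ ! -x "$BIN" ]; then
  # First run: fetch the release zip and unpack just the binary.
  mkdir -p "$DIR"
  curl -fsSL "https://releases.hashicorp.com/terraform/${VERSION}/terraform_${VERSION}_linux_amd64.zip" \
    -o "$DIR/terraform.zip"
  unzip -oq "$DIR/terraform.zip" -d "$DIR"
  mv "$DIR/terraform" "$BIN"
  rm "$DIR/terraform.zip"
fi

# Hand over to the pinned binary with whatever arguments we were given.
exec "$BIN" "$@"
```

The nice property is that the pinned version lives in the repo next to the code that depends on it, and nothing gets installed system-wide.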

There’s a local folder that has a Vagrant setup, which is mostly boring, other than the inventory.yaml template, which dumps the info about the servers we’re creating here into the mitogen directory that I’ll talk about later on.
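The rough shape of that step, as a sketch (the paths, attribute names, and use of `local_file` here are my guesses, not necessarily what the repo does — it assumes a `local.servers` list like the one the prod config builds):

```hcl
# Hypothetical sketch: render the known servers out to the mitogen
# directory as YAML, so the provisioning side can pick them up.
resource "local_file" "inventory" {
  filename = "${path.module}/../../mitogen/inventory.yaml"
  content = yamlencode({
    servers = [
      for s in local.servers : {
        name = s.name
        host = s.public_ipv4address
      }
    ]
  })
}
```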

The more interesting config is prod, which has a scaleway.tf and a hetzner.tf. I’m building a genuinely cloud-agnostic setup here: what I want from those two providers is mostly just some VMs, and I’ve been bitten by price increases from them before, so let’s make sure we can use either of them easily.
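Each of those files defines the same logical set of machines with its provider’s own resource type — something like the following sketch, where the counts, instance types, and images are assumptions on my part (only the resource types and the `node` name are taken from the merging code below):

```hcl
# scaleway.tf (sketch)
resource "scaleway_instance_server" "node" {
  count = var.scaleway_node_count
  name  = "node-scw-${count.index}"
  type  = "DEV1-S"
  image = "ubuntu_focal"
}

# hetzner.tf (sketch)
resource "hcloud_server" "node" {
  count       = var.hetzner_node_count
  name        = "node-hetzner-${count.index}"
  server_type = "cx11"
  image       = "ubuntu-20.04"
}
```

Switching providers then comes down to changing which count is non-zero, rather than rewriting downstream config.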

There’s then some magic in main.tf to merge the two providers’ server lists together:

locals {
  servers = concat(
    [for node in scaleway_instance_server.node.* : { name : node.name, public_ipv4address : node.public_ip }],
    [for node in hcloud_server.node.* : { name : node.name, public_ipv4address : node.ipv4_address }]
  )
}

This means the inventory work can then list out servers from either provider and not care where they came from.

Next up: mitogen, which is where the more interesting bits start.
