Previously, I talked about a Mitogen-based deployment I'd done, and at a later point I copy/pasted a bunch of that code to start rewriting how I do this website (which is still mostly Chef-managed, but the blog is done with this tooling). That was kinda ok, but I always felt I should at some point go back and unify the two codebases. I was thinking about another project (stuff on a Pi, post about all that in a few weeks' time) and realised "oh wait, I need this again", so now I really need to stop copy/pasting and make a library.
Let's go back to the motivation. I've used a lot of the different DevOps deployment tools - by which I mean tools with the goal of "make machine X be in such-and-such configuration" - and I haven't been entirely happy with them. A lot of them are like Salt or Puppet, where there's an assumption of a central server, and a config language that can do very specific things well, but isn't very good when you step outside those boundaries. I'd previously done work with Chef Zero, which instead worked off a folder of config, and Chef's general approach of an embedded DSL, which at least lets you break out into Ruby when needed, was good. OTOH, if you're in a rapid "change a small thing, see if it works" cycle, you want short cycle times, and with Chef I found myself always doing things to work around the tooling, which is never a good sign. The other problem with a long cycle time for the "no changes" case is that you run the tool rarely rather than all the time, and so when there are changes, you've occasionally forgotten to use it, usually because you're using something faster for the actual code deploy.
Paracrine is my attempt to fix this. It's really fast (I'm seeing ~2s runs for zero-change scenarios on a non-trivial setup) for a variety of reasons. A big one is checking state cheaply: for example, to find out whether `foo` is already installed, you could just run `apt-get install foo`. This, at a minimum, has to spawn a new process, read lots of files and generally do a lot of work. Or you can just look in `/var/lib/dpkg/info/` for files called `foo.list` or `foo:<architecture>.list`, and if either exists, `foo` is installed, which is a lot less work. The "actually install a package" step is still slow, but that's the rare case over many runs.
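The dpkg check above is simple enough to sketch in a few lines of Python. This is just an illustration of the technique, not Paracrine's actual API - the function name, signature and default architecture are my own assumptions, and the `info_dir` parameter exists only so the lookup location can be overridden:

```python
from pathlib import Path

# Illustrative sketch (not Paracrine's API): decide whether a Debian
# package looks installed by checking for its dpkg file list, instead
# of spawning apt-get.
def is_installed(package: str, arch: str = "amd64",
                 info_dir: Path = Path("/var/lib/dpkg/info")) -> bool:
    # dpkg writes either <package>.list or <package>:<arch>.list
    # into its info directory for each installed package.
    return ((info_dir / f"{package}.list").exists()
            or (info_dir / f"{package}:{arch}.list").exists())
```

The whole check is two `stat()` calls at worst, which is why the zero-change case can stay so fast.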
Right now, the whole thing is a bit of a hack by my usual standards: test coverage is at about 21%, and the documentation is there, but limited. OTOH, there's a demo setup that will automagically set up Pleroma for you, which I've used to set up my server, and as I said, I've used it enough to be reasonably sure it's vaguely usable. I'm mainly talking about this now to see what people think and whether there are any major use cases this won't work for.