Last month, I wrote about Potboiler, my AP Event Sourcing system. At the time I’d built a K/V store on top of Potboiler, mostly just as a test application. Potboiler isn’t really intended to be talked to directly by most clients; instead, some other form of service or store sits on top of it and provides a more familiar interface. The next service I was thinking about was a message queue, and more specifically the most common usage of one: a task queue. Message queues are pretty difficult to distribute sensibly, and dealing with problems like at-least-once delivery is an ongoing source of frustration. Given our work with RabbitMQ, this keeps coming up in a variety of projects, and although RabbitMQ has clustering and HA support, it has a bunch of issues with network partitions; similar problems crop up in other queueing products.
Pigtail is a slightly different approach to this problem: it uses Potboiler as a distribution mechanism, combined with a CRDT and some limitations on client usage, to cope with partitions without having to manually rebuild anything. It’s targeted specifically at task queues, with the tasks in question intended to be idempotent. All tasks will execute at least once, but it’s important that they can cope with being executed multiple times by different workers. Storing the output data from a task in the Potboiler K/V store (or a similar storage mechanism) helps with this significantly.
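To make the idempotency requirement concrete, here’s a minimal sketch in Python. A plain dict stands in for the Potboiler K/V store, and the task name and key scheme are made up for illustration; this isn’t Pigtail’s actual code.

```python
# Illustrative stand-in for the Potboiler K/V store.
kv_store = {}

def resize_image(task_id, params):
    """A hypothetical task that's safe to run more than once."""
    key = f"result/{task_id}"
    if key in kv_store:
        # Another worker already completed this task; reuse its output.
        return kv_store[key]
    result = {"width": params["width"] // 2}  # the actual work
    # Keying the output by task id makes duplicate executions harmless.
    kv_store[key] = result
    return result

first = resize_image("task-1", {"width": 800})
second = resize_image("task-1", {"width": 800})  # duplicate run, same result
```

Because the output is stored under the task’s id, a second worker re-running the same task just converges on the same stored result instead of doing conflicting work.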
The core of Pigtail is based on Eventually Consistent Register Revisited, with semantics-based ordering. A task consists of a task name (typically a namespaced function name in your workers, but treated as an arbitrary Unicode string), a JSON block providing parameters, and a state: Pending, Working or Done. Tasks are initialised as Pending, and a worker can subsequently mark them as either “Working” (“I am currently doing this task”) or “Done”. These three states have a total order: Pending < Working < Done, and any events that attempt to reverse that order are discarded. Individual queues have a timeout value, after which a “Working” task will be marked as “Pending” again (the one deliberate exception to the ordering); workers should only pick up new tasks from “Pending”. A worker should re-send “Working” every so often to inform others that it’s still making progress on a long-running task. Pigtail uses Hybrid Logical Clock timestamps, but as I haven’t figured out a good way to compute the difference between two HLC timestamps, I currently just assume the counter is zero for the purposes of calculating the timeout, and live with a slightly increased rate of duplicate task runs when it’s non-zero. Workers mark completed tasks as “Done”, and as a further optimisation should check every so often during a long-running task whether some other worker has already marked it as Done, in which case the task can be abandoned. (Other workers may mark your task as “Pending” if there’s contention, but that’s not relevant until one of them actually marks it “Done”, so in general workers should keep going with their current task.)
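The total order on states can be sketched as a tiny merge function: whichever state is further along wins, and anything that would move backwards is simply discarded. This is a sketch of the idea, not Pigtail’s actual code.

```python
# Total order over task states: Pending < Working < Done.
ORDER = {"Pending": 0, "Working": 1, "Done": 2}

def merge_state(current, incoming):
    """Apply an incoming state change, discarding any reversal."""
    return incoming if ORDER[incoming] > ORDER[current] else current

state = "Pending"
state = merge_state(state, "Working")  # advances: Pending -> Working
state = merge_state(state, "Pending")  # reversal, discarded: stays Working
state = merge_state(state, "Done")     # advances: Working -> Done
```

Because the merge only ever moves forward, nodes can apply state-change events in any order and still converge on the same answer, which is what makes the register safe to distribute over Potboiler.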
Similarly to Potboiler, Pigtail uses PostgreSQL as the backing storage (again, for ease of building, not necessarily long-term usage) and HTTP+JSON as the frontend interface. It’s currently sitting in a subdirectory of Potboiler, as there’s quite a lot of common code between the two projects. There isn’t much in the way of detailed API documentation yet, but the example worker and task provider probably show off most of what most users will need.