Concourse Linux worker with USB for hardware tests


We use Concourse CI for our CI/CD pipelines. Containerisation is the backbone of Concourse's architectural design; tiny specialised containers handle resource checks, fetch assets and send notifications, while more serious user-defined containers back the build and test environments.

Generally, this is a great design choice, as containerised dependencies keep builds reproducible and stop jobs from interfering with the host or with each other.

Unlike what seems to be the larger percentage of Concourse users, who run their deployments under some form of management or orchestration tooling (Kubernetes, Ansible, BOSH, etc.), we provision our workers by hand.

Configure a Linux build worker

Our infrastructure is hosted as a series of LXC containers and QEMU virtual machines running under Proxmox.

This worker is a Debian 11 virtual machine (6 cores, 20GB RAM, 100GB disk) rather than an LXC container, as nested containerisation can be tricky to get running properly inside LXC.

A normal worker setup is pretty simple:

# Download and unpack Concourse
curl -LO https://github.com/concourse/concourse/releases/download/v${CONCOURSE_VERSION}/concourse-${CONCOURSE_VERSION}-linux-amd64.tgz
tar -zxf concourse-${CONCOURSE_VERSION}-linux-amd64.tgz
sudo mv concourse /usr/local/concourse
# Create the concourse worker directory
sudo mkdir -p /etc/concourse
# Copy the web node's public key, and this worker's private key
# (<web-node> is a placeholder for your web node's address)
scp concourse@<web-node>:tsa_host_key.pub /etc/concourse/
scp concourse@<web-node>:worker_key /etc/concourse/worker_key

The configuration is done through environment variables, so I like to put them in an /etc/concourse/worker_environment file which is referenced from the systemd service file.
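As a minimal sketch, assuming the key locations from above (with <web-node> again standing in for the web node's address, and the work directory being whatever scratch space you prefer), worker_environment might look like:

# Scratch space for build containers and volumes
CONCOURSE_WORK_DIR=/opt/concourse/worker
# Where to register with the web node's TSA, and which keys to use
CONCOURSE_TSA_HOST=<web-node>:2222
CONCOURSE_TSA_PUBLIC_KEY=/etc/concourse/tsa_host_key.pub
CONCOURSE_TSA_WORKER_PRIVATE_KEY=/etc/concourse/worker_key

The TSA variables tell the worker where to register and which keys to authenticate the connection with.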


The worker process is managed through a simple systemd service put in your choice of systemd unit path. I use /etc/systemd/system/concourse-worker.service:

[Unit]
Description=Concourse CI worker process
After=network.target

[Service]
EnvironmentFile=/etc/concourse/worker_environment
ExecStart=/usr/local/concourse/bin/concourse worker
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then it's just a matter of enabling the service with sudo systemctl enable concourse-worker and starting it with sudo systemctl start concourse-worker.
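In full, after a daemon reload so systemd picks up the new unit file, with a status check to confirm the process came up:

sudo systemctl daemon-reload
sudo systemctl enable concourse-worker
sudo systemctl start concourse-worker
# Check the process state; logs go to the journal
sudo systemctl status concourse-worker
journalctl -u concourse-worker -f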

Hopefully everything is OK and the worker will start communicating with the web node, then appear in the worker list:

scott@octagonal ~ $ fly -t eui workers
name                   containers  platform  tags  team  state    version  age
worker-debian-vm       33          linux     none  none  running  2.3      23h53m
worker-macos-vm.local  0           darwin    none  none  running  2.3      54d
worker-windows-vm      0           windows   none  none  running  2.3      21d

Throwing the benefits away

We'll create another worker VM like before, but we're going to force the Houdini runtime on Linux. This is done through environment variables as before, and we'll tag this worker as hardware to restrict which jobs get scheduled onto it.
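As a sketch, the extra lines in worker_environment would look something like the following. The exact option has moved around between Concourse releases (newer releases expose it as CONCOURSE_RUNTIME), so check the worker's --help output for your version:

# Run build steps directly on the host instead of in containers
CONCOURSE_RUNTIME=houdini
# Only accept builds that explicitly carry the matching tag
CONCOURSE_TAG=hardware

On the pipeline side, jobs opt in to this worker by putting a matching tags: [hardware] entry on their steps; untagged steps keep landing on the normal containerised workers.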