Concourse Linux worker with USB for hardware tests

Scott

We use Concourse CI for our CI/CD pipelines. Containerisation is the backbone of Concourse's architecture: tiny specialised containers handle resource checks, fetch assets and send notifications, while heavier user-defined containers back the build and test environments.

Generally, this is a great design choice, as containerised dependencies keep builds reproducible and stop jobs from interfering with the worker or with each other.

Unlike what seems to be the larger share of Concourse users, who deploy workers through some form of orchestration tooling (Kubernetes, Ansible, BOSH, etc.), we manage our workers by hand.

Configure a Linux Build worker

Our infrastructure is hosted as a series of LXC containers and QEMU virtual machines running under Proxmox.

This worker is a Debian 11 virtual machine (6 cores, 20 GB RAM, 100 GB disk) rather than an LXC container, as nested containerisation can be tricky to get running properly inside LXC containers.
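Under Proxmox, a VM like this can be created from the host shell with qm; a rough sketch, where the VM ID, name, storage pool (local-lvm) and bridge (vmbr0) are all illustrative:

# Create the worker VM: 6 cores, 20GB RAM, 100GB disk on local-lvm
qm create 200 --name concourse-worker-linux --memory 20480 --cores 6 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:100 --ostype l26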

A normal worker setup is pretty simple:

# Download Concourse
export CONCOURSE_VERSION=7.6.0
wget https://github.com/concourse/concourse/releases/download/v${CONCOURSE_VERSION}/concourse-${CONCOURSE_VERSION}-linux-amd64.tgz
tar -zxf concourse-${CONCOURSE_VERSION}-linux-amd64.tgz
sudo mv concourse /usr/local/concourse
# Create the concourse worker directory
sudo mkdir /etc/concourse
# Copy the web node's public key, and the worker's private key
sudo scp concourse@192.168.1.100:tsa_host_key.pub /etc/concourse/tsa_host_key.pub
sudo scp concourse@192.168.1.100:worker_key /etc/concourse/worker_key
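If the web node doesn't have a key pair for this worker yet, the concourse binary can generate one. A sketch, run on the web node; the authorized_worker_keys path is illustrative and must match whatever the web node's CONCOURSE_TSA_AUTHORIZED_KEYS points at:

# On the web node: generate an SSH key pair for the new worker
concourse generate-key -t ssh -f ./worker_key
# Authorise the new worker's public key
cat ./worker_key.pub >> ./authorized_worker_keys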

The configuration is done through environment variables, so I like to put them in an /etc/concourse/worker_environment file, which the systemd service file below references.

CONCOURSE_WORK_DIR=/etc/concourse/workdir
CONCOURSE_TSA_HOST=192.168.1.100:2222
CONCOURSE_TSA_PUBLIC_KEY=/etc/concourse/tsa_host_key.pub
CONCOURSE_TSA_WORKER_PRIVATE_KEY=/etc/concourse/worker_key
CONCOURSE_RUNTIME=containerd
CONCOURSE_CONTAINERD_DNS_SERVER=192.168.1.1
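I also create the work directory referenced above up front, rather than relying on the worker to do it:

sudo mkdir -p /etc/concourse/workdir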

The worker process is managed through a simple systemd service placed in your choice of systemd unit path. I use /etc/systemd/system/concourse-worker.service:

[Unit]
Description=Concourse CI worker process
After=network.target

[Service]
Type=simple
User=root
Restart=on-failure
EnvironmentFile=/etc/concourse/worker_environment
ExecStart=/usr/local/concourse/bin/concourse worker

[Install]
WantedBy=multi-user.target

Then it's just a matter of enabling the service with sudo systemctl enable concourse-worker and starting it with sudo systemctl start concourse-worker.
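If the worker doesn't come up, its state and logs are available through systemd and journald:

sudo systemctl status concourse-worker
sudo journalctl -u concourse-worker -f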

Hopefully everything is OK and the worker will start communicating with the web node, then appear in the worker list:

scott@octagonal ~ $ fly -t eui workers
name                   containers  platform  tags  team  state    version  age
worker-debian-vm       33          linux     none  none  running  2.3      23h53m
worker-macos-vm.local  0           darwin    none  none  running  2.3      54d
worker-windows-vm      0           windows   none  none  running  2.3      21d

Throwing the benefits away

We'll create another worker VM like before, but this time we're going to force the houdini runtime on Linux. Houdini is the runtime Concourse normally uses on macOS and Windows workers: it provides effectively no isolation, so task processes run directly on the host and can therefore see the host's USB devices. This is configured through environment variables as before, and we'll tag this worker as hardware to restrict which jobs are scheduled onto it.

CONCOURSE_TAG='hardware'
CONCOURSE_RUNTIME=houdini
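On the pipeline side, any step that needs the hardware opts in with a matching tags: entry, and everything else stays off this worker. A minimal sketch; the job name, image and lsusb check are illustrative, and since houdini runs the task straight on the host, lsusb has to exist on the worker itself:

jobs:
- name: hardware-test
  plan:
  - task: check-usb
    tags: [hardware]                   # only schedule on hardware-tagged workers
    config:
      platform: linux
      image_resource:                  # linux tasks normally declare an image;
        type: registry-image           # under houdini the process runs directly
        source: {repository: busybox}  # on the host anyway (assumption)
      run:
        path: lsusb                    # must be installed on the worker VM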