Automate the setup of Garage S3 standalone mode
Why Garage?
- Minio was a great open-source S3-compatible object store until late 2025.
- However, the project slowly shifted to a paid model, eventually ceasing development of the open-source tool.
- Full details of the history can be found here.
- The linked blog mentions a community fork of Minio, but I have had my eye on an alternative project for a while now: Garage.
Is Garage a good replacement?
- It actually has a different primary focus from Minio.
- While Minio was an ‘enterprise’ replacement for AWS S3, providing a replicas + quorum setup, Garage is developed to handle geo-distributed servers running on low-spec hardware.
- It focuses on resilience across sites and simpler operations rather than a classic consensus-driven design.
- That said, it’s still possible to run Garage as a standalone node, which is what we plan to do in this post.
Automating the setup of Garage via Compose
- I use docker compose to run local software stacks during development.
- Garage typically requires a bit of command-line configuration on first run, and as of 3rd March 2026 doesn’t have a mechanism for local container healthchecks.
- Below is the config I used to make this work for a local development setup, entirely replacing Minio:
volumes:
  garage_data:

networks:
  net:

services:
  s3:
    image: docker.io/dxflrs/garage:v1.3.1
    volumes:
      - ./deploy/garage.toml:/etc/garage.toml:ro
      - garage_data:/var/lib/garage
    networks: [net]
    restart: unless-stopped
    # https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/1354
    # healthcheck:
    #   test: ["CMD-SHELL", "/garage node health-check"]
    #   start_period: 5s
    #   interval: 5s
    #   timeout: 5s
    #   retries: 10

  s3-init:
    image: docker.io/alpine:3.23
    depends_on:
      s3:
        condition: service_started
    network_mode: service:s3
    pid: service:s3
    environment:
      GARAGE_ADMIN_TOKEN: garage-admin-token
    restart: "on-failure:2"
    entrypoint:
      - /bin/sh
      - -eu
      - -c
      - |
        # chroot into the s3 container's filesystem so the garage binary runs
        # with its own libs/linker, while the shared network namespace keeps
        # 127.0.0.1:3901 pointing at the live Garage server
        G="chroot /proc/1/root /garage -c /etc/garage.toml"

        # Wait for server
        for i in $$(seq 1 20); do
          if $$G node id -q 2>/dev/null; then break; fi
          echo "Waiting for Garage RPC... ($$i/20)"
          sleep 3
        done
        $$G node id -q || { echo "Garage RPC not ready"; exit 1; }

        # Init garage nodes
        if $$G status 2>&1 | grep -q 'NO ROLE ASSIGNED'; then
          NODE_ID=$$($$G node id -q | cut -c1-16)
          $$G layout assign "$$NODE_ID" -z local -c 1G
          $$G layout apply --version 1
        fi

        # Create S3 bucket
        $$G key import --yes -n fieldtm-qfield \
          GK3515373e4c851ebaad366558 \
          7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34 || true
        $$G bucket create qfield-cloud || true
        $$G bucket allow qfield-cloud --key fieldtm-qfield --read --write --owner || true
        echo "Garage initialized."
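Once the stack is up, any S3 client can be pointed at Garage in place of Minio. As a sketch, a minimal AWS CLI credentials profile using the key imported by s3-init above (the profile name garage-local is my own choice, not anything Garage requires):

```ini
# ~/.aws/credentials
[garage-local]
aws_access_key_id = GK3515373e4c851ebaad366558
aws_secret_access_key = 7d37d093435a41f2aab8f13c19ba067d9776c90215f56614adad6ece597dbb34
```

Then, assuming the s3 service's port 3900 is published to the host (the compose file above keeps it on the internal network only), something like `aws --profile garage-local --endpoint-url http://localhost:3900 --region garage s3 ls` should list the qfield-cloud bucket. Note the region must match s3_region in garage.toml.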
For reference: garage.toml
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "lmdb"

replication_factor = 1
compression_level = 1

rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "1799bccfd7411eddcf9ebd316bc1f5287ad12a68094e1c6ac6abde7e6feae1ec"

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage.localhost"

[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.garage.localhost"

[admin]
api_bind_addr = "[::]:3903"
admin_token = "garage-admin-token"
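The rpc_secret above is a throwaway value for local development only. For anything beyond a local stack, generate your own: Garage expects a 32-byte hex string, which openssl can produce.

```shell
# Generate a fresh 32-byte (64 hex character) value for rpc_secret
RPC_SECRET=$(openssl rand -hex 32)
echo "rpc_secret = \"$RPC_SECRET\""
```

The same approach works for picking a less guessable admin_token.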