On Apr 25 2026 MinIO archived its community edition. We migrated Click2Eat and yamltools.dev to self-hosted Garage (S3-compatible, AGPLv3). Four undocumented gotchas, the shared S3 API + product-dedicated CDN pattern, and a complete migration checklist.
On April 25, 2026, MinIO quietly archived its main repository with a curt message: "THIS REPOSITORY IS NO LONGER MAINTAINED". The community edition is dead. If you're running production on MinIO, you won't get security patches anymore. If you deployed it through Coolify, they pulled it from the one-click catalog months ago.
For those unfamiliar: MinIO was the most popular S3-compatible object storage system in the self-hosting world. Any app that needs to store files (user images, PDFs, videos, backups) can talk to MinIO using the same libraries as Amazon S3, but the server is yours. For self-sufficient builders it had been the default for years.
This week we migrated two production products from MinIO to Garage (Rust, AGPLv3, actively maintained by the Deuxfleurs cooperative). Click2Eat (60 MiB of real menu images serving restaurants) and yamltools.dev. Zero visible downtime, all in one afternoon session.
This is the technical guide we wish we had when we started.
MinIO Inc. still exists and sells AIStor (their new commercial product). What got killed is the community edition. The code is still on GitHub, archived, no issues, no PRs, no precompiled binaries going forward. If you want MinIO community from now on, you compile from source with Go every time a CVE drops.
For a side project, irrelevant. For production, not acceptable medium-term.
Before settling on Garage, we looked at the serious alternatives:
| Option | Free tier | Hard spending cap | Egress fees | Self-host |
|---|---|---|---|---|
| Cloudflare R2 | 10 GB | No | No egress | No |
| Hetzner Object Storage | None | No | €1/TB after 1TB | No |
| Backblaze B2 | 10 GB | No | Free up to 3× stored data | No |
| Garage self-host | No limit (your disk) | Yes (physical) | Whatever the VPS includes | Yes |
The deciding factor: a real hard spending cap. No serious cloud (R2, S3, Hetzner OS, Backblaze) has a "stop at $X" toggle. Only alerts that warn you after the fact. With Garage self-hosted on a VPS, the ceiling is set physically by the disk: 100 GB volume means more than 100 GB literally won't fit. Problem solved.
R2 was attractive (no egress fees, real free tier), but the missing hard cap and the reasonable fear of a "surprise bill from a self-inflicted bug or attack" ruled it out for our case.
Garage checked the boxes: AGPLv3, S3-compatible, actively maintained, the spiritual successor to MinIO in the self-host community.
┌─────────────────────────────────┐
│ ONE Garage on the VPS │
│ │
s3.ipalseb.com ─────────│ S3 API :3900 │
(authenticated uploads │ ├─ bucket click2eat │
for ALL products) │ ├─ bucket yamltools │
│ └─ future buckets │
│ │
cdn.click2eat.app ──────│ S3 Web :3902 → click2eat │
(public reads │ │
for Click2Eat) │ │
│ │
cdn.yamltools.dev ──────│ S3 Web :3902 → yamltools │
(public reads │ │
for yamltools) │ │
└─────────────────────────────────┘

Shared S3 API + product-dedicated CDN. Apps always upload to s3.ipalseb.com (authenticated, S3
path-style: s3.ipalseb.com/<bucket>/<key>). Public reads go through a CDN domain specific to each
product: cdn.click2eat.app, cdn.yamltools.dev. From the outside, each product lives under its
own brand; the underlying infra is unified.
If you sell one of the products in the future, its CDN already lives under the product's domain: zero URL migration. Adding a new product just means adding another CDN domain to the same Garage.
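To make the pattern concrete, here is a hedged sketch of what the app side can look like with the AWS SDK v3. The env var names (S3_ENDPOINT, S3_REGION, S3_PUBLIC_URL, etc.) and the helper name are our own convention for illustration, not anything Garage mandates:

```ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// All products upload through the shared, authenticated S3 endpoint.
const s3 = new S3Client({
  endpoint: process.env.S3_ENDPOINT,   // https://s3.ipalseb.com
  region: process.env.S3_REGION,       // "garage" — see the S3_REGION gotcha below
  forcePathStyle: true,                // path-style: s3.ipalseb.com/<bucket>/<key>
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY!,
    secretAccessKey: process.env.S3_SECRET_KEY!,
  },
});

// Hypothetical helper: upload an image and return its public URL.
export async function uploadMenuImage(key: string, body: Buffer) {
  await s3.send(new PutObjectCommand({
    Bucket: process.env.S3_BUCKET!,    // e.g. "click2eat"
    Key: key,
    Body: body,
    ContentType: "image/jpeg",
  }));
  // Public reads never touch the S3 API: they go through the product's own CDN domain.
  return `${process.env.S3_PUBLIC_URL}/${key}`;  // https://cdn.click2eat.app/<key>
}
```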
These four gotchas are roughly four hours that reading the official docs won't save you. Read them before starting.
Garage has no anonymous access

The literal error is: Forbidden: Garage does not support anonymous access yet.
In MinIO, making a bucket public was a matter of applying a public-read bucket policy. Any
GET https://minio.yourdomain.com/<bucket>/<key> worked without auth. Garage doesn't have that.
The way to serve public files in Garage is S3 Web (port 3902, not 3900 like the S3 API). You enable website mode on the bucket and add an alias with the public hostname you want to use:
garage bucket website --allow click2eat
garage bucket alias click2eat cdn.click2eat.app

Then configure your reverse proxy (Caddy/Traefik/Nginx) to route cdn.click2eat.app to port 3902.
Garage reads the Host header and resolves the bucket by alias.
This physical separation between authenticated (3900) and public (3902) is a deliberate choice by the Garage project: it eliminates a whole class of S3 errors (the famous "open bucket" leaks we see in the news).
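For orientation, the two relevant sections of garage.toml look roughly like this. This is a sketch based on the Garage reference docs; exact keys can vary slightly between releases, and the rest of the file (data dirs, RPC, replication) is omitted:

```toml
[s3_api]
s3_region = "garage"          # apps must sign with this exact region — see the next gotcha
api_bind_addr = "[::]:3900"   # authenticated S3 API

[s3_web]
bind_addr = "[::]:3902"       # public, read-only web endpoint; bucket resolved via the Host header
root_domain = ".web.garage"   # suffix for vhost-style resolution; explicit aliases like cdn.click2eat.app also work
index = "index.html"
```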
S3_REGION is mandatory, not optional

The AWS SDK signs each request with the region configured in the S3 client. Garage has its own
s3_region in garage.toml, and requires that it matches the region the SDK is signing with. If
they don't match, it rejects with AuthorizationHeaderMalformed:
AuthorizationHeaderMalformed: Authorization header malformed,
unexpected scope: 20260506/us-east-1/s3/aws4_request

Fix: set S3_REGION=garage (or whatever region name you put in s3_region of garage.toml) in
every app that uploads or reads from the bucket. In our case, the apps came with us-east-1
inherited from the old MinIO setup, hence the scope you see in the error.
This one cost us 20 minutes, and it only surfaced when the first real upload from the app hit production: locally nothing failed, because the test uploads never reached Garage.
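In env terms, the change per app was roughly the following (variable names are our own convention; S3_REGION is the one that matters here):

```
S3_ENDPOINT=https://s3.ipalseb.com
S3_REGION=garage    # must match s3_region in garage.toml — not the us-east-1 left over from MinIO
```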
Coolify's one-click fields only take one domain each

When you deploy Garage from Coolify's catalog, the "S3 API URL", "Web URL" and "Admin URL" fields
each accept a single value. That clashes with needing two different hostnames pointing to the
same port 3902 (cdn.click2eat.app and cdn.yamltools.dev).
Fix: edit the Compose File of the Garage service in Coolify and add Traefik labels manually:
labels:
- 'traefik.enable=true'
- 'traefik.http.routers.garage-cdn-click2eat.rule=Host(`cdn.click2eat.app`)'
- 'traefik.http.routers.garage-cdn-click2eat.entrypoints=https'
- 'traefik.http.routers.garage-cdn-click2eat.tls=true'
- 'traefik.http.routers.garage-cdn-click2eat.tls.certresolver=letsencrypt'
- 'traefik.http.routers.garage-cdn-click2eat.service=garage-cdn-extra'
- 'traefik.http.routers.garage-cdn-click2eat.priority=1000'
- 'traefik.http.routers.garage-cdn-yamltools.rule=Host(`cdn.yamltools.dev`)'
- 'traefik.http.routers.garage-cdn-yamltools.entrypoints=https'
- 'traefik.http.routers.garage-cdn-yamltools.tls=true'
- 'traefik.http.routers.garage-cdn-yamltools.tls.certresolver=letsencrypt'
- 'traefik.http.routers.garage-cdn-yamltools.service=garage-cdn-extra'
- 'traefik.http.routers.garage-cdn-yamltools.priority=1000'
- 'traefik.http.services.garage-cdn-extra.loadbalancer.server.port=3902'

The priority=1000 is not optional. If your main app has a Traefik router with a HostRegexp
wildcard (typical when you configure a multi-tenant domain like *.yourapp.com), that wildcard
captures any subdomain including the new CDN. Without high priority, your new CDN lands in the wrong
app and returns 404 or redirects to login.
We discovered this when cdn.click2eat.app started returning the Click2Eat frontend with a redirect
to /es. An hour of investigation until we found the rule Host(`click2eat.app`) || HostRegexp(`^.+\.click2eat\.app$`) in the app container's labels.
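A quick check we'd suggest once the labels are in place, to see which router is actually winning (any object key works; the hostname is from the setup above):

```
# Served by Garage's web endpoint: plain 404 for a missing key, 200 for an existing one.
# Captured by the app's wildcard router instead: the app's response — in our case a 307 to /es.
curl -sI https://cdn.click2eat.app/<some-key>
```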
rclone sync straight to SMB corrupts data

For pre-migration backup of the old MinIO to the local NAS, we first tried rclone sync directly
from MinIO to the NAS mounted via SMB. Result: three MD5 checksum mismatch errors on small files,
data corrupted mid-transfer.
ERROR : corrupted on transfer: md5 hashes differ
src(S3 bucket yamltools) "86613f0dc3246fba874e9fa0a9d1ad02"
vs dst(Local file system) "2c1fdb865ed7bede40b6b973d831a282"

What worked: monolithic tar.gz from the VPS, transfer the single file via scp to the NAS, verify SHA256:
# On the VPS
tar -czf /tmp/minio-backup.tar.gz -C /var/lib/docker/volumes/<vol>/_data .
sha256sum /tmp/minio-backup.tar.gz
# From the Mac
scp root@vps:/tmp/minio-backup.tar.gz /Volumes/NAS/backups/
shasum -a 256 /Volumes/NAS/backups/minio-backup.tar.gz   # must match

SHA256 verified. Backup intact and atomic. Don't do rclone-many-files against SMB.
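rclone itself is still what we used for the MinIO→Garage copy in the checklist below; only the many-small-files-over-SMB combination was the problem. A sketch of the two remotes in rclone.conf (keys and endpoints are placeholders; we'd use `provider = Other` for Garage, check `rclone config` for your version):

```
[minio]
type = s3
provider = Minio
access_key_id = <old MinIO key>
secret_access_key = <old MinIO secret>
endpoint = https://minio.yourdomain.com
region = us-east-1

[garage]
type = s3
provider = Other
access_key_id = <new Garage key>
secret_access_key = <new Garage secret>
endpoint = https://s3.ipalseb.com
region = garage
force_path_style = true
```

Then one `rclone sync minio:<bucket> garage:<bucket> --transfers 8` per bucket, as in the checklist; adding `--checksum` is a reasonable extra verification.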
So you have a runbook you can execute in order next time:
PREPARATION
[ ] Audit current MinIO buckets: names, sizes, counts
[ ] Identify all apps writing/reading from current MinIO
[ ] Identify all hardcoded URLs in DB pointing to MinIO
PARALLEL DEPLOY
[ ] Create DNS for s3.<yourdomain> and cdn.<product> pointing to VPS
[ ] Deploy Garage in Coolify (one-click); edit Compose to add Traefik
labels with priority=1000 if you have apps with HostRegexp wildcard
[ ] Initialize cluster: layout assign + apply
[ ] Create bucket per product + scope-separated access key per bucket
[ ] Enable website mode on each bucket: garage bucket website --allow <name>
[ ] Add hostname alias to bucket: garage bucket alias <name> cdn.<product>
DATA MIGRATION
[ ] rclone sync minio:<bucket> garage:<bucket> --transfers 8
[ ] Verify counts and sizes on both sides
[ ] Full backup of old MinIO (tar.gz to NAS, not SMB sync)
CUTOVER
[ ] Update envs in each app: S3_ENDPOINT, S3_REGION=garage,
new keys, S3_PUBLIC_URL/S3_CDN_URL pointing to new CDN
[ ] Update remotePatterns/CSP/CORS in each app
[ ] UPDATE in DB to change hardcoded URLs (with backup beforehand
and in a transaction)
[ ] Redeploy each app
[ ] Smoke test: see existing images + upload a new one
CLEANUP
[ ] Stop old MinIO in Coolify (not Delete immediately)
[ ] Wait 5-7 days observing for errors
[ ] Delete Docker volume MinIO + service + legacy DNS
[ ] Remove old domains from remotePatterns

Garage on a Hetzner CX22 VPS (€4.59/month, you're already paying for it if you have Coolify there) serves N products without adding anything to the bill. Disk: whatever the VPS has. Hard cap: the physical volume size.
The only thing you pay is your time: ~4 hours the first time (including the 4 gotchas), ~1 hour for each subsequent migration once you know the pattern.
Compared to staying on unpatched MinIO, or paying R2/Hetzner OS without a hard cap, this pattern fits best for self-sufficient builders running several small products on the same VPS.
If your setup is one product with heavy public traffic, R2 with its 10 GB free tier and zero egress is probably still simpler. The choice depends on your risk profile around the bill.