Moving country means that my previous VM-on-the-Internet (AKA a VPS) now has terrible latency. Like 280 ms terrible. A closer server would be around 60 ms, which is way better.
Thus it’s time for a new server. But I learn from bad decisions, so here are my new requirements:
- Everything which can be in a container should be in a container. Exceptions are low-level network things (VPN, firewall, routing, load balancing/reverse proxy). Containers are easy to install, update and delete, and they bring no dependencies onto the host.
- Applications I need:
- A git server to host my source code. I don’t want to use GitHub for my own private code. Self-hosting also keeps accidentally committed passwords/keys from leaking into the public.
- A note taking application. I use Trilium. Maybe there is something better now?
- An S3 server with a simple web UI. I could use Backblaze B2 instead, so this one is optional.
- Tailscale as VPN
- Preferably no complex orchestration. I used k3s on my last machine. It was a good learning experience, but it’s overkill: my containers don’t need to scale, and if the server goes down, everything is down anyway. Before k3s I used docker-compose, and that was perfect back then: simple, works, easy to administrate.
So here is what I settled on:
- Hetzner 2 CPU, 8 GB RAM, 150 GB SSD in Germany (about 55 ms latency). Strangely I could not find a good provider in Dublin. That might have reduced the latency a bit more.
- Forgejo for git. Way more than a simple git server, but it does git just fine and much more besides. Forgejo is a fork of Gitea and looks/feels a bit like GitHub/GitLab.
- Trilium development continues under the name TriliumNext. That’s a solid choice, since I used Trilium before and I like it.
- I used Minio for my S3 “needs” at home: it’s convenient to copy files via web browser and via command line tools. However, Minio removed the Docker images for their community (non-commercial) product, so I switched to Garage, with Garage WebUI as the simple web UI on top.
- To make HTTPS work for everything, either every service handles HTTPS itself, or I need a reverse proxy. I used Nginx and HAProxy before. This time it’s Caddy, as it handles ACME certificates natively.
- Podman, not Docker, running rootless where possible. Podman does not need a daemon running as root: everything is handled as a normal user (except some file permissions). Caddy is the only exception, as it needs to bind to ports 80 and 443.
- To run Podman containers as user-owned systemd services, Quadlets come into play.
Note: I use qw.org as my domain example. Replace it with your DNS domain.
Step 0: Podman
I used Docker for many years. docker-compose was even better, since it allowed multi-container setups and automatic restarts of containers after a reboot. But that Docker daemon running as root was always a bit of an eyesore when it comes to security.
Podman is daemonless, and with a bit of help from systemd, a user can run services which automatically start on login or after a reboot. Exactly what I need.
So: Docker/docker-compose out, Podman/Quadlet in.
I intend to run my services as myself as much as possible. Exceptions are when a service needs permissions I do not have as a normal user, e.g. binding to ports below 1024.
Podman’s user config files for Quadlets are in ~/.config/containers/systemd
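Quadlet’s convention: each NAME.container file under that directory is turned into a systemd user unit NAME.service on the next daemon-reload. A trivial sketch of the name mapping (forgejo.container is the file used below):

```shell
# A file ~/.config/containers/systemd/NAME.container becomes
# a systemd user unit NAME.service after `systemctl --user daemon-reload`.
unit_file="forgejo.container"
service="${unit_file%.container}.service"   # strip suffix, append .service
echo "$service"   # -> forgejo.service
```

So a later systemctl --user start forgejo refers to the unit generated from forgejo.container.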
Git – Forgejo
Create ~/.config/containers/systemd/forgejo.container
[Container]
ContainerName=forgejo13
HostName=forgejo
Image=codeberg.org/forgejo/forgejo:13-rootless
AutoUpdate=registry
Volume=%h/forgejo/data:/var/lib/gitea
Volume=%h/forgejo/config:/etc/gitea
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
PublishPort=127.0.0.1:3000:3000
PublishPort=3022:2222
[Unit]
Description=Forgejo
After=local-fs.target
[Install]
WantedBy=default.target
[Service]
Restart=on-failure
Note that port 3000 is bound to localhost only, as I do not want anyone to use plain HTTP on port 3000. Caddy terminates HTTPS and then connects to Forgejo via http://localhost:3000
SSH, on the other hand, connects directly to Forgejo: host port 3022 maps to the container’s SSH port 2222.
Set up the data directories. The odd owner 100999 comes from rootless Podman’s user-namespace mapping: container UIDs are shifted into the subordinate UID range defined in /etc/subuid.
$ mkdir -p ~/forgejo/{config,data}
$ sudo chown 100999:100999 ~/forgejo/{config,data}
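The 100999 can actually be explained. Assuming the common default subordinate UID range starting at 100000 (check grep $USER /etc/subuid for your actual range), the container-internal UID 1000 (the git user in the rootless Forgejo image) lands on the host at:

```shell
# Host UID = subuid range start + container UID - 1
# (container UID 0 maps to the user itself; container UID 1
# maps to the first subuid, i.e. 100000)
subuid_start=100000
container_uid=1000
echo $((subuid_start + container_uid - 1))   # -> 100999
```

An alternative to the sudo chown above is podman unshare chown -R 1000:1000 DIR, which runs the chown inside the user namespace and works regardless of your subuid range.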
To start the container:
$ systemctl --user daemon-reload
$ systemctl --user start forgejo
To start Forgejo upon reboot without me having to log in, enable lingering:
$ loginctl enable-linger $USER
Test:
- podman ps should show the container running
- on the host: curl http://localhost:3000 should answer
- from external: ssh -p 3022 YOURHOST should start an SSH session. There is no account yet, of course.
Reverse Proxy – Caddy
Forgejo is not yet reachable from the outside; I need a reverse proxy for that. Caddy can do it, plus it handles ACME natively, so no fiddling with acme.sh or similar.
Caddy does not run as a container, because rootless containers cannot bind to port 80 or 443. Yes, I could run it as a container with elaborate nftables rules, or run it as root, which is about the same as running Caddy as a normal service.
Simple is good. Thus Caddy runs as a normal daemon on the base OS.
Config file is in /etc/caddy/Caddyfile
# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace ":80" below with your
# domain name.
{
http_port 80
https_port 443
email "harald.kubota@gmail.com"
default_sni de.qw.org
log default {
output stdout
level INFO
}
}
:80 {
# Set this path to your site's directory.
root * /usr/share/caddy
# Enable the static file server.
file_server
}
git.qw.org {
reverse_proxy localhost:3000
tls {
ca https://acme-v02.api.letsencrypt.org/directory
}
}
# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile
Start the usual systemd way:
# systemctl enable caddy
# systemctl start caddy
When connecting explicitly to the host port 80 via http, you get the Caddy slanted welcome page. When connecting to https://git.qw.org you connect to the Forgejo instance.
DNS needs to be set up of course: git.qw.org should point to the IP of the server (A or CNAME does not matter)
Test:
- Connect to http://IPADDRESS and you should get the slanted Caddy page. This needs to work, as ACME’s HTTP-01 challenge uses port 80.
- Connecting to https://git.qw.org should give you the Forgejo initial page.
- Connecting to http://git.qw.org should redirect you to the HTTPS page (via a 308 permanent redirect).
- systemctl status caddy should show it’s enabled and running.
Note Taking App – TriliumNext
I used Trilium and it was good; I should use it more, though. TriliumNext is the “successor” and is compatible with old backups, so the migration from Trilium to TriliumNext should be super simple. Spoiler: it was.
After adding a DNS entry for trilium.qw.org to point to the Hetzner server, Caddyfile needs one extra entry (add to the end):
trilium.qw.org {
reverse_proxy localhost:8080
tls {
ca https://acme-v02.api.letsencrypt.org/directory
}
}
The ~/.config/containers/systemd/trilium.container file:
[Container]
ContainerName=trilium
HostName=trilium
Image=docker.io/triliumnext/trilium:main
AutoUpdate=registry
Volume=%h/trilium/data:/home/node/trilium-data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
PublishPort=127.0.0.1:8080:8080
[Unit]
Description=Trilium
After=local-fs.target
[Install]
WantedBy=default.target
[Service]
Restart=on-failure
To start Trilium:
$ mkdir -p ~/trilium/data
$ sudo chown 100999:100999 ~/trilium/data
$ systemctl --user daemon-reload
$ systemctl --user start trilium
After starting it, it’ll create some directories and files. To restore a backup from a previous Trilium instance, stop the container, copy the backup file (e.g. backup-2025-10-20.db) to ~/trilium/data/document.db, and start it again.
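The restore as a concrete sequence (the backup file name is just the example from the text):

```shell
# Stop Trilium, replace the database with the backup, start again
systemctl --user stop trilium
sudo cp backup-2025-10-20.db ~/trilium/data/document.db
# the data dir is owned by the mapped container user
sudo chown 100999:100999 ~/trilium/data/document.db
systemctl --user start trilium
```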
Test by connecting to https://trilium.qw.org: you should get the Trilium setup (or login) page.
S3 – Garage
Here the container definition ~/.config/containers/systemd/garage.container:
[Container]
ContainerName=garage
HostName=garage
Image=docker.io/dxflrs/garage:v2.1.0
AutoUpdate=registry
Volume=%h/garage/garage.toml:/etc/garage.toml
Volume=%h/garage/meta:/var/lib/garage/meta
Volume=%h/garage/data:/var/lib/garage/data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
PublishPort=127.0.0.1:3900:3900
PublishPort=127.0.0.1:3901:3901
PublishPort=127.0.0.1:3902:3902
PublishPort=127.0.0.1:3903:3903
[Unit]
Description=Garage
After=local-fs.target
[Install]
WantedBy=default.target
[Service]
Restart=on-failure
Create directories and config file ~/garage/garage.toml:
$ mkdir -p ~/garage/{meta,data}
$ cat >~/garage/garage.toml <<_EOF_
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "sqlite"
replication_factor = 1
rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "$(openssl rand -hex 32)"
[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.qw.org"
[s3_web]
bind_addr = "[::]:3902"
root_domain = ".s3.qw.org"
index = "index.html"
[k2v_api]
api_bind_addr = "[::]:3904"
[admin]
api_bind_addr = "[::]:3903"
admin_token = "$(openssl rand -base64 32)"
metrics_token = "$(openssl rand -base64 32)"
_EOF_
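One detail worth knowing: Garage requires rpc_secret to be exactly 32 random bytes, hex encoded, which is why the heredoc uses openssl rand -hex 32 (and why the heredoc delimiter is unquoted: the $(...) substitutions expand when the file is written, so real secrets land in garage.toml). A quick sanity check of the generated length:

```shell
# openssl rand -hex 32 emits 32 random bytes as 64 hex characters
secret=$(openssl rand -hex 32)
echo "${#secret}"   # -> 64
```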
Start garage with
$ systemctl --user daemon-reload
$ systemctl --user start garage
podman exec -ti garage /garage status should show the node with “NO ROLE ASSIGNED”.
Follow https://garagehq.deuxfleurs.fr/documentation/quick-start/ to set up a layout, a bucket, and an access key.
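A sketch of those quick-start steps, wrapped in podman exec for this setup. Treat the exact flags, the zone/capacity values, and the names (dump, my-key, the node-id placeholder) as assumptions to verify against the linked documentation:

```shell
# Get the node ID (long hex string) from the status output
podman exec -ti garage /garage status

# Assign a role to the single node: zone "dc1", capacity 100 GB
podman exec -ti garage /garage layout assign -z dc1 -c 100G <node-id>
podman exec -ti garage /garage layout apply --version 1

# Create a bucket and an access key, and tie them together
podman exec -ti garage /garage bucket create dump
podman exec -ti garage /garage key create my-key
podman exec -ti garage /garage bucket allow --read --write dump --key my-key

# Show the key details (recent versions need --show-secret to print the secret)
podman exec -ti garage /garage key info my-key
```

The access/secret key pair printed here is what goes into the mc config below.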
/etc/caddy/Caddyfile needs some sections added too:
s3.qw.org:3800 {
reverse_proxy localhost:3900
}
s3.qw.org:3801 {
reverse_proxy localhost:3901
}
s3.qw.org:3803 {
reverse_proxy localhost:3903
}
Test:
I have the MinIO client (mc) set up, so I just added this section to its ~/.mc/config.json config:
"garage": {
"url": "https://s3.qw.org:3800",
"accessKey": "GK36....................01",
"secretKey": "5f...........................................................f8",
"api": "S3v4"
},
And you can do things like:
$ mc cp .mc/config.json garage/dump/
.../.mc/config.json: 1.85 KiB / 1.85 KiB ━━━━━━ 2.58 KiB/s 0s
Files are stored in ~/garage/data/; however, they are broken down into small chunks, so you won’t find them there by name.
To make S3 buckets available as web files, enable website access on the bucket (via garage bucket website --allow web), and then you can access those files as https://web.s3.qw.org:3802/codemonster/index.html
Here web is the bucket. You also need a DNS entry for web.s3, and Caddy needs to know too:
web.s3.qw.org:3802 {
reverse_proxy localhost:3902
}
Garage WebUI
Minio has a nice built-in admin web page to check the status, create buckets, and upload/download files via a web browser. It’s nice to have, and Garage does not come with one. Garage WebUI fixes that.
Here the ~/.config/containers/systemd/garage-ui.container file:
[Container]
ContainerName=garageui
HostName=garageui
Image=docker.io/khairul169/garage-webui:1.1.0
AutoUpdate=registry
Volume=%h/garage/garage.toml:/etc/garage.toml:ro
Volume=%h/garage/meta:/var/lib/garage/meta
Volume=%h/garage/data:/var/lib/garage/data
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
Environment=API_BASE_URL="https://s3.qw.org:3803"
Environment=S3_ENDPOINT_URL="https://s3.qw.org:3800"
PublishPort=127.0.0.1:3909:3909
[Unit]
Description=Garage WebUI
After=local-fs.target garage.service
[Install]
WantedBy=default.target
[Service]
Restart=on-failure
There is no authentication by default, so set up basic authentication via Caddy. Note that a path matcher of / would protect only the exact path /, so leave the matcher out to cover everything. Add this to /etc/caddy/Caddyfile:
s3.qw.org:3809 {
basicauth {
s3admin $2a$14$WdT9d7/.mBI.....................cz
}
reverse_proxy localhost:3909
}
To generate the hashed password, use caddy hash-password (see https://caddyserver.com/docs/command-line#caddy-hash-password).
After restarting caddy, connect to https://s3.qw.org:3809 and you should get a request for an account and password.
Tailscale
To install, follow the rather simple instructions from https://tailscale.com/download/linux. Start the daemon with systemctl enable --now tailscaled, then authenticate with tailscale up.
To enable this machine as an exit node, with Tailscale running, execute tailscale set --advertise-exit-node. The exit node then still needs to be approved in the Tailscale admin console.
Adding that same flag to /etc/default/tailscaled does not work; the daemon does not even start anymore. Which makes sense: the flags there are passed to the tailscaled daemon, while --advertise-exit-node belongs to the tailscale CLI.
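Related, per Tailscale’s exit-node documentation: an exit node must have IP forwarding enabled on the host, or traffic from other nodes cannot be routed out through it. A sketch of the usual sysctl setup:

```shell
# Persistently enable IPv4 and IPv6 forwarding (needed for exit nodes)
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```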
Appendix
The complete /etc/caddy/Caddyfile:
# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace ":80" below with your
# domain name.
{
http_port 80
https_port 443
email "harald.k@gmail.com"
default_sni de.qw.org
log default {
output stdout
level INFO
}
}
:80 {
# Set this path to your site's directory.
root * /usr/share/caddy
# Enable the static file server.
file_server
}
s3.qw.org:3800 {
reverse_proxy localhost:3900
}
s3.qw.org:3801 {
reverse_proxy localhost:3901
}
web.s3.qw.org:3802 {
reverse_proxy localhost:3902
}
s3.qw.org:3803 {
reverse_proxy localhost:3903
}
s3.qw.org:3809 {
basicauth {
s3admin $2a$14$WdT9d7/.mBI.....................cz
}
reverse_proxy localhost:3909
}
git.qw.org {
reverse_proxy localhost:3000
tls {
ca https://acme-v02.api.letsencrypt.org/directory
}
}
trilium.qw.org {
reverse_proxy localhost:8080
tls {
ca https://acme-v02.api.letsencrypt.org/directory
}
}
# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile
Next Steps
Currently all my needs are fulfilled by this installation. A bit of monitoring would be nice though: Telegraf + InfluxDB + Grafana.