Reverse proxy and TLS setup
The brain image exposes plain HTTP on port 7700. Put it behind whichever TLS terminator your team already runs, or use the optional Caddy overlay we ship for zero-config auto-HTTPS.
z4j does not bundle a reverse proxy, because your infra almost certainly already has one. Here are four patterns that work.
Most teams deploying z4j already run Traefik, nginx, a Kubernetes ingress controller, a cloud load balancer, or Cloudflare. Bundling Caddy into the primary image would force a double-hop, conflict with port 80 and 443, and break ingress patterns.
So we keep the z4jdev/z4j image single-purpose (HTTP on 7700) and ship TLS as a composable choice. Homelab users get a one-line Caddy overlay. Everyone else plugs z4j into their existing stack.
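Before picking a pattern, it is worth confirming the brain actually answers over plain HTTP on 7700. A quick sanity check from the Docker host (assuming the default port binding from docker-compose.yml):

```shell
# Any HTTP response, even a 404, proves the container is reachable;
# the exact paths your build serves may differ.
curl -i http://127.0.0.1:7700/
```

If this hangs or is refused, fix the container before layering TLS on top; every pattern below assumes this works.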
| Pattern | Best for | Setup time |
|---|---|---|
| Caddy overlay | Solo developers, homelab, single-VM deployments with a public DNS name | ~2 minutes |
| Traefik labels | Teams with existing Traefik ingress | ~5 minutes (assuming Traefik is already configured) |
| Cloudflare Tunnel | Homelabs behind CGNAT, teams with Cloudflare already in place, zero-trust deployments | ~5 minutes |
| nginx + certbot | Shops standardized on nginx, Debian/Ubuntu admins, mixed-workload VMs | ~10 minutes |

Caddy overlay
Caddy overlay
Zero-config HTTPS. Two extra files in the repo. Auto-renewed certs via Let's Encrypt.
Bring the stack up
# .env additions
Z4J_DOMAIN=z4j.example.com
Z4J_ACME_EMAIL=you@example.com
Z4J_PUBLIC_URL=https://z4j.example.com
# Bring the stack up with both compose files.
docker compose \
-f docker-compose.yml \
-f docker-compose.caddy.yml \
up -d
What lives in deploy/Caddyfile
{
email {env.Z4J_ACME_EMAIL}
}
{env.Z4J_DOMAIN} {
encode zstd gzip
reverse_proxy z4j-brain:7700
header {
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
X-Content-Type-Options "nosniff"
X-Frame-Options "DENY"
}
}
The overlay drops the brain's public port binding and routes ports 80 and 443 through Caddy. Certs persist in the z4j_caddy_data volume.
What you get
- Auto-provisions and renews TLS certs via ACME HTTP-01
- Needs public ports 80 and 443 open to the internet
- Caddy joins the z4j_backend network and proxies internally
- One extra container (~20 MB RAM)
Best for: solo developers, homelab, single-VM deployments with a public DNS name.
Skip it if: you already run Traefik, nginx, or a cloud LB. Running Caddy behind a second TLS terminator creates double encryption overhead with zero benefit.
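Once DNS resolves and ports 80/443 are open, you can verify the overlay end to end. A sketch, assuming Z4J_DOMAIN from .env and that the overlay names its service `caddy` (check docker-compose.caddy.yml if yours differs):

```shell
# The HSTS header set in deploy/Caddyfile is a cheap proof that the
# request went through Caddy, not straight to the brain.
curl -sI https://z4j.example.com | grep -i strict-transport-security

# Watch Caddy negotiate the ACME HTTP-01 challenge on first boot.
docker compose -f docker-compose.yml -f docker-compose.caddy.yml logs -f caddy
```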
Traefik labels
If Traefik already runs in your cluster, this is the lowest-friction option. Add labels, done.
Add to z4j-brain service
# Add these labels to the z4j-brain service in docker-compose.yml.
# Assumes Traefik is already running with a 'websecure' entrypoint
# and a 'letsencrypt' cert resolver on the same Docker network.
services:
z4j-brain:
labels:
- "traefik.enable=true"
- "traefik.http.routers.z4j.rule=Host(`z4j.example.com`)"
- "traefik.http.routers.z4j.entrypoints=websecure"
- "traefik.http.routers.z4j.tls.certresolver=letsencrypt"
- "traefik.http.services.z4j.loadbalancer.server.port=7700"
networks:
- traefik_public
Swap 'websecure' and 'letsencrypt' for your actual Traefik entrypoint and cert resolver names.
What you get
- No extra container
- Labels live on the brain service, no separate config file
- Traefik handles cert resolver, HTTP-to-HTTPS redirect, and HTTP/3
- Requires Traefik configured with a certresolver (Let's Encrypt, step-ca, etc.)
Best for: teams with existing Traefik ingress.
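To confirm Traefik picked up the labels, query its API or grep its logs. Both commands make assumptions about your Traefik setup: the API/dashboard enabled on :8080 (insecure mode) and a container named `traefik`; adjust to match your deployment.

```shell
# Lists routers Traefik has discovered; the 'z4j' router should
# appear once the brain container is on the shared network.
curl -s http://localhost:8080/api/http/routers | grep z4j

# Alternatively, check whether requests matching the Host rule
# are reaching Traefik at all.
docker logs traefik 2>&1 | grep z4j.example.com
```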
Cloudflare Tunnel
No public ports. Cloudflare terminates TLS at its edge, tunnels HTTP to your brain over an outbound connection.
Set up the tunnel
# Install cloudflared on the same host as z4j-brain, then:
cloudflared tunnel login
cloudflared tunnel create z4j
cloudflared tunnel route dns z4j z4j.example.com
# Create ~/.cloudflared/config.yml:
cat > ~/.cloudflared/config.yml <<EOF
tunnel: z4j
credentials-file: /root/.cloudflared/<tunnel-uuid>.json
ingress:
- hostname: z4j.example.com
service: http://localhost:7700
- service: http_status:404
EOF
# Run the tunnel (supervise with systemd in production).
cloudflared tunnel run z4j
cloudflared can run as a systemd service or in a sidecar container. Either way, no inbound ports open on your host.
What you get
- No inbound ports open on your host
- Cloudflare terminates TLS at its edge, tunnels HTTP to your brain
- Runs cloudflared as a persistent sidecar or systemd service
- Cloudflare account required (free tier works fine for small deployments)
Best for: homelabs behind CGNAT, teams with Cloudflare already in place, zero-trust deployments.
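For production, the systemd route mentioned above looks like this. `cloudflared service install` copies your config into /etc/cloudflared/ and registers the unit, so run it after the config.yml from the previous step exists:

```shell
# Register cloudflared as a systemd service so the tunnel
# survives reboots and gets restarted on failure.
sudo cloudflared service install
sudo systemctl enable --now cloudflared
systemctl status cloudflared
```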
nginx + certbot
The classic stack. Works on any Linux distro. Familiar to every ops team.
/etc/nginx/sites-available/z4j.conf
# /etc/nginx/sites-available/z4j.conf
server {
listen 443 ssl http2;
server_name z4j.example.com;
ssl_certificate /etc/letsencrypt/live/z4j.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/z4j.example.com/privkey.pem;
# Required for the agent WebSocket (/ws/agent).
location / {
proxy_pass http://127.0.0.1:7700;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
}
server {
listen 80;
server_name z4j.example.com;
return 301 https://$host$request_uri;
}
Run certbot once to obtain the cert, then symlink the config into sites-enabled and reload nginx. The WebSocket upgrade headers are required for the agent gateway at /ws/agent.
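Those steps, sketched. The standalone challenge binds port 80 itself, so run it before the site is enabled (or with nginx stopped); the certonly variant avoids certbot rewriting your config, which already references the Let's Encrypt paths:

```shell
# Obtain the cert; the nginx config above expects it under
# /etc/letsencrypt/live/z4j.example.com/.
sudo certbot certonly --standalone -d z4j.example.com

# Enable the site and reload.
sudo ln -s /etc/nginx/sites-available/z4j.conf /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx

# Confirm auto-renewal is wired up.
sudo certbot renew --dry-run
```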
What you get
- Runs on the host, not in a container
- certbot auto-renews via cron or systemd timer
- Familiar syntax for ops teams already running nginx
- WebSocket upgrade headers required for the agent gateway
Best for: shops standardized on nginx, Debian/Ubuntu admins, mixed-workload VMs.
On Kubernetes: use your ingress controller
If you run z4j on Kubernetes, do not install Caddy inside the cluster. Expose the brain Service on port 7700 and route traffic through your existing ingress controller (ingress-nginx, Traefik, Contour, Istio), with cert-manager plus Let's Encrypt handling certificates.
A Helm chart is on the z4j v1.1 roadmap. Until it ships, a plain Deployment plus Service plus Ingress is straightforward; the brain image is stateless apart from its Postgres connection.
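A minimal sketch of that Deployment/Service/Ingress trio using kubectl generators. The ingress class and the `z4j-tls` secret name are assumptions (the secret would typically come from cert-manager); adjust both to your cluster, and add the Postgres connection env vars your deployment needs:

```shell
# Stateless brain: one Deployment, one ClusterIP Service on 7700.
kubectl create deployment z4j-brain --image=z4jdev/z4j --port=7700
kubectl expose deployment z4j-brain --port=7700

# Route the public hostname through the existing ingress controller.
kubectl create ingress z4j \
  --class=nginx \
  --rule="z4j.example.com/*=z4j-brain:7700,tls=z4j-tls"
```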
Still not sure which to pick?
Homelab on one VM: Caddy overlay. Existing Traefik cluster: labels. Behind CGNAT: Cloudflare Tunnel. Classic ops team: nginx.