TLS termination · WebSocket · allowedOrigins · Docker go-live checklist
Engineers who already run OpenClaw in Docker on a VPS and want a real hostname with HTTPS often stall on where the Gateway should listen, who terminates TLS, and why the Control UI reports cross-origin or unsafe-fallback warnings in the browser. This article uses a decision table to compare a loopback Gateway behind an edge proxy versus binding container ports to the public internet, gives minimal Caddy and Nginx reverse-proxy snippets covering WebSocket upgrade and X-Forwarded headers, and lays out an acceptance order that keeps allowedOrigins aligned with what users actually type. OOM and first-run WASM stay in the long Docker VPS triage article. Install overview: v2026.4 install guide; production exposure: multichannel hardening; image rollback: pin and rollback.
On Docker, shipping for real also means host firewall, publish maps that stay on 127.0.0.1, a full certificate chain, and channel callback URLs that follow HTTPS. Use the list below for on-call triage; fine-grained flags still come from the docs for your exact version.
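As a concrete baseline for the firewall and publish-map points above, a hedged sketch assuming a compose service named gateway, the conventional Gateway port 18789, and ufw as the host firewall:

```yaml
# compose: publish the Gateway to loopback only, so only the local reverse proxy can reach it
services:
  gateway:
    ports:
      - "127.0.0.1:18789:18789"   # never "18789:18789", which binds every interface
```

```bash
# host firewall: keep SSH reachable, then only 80 (ACME challenges) and 443 face the internet
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
```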
- 0.0.0.0 muscle memory: exposing the Gateway on a public interface without WAF, rate limits, and audit sharply widens the blast radius.
- WebSocket half-path: the proxy never forwards Upgrade / Connection, so the panel flashes 502 or the handshake dies.
- Host and origin drift: the browser uses hostname A while config still pins an IP or internal alias, tripping Control UI and API cross-origin guards.
- Certificate vs DNS ordering: issuing before DNS propagates or mixing self-signed with public chains yields inconsistent mobile and automation clients.
- Disk and logs: proxy plus container access logs without rotation can fill inodes on small VPS disks during a busy week and look like a freeze.
- Upgrade window: rolling behind a proxy without pinned images splits old and new Gateway behaviour if you skip the staging and rollback order.
Note: if you are blocked on OOM, Exit 137, first-run WASM latency, or Control UI allowedOrigins device pairing, start from the triage table article; this post skips the memory profile section.
The matrix targets reviewers moving OpenClaw from a lab toy to a real hostname: HTTPS, channel callbacks, and observability should land on one acceptance story.
| Dimension | 127.0.0.1 Gateway + reverse-proxy TLS | Container port bound to the public internet |
|---|---|---|
| Minimal exposure | the internet touches 443 only; upstream stays on loopback | needs separate network ACLs and ongoing port review |
| Certificate ops | one Caddy / Nginx / ACME pipeline | each app or sidecar maintains certs; easy drift |
| WebSocket | mature proxy modules handle upgrade and timeouts | headers get dropped; direct debugging hides the gap |
| Observability | edge and upstream logs can join on request_id | you rebuild aggregation and alerts yourself |
Production first principle: check off both “panel loads” and “channels can call back to the https hostname you advertise” on the same acceptance checklist.
If the port the docs assume (18789 by convention) disagrees with your local overrides, search config and compose end to end; never change only the proxy and leave the container's listen port mismatched. allowedOrigins must include the real https origin browsers use; after edits, cold-start in the order from the triage article so caches do not mislead you. The sequence below assumes compose already brings the Gateway up on a loopback-only port; if install is unfinished, close the loop with the wizard and compose baseline article.
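One way to run that end-to-end search before touching the proxy; the hostname and file globs are illustrative:

```bash
# Every place the Gateway port or public hostname is pinned should agree
grep -rn --include='*.yml' --include='*.yaml' --include='*.json' --include='*.env' \
  -e '18789' -e 'openclaw.example.com' .
```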
1. Freeze the hostname: before issuing certificates, confirm the A/AAAA records point at this VPS; write the TTL and maintenance window down plainly.
2. Confirm the Gateway bind: on the host, use ss -lntp or equivalent to verify the listener targets 127.0.0.1 on the right port only (command sketch after this list).
3. Pick a proxy: a minimal single node favours Caddy automatic HTTPS; stitch into existing Nginx estates with stream/location templates.
4. Apply the minimal snippet: see the blocks below; substitute the documented port and your real hostname into the upstream and server name.
5. Align channel callbacks: IM or webhook surfaces must use the same public hostname and path prefix in their external URLs.
6. Record one golden path: from “browser opens the https hostname” to “harmless heartbeat or doctor once” in an internal playbook for shift handoff.
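A minimal sketch of steps 1 and 2, assuming the conventional port 18789 and openclaw.example.com as the frozen hostname:

```bash
# Step 1: A/AAAA must already resolve to this VPS before any certificate is issued
dig +short A openclaw.example.com
dig +short AAAA openclaw.example.com

# Step 2: the Gateway listener should sit on 127.0.0.1 only, never 0.0.0.0 or [::]
ss -lntp | grep 18789
```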
# Caddy: automatic HTTPS via ACME, reverse-proxying to the loopback Gateway
openclaw.example.com {
encode gzip
reverse_proxy 127.0.0.1:18789
}
# Nginx: same idea with Let's Encrypt certificates and explicit WebSocket headers
server {
listen 443 ssl http2;
server_name openclaw.example.com;
ssl_certificate /etc/letsencrypt/live/openclaw.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/openclaw.example.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:18789;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 600s;
}
}
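After editing either snippet, validate the config and reload before testing in a browser; a sketch assuming package-default paths and systemd:

```bash
# Caddy: check the Caddyfile, then reload without dropping live connections
caddy validate --config /etc/caddy/Caddyfile
systemctl reload caddy

# Nginx: syntax check, then reload
nginx -t && systemctl reload nginx

# Smoke test the edge: the panel should answer over HTTPS on the real hostname
curl -sS -o /dev/null -w '%{http_code}\n' https://openclaw.example.com/
```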
Treat the table as a shift note: every row should point at a concrete log field or command output instead of “the network feels flaky”; a command sketch for the first moves follows the table.
| Symptom | Likelier layer | First move |
|---|---|---|
| https fails while raw http to upstream works | certificate chain or SNI | validate the chain with a client that verifies certificates; match server_name to the CN/SAN |
| static page loads but live channel drops instantly | WebSocket proxy timeout | raise read timeout; confirm Upgrade survives the proxy |
| console shows cross-origin or blocked origin | allowedOrigins missing the https origin | rewrite the origin list and fully restart affected processes |
| public curl 502 while local curl returns 200 | proxy upstream on stale port or container not on 127.0.0.1 | walk compose and publish maps hop by hop |
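Hedged examples of those first moves, assuming the public hostname from the snippets above:

```bash
# Certificate chain and SNI: verification should succeed and show the expected subject/issuer
openssl s_client -connect openclaw.example.com:443 -servername openclaw.example.com </dev/null 2>/dev/null \
  | grep -E 'Verify return code|subject=|issuer='

# WebSocket upgrade: a 101 response means Upgrade/Connection survived the proxy;
# the /ws path is illustrative, use your Gateway's real WebSocket endpoint
curl -si --http1.1 https://openclaw.example.com/ws \
  -H 'Connection: Upgrade' -H 'Upgrade: websocket' \
  -H 'Sec-WebSocket-Version: 13' -H 'Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==' | head -n 5

# Public 502 vs local 200: compare the edge with the loopback upstream hop by hop
curl -s -o /dev/null -w 'edge     %{http_code}\n' https://openclaw.example.com/
curl -s -o /dev/null -w 'upstream %{http_code}\n' http://127.0.0.1:18789/
```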
Warning: until you finish multichannel hardening, do not bind the dashboard or debug port to 0.0.0.0, not even temporarily for a demo.
Figures below are typical single-VPS observation bands for disks and timeouts, not a vendor SLA.
- Keep proxy_read_timeout or the Caddy equivalent at 600 seconds or more; shorter values look like flaky disconnects.
- Watch df and inode usage during the first week in prod so log growth does not masquerade as a freeze.

| Role | Profile | Recommendation |
|---|---|---|
| Personal sandbox | low traffic, short maintenance ok | single Caddy node + single compose stack + manual change window |
| Small-team production | multiple daytime channels | pinned image + staging compose + staggered rollback for proxy and Gateway |
| Handoff to Mac / CI | long jobs plus signing together | keep Gateway on the VPS; move builds to the shared Mac pool; matrix references earlier posts |
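For the “pinned image + staging compose” recommendation, a minimal compose fragment; the repository path is illustrative and the digest placeholder must come from your own registry and rollback notes:

```yaml
services:
  gateway:
    # Exact pin (tag or @sha256 digest) so a proxy change never silently pulls a new Gateway
    image: openclaw/gateway@sha256:<digest-recorded-at-staging>   # illustrative reference, never :latest
    restart: unless-stopped
```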
Using a home ISP or a flaky small cloud as the only entry point stacks dynamic IPs, port restrictions, and surprise sleep states on top of each other; a pure DIY datacenter stretches TLS and multi-region DR timelines and makes it hard to give partners a repeatable public ingress.
When you need a sustainable public https front door and want to split long-lived agents from an iOS CI pool, VpsMesh cloud Mac Mini rental plus always-on VPS is usually the steadier engineering answer: compute and signing can sit on contracted nodes while Gateway and edge certs still follow this runbook on your side, avoiding a hidden single box that carries everything.
Terminate TLS, rate limits, and audit at your familiar reverse proxy while the Gateway focuses on sessions and tooling; if you must bind a public interface directly, implement the exposure items in the hardening checklist. For extra capacity see the rental pricing page and cloud order page.
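If “rate limits at the reverse proxy” is the missing piece, nginx's built-in limit_req module is usually enough on a single node; the numbers below are illustrative starting points, not tuned values:

```nginx
# http context: one shared zone keyed by client address
limit_req_zone $binary_remote_addr zone=openclaw_req:10m rate=10r/s;

# inside the existing server block for openclaw.example.com
location / {
    limit_req zone=openclaw_req burst=20 nodelay;
    # ...existing proxy_pass, Upgrade, and X-Forwarded headers stay as-is...
}
```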
Verify that the full browser origin, including any non-default port, is listed in allowedOrigins; mixed content, raw-IP access, and mixing IP with hostname access also create false cross-origin signals. Complex compose pairing cases remain in the Docker VPS triage article; self-serve entry points are in the help center.
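As a shape reference only (the exact key and file location depend on your version's config reference), the entries should match what the browser shows in the address bar, scheme and any non-default port included:

```yaml
# Hypothetical config shape — confirm the real key name and file in your version's docs
allowedOrigins:
  - "https://openclaw.example.com"         # what users actually type; default port 443 is implicit
  # - "https://openclaw.example.com:8443"  # only if the panel is reached on a non-default port
```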
Usually you only need to confirm that the upstream port and health checks are unchanged; when path prefixes or WebSocket routing change, read the release notes and validate with dual instances per the image pin and rollback article.