    Internet
       |
       v
+-------------------------------+
|            OVH VPS            |
| - Public IP: vps.example.com  |
| - SSH on port 6968            |
| - Caddy (reverse proxy + SSL) |
| - WireGuard client            |
| - Internal IP: 192.168.6.1    |
+--------------+----------------+
               | WireGuard tunnel
               | (encrypted, locked down)
               v
+-------------------------------+
|      Home Network (UniFi)     |
| WireGuard server: 192.168.6.x |
|                               |
| +--- DMZ VLAN (isolated) ---+ |
| | home-server               | |
| | 192.168.6.10              | |
| | - Docker containers       | |
| | - App on port 3000        | |
| | - Jellyfin on 8096        | |
| +---------------------------+ |
|                               |
| [Main LAN - NOT reachable]    |
| [IoT VLAN - NOT reachable]    |
+-------------------------------+

I've got side projects that need decent compute but get three requests a day. Paying $50/month to DigitalOcean for something that sits idle 99% of the time? No thanks.
My home server has 32GB of RAM. It already exists, already paid for. The marginal cost of another container is zero.
Cloudflare's ToS prohibits proxying "disproportionate" video content (Section 2.8). They terminate accounts for this. This setup lets me expose Jellyfin without ToS violations. Plus: no vendor lock-in, full control over SSL, zero exposed ports on my home network.
Two concerns with Cloudflare Tunnel. First, their ToS still prohibits streaming traffic, so Jellyfin is a no-go.
Second, cloudflared runs inside your network. If that daemon gets compromised, an attacker has a persistent outbound tunnel to Cloudflare's infrastructure. With this setup, the only thing inside my network is a WireGuard server with a locked-down peer config. Smaller attack surface.
OVH VPS (~$4/month): Throwaway bastion host. Runs Caddy and the WireGuard client. If it's compromised, the attacker only reaches the VLANs I've allowed that WireGuard network to access (the DMZ).
Caddy: Auto-HTTPS via Let's Encrypt, reverse proxies to the WireGuard network. Can add edge caching to save a ~20ms hop.
WireGuard: VPN tunnel between VPS and home. ~4k lines of code, stateless, minimal attack surface. UniFi router runs the server, VPS connects as client.
Proxmox: Two-node HA cluster running VMs. Migrated from LXC containers to increase isolation — VMs provide a stronger security boundary at the cost of slightly more overhead. VMs restart on failover in 10-20 seconds.
Only a non-standard WireGuard port is open on my home network. No SSH exposed. All traffic encrypted end-to-end.
The key: separate WireGuard configs. My personal VPN gives full home network access. The public-facing VPN is locked to DMZ VLANs only — UniFi firewall rules prevent routing to main LAN or IoT. Even if an attacker gets the VPS WireGuard keys, they're stuck in the DMZ.
Defense in depth: VPS → WireGuard → Firewall → DMZ VLAN → Container.
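Roughly what those firewall rules express, translated into iptables terms (the actual rules live in the UniFi UI; the interface name and subnets here are illustrative, not my real config):

# wg-public = the public-facing WireGuard network; subnets are examples
iptables -A FORWARD -i wg-public -d 192.168.6.0/24 -j ACCEPT   # DMZ: allowed
iptables -A FORWARD -i wg-public -j DROP                       # main LAN, IoT, everything else: dropped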
Setting up SSH to jump through the VPS is straightforward:
# ~/.ssh/config
Host vps
    User debian
    Hostname vps.yourdomain.com
    Port 6968

Host home-server
    User root
    ProxyJump vps
    Hostname 192.168.6.10

Now ssh home-server transparently jumps through the VPS. No direct SSH exposure on the home network. Add an IdentityFile directive for key-based auth.
Caddy's config is refreshingly simple:
# /etc/caddy/Caddyfile
app.example.com {
    reverse_proxy 192.168.6.10:3000
}

api.example.com {
    reverse_proxy 192.168.6.10:8080
}

# Optional: basic auth for admin panels
admin.example.com {
    basicauth {
        admin $2a$14$...
    }
    reverse_proxy 192.168.6.10:9000
}

# Jellyfin - can't run this through Cloudflare without violating their ToS
jellyfin.example.com {
    reverse_proxy 192.168.6.10:8096
}

Caddy automatically obtains and renews SSL certificates. Each subdomain routes to a different service on the WireGuard network.
GitHub Actions → GHCR → Kamal SSHs through jumpbox (ProxyJump) → pulls and deploys on home server → Caddy routes traffic.
No Kubernetes, no complex orchestration. Just SSH and Docker.
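A rough sketch of what that looks like in Kamal's config. The service name, image path, and registry details are placeholders, and the ssh proxy option is worth checking against the Kamal docs for your version (the non-standard SSH port is easiest to handle via the ~/.ssh/config entry above):

# config/deploy.yml (illustrative)
service: my-app
image: ghcr.io/me/my-app

servers:
  web:
    - 192.168.6.10          # home server, only reachable through the jumpbox

registry:
  server: ghcr.io
  username: me
  password:
    - KAMAL_REGISTRY_PASSWORD

ssh:
  proxy: "debian@vps.yourdomain.com"   # jump through the VPS, same idea as ProxyJump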
vs Cloudflare Tunnel: No vendor lock-in, full SSL control, no plaintext inspection. Costs ~$4/mo, and no free DDoS protection (OVH provides basic).
vs exposed ports: Massively reduced attack surface, VPS acts as sacrificial buffer. Extra 10-20ms latency, more moving parts.
App-level RCE
At work, one of our VPSs got popped by the react2shell exploit, even though we updated and re-deployed Next.js to a patched version two hours after Vercel released the patch. Proof-of-concept exploits had been circulating on various forums for days before the patch, so it was being exploited heavily...
Potential mitigations
For apps that only talk to local databases, I disable outbound internet entirely: no C2 callbacks, severely limited options for an attacker.
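One way to do that at the Docker level (a sketch; you could equally enforce it at the UniFi firewall, and the subnets here are examples): drop any new outbound connection from the app's bridge network unless it's headed to the local database. Rules go in DOCKER-USER so they run before Docker's own forwarding rules.

# 172.18.0.0/16 = the app's Docker bridge subnet, 192.168.6.0/24 = where the DB lives (examples)
# Inserted in this order so the allow rule ends up above the drop rule:
iptables -I DOCKER-USER 1 -s 172.18.0.0/16 -m conntrack --ctstate NEW -j DROP
iptables -I DOCKER-USER 1 -s 172.18.0.0/16 -d 192.168.6.0/24 -j RETURN
# Inbound requests from Caddy still work: their replies are ESTABLISHED, not NEW.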
Keep your dependencies up to date with Dependabot, and it might be worth regularly re-creating your VM; that's likely easy to automate with Ansible...
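For reference, a minimal Dependabot config (the ecosystem and schedule are just examples):

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"     # whatever your app actually uses
    directory: "/"
    schedule:
      interval: "weekly"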
DMZ outbound traffic routes through a consumer VPN (not OVH, but a literal consumer provider), so C2 exits via VPN, not my home IP.
Container escape
If someone escapes Docker AND the VM, I'm in trouble, but VMs provide much stronger isolation than LXC containers. I sleep well knowing that if you have a Linux container or VM escape, you've either disclosed it or sold it off to North Korea; you likely aren't using it to hack random servers on the internet... These are literally multimillion-dollar exploits.
Potential mitigations
Originally ran LXC for the lower overhead, but the security tradeoff wasn't worth it. VMs aren't the resource hogs people act like they are: an Ubuntu VM of mine uses 800MB inside the guest, which works out to about 1.3GB used on the host. Well worth the tradeoff.
My dumb ass completely forgot to install Intel microcode on my Proxmox host; fortunately my main node's CPU is new enough that it wasn't the end of the world. You should ensure your mitigations are turned on! Proxmox/Debian doesn't install microcode by default since it's non-free!
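On Debian 12 / Proxmox 8 that's roughly the following (the exact sources lines depend on your setup):

# /etc/apt/sources.list - add the non-free-firmware component
deb http://deb.debian.org/debian bookworm main contrib non-free-firmware

apt update
apt install intel-microcode    # amd64-microcode on AMD hosts
# reboot, then confirm the update was applied early in boot:
dmesg | grep -i microcode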
Infrastructure
ISP/power outage beyond UPS runtime? Fucked. It's a home setup, not a datacenter. I think it might be worth buying a diesel back-up generator and just having it spray out diesel fumes in my living room.
Potential mitigations
I already have a UPS; estimated runtime with everything plugged in is ~45 minutes. Having grown up in Istanbul, where multi-hour blackouts were commonplace, I'm very happy to say I've never experienced a power outage in the UK longer than about 2 or 3 minutes.
Ensure your machines run powertop auto-tune as a startup script, configure HDD spin-down, check your C-states, and check your kernel's IRQ wake-ups. I found my Realtek NIC was incredibly chatty, so I swapped it out for Intel NICs, which are far more power efficient. (And all my nodes are 2.5GbE now, which makes Ceph work much better.)
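Powertop doesn't persist its tunings across reboots, so a small oneshot unit does the job (the path assumes Debian's /usr/sbin/powertop):

# /etc/systemd/system/powertop.service
[Unit]
Description=Apply powertop auto-tune settings at boot

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target

Enable it once with systemctl enable --now powertop.service.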
UniFi routers make it incredibly easy to set up load-balancing/failover WANs; it's genuinely as easy as plugging your second ISP's router into your UniFi router and setting the port as 'failover WAN'. Your public IP will change, but UniFi also supports DDNS directly in the router, and they sell 5G failover products as well. God bless UniFi! -- I haven't quite got to the point where I want to lock myself into a second multi-year ISP contract; Virgin has been rock solid so far. Worth noting that most ISPs just use the same cable to your house: if you have Sky and BT, for example, they won't be redundant since they both use Openreach.
DDoS
OVH has basic protection (I believe it's only L4?). Worst case, the VPS goes down but the home network stays fine. The risk is L7 traffic getting forwarded to your backend... you need to ensure that as much L7 junk as possible is blocked on the VPS.
Potential mitigations
A Caddy WAF plugin and fail2ban provide some L7 protection.
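Even without a full WAF, Caddy can cheaply short-circuit the usual scanner noise at the VPS before it ever crosses the tunnel. A sketch (the paths are illustrative, not exhaustive):

app.example.com {
    @junk path /wp-login.php /wp-admin/* /xmlrpc.php /.env /.git/*
    respond @junk 403

    reverse_proxy 192.168.6.10:3000
}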
Jumpbox compromise
WireGuard only reaches the DMZ, so the house is safe, but every domain behind the proxy gets owned.
Potential mitigations
Standard hardening:
- UFW
- non-standard SSH port
- fail2ban
- pubkey-only auth
- keep your machine up to date
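A minimal sketch of the SSH and firewall parts of that list on the VPS (the port mirrors the setup above; adjust to taste):

# /etc/ssh/sshd_config.d/hardening.conf
Port 6968
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes

# UFW: default-deny inbound, allow only the moved SSH port and HTTP/HTTPS for Caddy
ufw default deny incoming
ufw default allow outgoing
ufw allow 6968/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable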
DIY CDN
The "fake CDN" actually works with two layers of caching:
Client caching: Browser caches static assets for 7 days
Server (edge) caching: Caddy caches responses at the VPS (this needs a cache plugin; stock Caddy doesn't cache), avoiding the hop to the home server entirely
This saves backend CPU/network, and for cached content eliminates the ~25ms WireGuard round-trip.
{
    order cache before rewrite
    cache {
        ttl 1h
        stale 24h
    }
}

(static_cache) {
    @static path /static/* /logo/*
    header @static Cache-Control "public, max-age=604800"
}

For multi-region, the same geo-DNS approach works:
User in EU → geo-dns → EU VPS (Caddy cache) → WireGuard → Home
User in US → geo-dns → US VPS (Caddy cache) → WireGuard → Home

~$4/month per node. Not Cloudflare's 300+ PoPs, but works.
The home server runs wg0.conf as the WireGuard server (with ListenPort and peer configs). The VPS runs as a client, with its Endpoint pointing to my home IP. If your home IP is dynamic, use DDNS or WireGuard's persistent keepalive feature.
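A minimal sketch of the two ends, with keys as placeholders; the tunnel addresses follow the diagram up top, and the port, hostnames, and subnets are examples rather than my actual values:

# Home side - /etc/wireguard/wg0.conf (the listener)
[Interface]
Address = 192.168.6.10/24
ListenPort = 51821                 # the one non-standard port open at home
PrivateKey = <home-private-key>

[Peer]                             # the VPS
PublicKey = <vps-public-key>
AllowedIPs = 192.168.6.1/32        # only accept the VPS's tunnel address from this peer

# VPS side - /etc/wireguard/wg0.conf (the client)
[Interface]
Address = 192.168.6.1/24
PrivateKey = <vps-private-key>

[Peer]                             # home
PublicKey = <home-public-key>
Endpoint = home.yourdomain.com:51821   # a DDNS name if the home IP is dynamic
AllowedIPs = 192.168.6.0/24            # only the DMZ/tunnel range routes through the tunnel
PersistentKeepalive = 25               # keeps the tunnel alive through NAT/idle periods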