"Off-Grid Operator #5: Tailscale as an Ops Strategy, Not a VPN"

tailscale networking security self-hosted off-grid devops

I don’t expose anything to the public internet. Not a single port. Not SSH, not a web UI, not an API endpoint. Nothing.

Every service I run — Mission Control, Open Brain, Home Assistant, Portainer, Invoice Ninja, n8n — is accessible only over Tailscale. If you’re not on my tailnet, those services don’t exist.

This isn’t paranoia. It’s the laziest, most effective security decision I’ve ever made.

What Tailscale actually is

Most people hear “Tailscale” and think VPN. It’s not — or at least, thinking of it that way undersells it. It’s a mesh network built on WireGuard. Every device gets a stable IP on a private subnet. Traffic goes directly between devices, peer-to-peer, encrypted end-to-end; when NAT traversal fails, it falls back to Tailscale’s DERP relays, which forward the still-encrypted packets without being able to read them. The coordination server only exchanges keys and metadata — it never routes your traffic.

What this means in practice: my laptop at the cabin, my phone on cellular, the town server 30 minutes away, and a VPS in Toronto all see each other like they’re on the same LAN. Sub-millisecond overhead on local connections. Maybe 50ms added for the cross-country hop. Good enough for everything I do.
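A quick way to see a node’s tailnet address, sketched defensively so it runs even on a machine that hasn’t joined a tailnet — `tailscale ip -4` is the real CLI call; the fallback address is a placeholder, not one of my machines:

```shell
# Resolve this node's Tailscale IPv4; fall back to a placeholder when the
# CLI isn't installed, so the snippet is safe to run anywhere.
TS_IP="$(tailscale ip -4 2>/dev/null || echo '100.64.0.1')"
echo "This node's tailnet address: ${TS_IP}"
```

On a joined node, `tailscale status` lists every peer at its stable 100.x address, and `tailscale ping <peer>` tells you whether the path is direct or relayed.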

The security model is “don’t be on the internet”

Traditional self-hosting security looks like this: expose port 443, put everything behind a reverse proxy, set up SSL certs, configure fail2ban, maybe add Cloudflare in front, hope you didn’t misconfigure something, pray nobody finds an auth bypass in your app before you patch it.

With Tailscale, the security model is: don’t expose ports. That’s it. No reverse proxy config. No SSL cert renewal (traffic is already encrypted). No fail2ban because there’s no public surface to fail against. No Cloudflare because there’s nothing to proxy.
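The “don’t expose ports” model mostly falls out of where you bind. A minimal sketch (the container, image, and fallback address are illustrative, not my actual setup): publish the port on the node’s Tailscale IP instead of 0.0.0.0, so even a machine with a public interface only answers on the tailnet. The command is printed rather than executed, so this is safe to dry-run:

```shell
# Bind the published port to the Tailscale IP, not all interfaces.
# Placeholder fallback keeps the sketch runnable without the tailscale CLI.
TS_IP="$(tailscale ip -4 2>/dev/null || echo '100.64.0.1')"
RUN_CMD="docker run -d --name n8n -p ${TS_IP}:5678:5678 docker.n8n.io/n8nio/n8n"
echo "$RUN_CMD"   # printed, not executed, in this sketch
```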

The attack surface goes from “everything on the internet” to “Tailscale’s auth + WireGuard crypto.” I’ll take those odds.

Three nodes, zero public IPs

My infrastructure sits across three machines:

Cabin server (CasaOS) — off-grid, on Starlink. Runs Home Assistant, Mosquitto, and some monitoring. Power comes from a battery bank charged by a generator. This machine going offline means the satellite went out or the battery died — both recoverable, neither urgent.

Town server — grid power, wired internet. Runs the heavy stuff: Mission Control (Phoenix/Elixir), Open Brain (Postgres + pgvector), Portainer managing ~15 containers, n8n automations, Invoice Ninja. This is the workhorse.

VPS (Toronto) — the only machine with a public IP, and even it doesn’t expose much. Hosts the public-facing website and a Dokploy instance for deploying web apps. Everything else on it talks inward over Tailscale.

All three machines see each other at stable 100.x.x.x addresses. I SSH between them like they’re in the same room. Docker containers on the town server call APIs on the cabin server without any port forwarding, tunneling, or config gymnastics.

Deploy scripts that don’t care where you are

Here’s the thing about zero public exposure: you can’t just git push and have a webhook fire. There’s no public URL for GitHub to hit.

My deploy flow looks like this: GitHub Actions builds the container image, pushes it to a registry. Then a cron job on the target machine (running over Tailscale, obviously) pulls the new image and restarts the service. Or I SSH in and do it manually — which, from any device on my tailnet, is one command.

Is this as slick as a Vercel deploy? No. But it’s simple, predictable, and I understand every step. When something breaks at 11 PM, I’m not debugging three layers of abstraction. I’m reading Docker logs on a machine I can reach from my phone.
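The pull-and-restart step can be sketched as a small script fired from cron. Everything here — image path, container name, schedule — is illustrative rather than my actual setup, and the docker commands are printed instead of executed so the sketch dry-runs anywhere:

```shell
# crontab entry (illustrative): */5 * * * * /usr/local/bin/redeploy.sh
redeploy() {
  local image="$1" name="$2"
  # A real run would execute these; the sketch prints them for a safe dry-run.
  printf '%s\n' \
    "docker pull ${image}" \
    "docker rm -f ${name}" \
    "docker run -d --name ${name} --restart unless-stopped ${image}"
}
redeploy "ghcr.io/example/mission-control:latest" "mission-control"
```

The cron job only ever pulls inward over Tailscale — nothing on the public internet can reach it, which is the whole point.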

MagicDNS makes it human

Tailscale gives you MagicDNS — human-readable names for every device. So instead of remembering 100.110.44.28, I type the machine name. Every script, every bookmark, every config file uses these names.

This sounds minor. It’s not. When you’re managing services across three physical locations, readable hostnames are the difference between ops that feel manageable and ops that feel like a spreadsheet of IP addresses.
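As a sketch of what this looks like in a config file: an SSH config keyed to MagicDNS names. The machine names and tailnet domain below are placeholders — your actual ts.net domain comes from the Tailscale admin console — and the fragment is written to a temp file so the sketch touches nothing real:

```shell
# Example ~/.ssh/config fragment using MagicDNS names instead of 100.x
# addresses; "town", "cabin", and "tailnet-name.ts.net" are placeholders.
CFG="$(mktemp)"
cat > "$CFG" <<'EOF'
Host town
    HostName town.tailnet-name.ts.net
    User ops
Host cabin
    HostName cabin.tailnet-name.ts.net
    User ops
EOF
grep -c '^Host ' "$CFG"   # prints 2
```

On tailnet devices the bare names already resolve, so plain `ssh town` works with no config at all; the fragment just pins the full names for scripts that need them.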

What you give up

Tailscale isn’t free (for more than a few devices), and the network dependency is real. If Tailscale’s coordination server goes down, existing connections stay up but new ones can’t establish. In two years, this has happened to me exactly zero times, but it’s worth knowing.

You also give up easy sharing. Want to show someone your self-hosted app? They need to be on your tailnet, or you need to use Tailscale Funnel (which does expose a public URL, defeating half the point). For services that genuinely need public access — like the consulting website — I use the VPS. Everything else stays private.
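For completeness, the two CLI commands on either side of that line, hedged because the flags have shifted between Tailscale releases — check `tailscale serve --help` on your version. The port is illustrative, and the sketch only prints the commands:

```shell
# `serve` keeps a local port tailnet-only behind Tailscale's TLS;
# `funnel` is the one that opens it to the public internet.
SERVE_CMD='tailscale serve --bg 3000'
FUNNEL_CMD='tailscale funnel --bg 3000'
printf '%s\n' "$SERVE_CMD" "$FUNNEL_CMD"
```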

And there’s the philosophical thing: you’re trusting Tailscale as a company. Your device keys, your network topology, your auth — they see it. I’m comfortable with this tradeoff because the alternative (managing my own WireGuard configs across a dozen devices) is a full-time job I don’t want.

The actual impact

Before Tailscale, I spent real hours on networking. Port forwarding on the cabin router. Dynamic DNS because the Starlink IP changes. Nginx configs. SSL certs expiring at 3 AM. Security anxiety every time I exposed a new service.

Now I spend zero hours on networking. I add a machine, it joins the tailnet, it can talk to everything. I deploy a new service, I give it a Tailscale address, done. The cognitive overhead went from “is this secure?” to “it’s not on the internet, so yes.”

For self-hosted infrastructure — especially spread across weird locations like an off-grid cabin and a rental basement — this is the single highest-leverage tool in the stack. More than Docker. More than any monitoring setup. The network layer that just works is what makes everything else possible.


This is how I build infrastructure for clients too — secure by default, accessible from anywhere, no public exposure unless absolutely necessary. If your stack is more complex than it needs to be, let’s fix that →

Thoughts from the Yukon

© 2026 Andrew Kalek