Off-Grid Operator #4: What $250 in Hardware Actually Runs

self-hosted infrastructure elixir hardware off-grid

People ask what kind of server I run. They expect rack-mounted something. Maybe a NAS with blinking lights.

It’s a $250 mini PC on a shelf in the basement. It runs everything.

The hardware

The “town server” is an Intel N100-based mini PC. 16GB RAM, 512GB NVMe. Draws about 10 watts at idle. It sits in a rental property I own in town, plugged into grid power and wired Ethernet. That’s the entire spec.

On it, right now:

  • Mission Control — Phoenix LiveView app (task board, news aggregator, content pipeline, agent dashboards)
  • Open Brain — Postgres + pgvector semantic memory for AI agents
  • Invoice Ninja — self-hosted invoicing
  • n8n — workflow automations (battery monitors, RSS feeds, webhook bridges)
  • Matrix/Synapse — messaging server
  • Immich — photo management
  • Portainer — container management
  • Dokploy — deployment platform
  • Plus a handful of smaller services

All containerized. All running simultaneously. The box barely breaks a sweat.

The math that matters

SaaS equivalents for what’s running on this box:

| Service | Self-hosted | SaaS equivalent | Monthly cost avoided |
|---|---|---|---|
| Mission Control | Custom Phoenix app | Notion + Zapier + custom dashboards | ~$50 |
| Open Brain | Postgres + pgvector | Pinecone or Weaviate Cloud | ~$70 |
| Invoice Ninja | Docker container | FreshBooks or QuickBooks | ~$30 |
| n8n | Docker container | Zapier Pro | ~$50 |
| Matrix | Synapse | Slack Pro | ~$8/user |
| Immich | Docker container | Google Photos (storage tier) | ~$10 |

Conservative estimate: $200-250/month in SaaS costs replaced by a one-time $250 hardware purchase plus about a dollar a month in electricity (10W continuous is roughly 7 kWh/month, which disappears into a residential bill).

The box pays for itself in five weeks.
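The payback claim is easy to check from the numbers in the table. A quick sketch, using the $200-250/month range from above:

```python
# Payback period for the mini PC, using the monthly SaaS savings
# estimated in the table above ($200-250/month).
HARDWARE_COST = 250  # one-time, USD

def payback_weeks(monthly_savings: float) -> float:
    """Weeks until cumulative SaaS savings cover the hardware cost."""
    months = HARDWARE_COST / monthly_savings
    return months * (365.25 / 12 / 7)  # average weeks per month

high = payback_weeks(250)  # optimistic end of the estimate
low = payback_weeks(200)   # conservative end

print(f"{high:.1f} to {low:.1f} weeks")  # prints "4.3 to 5.4 weeks"
```

Either way you slice the estimate, the box is paid off inside six weeks.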

Why this works (and when it doesn’t)

Three things make cheap hardware viable for production:

1. Elixir’s resource profile is absurd.

Mission Control — a full Phoenix LiveView app with real-time dashboards, WebSocket connections, background jobs, and database queries — runs in about 150MB of RAM. A comparable Rails app would want 512MB minimum. A Next.js app with similar functionality would want a gig.

OTP supervision means I don’t need external process managers. No systemd gymnastics. No PM2. The runtime handles restarts.

2. Postgres does more than people think.

Open Brain uses pgvector for semantic search across thousands of memory entries. On this hardware, a similarity search across the entire corpus takes ~15ms. I don’t need a dedicated vector database. I don’t need Elasticsearch. Postgres with the right extensions handles full-text search, vector similarity, and relational queries in one process.
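For a sense of what this looks like, here’s a minimal sketch of the pattern. The table and column names are hypothetical, not Open Brain’s actual schema, and the vector dimension depends on whatever embedding model you use:

```sql
-- Enable the extension once per database
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE memories (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(1536)  -- dimension must match the embedding model
);

-- An HNSW index keeps cosine-similarity search fast as the corpus grows
CREATE INDEX ON memories USING hnsw (embedding vector_cosine_ops);

-- Nearest-neighbour query: <=> is pgvector's cosine distance operator
SELECT id, content
FROM memories
ORDER BY embedding <=> $1  -- $1 = the query embedding
LIMIT 10;
```

One extension, one index, one query. That’s the whole “vector database.”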

3. Docker Compose is the right level of orchestration.

No Kubernetes. No Nomad. No Swarm. Just docker-compose.yml files managed through Dokploy. For a single-node deployment, anything more is cosplay.
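To make “the right level of orchestration” concrete, here’s roughly what one of these files looks like, using n8n as the example. The volume name and loopback-only port binding are illustrative choices, not a copy of my actual config:

```yaml
# Minimal single-service compose file in the spirit of this setup.
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped        # the Docker daemon handles restarts, not systemd
    ports:
      - "127.0.0.1:5678:5678"      # loopback only; Tailscale/proxy sits in front
    volumes:
      - n8n_data:/home/node/.n8n   # persist workflows across upgrades

volumes:
  n8n_data:
```

Ten lines of YAML per service. When a file this small describes the whole deployment, a scheduler has nothing to add.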

When it doesn’t work: CPU-bound tasks. Docker image builds take 5-8 minutes on the N100. AI inference is out of the question locally. Anything that needs burst compute gets pushed to the VPS or stays in the cloud. Know your hardware’s limits and design around them.

The real cost is maintenance

Hardware is cheap. Time is expensive. What makes self-hosting viable long-term isn’t the initial setup — it’s reducing the ongoing maintenance burden to near-zero.

My stack for this:

  • Dokploy handles deployments. Push to main, it builds and deploys. No SSH, no manual docker commands.
  • Portainer gives visual container management when something goes sideways.
  • n8n monitors the monitors — battery state-of-charge alerts, service health checks, RSS feeds.
  • Tailscale means zero public ports. Nothing is exposed to the internet except what explicitly needs to be (the VPS handles public traffic and proxies back through Tailscale).
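The “VPS proxies back through Tailscale” piece is simpler than it sounds. A hedged sketch of what the public-facing side might look like with nginx — the hostnames are placeholders, and “town-server” stands in for the machine’s Tailscale MagicDNS name:

```nginx
# On the public VPS: terminate TLS, then forward over the tailnet.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # Reachable only inside the tailnet; no port on the home box is public
        proxy_pass http://town-server:4000;
        proxy_set_header Host $host;
        # WebSocket upgrade headers so LiveView connections survive the hop
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

The basement box never appears in public DNS and never opens a port to the internet.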

Most weeks, I don’t touch the server at all. It just runs.

The uncomfortable truth about “scale”

This setup handles one user. Me. If Mission Control needed to serve 1,000 concurrent users, I’d need different hardware. If Open Brain needed to index millions of documents, I’d need more RAM.

But here’s the thing: most of what indie developers and small consultancies build doesn’t need to scale beyond a handful of users. The entire “what if it needs to scale” conversation is usually premature optimization disguised as engineering prudence.

Build for what you need. A $250 box running Elixir and Postgres handles more than most people think.

What I’d change

If I were starting over:

  • 32GB RAM instead of 16GB. RAM is the first thing you run out of with many containers. The upgrade costs maybe $30 more.
  • 1TB NVMe for Immich photo storage headroom.
  • A proper UPS. Grid power is reliable in town, but a $50 UPS buys peace of mind for graceful shutdowns.

That’s it. The N100 CPU is fine. The form factor is fine. The price point is fine.

Stop overthinking hardware. Start shipping.


If you want this working in your stack — not a tutorial, actual implementation — that’s what I do. Work with me →

Thoughts from the Yukon

© 2026 Andrew Kalek