I was drowning in operational overhead. Spending 2+ hours customizing each resume. Tracking meeting notes by hand. Monitoring my homelab infrastructure constantly.
So I built an automation platform that handles it: n8n orchestrating workflows, local AI models via Ollama, GPU-accelerated transcription, and document intelligence, all running on my K3s homelab.
Best part: $0/month in API costs. Everything runs on hardware I already own.
The Stack
Running on my K3s homelab (same cluster from my K3s post):
- n8n: Visual workflow builder connecting everything
- Ollama: Local AI models (Qwen2.5:32b, Llama3.1:8b)
- LightRAG: Knowledge graph database
- Scriberr: GPU-accelerated transcription
- Docling: PDF parser
Why self-hosted?
- Privacy: data stays local
- Cost: $0/month vs $500+ in subscriptions
- Speed: no API round-trip latency
- Control: customize everything
Real Workflows
Resume Tailoring: 2 Hours → 5 Minutes
Send job URL to Telegram bot → ScrapeNinja grabs posting → Ollama extracts requirements → LightRAG queries my work history → Ollama generates content → LaTeX builds PDF → Done.
Searches my entire work history automatically. Professional LaTeX formatting. Better quality than manual editing.
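The heart of that pipeline is the prompt n8n hands to Ollama after the extraction and LightRAG steps. Here's a minimal sketch of that merge step, assuming list-of-strings inputs; the function name and prompt wording are my illustration, not the exact workflow:

```python
def build_tailoring_prompt(requirements, history_snippets):
    """Merge extracted job requirements with relevant work-history
    snippets into one prompt for the local model.

    requirements: strings from the Ollama extraction step.
    history_snippets: strings returned by the LightRAG query.
    """
    req_block = "\n".join(f"- {r}" for r in requirements)
    exp_block = "\n".join(f"- {s}" for s in history_snippets)
    return (
        "Rewrite my resume bullets to target this job.\n\n"
        f"Job requirements:\n{req_block}\n\n"
        f"Relevant experience:\n{exp_block}\n\n"
        "Return LaTeX \\item lines only, one per requirement."
    )

prompt = build_tailoring_prompt(
    ["Kubernetes experience", "Python automation"],
    ["Ran a 3-node K3s homelab", "Built n8n + Ollama pipelines"],
)
```

Keeping this step as plain string assembly means the workflow stays debuggable: you can log the exact prompt before it ever hits the model.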
Meeting Intelligence
Drop recording → GPU transcription → Extract action items + decisions → Index in knowledge graph → Calendar reminders → Telegram summary.
Search all past meetings instantly. "What did we decide about auth 3 months ago?" Found in seconds.
Infrastructure Monitoring
Every 4 hours: Check cluster health → AI analysis → 🟢 HEALTHY / ⚠️ WARNING / 🔴 CRITICAL alert.
Proactive instead of reactive. No more 3am surprises.
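In the workflow the verdict comes from an AI-analysis node, but the traffic-light logic itself reduces to thresholds over cluster stats. A minimal sketch, with illustrative thresholds of my own (in practice the inputs come from kubectl):

```python
def classify_cluster_health(nodes_ready, nodes_total, pods_failed):
    """Map raw cluster stats to the three alert levels.

    Thresholds are illustrative; tune them to your cluster.
    """
    if nodes_ready < nodes_total or pods_failed > 5:
        return "🔴 CRITICAL"
    if pods_failed > 0:
        return "⚠️ WARNING"
    return "🟢 HEALTHY"

status = classify_cluster_health(nodes_ready=3, nodes_total=3, pods_failed=0)
```

Keeping the thresholds in one small function means the alerting behavior is testable independently of the n8n schedule that drives it.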
Deep Research
ScrapeNinja crawls web → Docling parses docs → LightRAG builds knowledge graph → Ollama generates insights → LaTeX creates reports.
Replaces my $40/month Perplexity subscription. Full control over extraction and formatting.
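Between Docling's parsed output and the knowledge graph sits a chunking step. LightRAG does its own chunking internally; this sketch just shows the idea with defaults I picked for illustration:

```python
def chunk_document(text, max_chars=800, overlap=100):
    """Split parsed document text into overlapping chunks for
    knowledge-graph ingestion. The overlap keeps entities that
    straddle a boundary visible in both neighboring chunks.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

chunks = chunk_document("x" * 2000, max_chars=800, overlap=100)
```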
The Real Win
Since deploying this:
- $500/month saved on API subscriptions and SaaS tools
- Hours reclaimed weekly from automated workflows
- Zero context switching between tools
- Complete privacy - data stays on my infrastructure
But the biggest change? I have time to think instead of just execute tasks.
Getting Started
Here's a simple Docker Compose stack to get you started:
# docker-compose.yml
version: '3.8'

services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      - N8N_AI_ENABLED=true
      - N8N_AI_PROVIDER_OLLAMA_BASEURL=http://ollama:11434
    volumes:
      - n8n_data:/home/node/.n8n

  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama

  lightrag:
    image: hkuds/lightrag:latest
    ports:
      - "9621:9621"
    volumes:
      - lightrag_data:/app/data
    environment:
      - LLM_PROVIDER=ollama
      - LLM_BASE_URL=http://ollama:11434

  scriberr:
    image: ghcr.io/aidan-mundy/scriberr:latest
    ports:
      - "3000:3000"
    volumes:
      - scriberr_data:/app/data
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  docling:
    image: ds4sd/docling:latest
    ports:
      - "5000:5000"
    volumes:
      - docling_data:/app/data

volumes:
  n8n_data:
  ollama_data:
  lightrag_data:
  scriberr_data:
  docling_data:
This gives you the foundation. Add workflows, connect services, automate incrementally.
Want This?
The Docker Compose above is the foundation. But most people get stuck going from "here's a config file" to "this actually saves me hours."
I help people who want to self-host but don't know where to start. Not outsourcing to a VA. Not another SaaS subscription. Building it yourself, on your hardware.
Email me at me@jquaintance.com for an operational audit. I'll find where you're losing time and help you automate it.
Self-hosted. Private. No monthly fees. You own it. I help you build it.
References
- n8n - Workflow automation
- Ollama - Local AI models
- LightRAG - Knowledge graph database
- Scriberr - GPU-accelerated transcription
- Docling - PDF parser
Photo by Simon Kadula on Unsplash
Content on this blog was created using human and AI-assisted workflows described here. Original ideas and editorial decisions by Justin Quaintance.