
sudo make homelab


This site names the lab as a core asset: K3s, Kali, observability stacks, and security hardening. That positioning is accurate. Eventually. Right now the lab is five Lenovo laptops of varying vintage, two Raspberry Pi 5s, and a mesh router. Most of the nodes have an operating system on them and nothing else; no lab-specific configuration has been applied. K3s is planned, not running. The security stack is a separate project, not yet started.

This post documents the lab as it stands today, not the target state and not the roadmap: the hardware that is here, what each node is currently running, and what role it is assigned to fill. The gap between what is running and what is assigned is the most useful information this post can offer.


The inventory

homelab/
├── pi-cluster/
│   ├── pi5-01    8GB RAM · M.2 HAT · 13 TOPS       currently: Ubuntu
│   │             role: AI inference
│   │
│   └── pi5-02    16GB RAM · SD                     currently: not deployed (boxed)
│                 role: K3s control plane
│
├── thinkbook/
│   └── tb-01     40GB RAM · NVMe · i5 11th Gen     currently: Ubuntu 24.04 + KVM
│                 role: dev node · local models · K8s workshop
│
├── thinkpads/
│   ├── tp-01     T430 · 16GB RAM · SSD             currently: Kali
│   │             role: K3s worker
│   │
│   ├── tp-02     X1 Carbon 4th Gen · 8GB RAM · SSD currently: Fedora
│   │             role: DevOps pipeline node (registry + CI runner)
│   │
│   ├── tp-03     T430 · 16GB RAM · SSD             currently: Ubuntu server
│   │             role: K3s worker
│   │
│   └── tp-04     X1 Carbon 9th Gen · 8GB RAM · SSD currently: Win11 + WSL
│                 role: Win11 + WSL · no platform role
│
├── network/
│   ├── router-p    Velo primary · basement · wired star hub
│   ├── router-s1   Velo satellite · top floor
│   └── eth-switch  5-port · basement
│
└── remote-access/
    └── tailscale on every node

Node by node

tb-01 — ThinkBook 14 G2, 40GB RAM, i5 11th Gen

The anchor machine and only modern-class node in the lab — the primary development surface, local model host, and home for the KVM workshop environment.

pi5-01 — Raspberry Pi 5, 8GB RAM, M.2 HAT

The dedicated AI inference node, assigned to run Ollama and InstructLab on the M.2 HAT’s 13 TOPS accelerator — currently running Ubuntu with no inference workloads deployed yet.

pi5-02 — Raspberry Pi 5, 16GB RAM

Still in the box, assigned as the K3s control plane once the cluster is stood up.

tp-01 — ThinkPad T430, 16GB RAM

Currently running Kali server, assigned as a K3s worker once the cluster is stood up.

tp-02 — ThinkPad X1 Carbon 4th Gen, 8GB RAM

The DevOps pipeline node — assigned to run a local container registry and self-hosted CI runner, currently running Fedora with nothing installed toward that role yet.

tp-03 — ThinkPad T430, 16GB RAM

Currently running Ubuntu server, assigned as the second K3s worker alongside tp-01.

tp-04 — ThinkPad X1 Carbon 9th Gen, 8GB RAM

Win11 with WSL, kept as a Windows node for workflows that require it.


The network

Diagram: homelab physical connectivity. Router-P and the ethernet switch in the basement, Router-S1 on the top floor, wired links to every node except tp-04, which joins over WiFi.
Node      Hardware                             OS today
tb-01     ThinkBook G2 · 40GB RAM · i5 11th    Ubuntu 24.04
pi5-01    Pi 5 · 8GB RAM · M.2 HAT             Ubuntu
pi5-02    Pi 5 · 16GB RAM · SD                 None · boxed
tp-01     T430 · 16GB RAM · SSD · ext. WiFi    Kali
tp-02     X1 Carbon 4th Gen · 8GB RAM · SSD    Fedora
tp-03     T430 · 16GB RAM · SSD                Ubuntu
tp-04     X1 Carbon 9th Gen · 8GB RAM · SSD    Win11 + WSL

The network is a wired star centered on Router-P in the basement. Router-P connects directly to a 5-port ethernet switch, which carries tp-01, tp-02, tp-03, and pi5-02 over ethernet. Router-S1 on the top floor has its own wired drops to pi5-01 and tb-01. tp-04 is the one exception, connecting to Router-S1 over WiFi. Three satellite routers extend coverage across the floors; Router-S1 is the only one with lab nodes attached. All backhaul between routers is physical ethernet. Tailscale runs on every node and provides the remote access layer, reachable from anywhere.
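Joining a node to that overlay is deliberately boring. The sketch below is the generic Tailscale bring-up from their install docs rather than a log of what ran on each machine here; the --ssh flag is optional and assumes Tailscale SSH is wanted on the node.

# Official convenience installer; covers the Ubuntu, Fedora, and Kali nodes
$ curl -fsSL https://tailscale.com/install.sh | sh

# Authenticate and join the tailnet; --ssh also lets other tailnet devices SSH in over Tailscale
$ sudo tailscale up --ssh

# Sanity check: every other lab node should show up as a peer
$ tailscale status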


The platform

Diagram: logical platform layout. The K3s cluster (pi5-02 control plane, tp-01 and tp-03 workers), the DevOps pipeline (tb-01 builds and pushes, tp-02 stores and serves images, the cluster pulls and deploys), the KVM K8s workshop and Ollama on tb-01, standalone pi5-01 and tp-04, all inside the Tailscale overlay.
Node      Platform role
pi5-02    K3s control plane · 16GB RAM
tp-01     K3s worker · 16GB RAM
tp-03     K3s worker · 16GB RAM
tp-02     Registry · CI runner · 8GB RAM
tb-01     Dev node · KVM · Ollama · 40GB RAM
pi5-01    AI inference · M.2 HAT · 8GB RAM
tp-04     Win11 + WSL · 8GB RAM

The lab’s platform architecture has three layers. The compute layer is a K3s cluster: pi5-02 as control plane, tp-01 and tp-03 as workers. The DevOps layer is tp-02, running a local container registry and CI runner. The inference layer is pi5-01, which becomes the dedicated AI inference node once Ollama and InstructLab are deployed; Ollama already runs on tb-01 for the larger models that need more headroom.
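Ollama on tb-01 is the only part of the inference layer that exists today. For context, the standard setup looks like the sketch below; the model tag is illustrative, not a record of what is actually loaded on tb-01.

# Official Ollama installer; sets up the ollama service on Ubuntu
$ curl -fsSL https://ollama.com/install.sh | sh

# Pull and run a model; the tag is an example, sized for a CPU-only host with 40GB RAM
$ ollama pull llama3.1:8b
$ ollama run llama3.1:8b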

I chose K3s as the cluster distribution rather than a full kubeadm-based deployment for a concrete reason: this hardware does not have the headroom for a full installation, and K3s is Kubernetes anyway, with the same API, the same workloads, and the same kubectl. The distinction matters on a CV and in a vendor conversation, but it does not produce a different skill set. A separate K8s workshop runs in three KVM virtual machines on tb-01, alongside an Arch Linux VM, for an ongoing Kubernetes and DevOps training course. Those are temporary environments, not permanent lab infrastructure.
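For the record, the install itself is the easy part. This is a minimal sketch of the planned cluster bring-up using the stock K3s install script; the hostname resolution and the token are placeholders, not values from the lab.

# On pi5-02: install the K3s server (control plane)
$ curl -sfL https://get.k3s.io | sh -

# Read the join token the server generated
$ sudo cat /var/lib/rancher/k3s/server/node-token

# On tp-01 and tp-03: join as workers, pointing at the control plane
$ curl -sfL https://get.k3s.io | K3S_URL=https://pi5-02:6443 K3S_TOKEN=<token> sh -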

The decision to put the control plane on pi5-02 rather than a ThinkPad comes down to uptime and power. A Pi draws a fraction of what a laptop draws, and it has no battery to manage and no lid that can be accidentally closed. The T430s are the workers and handle the load; the Pi handles the coordination.
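One wrinkle: a K3s server schedules ordinary pods by default, so reserving the Pi for coordination takes an explicit taint. Something like the standard control-plane taint would do; this is part of the plan, not something already applied.

# After the server install sketched above: keep regular workloads off pi5-02
$ kubectl taint nodes pi5-02 node-role.kubernetes.io/control-plane:NoSchedule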

tp-02 as a dedicated DevOps node follows a pattern that appears in real enterprise environments. Separating the registry and CI runner from the cluster means the cluster does not pull images from Docker Hub during a deployment. The entire pipeline runs inside the lab: write code on tb-01, commit, trigger the CI runner on tp-02, build and push to the local registry on tp-02, K3s pulls from tp-02. No external dependencies, no rate limits, no internet required for a deploy cycle.
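Nothing on tp-02 is installed yet, so the following is a sketch of the intended shape rather than the working pipeline: a plain registry:2 container on tp-02 and the K3s mirror entry that points the cluster at it. The image name, hostname, and port are placeholders, and an unauthenticated HTTP registry like this is acceptable only inside the lab.

# On tp-02: run a local container registry (unauthenticated, lab-only)
$ docker run -d --name registry --restart unless-stopped -p 5000:5000 registry:2

# On tb-01: build, tag against the lab registry, push
# (pushing over plain HTTP needs tp-02:5000 listed under "insecure-registries" in /etc/docker/daemon.json)
$ docker build -t tp-02:5000/demo-app:dev .
$ docker push tp-02:5000/demo-app:dev

# On each K3s node: point containerd at the lab registry via /etc/rancher/k3s/registries.yaml
#   mirrors:
#     "tp-02:5000":
#       endpoint:
#         - "http://tp-02:5000"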


Today vs planned

Node      Today                                  Planned role
tb-01     Ubuntu 24.04 · KVM · Ollama            Dev node · K8s workshop · local models
pi5-01    Ubuntu · inference not yet deployed    AI inference · Ollama · InstructLab · RAG
pi5-02    Boxed                                  K3s control plane
tp-01     Kali server                            K3s worker
tp-02     Fedora                                 Registry · CI runner
tp-03     Ubuntu server                          K3s worker
tp-04     Win11 + WSL                            Win11 + WSL · no role change

Nothing in the planned column has a date attached to it. The next post in this series documents the K3s install — that is the next concrete step.


Why this lab exists

The three throughlines on this site are open-source first, Gen AI as a first-class tenant, and vibe coding as the working method. The lab is where those three things collide with actual hardware. Local AI inference on consumer silicon. Kubernetes on twelve-year-old ThinkPads. A DevOps pipeline that does not touch a cloud provider. None of it is production-grade. All of it is real.


What comes next

The follow-up posts have names. sudo make homelab secure covers the security stack: WireGuard, SSH hardening, NetBird, Ansible, and OpenTofu. sudo make homelab ai goes deep on the inference layer: Ollama model selection, InstructLab fine-tuning, and RAG on local data. sudo make homelab observe covers the observability stack: Prometheus, Grafana, and AIOps experiments. Each of those posts links back to this one. This is the foundation.