Home Server Setup and Cloud Lab - Project Documentation

1. Introduction

This document provides a comprehensive technical overview of my home server and Kubernetes lab project. The project repurposes an old laptop into a Proxmox-based virtualization host for self-hosted services (Nextcloud, Immich, Home Assistant, Plex, etc.) while also serving as a cloud-native learning lab with a Kubernetes (K3s) cluster. The setup focuses on isolation, automation, backup strategies, and DevOps best practices.


2. Hardware and Environment

  • Base Machine: Old laptop with:

    • CPU: 8-core (Intel i7 mobile processor)

    • RAM: 16 GB DDR4

    • Storage: 2x NVMe SSDs (256 GB + 512 GB), external SSD (1 TB), external HDD (1 TB)

  • Power Considerations: UPS-backed power supply to reduce downtime.

  • Hypervisor: Proxmox VE 8.x for virtualization.

  • Network Setup: Gigabit LAN + Wi-Fi with VLAN segregation for VMs.


3. Proxmox Configuration

Installation Steps

  1. Installed Proxmox VE on the 512 GB NVMe drive.

  2. Created a mirrored ZFS pool from the 256 GB NVMe and the 1 TB external SSD for redundancy (a mirror's usable capacity is limited by the smaller device).

  3. Configured Proxmox storage:

    • local-lvm → LVM-thin storage on the boot drive for VM disks.

    • zfs-pool → for persistent data and backups.

  4. Enabled Proxmox Web UI for management.
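
The pool creation and registration steps above might look like this in practice (pool name tank and device paths are assumptions; verify devices with lsblk first):

```shell
# Create a mirrored pool from the 256 GB NVMe and the external SSD
# (hypothetical device paths)
zpool create -o ashift=12 tank mirror /dev/nvme1n1 /dev/sda

# Register the pool with Proxmox for VM disks and container volumes
pvesm add zfspool zfs-pool --pool tank --content images,rootdir
```

Note that the zfspool storage type holds VM and container disks; for vzdump backups, a dataset mounted as a directory storage is typically added instead.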

Resource Allocation Strategy

  • 2 cores + 2 GB RAM → lightweight services (e.g., NGINX Proxy Manager).

  • 4 cores + 4 GB RAM → heavier services (e.g., Nextcloud, Plex).

  • Kubernetes nodes: 1–2 cores and 2 GB RAM each.
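
On Proxmox, these tiers translate to qm set invocations such as (VM IDs are placeholders):

```shell
# Lightweight service VM (e.g., NGINX Proxy Manager)
qm set 101 --cores 2 --memory 2048

# Heavier service VM (e.g., Nextcloud)
qm set 102 --cores 4 --memory 4096
```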


4. Virtual Machines and Services

Each service was isolated in its own VM or LXC container for modularity.

  • Nextcloud → Private file sync and sharing platform.

  • Immich → AI-driven photo & video backup/management.

  • Home Assistant → Smart home automation hub.

  • NGINX Proxy Manager → Reverse proxy with Let’s Encrypt SSL.

  • Authentik → Identity provider and single sign-on (SSO).

  • Media Stack VM → Plex + qBittorrent + Sonarr + Radarr + Lidarr + Bazarr + Prowlarr.

  • Storage LXC → Centralized storage accessible via NFS/SMB.
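
For the storage LXC, an NFS export along these lines would make the pool available to the other VMs (paths and subnet are assumptions):

```
# /etc/exports (excerpt) inside the storage LXC
/tank/media   192.168.1.0/24(rw,sync,no_subtree_check)
/tank/shared  192.168.1.0/24(rw,sync,no_subtree_check)
```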


5. Networking and Security

Networking

  • Configured VLANs inside Proxmox to segment traffic.

  • Dedicated network bridges for:

    • LAN traffic

    • Storage traffic

    • Kubernetes cluster network
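
One way to express this in the host's /etc/network/interfaces is a single VLAN-aware bridge (interface name, address, and VLAN range are assumptions):

```
# /etc/network/interfaces (excerpt): VLAN-aware bridge on the Proxmox host
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-100
```

VM NICs then attach to vmbr0 with a VLAN tag per segment (LAN, storage, cluster).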

Security

  • Tailscale VPN → Private mesh network for secure remote access.

  • Reverse Proxy → NGINX Proxy Manager for SSL termination.

  • Firewall → Proxmox firewall rules + UFW on VMs.

  • Access Control → Authentik for unified authentication across services.
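
As a sketch, the Tailscale and per-VM firewall pieces reduce to a few commands (the UFW rules shown are examples, not the full ruleset):

```shell
# Join the tailnet (optionally exposing SSH over Tailscale)
tailscale up --ssh

# Baseline UFW policy on a service VM: deny inbound, allow LAN HTTPS
ufw default deny incoming
ufw allow from 192.168.1.0/24 to any port 443 proto tcp
ufw enable
```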


6. Kubernetes (K3s Cluster)

Setup

  • 6-node K3s cluster on Proxmox:

    • 3x Control plane VMs (2 GB RAM, 1 vCPU each).

    • 3x Worker VMs (2 GB RAM, 1 vCPU each).

  • Installed via K3s installer script with embedded etcd.

  • Ingress and load balancing handled by the Traefik ingress controller bundled with K3s.
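
The installer-script setup above roughly follows the standard K3s HA pattern (IPs and tokens are placeholders; the join token lives at /var/lib/rancher/k3s/server/node-token on the first node):

```shell
# First control-plane node: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Additional control-plane nodes join the first
curl -sfL https://get.k3s.io | sh -s - server \
    --server https://<first-cp-ip>:6443 --token <token>

# Worker nodes join as agents
curl -sfL https://get.k3s.io | sh -s - agent \
    --server https://<first-cp-ip>:6443 --token <token>
```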

Tools

  • Helm → Application deployment.

  • Terraform → Infrastructure provisioning.

  • kubectl + Lens → Cluster management.
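
A typical Helm deployment against the cluster looks like this (the Bitnami chart is just an example workload, not part of the original setup):

```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install demo-nginx bitnami/nginx --namespace demo --create-namespace
```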

Workloads

  • Deployed demo apps (Nginx, sample microservices).

  • Tested scaling, rolling updates, and storage persistence.

  • Integrated GitHub Actions CI/CD pipelines with the cluster.
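
The scaling and rolling-update tests can be reproduced with plain kubectl (image tags are examples):

```shell
# Deploy a demo nginx app, scale it out, then roll out a new image
kubectl create deployment web --image=nginx:1.25
kubectl scale deployment web --replicas=3
kubectl set image deployment/web nginx=nginx:1.27
kubectl rollout status deployment/web
```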


7. Backup & Disaster Recovery

Local Backup

  • ZFS snapshots configured daily.

  • Mirrored ZFS pool across the SSDs for redundancy.
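
The daily snapshots reduce to a recursive zfs snapshot driven by cron (pool name and schedule are assumptions; note the escaped % required inside crontabs):

```shell
# Recursive, date-tagged snapshot of the data pool
zfs snapshot -r tank@daily-$(date +%F)

# crontab entry: run nightly at 02:00
# 0 2 * * * /usr/sbin/zfs snapshot -r tank@daily-$(date +\%F)
```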

Cloud Backup

  • Automated sync jobs to AWS S3 using rclone.

  • Lifecycle rules in S3 for cost optimization (transition to Glacier).
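
The rclone job might be a one-way sync along these lines (remote name, bucket, and flags are assumptions):

```shell
# Mirror the backup dataset to S3; --checksum compares hashes instead of
# timestamps to decide what needs re-uploading
rclone sync /tank/backups s3-remote:homelab-backups \
    --transfers 8 --checksum --log-file /var/log/rclone-backup.log
```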

Automation

  • Python + Bash scripts triggered via cron:

    • Backup rotation.

    • Alerts for failed jobs.
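
A minimal, runnable sketch of the rotation step in Bash (paths and retention count are assumptions; a temporary directory stands in for the real backup dataset):

```shell
# Backup rotation sketch: keep the KEEP newest archives, delete the rest.
set -eu
BACKUP_DIR="$(mktemp -d)"   # demo dir; in production this is the backup dataset
KEEP=7

# Create ten dummy daily archives to demonstrate the pruning
for i in $(seq -w 1 10); do
    touch "$BACKUP_DIR/backup-2024-01-$i.tar.zst"
done

# ISO-dated names sort chronologically, so newest-first is a reverse sort;
# everything past the first KEEP entries gets deleted.
ls -1 "$BACKUP_DIR" | sort -r | tail -n +$((KEEP + 1)) | while read -r f; do
    rm -- "$BACKUP_DIR/$f"
done

echo "$(ls -1 "$BACKUP_DIR" | wc -l) archives kept"
```

The alerting half can wrap this in a check that mails or pings a webhook when the script exits non-zero.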


8. Monitoring and Observability

  • Proxmox Metrics → Resource usage monitoring.

  • Prometheus → Cluster and service metrics collection.

  • Grafana → Dashboards for visualization.

  • Alertmanager → Notifications on system anomalies.
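
A Prometheus scrape configuration for this stack could start like the following (targets and the exporter ports are assumptions):

```yaml
# prometheus.yml (excerpt)
scrape_configs:
  - job_name: proxmox
    static_configs:
      - targets: ['192.168.1.10:9221']   # prometheus-pve-exporter (hypothetical host)
  - job_name: node
    static_configs:
      - targets: ['192.168.1.11:9100']   # node_exporter on a VM
```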


9. Lessons Learned

  1. Splitting services into isolated VMs improved reliability.

  2. Reverse proxy + VPN was crucial for secure external access.

  3. Kubernetes at home provides hands-on exposure to real-world cluster ops.

  4. Backup and restore testing is as important as configuring backups.


10. Future Enhancements

  • Expand cluster using mini PCs for better HA.

  • Implement Ceph distributed storage for fault tolerance.

  • Add GitOps workflows with ArgoCD/Flux.

  • Integrate centralized logging with ELK/EFK stack.

  • Explore service mesh (Istio or Linkerd) for advanced networking.


11. Conclusion

This project demonstrates how to turn commodity hardware into a robust self-hosted ecosystem with real-world DevOps and cloud-native practices. By combining Proxmox virtualization, Kubernetes orchestration, and modern automation tools, I built a system that is both practical for personal use and valuable as a learning platform.

