Kubernetes Home Lab
A comprehensive Kubernetes deployment for media server applications and CI/CD infrastructure, designed for home lab environments.
Overview
This repository contains Kubernetes manifests for deploying:
- Media Server Stack: Complete *arr suite (Radarr, Sonarr, Prowlarr, etc.) with Overseerr for request management
- CI/CD Infrastructure: Forgejo self-hosted runners with full development toolchain
- Supporting Infrastructure: MetalLB load balancer, NFS storage, and Kubernetes dashboard
Architecture
Media Applications (arr/ directory)
- Radarr - Movie collection management (port 7878)
- Sonarr - TV series collection management (port 8989)
- Prowlarr - Indexer management and proxy
- SABnzbd - Usenet downloader
- Overseerr - Media request management (port 5055)
- Tautulli - Plex monitoring and analytics
Infrastructure Components
- MetalLB: Load balancer with IP pool 192.168.15.200-192.168.15.210 (see the sketch after this list)
- NFS Storage: External server at 192.168.12.16 for media files
- Longhorn: Block storage for application configurations
- Kubernetes Dashboard: Web UI with cluster-admin access
- Forgejo Workers: Self-hosted CI/CD runners for git.deco.sh
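As a rough sketch, the MetalLB pool above corresponds to an IPAddressPool plus L2Advertisement pair like the following (metallb-config.yaml is the authoritative version; the resource names here are illustrative):
# Apply an illustrative MetalLB Layer 2 configuration
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool          # illustrative name
  namespace: metallb-system
spec:
  addresses:
  - 192.168.15.200-192.168.15.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2            # illustrative name
  namespace: metallb-system
spec:
  ipAddressPools:
  - homelab-pool
EOF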
Prerequisites
Before deploying, ensure you have:
- Kubernetes cluster with kubectl configured
- NGINX Ingress Controller installed
- Longhorn storage system deployed
- NFS server accessible at 192.168.12.16 with media shares
- Local DNS configured to resolve *.local domains
- MetalLB-compatible network environment
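A quick sanity check of these prerequisites (the namespaces assume default ingress-nginx and Longhorn installs):
# Cluster reachable and nodes Ready
kubectl get nodes
# NGINX Ingress Controller running
kubectl get pods -n ingress-nginx
# Longhorn storage system running
kubectl get pods -n longhorn-system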
Quick Start
1. Deploy Infrastructure
# Deploy storage classes
kubectl apply -f nfs-storage-classes.yaml
# Deploy MetalLB load balancer
kubectl apply -f metallb-config.yaml
# Deploy Kubernetes dashboard
kubectl apply -f k8s-dashboard-ingress.yaml
kubectl apply -f dash-admin.yaml
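The dashboard should then be reachable at http://k8s-dashboard.local. Logging in requires a bearer token; assuming dash-admin.yaml creates a cluster-admin ServiceAccount (the name and namespace below are guesses - check the manifest):
# Issue a short-lived token for the admin ServiceAccount (kubectl >= 1.24)
kubectl -n kubernetes-dashboard create token dashboard-admin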
2. Deploy Media Applications
# Create media namespace
kubectl create namespace media
# Deploy all media applications
kubectl apply -f arr/
# Check deployment status
kubectl get pods -n media
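To block until the whole stack is up instead of polling:
# Wait for every deployment in the namespace to become Available
kubectl wait --for=condition=Available deployment --all -n media --timeout=300s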
3. Deploy CI/CD Workers (Optional)
# Create namespace and deploy workers
kubectl create namespace forgejo-workers
# Deploy worker configuration
kubectl apply -f forgejo-worker-config.yaml
# Deploy workers (choose one variant)
kubectl apply -f forgejo-workers-volume5.yaml
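Then confirm the rollout finished (the DaemonSet name matches the one used in the management commands below):
# Wait for the worker DaemonSet to be ready on every node
kubectl rollout status daemonset/forgejo-worker-full -n forgejo-workers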
Configuration Details
Storage Configuration
- Config Storage: Longhorn PVCs for each application
- Media Storage: NFS mounts from 192.168.12.16 (sketched below with a static PV/PVC pair):
  - /Volume2/media → /media
  - /Volume2/tv → /tv
  - /Volume2/movies → /movies
  - /Volume2/downloads → /downloads
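nfs-storage-classes.yaml defines the real storage classes; as a sketch of the same idea, a static PersistentVolume/Claim pair against one of the shares looks roughly like this (names and sizes are illustrative):
# Apply an illustrative static NFS PV/PVC pair
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-nfs             # illustrative name
spec:
  capacity:
    storage: 1Ti              # required field; NFS does not enforce it
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.12.16
    path: /Volume2/media
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-nfs
  namespace: media
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""        # bind to the static PV above, not a dynamic class
  volumeName: media-nfs
  resources:
    requests:
      storage: 1Ti
EOF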
Network Access
All media applications are accessible via ingress at:
http://radarr.local
http://sonarr.local
http://prowlarr.local
http://overseerr.local
http://sabnzbd.local
http://tautulli.local
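These hostnames rely on the local DNS prerequisite. Without it, /etc/hosts entries pointing at the ingress controller's LoadBalancer IP work just as well (the service name below assumes a default ingress-nginx install):
# Find the ingress controller's external IP (assigned by MetalLB)
kubectl get svc -n ingress-nginx ingress-nginx-controller
# Then map the hostnames to that IP, e.g. in /etc/hosts:
# 192.168.15.200  radarr.local sonarr.local prowlarr.local overseerr.local sabnzbd.local tautulli.local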
Forgejo Workers
Multiple worker configurations available:
- forgejo-workers-nodejs-simple.yaml - Basic Node.js worker
- forgejo-workers-volume5.yaml - Full-featured worker with NFS
- forgejo-workers-volume5-fixed.yaml - Stable version
- forgejo-workers-volume5-symlinks.yaml - Enhanced with symlinks
Workers include: Node.js, Go, Hugo, AWS CLI, Python, Git, yarn, jq
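A quick way to spot-check that toolchain inside a running worker (the pod name is a placeholder; assumes the tools are on PATH in the image):
# Verify the CI toolchain in a worker pod
kubectl exec -n forgejo-workers <worker-pod> -- sh -c 'node --version; go version; hugo version; aws --version; python3 --version; git --version; yarn --version; jq --version'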
Management Commands
Check Application Status
# View all deployments
kubectl get deployments -n media
# Check pod logs
kubectl logs -f deployment/radarr -n media
# View services and ingress
kubectl get svc,ingress -n media
Storage Management
# Test NFS connectivity
kubectl apply -f nfs-test.yaml
# Check PVC status
kubectl get pvc -n media
# View storage classes
kubectl get storageclass
Worker Management
# Check worker status
kubectl get pods -n forgejo-workers
# View worker logs
kubectl logs -f daemonset/forgejo-worker-full -n forgejo-workers
# Control worker count (DaemonSets run one pod per matching node; kubectl scale
# does not apply to them - use node labels instead, assuming the DaemonSet spec
# carries a matching nodeSelector)
kubectl label node <node-name> forgejo-worker=enabled
Troubleshooting
Common Issues
Pods stuck in Pending state:
kubectl describe pod <pod-name> -n <namespace>
# Check for resource constraints or storage issues
NFS mount failures:
# Test NFS connectivity
kubectl apply -f nfs-test.yaml
kubectl logs nfs-test-pod
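If the test pod itself will not schedule, a throwaway pod that mounts the share directly narrows the problem down to the export (image and path below are illustrative):
# Apply a minimal NFS debug pod
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nfs-debug
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "ls /mnt && sleep 3600"]
    volumeMounts:
    - name: media
      mountPath: /mnt
  volumes:
  - name: media
    nfs:
      server: 192.168.12.16
      path: /Volume2/media
EOF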
Ingress not working:
# Check ingress controller
kubectl get pods -n ingress-nginx
# Verify ingress resources
kubectl get ingress -A
Logs and Monitoring
# Application logs
kubectl logs -f deployment/<app-name> -n media
# System events
kubectl get events -n media --sort-by='.lastTimestamp'
# Resource usage (requires metrics-server)
kubectl top pods -n media
Security Considerations
- All containers run with UID/GID 1000 (see the securityContext sketch after this list)
- Forgejo workers require privileged containers for full CI/CD functionality
- Dashboard has cluster-admin privileges - secure access appropriately
- NFS shares should have appropriate network restrictions
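The UID/GID convention above corresponds to a pod-level securityContext along these lines in each manifest (a sketch, not lifted from the repo):
# Pod securityContext enforcing the UID/GID 1000 convention
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000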
Contributing
- Test changes in a development environment
- Update documentation for any configuration changes
- Follow existing naming conventions and resource patterns
- Ensure all deployments include proper resource limits
License
This configuration is provided as-is for home lab use. Modify according to your specific requirements and security policies.