Kubernetes Home Lab

A comprehensive Kubernetes deployment for media server applications and CI/CD infrastructure, designed for home lab environments.

Overview

This repository contains Kubernetes manifests for deploying:

  • Media Server Stack: Complete *arr suite (Radarr, Sonarr, Prowlarr, etc.) with Overseerr for request management
  • CI/CD Infrastructure: Forgejo self-hosted runners with full development toolchain
  • Supporting Infrastructure: MetalLB load balancer, NFS storage, and Kubernetes dashboard

Architecture

Media Applications (arr/ directory)

  • Radarr - Movie collection management (port 7878)
  • Sonarr - TV series collection management (port 8989)
  • Prowlarr - Indexer management and proxy
  • SABnzbd - Usenet downloader
  • Overseerr - Media request management (port 5055)
  • Tautulli - Plex monitoring and analytics
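
Before the ingress hostnames listed later are set up, any of these applications can be reached with a port-forward. A minimal example, assuming the Service is named radarr (check kubectl get svc -n media for the actual names):

# Forward Radarr's web UI to localhost
kubectl port-forward -n media svc/radarr 7878:7878
# Then browse to http://localhost:7878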

Infrastructure Components

  • MetalLB: Load balancer with IP pool 192.168.15.200-192.168.15.210 (see the sketch after this list)
  • NFS Storage: External server at 192.168.12.16 for media files
  • Longhorn: Block storage for application configurations
  • Kubernetes Dashboard: Web UI with cluster-admin access
  • Forgejo Workers: Self-hosted CI/CD runners for git.deco.sh
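
On MetalLB 0.13+ the address pool above is declared with an IPAddressPool plus an L2Advertisement, roughly as sketched below; the authoritative definition is metallb-config.yaml, and the resource names here are illustrative only.

# Illustrative sketch - see metallb-config.yaml for the real configuration
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: homelab-pool          # assumed name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.15.200-192.168.15.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: homelab-l2            # assumed name
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool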

Prerequisites

Before deploying, ensure you have:

  1. Kubernetes cluster with kubectl configured
  2. NGINX Ingress Controller installed
  3. Longhorn storage system deployed
  4. NFS server accessible at 192.168.12.16 with media shares
  5. Local DNS configured to resolve the *.local hostnames used below (see the example after this list)
  6. Network environment compatible with MetalLB (an unused IP range reserved for the address pool)
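
For the *.local hostnames (item 5), one simple option is static host entries pointing at the NGINX Ingress controller's external IP; the address below is a placeholder from the MetalLB pool, so substitute the IP actually assigned in your cluster.

# Example /etc/hosts entries (or equivalent records on a LAN DNS server)
192.168.15.200  radarr.local sonarr.local prowlarr.local overseerr.local sabnzbd.local tautulli.local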

Quick Start

1. Deploy Infrastructure

# Deploy storage classes
kubectl apply -f nfs-storage-classes.yaml

# Deploy MetalLB load balancer
kubectl apply -f metallb-config.yaml

# Deploy Kubernetes dashboard
kubectl apply -f k8s-dashboard-ingress.yaml
kubectl apply -f dash-admin.yaml
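
On Kubernetes 1.24+ a dashboard login token can then be generated for the admin service account created by dash-admin.yaml; the namespace and account name below are assumptions, so check the manifest for the actual values.

# Generate a short-lived bearer token for the dashboard login screen
kubectl -n kubernetes-dashboard create token admin-user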

2. Deploy Media Applications

# Create media namespace
kubectl create namespace media

# Deploy all media applications
kubectl apply -f arr/

# Check deployment status
kubectl get pods -n media
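
Optionally, wait for every deployment in the namespace to report Available before moving on to application configuration:

# Block until all media deployments are available (times out after 5 minutes)
kubectl wait --for=condition=available deployment --all -n media --timeout=300s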

3. Deploy CI/CD Workers (Optional)

# Create namespace and deploy workers
kubectl create namespace forgejo-workers

# Deploy worker configuration
kubectl apply -f forgejo-worker-config.yaml

# Deploy workers (choose one variant)
kubectl apply -f forgejo-workers-volume5.yaml
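
To confirm the runners come up, watch the DaemonSet roll out (the name forgejo-worker-full is taken from the worker management examples below and may differ per variant):

# Watch the worker DaemonSet roll out
kubectl rollout status daemonset/forgejo-worker-full -n forgejo-workers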

Configuration Details

Storage Configuration

  • Config Storage: Longhorn PVCs for each application
  • Media Storage: NFS mounts from 192.168.12.16:
    • /Volume2/media/media
    • /Volume2/tv/tv
    • /Volume2/movies/movies
    • /Volume2/downloads/downloads
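
As a rough illustration, one of these shares can be exposed to the cluster as a static NFS PersistentVolume like the sketch below; the actual wiring is defined by nfs-storage-classes.yaml and the manifests under arr/, which may use storage classes instead, and the name and size here are placeholders.

# Illustrative static PersistentVolume for the TV share
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-tv                # placeholder name
spec:
  capacity:
    storage: 1Ti              # placeholder size
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.12.16
    path: /Volume2/tv/tv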

Network Access

All media applications are accessible via ingress at:

  • http://radarr.local
  • http://sonarr.local
  • http://prowlarr.local
  • http://overseerr.local
  • http://sabnzbd.local
  • http://tautulli.local
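
Each hostname is routed to its application's Service by an NGINX Ingress resource along these lines (a sketch only; the Service name and port are assumed, and the real resources live in the manifests under arr/):

# Illustrative Ingress for Radarr
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: radarr
  namespace: media
spec:
  ingressClassName: nginx
  rules:
    - host: radarr.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: radarr        # assumed Service name
                port:
                  number: 7878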

Forgejo Workers

Multiple worker configurations are available:

  • forgejo-workers-nodejs-simple.yaml - Basic Node.js worker
  • forgejo-workers-volume5.yaml - Full-featured worker with NFS
  • forgejo-workers-volume5-fixed.yaml - Stable version
  • forgejo-workers-volume5-symlinks.yaml - Enhanced with symlinks

Workers include: Node.js, Go, Hugo, AWS CLI, Python, Git, yarn, jq
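
To spot-check the toolchain inside a running worker, exec into one of the pods (substitute a real pod name from kubectl get pods -n forgejo-workers; exact binary names may differ per image):

# Verify a few of the bundled tools inside a worker pod
kubectl exec -n forgejo-workers <worker-pod-name> -- sh -c 'node --version && go version && hugo version && git --version'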

Management Commands

Check Application Status

# View all deployments
kubectl get deployments -n media

# Check pod logs
kubectl logs -f deployment/radarr -n media

# View services and ingress
kubectl get svc,ingress -n media

Storage Management

# Test NFS connectivity
kubectl apply -f nfs-test.yaml

# Check PVC status
kubectl get pvc -n media

# View storage classes
kubectl get storageclass

Worker Management

# Check worker status
kubectl get pods -n forgejo-workers

# View worker logs
kubectl logs -f daemonset/forgejo-worker-full -n forgejo-workers

# Scale workers - they run as a DaemonSet (one pod per eligible node), so kubectl scale
# does not apply; control placement with node labels/selectors or taints in the manifest
kubectl get daemonset -n forgejo-workers -o wide

Troubleshooting

Common Issues

Pods stuck in Pending state:

kubectl describe pod <pod-name> -n <namespace>
# Check for resource constraints or storage issues

NFS mount failures:

# Test NFS connectivity
kubectl apply -f nfs-test.yaml
kubectl logs nfs-test-pod
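
If the test pod cannot mount the share, it can also help to verify the export directly from a cluster node (requires the NFS client utilities to be installed on the node):

# Run on a cluster node: list the exports offered by the NFS server
showmount -e 192.168.12.16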

Ingress not working:

# Check ingress controller
kubectl get pods -n ingress-nginx

# Verify ingress resources
kubectl get ingress -A
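
To separate DNS problems from ingress problems, hit the controller's external IP directly while supplying the expected Host header (replace the placeholder IP with the controller's LoadBalancer address):

# Bypass local DNS and exercise the radarr.local ingress rule directly
curl -H "Host: radarr.local" http://192.168.15.200/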

Logs and Monitoring

# Application logs
kubectl logs -f deployment/<app-name> -n media

# System events
kubectl get events -n media --sort-by='.lastTimestamp'

# Resource usage (requires the metrics-server add-on)
kubectl top pods -n media

Security Considerations

  • All containers run with UID/GID 1000
  • Forgejo workers require privileged containers for full CI/CD functionality
  • Dashboard has cluster-admin privileges - secure access appropriately
  • NFS shares should have appropriate network restrictions
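
The UID/GID convention above typically appears in each pod spec as a securityContext block roughly like this (the exact fields used in the manifests may differ):

# Illustrative pod-level security context for UID/GID 1000
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000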

Contributing

  1. Test changes in a development environment
  2. Update documentation for any configuration changes
  3. Follow existing naming conventions and resource patterns
  4. Ensure all deployments include proper resource limits (see the example below)
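
For point 4, a container resources block looks like the following; the values are placeholders and should be sized per application.

# Example resource requests and limits for a container spec
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 1Gi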

License

This configuration is provided as-is for home lab use. Modify according to your specific requirements and security policies.