
CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

Repository Overview

This repository contains Kubernetes manifests for deploying a media server stack and supporting infrastructure. The deployment uses LinuxServer.io containers for media applications and integrates with external NFS storage.

Architecture

Application Stack

  • Media Applications (arr/ directory):
    • Radarr (movies) - port 7878
    • Sonarr (TV shows) - port 8989
    • Prowlarr (indexer manager)
    • SABnzbd (downloads)
    • Overseerr (requests) - port 5055
    • Tautulli (Plex monitoring)

Infrastructure Components

  • MetalLB: Load balancer with IP pool 192.168.15.200-192.168.15.210
  • Longhorn: Block storage for application configs
  • NFS Storage: External server at 192.168.12.16 for media files
  • Kubernetes Dashboard: Web UI with admin access
  • Forgejo CI/CD Workers: Self-hosted runners for git.deco.sh with full toolchain
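The MetalLB address pool above is normally declared as an IPAddressPool plus an L2Advertisement. A minimal sketch, assuming L2 mode and illustrative resource names (metallb-config.yaml in this repo is the authoritative source):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool        # name is an assumption; see metallb-config.yaml
  namespace: metallb-system
spec:
  addresses:
    - 192.168.15.200-192.168.15.210
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2          # name is an assumption
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```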

Deployment Commands

# Create namespace (required first)
kubectl create namespace media

# Deploy infrastructure
kubectl apply -f nfs-storage-classes.yaml
kubectl apply -f metallb-config.yaml
kubectl apply -f kubernetes-dashboard.yaml

# Deploy all media applications
kubectl apply -f arr/

# Deploy individual apps
kubectl apply -f arr/radarr.yaml
kubectl apply -f arr/sonarr.yaml

# Deploy Forgejo CI/CD workers
kubectl create namespace forgejo-workers
kubectl apply -f forgejo-worker-config.yaml
kubectl apply -f forgejo-workers-volume5.yaml

Resource Patterns

Each media application follows this structure:

  1. Deployment: Container with resource limits, PUID/PGID=1000, TZ=America/New_York
  2. PersistentVolumeClaim: Config storage using Longhorn
  3. Service: ClusterIP for internal communication
  4. Ingress: NGINX routing with pattern {app}.local
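The four-resource pattern can be sketched for one application; using Radarr as the example, with image tag, PVC size, and label values assumed for illustration (arr/radarr.yaml is authoritative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: radarr
  namespace: media
spec:
  selector:
    matchLabels: {app: radarr}
  template:
    metadata:
      labels: {app: radarr}
    spec:
      containers:
        - name: radarr
          image: lscr.io/linuxserver/radarr:latest
          env:
            - {name: PUID, value: "1000"}
            - {name: PGID, value: "1000"}
            - {name: TZ, value: America/New_York}
          ports:
            - containerPort: 7878
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: radarr-config
  namespace: media
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: longhorn
  resources:
    requests: {storage: 1Gi}   # size is an assumption
---
apiVersion: v1
kind: Service
metadata:
  name: radarr
  namespace: media
spec:
  type: ClusterIP
  selector: {app: radarr}
  ports:
    - {port: 7878, targetPort: 7878}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: radarr
  namespace: media
spec:
  ingressClassName: nginx
  rules:
    - host: radarr.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service: {name: radarr, port: {number: 7878}}
```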

Storage Configuration

  • Config Storage: Longhorn PVCs for each application
  • Media Storage: NFS mounts from 192.168.12.16:
    • /Volume2/media/media
    • /Volume2/tv/tv
    • /Volume2/movies/movies
    • /Volume2/downloads/downloads
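One of these NFS shares can be exposed to pods as a PersistentVolume; a minimal sketch for the media share, with capacity and resource name assumed (nfs-storage-classes.yaml is authoritative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-nfs             # name is an assumption
spec:
  capacity:
    storage: 1Ti              # size is an assumption
  accessModes: [ReadWriteMany]
  nfs:
    server: 192.168.12.16
    path: /Volume2/media/media
```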

Prerequisites

  • NGINX Ingress Controller must be installed
  • Longhorn storage system must be deployed
  • NFS server must be accessible at 192.168.12.16
  • Local DNS must resolve *.local domains

Forgejo CI/CD Workers

Architecture

  • DaemonSet deployment on ARM64 nodes with host networking and privileged containers
  • Full toolchain: Node.js, Go, Hugo, AWS CLI, Python, Git, yarn, jq
  • NFS-backed workspace: /Volume5/forgejo-builds (workspace) and /Volume5/forgejo-cache (cache)
  • Auto-registration: Workers register with Forgejo instance at https://git.deco.sh
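The worker architecture described above can be sketched as an abbreviated DaemonSet; the image name, mount paths, and the assumption that the Volume5 shares live on the same 192.168.12.16 NFS server are illustrative (forgejo-workers-volume5.yaml is authoritative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: forgejo-worker
  namespace: forgejo-workers
spec:
  selector:
    matchLabels: {app: forgejo-worker}
  template:
    metadata:
      labels: {app: forgejo-worker}
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: arm64
      containers:
        - name: runner
          image: forgejo-worker:latest   # assumed image name
          securityContext:
            privileged: true             # required for full CI/CD functionality
          resources:
            limits: {memory: 4Gi, cpu: "2"}
          volumeMounts:
            - {name: workspace, mountPath: /workspace}  # mount paths assumed
            - {name: cache, mountPath: /cache}
      volumes:
        - name: workspace
          nfs: {server: 192.168.12.16, path: /Volume5/forgejo-builds}
        - name: cache
          nfs: {server: 192.168.12.16, path: /Volume5/forgejo-cache}
```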

Key Features

  • Supports multiple languages and frameworks out of the box
  • Configured for HTTPS git authentication
  • Includes AWS CLI for cloud deployments
  • Resource limits: 4Gi memory, 2 CPU cores
  • Persistent storage on host for worker data

Worker Labels

  • self-hosted, linux, arm64, nodejs, aws-cli, golang, hugo
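A repository workflow targets these runners by label; a sketch of a job using them, with the repository and job names illustrative and the exact `runs-on` matching rules depending on the Forgejo runner version:

```yaml
# .forgejo/workflows/build.yml (illustrative)
jobs:
  build:
    runs-on: [self-hosted, linux, arm64, golang]
    steps:
      - uses: actions/checkout@v4
      - run: go build ./...
```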

Important Notes

  • All containers run with UID/GID 1000
  • Dashboard has full cluster-admin privileges
  • All images use latest tag
  • Media namespace must exist before deploying applications
  • Forgejo workers require privileged containers for full CI/CD functionality