The Kubernetes setup is intentionally minimal: one YAML file, one helper script, and one CI workflow. The whole thing fits on one screen, reads top to bottom, and is easy to extend later.
k8s/
├── README.md
├── vibestrap.yaml        Namespace + Deployment + Service + Ingress
├── create-secrets.sh     builds the `vibestrap-secrets` Secret from k8s/.env
└── .env                  your production env (gitignored; see .env.example)

What ships in the manifest

k8s/vibestrap.yaml defines four resources, in this order:
  1. Namespace — vibestrap
  2. Deployment — Next.js app, 1 replica, probes on /api/ping, sane resource requests/limits
  3. Service — ClusterIP fronting the pods on port 80 → 3000
  4. Ingress — nginx + cert-manager TLS, with www → apex redirect baked in
A single image is pulled from Harbor: harbor.funkro.com/vibestrap/vibestrap. There is no in-cluster migration Job — see Database migrations below.
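The full manifest is not reproduced here, but the shape of the first three resources is roughly the following sketch. Only the names, image, port, and probe path come from this page; every other field value is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: vibestrap
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vibestrap
  namespace: vibestrap
spec:
  replicas: 1
  selector:
    matchLabels: { app: vibestrap }
  template:
    metadata:
      labels: { app: vibestrap }
    spec:
      containers:
        - name: vibestrap
          image: harbor.funkro.com/vibestrap/vibestrap:latest
          ports:
            - containerPort: 3000
          # liveness/readiness both hit the ping route described above
          readinessProbe:
            httpGet: { path: /api/ping, port: 3000 }
---
apiVersion: v1
kind: Service
metadata:
  name: vibestrap
  namespace: vibestrap
spec:
  type: ClusterIP
  selector: { app: vibestrap }
  ports:
    - port: 80          # Service port
      targetPort: 3000  # container port
```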

What the CI does for you

.github/workflows/docker-build-push.yml runs on:
  • Every push to main
  • Every tag matching v*
  • Manual dispatch from the GitHub Actions UI
On each run it:
  1. Builds the image: the runner stage of the Dockerfile becomes harbor.funkro.com/vibestrap/vibestrap:<version>, also re-tagged as :latest.
  2. Pushes to Harbor: authenticated with HARBOR_USERNAME + HARBOR_PASSWORD from GitHub Actions secrets.
  3. Computes the version tag: tag pushes use the git tag (e.g. v1.2.0); branch pushes use main-<sha7>. Both forms are immutable, unlike :latest.
  4. Auto-bumps the manifest: the workflow rewrites the image: line in k8s/vibestrap.yaml to point at the new version and commits back to main with [skip ci]. The manifest in git always reflects what is actually in the registry.
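The tagging rule is easy to state in shell. This is an illustrative sketch, not the actual workflow file; `compute_version` is a hypothetical helper:

```shell
# Tag pushes use the git tag; branch pushes use main-<sha7>.
compute_version() {
  ref="$1" sha="$2"
  case "$ref" in
    refs/tags/v*) printf '%s\n' "${ref#refs/tags/}" ;;   # e.g. v1.2.0
    *)            printf 'main-%.7s\n' "$sha" ;;          # first 7 chars of SHA
  esac
}

compute_version refs/tags/v1.2.0 0123456789abcdef   # → v1.2.0
compute_version refs/heads/main  0123456789abcdef   # → main-0123456
```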
No kubeconfig lives in CI. The deploy itself is still your hands-on-keyboard step — by design.

GitHub Actions secrets

Add these once in Settings → Secrets and variables → Actions:
| Secret | What goes in it |
|---|---|
| HARBOR_USERNAME | Your Harbor account username |
| HARBOR_PASSWORD | Harbor password, or a robot account token |

First-time setup

Three things need to exist in the cluster before the first kubectl apply.

1. Harbor pull secret

So the cluster can pull from your private registry:
kubectl create namespace vibestrap
kubectl create secret docker-registry harbor-secret \
  --namespace vibestrap \
  --docker-server=harbor.funkro.com \
  --docker-username=<YOUR_HARBOR_USERNAME> \
  --docker-password=<YOUR_HARBOR_PASSWORD>
The Deployment references this harbor-secret.
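That reference is the standard imagePullSecrets field in the pod template; an abbreviated sketch (the surrounding spec is omitted):

```yaml
spec:
  template:
    spec:
      imagePullSecrets:
        - name: harbor-secret   # the Secret created above
```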

2. App secrets from .env.prod

Your production env vars all live in one file, then get folded into a single opaque Secret named vibestrap-secrets:
cp .env.example k8s/.env
# edit k8s/.env with real production values — no quotes around values
./k8s/create-secrets.sh
The helper script strips comments, blank lines, and accidental quotes (kubectl treats quotes as part of the value, which breaks Better Auth and Stripe SDKs), then runs kubectl create secret generic ... --from-env-file for you. The Deployment pulls every variable in via envFrom: secretRef.
k8s/.env is gitignored. Don’t commit it. Re-run create-secrets.sh whenever you add or change a value.
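The cleanup step can be sketched in a few lines. This is a minimal illustration of what a helper like create-secrets.sh does per the description above; `clean_env` is hypothetical and the real script may differ:

```shell
# Drop comment and blank lines, then strip accidental quotes around values
# (kubectl would otherwise store the quotes as part of the value).
clean_env() {
  grep -Ev '^[[:space:]]*(#|$)' "$1" \
    | sed -E "s/^([A-Za-z_][A-Za-z0-9_]*)=[\"']?([^\"']*)[\"']?\$/\1=\2/"
}

# Demo input exercising both failure modes:
cat > /tmp/sample.env <<'EOF'
# production values; quotes here would leak into the Secret
DATABASE_URL="postgres://user:pass@host:5432/db"

STRIPE_SECRET_KEY=sk_live_xxx
EOF

clean_env /tmp/sample.env
# DATABASE_URL=postgres://user:pass@host:5432/db
# STRIPE_SECRET_KEY=sk_live_xxx
#
# The cleaned output is what gets fed to, e.g.:
#   kubectl create secret generic vibestrap-secrets -n vibestrap \
#     --from-env-file=<cleaned file>
```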

3. ingress-nginx + cert-manager

The Ingress assumes ingressClassName: nginx and a letsencrypt-prod ClusterIssuer. If you don’t have them yet:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
Then create a letsencrypt-prod ClusterIssuer following the cert-manager docs.
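A typical HTTP-01 ClusterIssuer, following the cert-manager docs, looks roughly like this; the email is a placeholder you must replace:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com        # placeholder: use a real contact address
    privateKeySecretRef:
      name: letsencrypt-prod      # Secret that stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx          # matches the ingress-nginx install above
```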

Database migrations

Migrations are not run by the cluster. The runtime image doesn’t include drizzle-kit, and there’s no Job to babysit. Instead, before each release that touches schema, run from your laptop:
DATABASE_URL='postgres://user:pass@prod-host:5432/db?sslmode=require' \
  pnpm db:migrate
Then kubectl apply the new image. With 1 replica there’s no race-condition window — schema moves only when you tell it to.

When prod DB is in a private VPC

If your laptop can’t reach the prod DB directly, spin up a one-off pod inside the cluster that has the right network access. The exact kubectl run command lives in k8s/README.md; the shape is:
kubectl run vibestrap-migrate --rm -it --restart=Never \
  --namespace vibestrap \
  --image=node:22-alpine \
  --env="DATABASE_URL=$DATABASE_URL" \
  -- sh -c "cd /tmp && git clone <repo> app && cd app && \
            corepack enable && pnpm install && pnpm db:migrate"
This is the heavy fallback — only reach for it when you genuinely can’t reach the DB from outside the cluster.

Deploy

Once the CI workflow has bumped the manifest, every deploy is:
git pull

# only if the release includes a schema change
DATABASE_URL='postgres://...' pnpm db:migrate

kubectl apply -f k8s/vibestrap.yaml

# watch the rollout
kubectl rollout status deployment/vibestrap -n vibestrap
If you only changed env vars (no new code), refresh the Secret and roll the Deployment:
./k8s/create-secrets.sh
kubectl rollout restart deployment/vibestrap -n vibestrap

When you outgrow this

The manifest is intentionally minimal — add these only when you actually need them. Each is small (10–30 lines) and orthogonal.
| Add | When | Where |
|---|---|---|
| HorizontalPodAutoscaler | Traffic varies enough that fixed replicas waste money or starve under load | New file under k8s/ |
| PodDisruptionBudget | Running ≥3 replicas and you want zero-downtime node drains | New file |
| NetworkPolicy | Multi-tenant cluster; you want to restrict egress to the DB and outbound APIs | New file |
| topologySpreadConstraints | Multi-zone cluster; you want zone-failure tolerance | Inline in the Deployment spec |
| Multiple environments | You actually run more than the vibestrap namespace | k8s/staging/ parallel directory |
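As an example of how small these additions stay, a minimal HPA for this Deployment might look like the following sketch; it assumes metrics-server is installed, and the replica counts and CPU target are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vibestrap
  namespace: vibestrap
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vibestrap
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above ~70% of requested CPU
```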

Troubleshooting

| Symptom | Likely cause | Check |
|---|---|---|
| ImagePullBackOff | Harbor credentials wrong, or harbor-secret missing in the vibestrap namespace | kubectl get secret harbor-secret -n vibestrap |
| Pod CrashLoopBackOff on first start | Missing required env (DATABASE_URL, BETTER_AUTH_SECRET, …) | kubectl logs <pod> -n vibestrap; Zod prints the missing key |
| Pod boots but DB queries throw “column does not exist” | Forgot to run pnpm db:migrate before applying the new image | Run the migration from your laptop, then restart the Deployment |
| TLS cert pending forever | DNS not pointing at the ingress LB, or ClusterIssuer missing | kubectl describe certificate vibestrap-tls -n vibestrap |
| Stripe webhook 400 “invalid signature” | STRIPE_WEBHOOK_SECRET mismatched | Re-copy from Stripe, refresh the Secret, restart the Deployment |
