What ships in the manifest
k8s/vibestrap.yaml defines four resources, in this order:
- Namespace — vibestrap
- Deployment — Next.js app, 1 replica, probes on /api/ping, sane resource requests/limits
- Service — ClusterIP fronting the pods on port 80 → 3000
- Ingress — nginx + cert-manager TLS, with www → apex redirect baked in

The Deployment's image is harbor.funkro.com/vibestrap/vibestrap. There is no in-cluster migration Job — see Database migrations below.
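After an apply, a quick sanity check that all four resources exist (namespace name from the manifest):

```bash
kubectl get ns vibestrap
kubectl get deploy,svc,ingress -n vibestrap
```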
What the CI does for you
.github/workflows/docker-build-push.yml runs on:
- Every push to main
- Every tag matching v*
- Manual dispatch from the GitHub Actions UI
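For example, cutting a release that exercises the v* path (tag name illustrative):

```bash
git tag v1.2.0
git push origin v1.2.0
```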
Builds the image
The runner stage of the Dockerfile becomes harbor.funkro.com/vibestrap/vibestrap:<version>, also re-tagged as :latest.

Computes the version tag
Tag pushes use the git tag (e.g. v1.2.0). Branch pushes use main-<sha7>. Both forms are immutable, unlike :latest.

No kubeconfig lives in CI. The deploy itself is still your hands-on-keyboard step — by design.
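To fetch a build by its immutable tag rather than :latest (the sha7 value here is illustrative):

```bash
docker pull harbor.funkro.com/vibestrap/vibestrap:v1.2.0
docker pull harbor.funkro.com/vibestrap/vibestrap:main-0a1b2c3
```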
GitHub Actions secrets

Add these once in Settings → Secrets and variables → Actions:

| Secret | What goes in it |
|---|---|
| HARBOR_USERNAME | Your Harbor account username |
| HARBOR_PASSWORD | Harbor password, or a robot account token |
First-time setup
Three things need to exist in the cluster before the first kubectl apply.
1. Harbor pull secret
So the cluster can pull from your private registry, create an image pull secret named harbor-secret:
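A sketch, assuming the registry host is harbor.funkro.com and reusing the same credentials as the CI secrets:

```bash
# Create the vibestrap namespace first if it doesn't exist yet.
kubectl create secret docker-registry harbor-secret \
  --docker-server=harbor.funkro.com \
  --docker-username="$HARBOR_USERNAME" \
  --docker-password="$HARBOR_PASSWORD" \
  -n vibestrap
```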
2. App secrets from .env.prod
Your production env vars all live in one file, then get folded into a single opaque Secret named vibestrap-secrets — a single kubectl create secret generic ... --from-env-file does the folding for you. The Deployment pulls every variable in via envFrom: secretRef.
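A minimal sketch, assuming .env.prod sits at the repo root:

```bash
kubectl create secret generic vibestrap-secrets \
  --from-env-file=.env.prod \
  -n vibestrap
```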
3. ingress-nginx + cert-manager
The Ingress assumes ingressClassName: nginx and a letsencrypt-prod ClusterIssuer. If you don't have them yet, install ingress-nginx and cert-manager, then create a letsencrypt-prod ClusterIssuer following the cert-manager docs.
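One common route is Helm; the chart names and flags below are the upstream defaults, so adjust to your cluster:

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  -n ingress-nginx --create-namespace

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  -n cert-manager --create-namespace \
  --set crds.enabled=true   # older chart versions use installCRDs=true
```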
Database migrations
Migrations are not run by the cluster. The runtime image doesn't include drizzle-kit, and there's no Job to babysit. Instead, before each release that touches schema, run from your laptop:
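A sketch, assuming the migration script reads DATABASE_URL and your laptop can reach the prod database:

```bash
# Point the migration at prod for this one command only.
DATABASE_URL="postgres://<prod-connection-string>" pnpm db:migrate
```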
Then kubectl apply the new image. With 1 replica there's no race-condition window — schema moves only when you tell it to.
When prod DB is in a private VPC
If your laptop can't reach the prod DB directly, spin up a one-off pod inside the cluster that has the right network access. The exact kubectl run command lives in k8s/README.md; the shape is:
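Roughly, with the pod name, image, and connection string all illustrative:

```bash
# One-off pod with in-cluster network access; --rm deletes it on exit.
kubectl run db-migrate --rm -it -n vibestrap \
  --image=node:22-slim \
  --env="DATABASE_URL=<prod-connection-string>" \
  -- sh
# inside the pod: fetch the repo, pnpm install, pnpm db:migrate
```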
Deploy
Once the CI workflow has bumped the manifest, every deploy is:
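In other words (Deployment name assumed to be vibestrap):

```bash
kubectl apply -f k8s/vibestrap.yaml
kubectl rollout status deploy/vibestrap -n vibestrap
```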
When you outgrow this

The manifest is intentionally minimal — add these only when you actually need them. Each is small (10–30 lines) and orthogonal.

| Add | When | Where |
|---|---|---|
| HorizontalPodAutoscaler | Traffic varies enough that fixed replicas waste money or starve under load | New file under k8s/ |
| PodDisruptionBudget | Running ≥3 replicas and you want zero-downtime node drains | New file |
| NetworkPolicy | Multi-tenant cluster, want to restrict egress to DB and outbound APIs | New file |
| topologySpreadConstraints | Multi-zone cluster, want zone-failure tolerance | Inline in the Deployment spec |
| Multiple environments | You actually run more than the vibestrap namespace | k8s/staging/ parallel directory |
Troubleshooting
| Symptom | Likely cause | Check |
|---|---|---|
| ImagePullBackOff | Harbor credentials wrong, or harbor-secret missing in the vibestrap namespace | kubectl get secret harbor-secret -n vibestrap |
| Pod CrashLoopBackOff on first start | Missing required env (DATABASE_URL, BETTER_AUTH_SECRET, …) | kubectl logs <pod> -n vibestrap — Zod prints the missing key |
| Pod boots but DB queries throw “column does not exist” | Forgot to run pnpm db:migrate before applying the new image | Run the migration from your laptop, then restart the Deployment |
| TLS cert pending forever | DNS not pointing at the ingress LB, or ClusterIssuer missing | kubectl describe certificate vibestrap-tls -n vibestrap |
| Stripe webhook 400 “invalid signature” | STRIPE_WEBHOOK_SECRET mismatched | Re-copy from Stripe, refresh secret, restart Deployment |
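Where a row says to restart the Deployment, that's (name assumed to be vibestrap):

```bash
kubectl rollout restart deploy/vibestrap -n vibestrap
```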