vibestrap ships a production-grade container setup: a 3-stage Alpine Dockerfile (deps → builder → runner), a docker-compose.yml with Postgres + app, and a single release image (~349MB, the standalone Next.js server with dumb-init as PID 1). The runtime image deliberately does not bundle drizzle-kit — migrations are a developer-driven step, run from your laptop against the production DATABASE_URL. That keeps the image lean and removes any race-condition risk during rolling updates.
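In sketch form, the three stages look something like this. This is a simplified sketch, not the shipped Dockerfile: the stage names, the cache mount, and dumb-init come from this page, while the Node version, paths, and user/group names are assumptions.

```dockerfile
# deps: install with a BuildKit cache mount for the pnpm store
FROM node:22-alpine AS deps
RUN apk add --no-cache libc6-compat python3 make g++
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN --mount=type=cache,id=pnpm-store,target=/root/.local/share/pnpm/store \
    corepack enable && pnpm install --frozen-lockfile

# builder: compile the standalone Next.js server
FROM node:22-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN corepack enable && pnpm build

# runner: minimal runtime image, dumb-init as PID 1, non-root user
FROM node:22-alpine AS runner
WORKDIR /app
RUN apk add --no-cache dumb-init \
    && addgroup -S nodejs && adduser -S nextjs -G nodejs
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE 3000
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "server.js"]
```

Only the runner stage ends up in the release image; deps and builder exist purely to feed it.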

Prerequisites

  • Docker 24+. BuildKit is enabled by default and is required for the --mount=type=cache,id=pnpm-store directive in the deps stage.
  • The Compose plugin (docker compose version should print a version).
  • For local development, that’s it — compose ships its own Postgres 17.
  • For production, plan on a managed Postgres (Neon, Supabase, RDS, Crunchy, Railway). The bundled compose Postgres is dev-only — no backups, no HA.

Quickstart: docker compose

The local stack is two services — postgres and app. Migrations are not run by compose; you trigger them from the host with pnpm db:push (dev) or pnpm db:migrate (once you’ve checked in real migration files).
# 1. start Postgres on its own
docker compose up -d postgres

# 2. push the schema from the host (dev shortcut, no migration files)
pnpm db:push

# 3. boot the app
docker compose up app
Open localhost:3000, sign up at /register, you’re in. To stop and keep data: docker compose down. To wipe Postgres entirely: docker compose down -v.
In dev pnpm db:push reflects src/db/*.schema.ts straight onto the local DB — no migration files needed. Production should always use pnpm db:migrate against committed migrations.

Optional API keys

The defaults in docker-compose.yml cover the bare minimum (auth + DB). To enable Stripe, Resend, OAuth providers, AI keys, etc., create a .env.docker:
cp .env.example .env.docker
# edit .env.docker — fill in only what you actually want enabled
Compose reads it via the env_file directive (declared required: false, so a missing file doesn’t error). Anything in .env.docker overrides the defaults in the environment: block. Restart with docker compose up --build to pick up changes. .env.docker is in both .gitignore and .dockerignore — it never ends up in an image.
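The relevant compose wiring looks roughly like this (a sketch using the env_file long syntax; the service name comes from this page, everything else is illustrative):

```yaml
services:
  app:
    env_file:
      - path: .env.docker
        required: false   # a missing file is not an error
```

The long-form entry with required: false is what lets a fresh clone start without any .env.docker present.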

Building the production image

One image per release:
docker build \
  --build-arg NEXT_PUBLIC_APP_URL=https://your-domain.com \
  --build-arg NEXT_PUBLIC_APP_NAME=your-product \
  --target runner \
  -t ghcr.io/your-org/vibestrap:v1.0.0 .
The runner target serves traffic and needs NEXT_PUBLIC_* baked in at build time (Next.js inlines those values into the client bundle). Push it to your registry and reference it by tag from your orchestrator (K8s manifests, Nomad job, ECS task, whatever).
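In Dockerfile terms, that means the build-arg has to be re-exported as an env var before pnpm build runs in the builder stage. A sketch of the relevant lines (the default value shown is the localhost fallback this page warns about):

```dockerfile
# The build-arg must become an ENV before `pnpm build`,
# or Next.js has nothing to inline into the client bundle.
ARG NEXT_PUBLIC_APP_URL=http://localhost:3000
ENV NEXT_PUBLIC_APP_URL=$NEXT_PUBLIC_APP_URL
RUN corepack enable && pnpm build
```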

Why NEXT_PUBLIC_APP_URL must be a build-arg

This is the gotcha that trips everyone the first time. Next.js inlines every NEXT_PUBLIC_* value into the client JavaScript bundle at build time — they’re not read from process.env in the browser, they’re substituted as string literals before bundling. If you don’t pass --build-arg NEXT_PUBLIC_APP_URL=https://... when building your production image, the default from the Dockerfile (http://localhost:3000) gets baked into the JS that ships to your users. Symptoms:
  • OAuth redirect URLs in the client point at http://localhost:3000/...
  • Absolute URLs in shareable / SEO metadata are localhost
  • Anything that calls process.env.NEXT_PUBLIC_APP_URL from a client component returns http://localhost:3000 in production
There is no runtime fix — you have to rebuild the image with the right build-arg.
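A rough shell illustration of what that inlining amounts to (this is not the real bundler, which does a DefinePlugin-style replacement during next build, but the effect on the shipped JS is the same literal substitution):

```shell
# What the bundler effectively does at build time: a literal string
# substitution into the client JS, not a runtime environment lookup.
echo 'fetch(process.env.NEXT_PUBLIC_APP_URL + "/api")' \
  | sed 's|process\.env\.NEXT_PUBLIC_APP_URL|"http://localhost:3000"|'
# → fetch("http://localhost:3000" + "/api")
```

Once the literal is in the bundle, changing the container's environment can't touch it; hence the rebuild requirement.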

Running the production runner

docker run --env-file .env.production -p 3000:3000 \
  ghcr.io/your-org/vibestrap:v1.0.0
Minimum runtime env vars on the runner:
Var                  Notes
DATABASE_URL         Pooled Postgres URL. Add ?sslmode=require for managed providers.
BETTER_AUTH_SECRET   32+ random chars. Generate with openssl rand -base64 32.
BETTER_AUTH_URL      Your public origin. OAuth callbacks fail without it.
ADMIN_EMAILS         Comma-separated emails granted role=admin on signup.
Add provider keys (Stripe, Resend, OAuth, AI) as needed — see docs/env-reference for the full list.
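Put together, a minimal .env.production might look like this (every value below is a placeholder; generate the secret with openssl rand -base64 32 as the table suggests):

```
DATABASE_URL=postgres://user:pass@db-host:5432/vibestrap?sslmode=require
BETTER_AUTH_SECRET=paste-output-of-openssl-rand-base64-32-here
BETTER_AUTH_URL=https://your-domain.com
ADMIN_EMAILS=you@your-domain.com
```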

Database migrations

Migrations are intentionally out of the runtime image. The runner only contains what’s needed to serve traffic — drizzle-kit lives in devDependencies and is invoked from your laptop:
DATABASE_URL='postgres://user:pass@prod-host:5432/db?sslmode=require' \
  pnpm db:migrate
Run this before docker run / kubectl apply rolls out an image that depends on the new schema. Why this way:
  • The runtime image stays lean — no drizzle-kit, no migration files.
  • With a single replica there’s no race-condition risk; you control exactly when the schema moves.
  • The cognitive cost is lower than wiring up an init container or a one-shot job for a scaffold running at this scale.
The migration step runs drizzle-kit migrate against committed migration files in src/db/migrations/. Generate them with pnpm db:generate first, commit the SQL, then run pnpm db:migrate against prod.
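Those pnpm scripts are thin wrappers over drizzle-kit. A plausible package.json mapping, not copied from the repo (command names assume drizzle-kit 0.21+, where the commands are generate / push / migrate):

```json
{
  "scripts": {
    "db:generate": "drizzle-kit generate",
    "db:push": "drizzle-kit push",
    "db:migrate": "drizzle-kit migrate"
  }
}
```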
If your production DB is in a private VPC and your laptop can’t reach it, see the kubernetes guide’s “one-off pod” fallback — it spins up a temporary container inside the cluster with the right network access to run the same command.

PaaS one-liners

Railway, Render, Fly.io, Coolify, Dokploy — all auto-detect the root Dockerfile. The runner target is the last stage, so no extra config is needed for the build to land on the right image. Run pnpm db:migrate from your laptop (or a one-off container with the source repo) before promoting a release that touches schema.

Multi-platform / Apple Silicon

Building on an M-series Mac for an x86 Linux server needs buildx:
docker buildx create --use --name vibestrap-builder
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --target runner \
  --build-arg NEXT_PUBLIC_APP_URL=https://your-domain.com \
  -t ghcr.io/your-org/vibestrap:v1.0.0 \
  --push .
This produces a manifest list — Docker on each host pulls the variant matching its CPU arch. Skip this and an arm64-built image won’t start on amd64.

What changed under the hood

Notable production-grade upgrades in the current setup:
  • dumb-init as PID 1. Properly forwards SIGTERM to the Node process so docker stop finishes in ~0.3s instead of waiting out the 10s grace timeout before SIGKILL. Critical for fast rolling deploys.
  • BuildKit cache mount for the pnpm store (--mount=type=cache,id=pnpm-store). CI builds drop from ~3min cold to ~90s on warm cache — the content-addressable pnpm store is reused across builds.
  • Native-deps toolchain (libc6-compat, python3, make, g++) in the deps stage. Avoids install failures on Alpine when packages compile from source (better-sqlite3, sharp variants, etc.).
  • Build-time placeholders for DATABASE_URL and BETTER_AUTH_SECRET. Scoped to the pnpm build RUN command so they don’t persist in image layers — avoids the SecretsUsedInArgOrEnv linter warning while still letting next build pass module-level Zod (@t3-oss/env-nextjs) validation.
  • mkdir .next && chown nextjs:nodejs in the runner stage. Pre-creates the cache dir owned by the non-root user so prerender / image-optimization writes succeed at runtime, even on platforms with restricted PSPs.
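That RUN-scoping trick for the placeholders, sketched (the values here are assumptions; the real ones only need to satisfy the module-level Zod env validation during next build):

```dockerfile
# Set only for this RUN: the values never land in an ENV layer,
# so `docker history` and image scanners don't see them.
RUN DATABASE_URL="postgres://build:build@localhost:5432/build" \
    BETTER_AUTH_SECRET="not-a-real-secret-just-a-build-placeholder" \
    pnpm build
```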

Common pitfalls

  • Forgetting --build-arg NEXT_PUBLIC_APP_URL. All client-side absolute URLs end up pointing at http://localhost:3000 in production. No runtime fix — rebuild.
  • Rolling out a new image before running pnpm db:migrate. The new code may reference columns that don’t exist yet — boot crashes with Zod or Drizzle errors. Always migrate first, then deploy.
  • Mounting .env into /app/.env at runtime. Next.js’s standalone server doesn’t read .env files at runtime. Pass env via -e or --env-file to docker run, or use your platform’s secret store.
  • Building on Apple Silicon for amd64 prod without buildx. A plain docker build on M-series produces an arm64 image that won’t start on x86 Linux. Use the multi-platform recipe above.

Official docs