From Code to Container: A Production Docker Guide
Containers are only as good as the build strategy behind them. This guide covers the multi-stage setup we use to keep production images lean, the Compose orchestration for local development, and the configuration mistakes that bloat images and slow pipelines.
Containerizing a modern full-stack application (frontend + backend + database) is a very different job from containerizing a single service. When you combine tools like Turborepo (for managing the monorepo), NestJS (for the API), and Python (for the AI services), it gets confusing fast. This guide takes you from the absolute basics of Docker to a real-world setup that actually works.
1. What is Docker?
Think of Docker as a "shipping container" for your code. It packages your app with everything it needs to run—libraries, settings, tools—so it works exactly the same way on your laptop as it does on a server.
Why use it?
- Consistency: No more "It works on my machine" excuses. If it runs in Docker, it runs everywhere.
- Isolation: You can run a Python app and a Node.js app side-by-side without them messing up each other's settings.
- Simplicity: Onboarding a new developer takes minutes, not days. Just run docker-compose up.
2. Containers and Dockerfiles
A Container is just the running version of your "shipping container." It's a lightweight box where your app lives.
A Dockerfile is the instruction manual. It tells Docker exactly how to build that box. It looks like this:
# Use Node.js version 18 as the starting point
FROM node:18
# Create a folder called /app inside the container
WORKDIR /app
# Copy all my project files into that folder
COPY . .
# Install the libraries I need
RUN npm install
# Start the app!
CMD ["node", "index.js"]
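With that Dockerfile saved at the root of your project, building and running the container takes two commands. The image name my-app and port 3000 here are placeholders; substitute whatever your app actually uses:

```shell
# Build an image named "my-app" from the Dockerfile in the current directory
docker build -t my-app .

# Start a container, mapping port 3000 on your machine to port 3000 inside it
docker run -p 3000:3000 my-app
```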
3. Multi-Stage Builds
When you build a house, you need cranes, scaffolding, and cement mixers. But when the house is finished, you take all that heavy equipment away. You don't leave a crane in the living room!
Traditional Docker builds often leave the "cranes" (compilers, build tools) inside the final image, making it huge and slow. Multi-Stage Builds let us throw away the tools after we're done building.
- Stage 1 (The Filter): We pick out only the files our specific app needs (like ingredients for one recipe).
- Stage 2 (The Installer): We download all the libraries and tools we need (getting the pots and pans).
- Stage 3 (The Builder): We compile the code (cooking the meal). This uses heavy tools and makes a mess.
- Stage 4 (The Runner): We serve the finished meal on a clean plate. We leave the dirty pots and pans (build tools) behind.
This makes your app both faster and more secure. Because the final image is smaller, it downloads faster, starts faster, and ships fewer tools an attacker could abuse.
Example Impact:
- Traditional Build: ~1.2GB
- Multi-Stage Build: ~180MB
- Savings: 85% smaller
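Here is a minimal sketch of the idea for a hypothetical TypeScript app, before we get to the real 4-stage version below (file paths and script names are illustrative):

```dockerfile
# --- Build stage: has the "cranes" (compiler, dev dependencies) ---
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build            # e.g. tsc compiling into dist/

# --- Run stage: just the finished "house" ---
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev   # production dependencies only, no compilers
COPY --from=build /app/dist ./dist
CMD ["node", "dist/main.js"]
```

Everything installed in the build stage simply never makes it into the final image; only what you explicitly COPY across does.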
4. Real World Example: The NestJS Service
Let's look at one of our Node.js services, the auth service. It uses Prisma to talk to the database, and we want to build it as efficiently as possible.
The Challenge: Shared Code
In a big project (a monorepo), your API might use code from a shared folder. You can't just copy the API folder; you need the shared packages too. But if you copy everything, any change anywhere in the repo invalidates Docker's layer cache and triggers a full rebuild, which is slow.
The Solution: Turbo Prune
We use a tool called turbo prune. It's like a smart filter—it picks out only the files your specific API needs and ignores the rest.
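Run against the monorepo, it produces an out/ directory with two views of the project (this layout comes from Turborepo's --docker flag; note that newer Turborepo versions take the package name positionally, as turbo prune auth-service --docker):

```shell
turbo prune --scope=auth-service --docker

# out/
# ├── json/   <- just the package.json files + lockfile
# │              (copy these first, so "npm install" stays cached)
# └── full/   <- the pruned source for auth-service and every
#                internal package it depends on
```

Copying out/json and out/full as separate Docker layers is what makes the Installer stage below cache-friendly.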
The NestJS Dockerfile (Explained)
# Use Alpine Linux for a much smaller base image (~180MB vs ~1GB for full node:18)
FROM node:18-alpine AS base
# === STAGE 1/4: Pruner ===
# Goal: Filter the project to just what we need
FROM base AS pruner
RUN apk add --no-cache libc6-compat
WORKDIR /app
RUN npm install -g turbo
COPY . .
# "Prune" the project to include only 'auth-service' stuff
RUN turbo prune --scope=auth-service --docker
# === STAGE 2/4: Installer ===
# Goal: Install libraries and prepare the Database tool (Prisma)
FROM base AS installer
RUN apk add --no-cache libc6-compat
WORKDIR /app
# Copy the filtered files from Pruner stage
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/turbo.json ./turbo.json
COPY --from=pruner /app/apps/auth-service/prisma ./prisma
# Install dependencies (only package.json files exist at this point,
# so this layer stays cached until the dependencies actually change)
RUN npm install
# Build the Prisma Client (Database helper)
RUN npx prisma generate
# === STAGE 3/4: Builder ===
# Goal: Compile the TypeScript code into JavaScript
FROM base AS builder
WORKDIR /app
COPY --from=installer /app/ .
COPY --from=pruner /app/out/full/ .
RUN npx turbo run build --filter=auth-service...
# === STAGE 4/4: Runner ===
# Goal: The final, tiny production image
FROM base AS runner
WORKDIR /app
# Prisma's engines need OpenSSL at runtime
RUN apk add --no-cache openssl
# Security: Don't run as "root" (Administrator), run as a restricted user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nestjs
USER nestjs
# Copy the built app (and its node_modules) from the builder stage
COPY --from=builder --chown=nestjs:nestjs /app/ .
EXPOSE 3000
CMD [ "node", "apps/auth-service/dist/main.js" ]
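One thing the COPY . . in the Pruner stage makes essential: a .dockerignore file at the repo root, so local junk never enters the build context in the first place. A typical minimal version (entries are the usual suspects, adjust for your repo):

```
node_modules
**/node_modules
**/dist
.git
.env
```

Without it, a multi-gigabyte local node_modules gets shipped to the Docker daemon on every build and busts the cache for no reason.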
5. Real World Example: The Python Service
Our AI Service is written in Python. Python apps often need heavy system tools (like C compilers) to install libraries, but we don't want those tools in our final app.
The Python Dockerfile (4-Stage Pipeline)
We use the same 4-stage logic here: Base -> Installer -> Builder -> Runner.
# === STAGE 1/4: Base ===
# Use Python 3.11 as our common starting point
FROM python:3.11-slim AS base
# Install heavy tools (gcc) needed for some libraries
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
# === STAGE 2/4: Installer ===
# Goal: Download and install Python libraries
FROM base AS installer
WORKDIR /app
COPY apps/ai-service/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# === STAGE 3/4: Builder ===
# Goal: Prepare the final files
FROM base AS builder
WORKDIR /app
COPY . .
# Copy installed libraries from Installer
COPY --from=installer /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=installer /usr/local/bin /usr/local/bin
WORKDIR /app/apps/ai-service
# === STAGE 4/4: Runner ===
# Goal: The clean, final image
# Start from plain slim Python so the gcc/build tools baked into "base" stay out
FROM python:3.11-slim AS runner
# Keep only the Postgres runtime library (not the -dev headers)
RUN apt-get update && apt-get install -y --no-install-recommends libpq5 \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy everything from Builder
COPY --from=builder /app/ .
# The Python libraries live under /usr/local, outside /app, so they must be copied from the installer stage too
COPY --from=installer /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=installer /usr/local/bin /usr/local/bin
RUN chmod +x apps/ai-service/start_script.sh
WORKDIR /app/apps/ai-service
EXPOSE 8000
# Run the startup script
CMD ["./start_script.sh"]
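The start_script.sh itself isn't shown above. A typical startup script for a service like this might look like the following; this is entirely illustrative (the migration tool, module name, and options are assumptions, not the actual script):

```shell
#!/bin/sh
set -e  # stop on the first error

# Run pending database migrations before serving traffic (tool is illustrative)
alembic upgrade head

# Start the API server on the port the Dockerfile EXPOSEs;
# "exec" replaces the shell so the server receives shutdown signals directly
exec uvicorn main:app --host 0.0.0.0 --port 8000
```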
6. Running It All (Orchestration)
To run these containers together, we use Docker Compose. We split the Compose configuration into layers so we can handle our 8+ microservices (Auth, Payments, AI, Analytics, etc.) without repeating ourselves.
The Base Layer (docker-compose.yml)
This defines the "Architecture": which services exist and how they connect. We remove all the "Development" noise (like volume mounts) from here.
version: '3.8'

services:
  # The Core API Gateway
  api:
    build:
      context: .
      dockerfile: ./apps/api/Dockerfile
    ports:
      - "3001:3001"
    depends_on:
      - rabbitmq

  # Microservices
  auth-service:
    build:
      context: .
      dockerfile: ./apps/auth-service/Dockerfile
    depends_on:
      - database
      - rabbitmq

  ai-service:
    build:
      context: .
      dockerfile: ./apps/ai-service/Dockerfile
    ports:
      - "8000:8000"

  payment-service:
    build:
      context: .
      dockerfile: ./apps/payment/Dockerfile
    depends_on:
      - database

  analytics-service:
    build:
      context: .
      dockerfile: ./apps/analytics/Dockerfile

  # Infrastructure
  database:
    image: postgres:15
    environment:
      # POSTGRES_DB only accepts a single name; the other databases
      # (analytics, payment, ai) are created by an init script
      # mounted into /docker-entrypoint-initdb.d
      - POSTGRES_DB=auth
      - POSTGRES_PASSWORD=postgres  # dev-only default; use secrets in production
    volumes:
      - database_volume:/var/lib/postgresql/data

  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
      - "15672:15672"

volumes:
  database_volume:
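The "Development noise" then lives in a second layer, docker-compose.override.yml, which Docker Compose merges over the base file automatically when you run docker-compose up. Service names must match the base file; the mount paths and variables below are illustrative:

```yaml
# docker-compose.override.yml -- development-only additions
services:
  api:
    volumes:
      - ./apps/api/src:/app/apps/api/src   # live-reload source mount
    environment:
      - NODE_ENV=development
  database:
    ports:
      - "5432:5432"   # expose Postgres locally for psql/GUI clients
```

In CI or production you run docker-compose -f docker-compose.yml up so the override layer is skipped entirely.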
Conclusion
Building production-ready Docker images isn't just about getting your app to run in a container—it's about doing it right. Throughout this guide, we've moved from the fundamentals of what Docker is to implementing sophisticated multi-stage builds for both Node.js and Python applications in a monorepo architecture.
The patterns we've covered aren't theoretical exercises. They're battle-tested solutions that solve real problems:
- Multi-stage builds transformed our images from bloated 1.2GB artifacts to lean 180MB containers—85% smaller, faster to deploy, and more secure.
- Turborepo's prune solved the monorepo cache invalidation nightmare, letting us build only what changed.
- Proper stage separation (Pruner → Installer → Builder → Runner) gave us clarity and maintainability as our architecture scaled to 8+ microservices.
The real power of Docker emerges when you combine these techniques. Your NestJS API, Python AI service, PostgreSQL database, and RabbitMQ message broker all orchestrated together, each in its own optimized container, communicating seamlessly across a shared network. That's the polyglot microservices dream realized.
Start with one service. Apply multi-stage builds. Add another. Before you know it, you'll have a production-grade infrastructure that's reproducible, scalable, and maintainable. The investment in learning these patterns pays dividends every single deployment.
Written by

Technical Lead and Full Stack Engineer leading a 5-engineer team at Fygurs (Paris, Remote) on Azure cloud-native SaaS. Graduate of 1337 Coding School (42 Network / UM6P). Writes about architecture, cloud infrastructure, and engineering leadership.