Docker for Devs:
End-to-End Practical Guide


Before I learned Docker, building full-stack apps locally was a painful process. If an app depended on services like PostgreSQL or Redis, I usually had to connect to a remote database using platforms like PlanetScale or Neon. That works, but every query goes over the internet, which causes noticeable delays and makes local development feel slow.

Things got even worse when trying to run open-source projects or other people’s apps. Every project had different dependencies, setup steps, and OS-specific issues. What worked on macOS didn’t always work on Windows or Linux, and as projects grew, managing all of this locally became exhausting.

This is where Docker changed things for me. But before we dive in, let’s quickly understand a few core concepts.

This post assumes you already have Docker installed on your system. If not, you can follow the official installation guide for your operating system on Docker’s website.

1. Docker Engine

Docker Engine is the core part of Docker that actually builds and runs containers. At a high level, it consists of the Docker daemon (dockerd), which does the heavy lifting, and the Docker CLI, which you use to talk to it. Under the hood, the CLI communicates with the daemon through a REST API to manage images, containers, networks, and volumes.
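A quick way to see this client/daemon split in practice is to ask Docker for its version and system information. The exact output depends on your installation, but docker version prints separate Client and Server sections, where the Server section describes the daemon.

docker version   # prints separate Client and Server (Engine) sections
docker info      # shows daemon-level details such as the storage driver and container counts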

2. Docker CLI

The Docker CLI is the command-line tool you use to interact with Docker Engine. It’s how you tell Docker what to do — build an image, run a container, or list what’s currently running. Commands like docker build, docker run, and docker ps are all part of this interface.

3. Docker Registry

A Docker registry is where Docker images live. It’s a place to store images and pull them when you need to run a container. Docker Hub is the default public registry, but you can also host private registries for your own images.

💡 You can think of Docker Hub as GitHub for Docker images.
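For example, pulling a public image downloads it from Docker Hub to your machine; node:20-alpine is the same image we use later in this post.

# Pull an image from the default registry (Docker Hub)
docker pull node:20-alpine

# List the images you now have locally
docker image ls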

4. Docker Image

A Docker image is a blueprint for a container. It contains everything needed to run an app — code, dependencies, libraries, and configuration. Images are built from a Dockerfile and organized in layers, which makes them reusable and efficient.

5. Docker Container

A Docker container is a running instance of an image. If an image is the blueprint, the container is the actual thing running on your machine. It runs as an isolated process and can be started, stopped, or removed using Docker commands.

💡 Beginners often confuse images and containers. Remember: you can create multiple containers from the same image. A helpful mental model is to think of an image as a GitHub repository and a container as a clone of that repository.
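To make that concrete, here’s a small illustrative example using the public nginx image (chosen only because it needs no configuration): two containers started from the same image, each an independent process with its own name and port mapping.

# Two independent containers created from the same image
docker run -d --name web1 -p 8080:80 nginx
docker run -d --name web2 -p 8081:80 nginx

# Both show up as separate running containers
docker ps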

6. Dockerfile

A Dockerfile is a recipe for building a Docker image. It’s a simple text file where you tell Docker which base image to use, which files to copy, which commands to run, and how the container should start.

Now that we’ve covered the core concepts, let’s move to a practical example and containerize a simple Next.js application. Create a Dockerfile in the root of your project with the following content.

FROM node:20-alpine

WORKDIR /app

COPY package.json ./
RUN npm install

COPY . .
RUN npm run build

EXPOSE 3000

CMD ["npm", "start"]

Let’s quickly walk through what each instruction in this Dockerfile does:

  • FROM node:20-alpine - Sets the base image. We’re using Node 20 on Alpine Linux, which helps keep the image small.

  • WORKDIR /app - Sets /app as the working directory inside the container.

  • COPY and RUN - Used to copy files into the image and run commands like installing dependencies or building the app.

  • EXPOSE 3000 - Documents that the app listens on port 3000. It doesn’t publish the port by itself; that’s what the -p flag does when you run the container.

  • CMD ["npm", "start"] - Tells Docker how to start the app when the container runs.

When I first wrote my Dockerfile, it looked very similar to this one. But when I thought it through from first principles, a few things confused me. If this is your first time writing a Dockerfile, you might be wondering about the same things:

  1. We’re using Node 20 — but what is alpine?

    • Alpine is a lightweight Linux distribution used as the base for the image.
    • Using a smaller base image helps reduce the overall image size.
  2. Why do we copy package.json before copying the rest of the files?

    • Dependencies usually change less often than application code.
    • By copying package.json first and installing dependencies early, Docker can reuse cached layers when only the code changes.
    • This is mainly an optimization — it’ll make more sense when we talk about Docker layers.
  3. What’s the difference between RUN and CMD?

    • RUN executes commands while building the image and saves the result into the image.
    • CMD defines the default command that runs when a container starts from the image. It’s only a default and can be overridden at runtime, as shown in the example below.
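To see that CMD is only a default, you can override it when starting a container. A quick sketch, using the same <image_name> placeholder as the build command later in this post:

# Uses the default CMD from the Dockerfile (npm start)
docker run -p 3000:3000 <image_name>

# Overrides CMD for this one container (prints the Node version and exits)
docker run <image_name> node --version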

Also, don’t forget to add a .dockerignore file in the root of your project. This tells Docker which files and directories it should ignore when building the image, helping keep the image smaller and the build faster.

Here’s a simple example:

node_modules
.next
.git
.gitignore
.env*

Now let’s build the image:

docker build -t <image_name> .

Once the build is complete, run the container:

docker run -p 3000:3000 <image_name>

You should now be able to access the application at http://localhost:3000.
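If the page doesn’t load, check from another terminal whether the container is actually running and what it printed on startup:

# List running containers (note the CONTAINER ID)
docker ps

# Follow a container’s logs to see the Next.js startup output
docker logs -f <container_id>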

At this point, you’ve successfully containerized and run a Next.js application using Docker.

Now that you’ve run the app locally, let’s see how you can push your image to a Docker registry so others can use it as well.

Pushing an Image to Docker Hub

  1. Go to Docker Hub and create an account.
  2. Create a new repository from the dashboard.
  3. Log in to Docker from your terminal: docker login
  4. Tag your local image with your Docker Hub username and repository name:
    docker tag <local_image_name>:<tagname> <docker_hub_username>/<repository_name>:<tagname>
    
    Example:
    docker tag next-app:latest aay7ush/next-app:latest
    
  5. Push the image to Docker Hub:
    docker push <docker_hub_username>/<repository_name>:<tagname>
    
    Example:
    docker push aay7ush/next-app:latest
    

Once the image is pushed, anyone can pull and run it from anywhere using:

docker pull <docker_hub_username>/<repository_name>:<tagname>
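Using the example image from above (swap in your own username and repository), that looks like:

docker pull aay7ush/next-app:latest
docker run -p 3000:3000 aay7ush/next-app:latest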

You can also use Docker to run auxiliary services like PostgreSQL and Redis locally. This is especially useful during development, because you don’t need to install or configure these services on your machine — you just run a container.

To run PostgreSQL locally:

docker run -e POSTGRES_PASSWORD=<password> -p 5432:5432 -d postgres

This starts a PostgreSQL container, maps port 5432 to your local machine, and runs it in the background.

To run Redis:

docker run -p 6379:6379 -d redis

With this, Redis is now running locally and ready to be used by your application.

Before moving forward, let’s quickly break down the flags used in the commands above:

  • -e - Sets environment variables inside the container.
  • -p - Maps a port on your machine to a port inside the container.
  • -d - Runs the container in detached mode, so it keeps running in the background.
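As a quick sanity check, you can give the Redis container a name and ping it with the redis-cli bundled inside the image. This is optional and just confirms the server is reachable (stop the earlier Redis container first if it’s still using port 6379):

docker run --name redis -p 6379:6379 -d redis

# redis-cli ships with the redis image; PONG means the server is up
docker exec -it redis redis-cli ping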

Now that the basics are clear, let’s look at how we can further optimize our Dockerfile. But before doing that, we need to understand one of the most important Docker concepts: Docker layers.

Docker Layers

Docker layers are the foundation of Docker images. Every instruction in a Dockerfile creates a new layer, and these layers are stacked on top of each other to form the final image.

So why does this matter?

Docker optimizes builds by caching these layers. If you make a change to your Dockerfile or application code, Docker doesn’t rebuild the entire image from scratch. Instead, it only rebuilds the layers that have changed and reuses the cached ones for everything else.

If we take the Dockerfile we just wrote as an example, each instruction creates its own layer.

[Image: the layers created by each instruction in the Dockerfile]

Now, imagine you only change some application code. Docker will rebuild only the layers affected by that change, while the remaining layers are reused from cache.

[Image: cached layers being reused when only the application code changes]

Because of this caching mechanism, Docker builds become much faster, and you avoid unnecessary work. You can also see the impact in the final image size and disk usage.

[Image: the resulting image size and disk usage]

This layer-based caching is the key reason why the order of instructions in a Dockerfile matters so much — and it’s what allows us to optimize our Dockerfile further.
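You can also inspect these layers yourself with docker history, which lists every layer of an image along with the instruction that created it and its size. It’s a handy way to see where the bulk of an image comes from.

# Show the layers of the image we built earlier, with sizes and creating instructions
docker history <image_name>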

Now let’s look at how we can further optimize our Dockerfile.

We’ve already taken a good first step by using a lightweight base image (node:20-alpine) and copying package.json before the rest of the code to take advantage of Docker’s layer caching.

To push this optimization further, we need to understand multi-stage builds and how to use them in a Dockerfile.

Multi-Stage Builds

A multi-stage build lets you use multiple stages inside a single Dockerfile, each with a specific purpose. In simple terms, you build your app in one stage and run it in another — keeping only what’s actually needed in the final image.

Instead of shipping build tools, dependencies, and temporary files, you copy only the required output from earlier stages into a smaller runtime image.

Here’s what makes multi-stage builds powerful:

  • Multiple stages in one Dockerfile - Each FROM instruction starts a new stage, which can be named using AS <name>.

  • Selective copying between stages - You can copy only what you need from a previous stage using COPY --from=<stage>, leaving behind build tools and unnecessary files.

  • Smaller and more secure images - Build dependencies stay in the build stages. The final image contains only what’s needed to run the app, reducing both image size and security risks.

  • Better caching and build performance - Different stages can be cached independently, which helps speed up rebuilds when only part of the Dockerfile changes.

Here is the Dockerfile example with multi-stage build:

FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm install

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
RUN npm run build

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json

EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["npm", "start"]

This Dockerfile uses three stages, each with a clear responsibility.

  1. deps stage — install dependencies (FROM node:20-alpine AS deps)

    This stage is responsible only for installing dependencies. We copy package.json and package-lock.json and run npm install. Since dependencies change less frequently than application code, this stage benefits heavily from Docker’s layer caching.

  2. builder stage — build the application (FROM node:20-alpine AS builder)

    In this stage, we bring in the installed dependencies from the deps stage, copy the application code, and build the Next.js app. Environment variables like NODE_ENV and NEXT_TELEMETRY_DISABLED are set here to create optimized builds.

  3. runner stage — run the app (FROM node:20-alpine AS runner)

    The final stage is the runtime image. Instead of copying everything from earlier stages, we copy only what’s needed to run the app — the built .next output, static assets, dependencies, and package.json.

This keeps the final image smaller, cleaner, and more secure, since build tools and unnecessary files never make it into the runtime container.

Finally, we expose port 3000 and define how the app should start using CMD.

Now you can see that the final image produced by the multi-stage build is smaller and more secure compared to a single-stage build.

[Image: image size after the multi-stage build]
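Two commands are useful here: docker images, to compare the sizes of the tags you’ve built, and docker build --target, to build only up to a named stage (for example, stopping at the builder stage while debugging). The tag names below are just examples.

# Compare image sizes across the tags you’ve built
docker images

# Build only up to the builder stage of the multi-stage Dockerfile
docker build --target builder -t next-app:builder .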

This optimization happens at the Docker level. But if you want to reduce the image size even further for a Next.js app, you can take advantage of Next.js standalone output, which produces the smallest possible runtime bundle.

To enable standalone output, add the following configuration to your next.config.ts:

import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "standalone",
};

export default nextConfig;

This tells Next.js to generate a minimal production build that includes only the files required to run the app.

Here’s the final Dockerfile using multi-stage builds and standalone output:

FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts && \
  npm cache clean --force

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
ENV NEXT_TELEMETRY_DISABLED=1
ENV NODE_ENV=production
RUN npm run build

FROM node:20-alpine AS runner
WORKDIR /app
RUN addgroup --system --gid 1001 nodejs && \
  adduser --system --uid 1001 nextjs
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["node", "server.js"]

Here, we also run the application as a non-root user, which improves container security by limiting permissions inside the container.

Now you can see the final image size after all optimizations.

[Image: final image size after all optimizations]

If you’re working in a monorepo, especially one managed with Turborepo, you can apply additional optimizations. I didn’t cover that here since this example doesn’t use Turborepo, but you can read more about it in the official documentation.

If you want to see a real-world example, you can check out the Dockerfile from one of my open-source projects, where I’ve Dockerized a Next.js app inside a Turborepo.

Docker Networks

So far, we’ve been running containers individually. But in real applications, containers usually need to talk to each other — for example, a backend app talking to PostgreSQL or Redis.

This is where Docker networks come into the picture.

A Docker network allows containers to discover and communicate with each other using container names instead of IP addresses.

Let’s create a custom bridge network:

docker network create app-network

This creates an isolated network where containers can talk to each other.
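You can confirm the network exists and, once containers join it, see who is attached:

# List networks (app-network should appear with the bridge driver)
docker network ls

# Show details, including the containers currently attached (empty for now)
docker network inspect app-network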

Now let’s run PostgreSQL inside this network:

docker run \
  --name postgres \
  --network app-network \
  -e POSTGRES_PASSWORD=iamsuperhandsome \
  -p 5432:5432 \
  -d postgres

Here’s what’s happening:

  • The container is attached to app-network
  • Other containers on the same network can reach it using the name postgres
  • Port 5432 is still published to your local machine for debugging

Now run your app container on the same network:

docker run \
  --network app-network \
  -e DATABASE_URL=postgresql://postgres:iamsuperhandsome@postgres:5432/postgres \
  -p 3000:3000 \
  next-app

Notice the hostname in DATABASE_URL is postgres.

That’s not localhost — it’s the container name, and Docker resolves it automatically through the network.

This is one of Docker’s biggest advantages: no hardcoded IPs, no manual configuration.

Docker Volumes

By default, containers are ephemeral. If you remove a container, all the data written inside it is gone.

That’s fine for stateless apps — but databases need persistence.
This is where Docker volumes come into the picture.

A volume lets Docker store data outside the container, so it survives restarts.

Let’s create a named volume:

docker volume create postgres-data

Now run PostgreSQL using that volume (if the postgres container from the networking example is still running, remove it first with docker rm -f postgres):

docker run \
  --name postgres \
  --network app-network \
  -e POSTGRES_PASSWORD=iamsuperhandsome \
  -v postgres-data:/var/lib/postgresql/data \
  -d postgres

What this does:

  • PostgreSQL stores its data in /var/lib/postgresql/data
  • Docker maps that directory to the postgres-data volume
  • Even if the container is removed, the data remains

This is how you safely run stateful services with Docker.
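To convince yourself the data really survives, remove the container and start a fresh one against the same volume; any databases or tables you created will still be there.

# Remove the container entirely (data lives in the volume, not the container)
docker rm -f postgres

# Start a brand new container that reuses the same volume
docker run \
  --name postgres \
  --network app-network \
  -e POSTGRES_PASSWORD=iamsuperhandsome \
  -v postgres-data:/var/lib/postgresql/data \
  -d postgres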

At this point, you might be thinking:

  • Too many docker run commands
  • Easy to forget flags
  • Hard to reproduce the setup
  • Not very readable

That’s normal.

This pain is exactly why Docker Compose exists.

Docker Compose

Docker Compose lets you define multiple containers, networks, and volumes in a single file, and start everything with one command.

Instead of remembering long docker run commands, you describe your setup declaratively.

Create a docker-compose.yml file:

services:
  app:
    image: next-app
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://postgres:iamsuperhandsome@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: iamsuperhandsome
      POSTGRES_DB: myapp
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  postgres-data:

Now start everything with this single command: docker compose up

That’s it.

  • App
  • Database
  • Network
  • Volume

All created automatically.

To stop everything, run this command: docker compose down
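A few Compose variations you’ll likely reach for during development, all standard flags:

# Rebuild the app image and run everything in the background
docker compose up -d --build

# Follow logs from all services
docker compose logs -f

# Stop everything and also remove named volumes (this wipes the database)
docker compose down -v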

Why Docker Compose Is a Game Changer:

  • Removes command complexity
  • Makes setups reproducible
  • Works the same on every machine
  • Is perfect for local development

This is why most real-world Docker setups use Compose for development and CI.

Final Thoughts

Docker completely changed how I think about building and running applications. What used to be a messy setup of local installs, environment mismatches, and “works on my machine” issues turned into something predictable and repeatable.

Once you understand how images, containers, layers, networks, volumes, and Docker Compose fit together, Docker stops feeling like magic and starts feeling like a tool you can use in your daily dev workflow. You’re no longer just running commands — you’re designing an environment.

You don’t need to memorize every flag or optimization on day one. What matters is understanding the mental model. Start simple, add complexity only when you need it, and let Docker handle the boring parts of infrastructure.

If you can build an app, containerize it, run supporting services, and bring everything up with Docker Compose, you already have a solid, real-world Docker foundation. From here, scaling, deploying, and orchestrating containers becomes a much smaller step.