Docker & Containers

Master containerization from basic concepts to production-grade multi-stage builds, orchestration with Compose, and container security.

What is Docker?

Beginner

Docker is an open platform for developing, shipping, and running applications inside lightweight, portable containers. Containers package an application with all its dependencies, ensuring it runs consistently across any environment.

Key Concept

A container is an isolated process that shares the host OS kernel but has its own filesystem, networking, and process space. Unlike VMs, containers don't need a full guest OS, making them extremely lightweight.
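
This is easy to observe directly, assuming Docker is installed and the daemon is running: a ps run inside a container sees only the container's own processes, even though every one of them is an ordinary process on the host kernel.

```shell
# Inside the container, only the container's own process tree is
# visible -- typically just PID 1 and the ps command itself
docker run --rm alpine ps aux

# From the host, that same containerized process shows up as a
# normal host process (shared kernel, separate namespaces)
docker run -d --name demo alpine sleep 60
ps aux | grep 'sleep 60'
docker rm -f demo
```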

Containers vs Virtual Machines

Feature       | Containers         | Virtual Machines
Startup Time  | Seconds            | Minutes
Size          | MBs                | GBs
OS            | Shares host kernel | Full guest OS
Isolation     | Process-level      | Hardware-level
Performance   | Near-native        | Overhead from hypervisor

Docker Architecture

Docker uses a client-server architecture with three main parts:

  1. Docker client — the docker CLI that sends commands to the daemon over a REST API
  2. Docker daemon (dockerd) — builds images and creates, runs, and manages containers
  3. Registries — store and distribute images (e.g., Docker Hub)

Installation

Beginner

Install Docker on Ubuntu

# Update package index
sudo apt update

# Install prerequisites
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release

# Add Docker's GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add Docker repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list

# Install Docker Engine, Buildx, and the Compose plugin
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add your user to the docker group (log out and back in, or run
# `newgrp docker`, for this to take effect)
sudo usermod -aG docker $USER

# Verify installation
docker --version
docker run hello-world

Basic Commands

Beginner

# Pull an image from Docker Hub
docker pull nginx:latest

# Run a container
docker run -d --name webserver -p 8080:80 nginx:latest

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View container logs
docker logs webserver

# Execute a command inside a running container
docker exec -it webserver /bin/bash

# Stop and remove a container
docker stop webserver
docker rm webserver

# List images
docker images

# Remove an image
docker rmi nginx:latest

# System cleanup
docker system prune -a

Images vs Containers

Beginner

An image is a read-only template with instructions for creating a container. A container is a runnable instance of an image. You can create many containers from the same image.

Think of it this way

An image is like a class in OOP, and a container is like an object (instance) of that class. The image is the blueprint; the container is the running thing.
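
The analogy shows up directly on the command line. Assuming the nginx image from earlier, one image can back any number of independent containers, each with its own name, port mapping, and writable layer:

```shell
# One image (the "class")...
docker pull nginx:latest

# ...two containers (the "objects"), fully independent of each other
docker run -d --name web1 -p 8081:80 nginx:latest
docker run -d --name web2 -p 8082:80 nginx:latest

# Both report the same underlying image
docker ps --format '{{.Names}} -> {{.Image}}'

# Clean up
docker rm -f web1 web2
```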

Dockerfile Basics

Beginner

A Dockerfile is a text file with instructions to build a Docker image.

# Use official Node.js as base image
FROM node:20-alpine

# Set working directory
WORKDIR /app

# Copy dependency files first (for caching)
COPY package*.json ./

# Install dependencies
RUN npm ci --omit=dev

# Copy application source
COPY . .

# Expose port
EXPOSE 3000

# Define the command to run
CMD ["node", "server.js"]

Build and run the image:

# Build the image
docker build -t myapp:v1 .

# Run a container from the image
docker run -d -p 3000:3000 --name myapp myapp:v1

Common Dockerfile Instructions

Instruction  | Purpose
FROM         | Base image to build upon
WORKDIR      | Set working directory inside container
COPY         | Copy files from host to container
RUN          | Execute command during build
EXPOSE       | Document which port the app uses
ENV          | Set environment variables
CMD          | Default command when container starts
ENTRYPOINT   | Configure container as an executable
HEALTHCHECK  | Check container health
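
CMD and ENTRYPOINT are the pair that most often causes confusion: ENTRYPOINT fixes the executable, while CMD supplies default arguments that docker run can override. A minimal sketch (the ping image is purely illustrative):

```dockerfile
FROM alpine:3.19

# ENTRYPOINT is the fixed executable; CMD holds default arguments
ENTRYPOINT ["ping", "-c", "3"]
CMD ["localhost"]
```

Built as myping, docker run myping executes ping -c 3 localhost, while docker run myping example.com overrides only the CMD part. Replacing the ENTRYPOINT itself requires the --entrypoint flag.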

Docker Networking

Intermediate

Docker provides several network drivers to control how containers communicate.

Network Types

# Create a custom network
docker network create mynetwork

# Run containers on the same network
docker run -d --name api --network mynetwork myapi:v1
docker run -d --name db --network mynetwork postgres:15

# Now 'api' can reach 'db' by hostname
# Example: postgres://db:5432/mydb

# List networks
docker network ls

# Inspect a network
docker network inspect mynetwork

Volumes & Storage

Intermediate

Volumes persist data beyond the container lifecycle.

# Create a named volume
docker volume create pgdata

# Use volume with a container
docker run -d \
  --name postgres \
  -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret \
  postgres:15

# Bind mount (map host directory)
docker run -d \
  --name devserver \
  -v $(pwd)/src:/app/src \
  -p 3000:3000 \
  myapp:dev

# List volumes
docker volume ls

# Inspect a volume
docker volume inspect pgdata

Docker Compose

Intermediate

Docker Compose defines and runs multi-container applications using a YAML file.

# docker-compose.yml
# (modern Compose v2 treats the top-level version key as obsolete
#  and ignores it; it is kept here only for older tooling)
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
      - REDIS_URL=redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    volumes:
      - ./src:/app/src
    restart: unless-stopped

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 10s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:

# Start all services
docker compose up -d

# View logs
docker compose logs -f app

# Stop all services
docker compose down

# Stop and remove volumes
docker compose down -v

Multi-Stage Builds

Intermediate

Multi-stage builds dramatically reduce final image size by separating the build environment from the runtime environment.

# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:20-alpine AS production
RUN addgroup -g 1001 -S appgroup && \
    adduser -S appuser -u 1001
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -q --spider http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]

Pro Tip

Multi-stage builds can reduce image sizes by 90%+. A typical Node.js app image goes from ~1GB to under 100MB.
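
You can check the effect yourself. Assuming the single-stage and multi-stage variants were tagged myapp:fat and myapp:v1 (tag names are illustrative):

```shell
# Compare final image sizes side by side
docker images --format 'table {{.Repository}}:{{.Tag}}\t{{.Size}}'

# See which layers contribute the bulk, layer by layer
docker history myapp:v1
```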

Container Security

Advanced

Security is critical for production containers. Follow these best practices:

Security Best Practices

  1. Don't run as root — Always create and use a non-root user
  2. Use minimal base images — Prefer alpine or distroless images
  3. Scan for vulnerabilities — Use docker scout, Trivy, or Snyk
  4. Pin image versions — Never use :latest in production
  5. Use multi-stage builds — Don't ship build tools in production
  6. Sign your images — Use Docker Content Trust
  7. Limit capabilities — Drop all capabilities and add only what's needed

# Scan an image for vulnerabilities
docker scout cves myapp:v1

# Run with limited capabilities
docker run -d \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --read-only \
  --security-opt=no-new-privileges \
  --tmpfs /tmp \
  myapp:v1

# Use distroless base image
FROM gcr.io/distroless/nodejs20-debian12
COPY --from=builder /app/dist /app
WORKDIR /app
CMD ["server.js"]

Image Optimization

Advanced

Layer Caching Strategy

Docker caches each layer. Order your Dockerfile instructions from least to most frequently changed:

# Good: Dependencies change less often than source code
COPY package*.json ./
RUN npm ci
COPY . .

# Bad: Invalidates npm cache on every code change
COPY . .
RUN npm ci

.dockerignore

Always include a .dockerignore file to exclude unnecessary files:

node_modules
.git
.env
*.md
docker-compose*.yml
.github
coverage
.nyc_output
tests

Private Registries

Advanced

# Log in to AWS ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com

# Tag and push image
docker tag myapp:v1 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:v1

# Pull from private registry
docker pull 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:v1

Production Best Practices

Advanced

Production Checklist

Before deploying containers to production, ensure you've covered all items in this checklist.

  1. Health checks — Always define HEALTHCHECK in your Dockerfile
  2. Resource limits — Set memory and CPU limits with --memory and --cpus
  3. Logging — Use a logging driver (json-file, fluentd, awslogs)
  4. Restart policies — Use --restart unless-stopped or always
  5. Read-only filesystem — Mount the root filesystem as read-only
  6. No hardcoded secrets — Use Docker secrets or environment variable injection
  7. Image scanning — Scan images in your CI/CD pipeline before deployment
  8. Graceful shutdown — Handle SIGTERM in your application
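
The last item is worth spelling out: docker stop sends SIGTERM and, after a grace period (10 seconds by default), SIGKILL. Because the entrypoint runs as PID 1 inside the container, unhandled signals are ignored rather than taking their default action, so an explicit handler (or an init shim such as tini or docker run --init) is what makes docker stop prompt. The mechanism can be sketched with a plain shell entrypoint, no Docker required (paths are illustrative):

```shell
# Write a SIGTERM-aware entrypoint script; `docker stop` would send
# it the same TERM signal we simulate below
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
cleanup() {
  echo "caught SIGTERM, shutting down gracefully"
  exit 0
}
trap cleanup TERM INT
echo "app started"
while :; do sleep 1; done
EOF
chmod +x /tmp/entrypoint.sh

# Simulate `docker stop`: run the "app", then send SIGTERM
/tmp/entrypoint.sh > /tmp/app.log &
PID=$!
sleep 1
kill -TERM "$PID"
wait "$PID"
cat /tmp/app.log
```

Without the trap, a PID 1 process would simply keep running until the SIGKILL deadline, dropping in-flight connections.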

# Production-ready container run
docker run -d \
  --name webapp \
  --restart unless-stopped \
  --memory 512m \
  --cpus 0.5 \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  -p 3000:3000 \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  myapp:v1.2.0