What is Docker?
Docker is an open platform for developing, shipping, and running applications inside lightweight, portable containers. Containers package an application with all its dependencies, ensuring it runs consistently across any environment.
A container is an isolated process that shares the host OS kernel but has its own filesystem, networking, and process space. Unlike VMs, containers don't need a full guest OS, making them extremely lightweight.
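You can see this process-level isolation directly. A quick sketch, assuming a local Docker install and the public alpine image:

```shell
# Start a long-running container
docker run -d --name demo alpine sleep 300

# On the host, the container is just a regular process with an ordinary PID
docker inspect --format '{{.State.Pid}}' demo

# Inside the container's own PID namespace, sleep runs as PID 1
docker exec demo ps

# Clean up
docker rm -f demo
```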
Containers vs Virtual Machines
| Feature | Containers | Virtual Machines |
|---|---|---|
| Startup Time | Seconds | Minutes |
| Size | MBs | GBs |
| OS | Shares host kernel | Full guest OS |
| Isolation | Process-level | Hardware-level |
| Performance | Near-native | Overhead from hypervisor |
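The startup-time difference is easy to measure yourself, assuming Docker is installed:

```shell
# Pull once so the timing below measures startup, not download
docker pull alpine

# Creating, running, and removing a container typically takes well under a second
time docker run --rm alpine true
```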
Docker Architecture
Docker uses a client-server architecture:
- Docker Client — The CLI tool (docker) that sends commands to the daemon
- Docker Daemon (dockerd) — Manages images, containers, networks, and volumes
- Docker Registry — Stores Docker images (e.g., Docker Hub, ECR, GCR)
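You can observe the client/daemon split directly, assuming a running Docker Engine:

```shell
# The client and the daemon report their versions separately
docker version --format 'client: {{.Client.Version}}  server: {{.Server.Version}}'

# The client talks to the daemon over a socket (usually /var/run/docker.sock);
# contexts let the same CLI drive different local or remote daemons
docker context ls
```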
Installation
Install Docker on Ubuntu
# Update package index
sudo apt update
# Install prerequisites
sudo apt install -y apt-transport-https ca-certificates curl gnupg lsb-release
# Add Docker's GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Add Docker repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
# Add your user to the docker group
sudo usermod -aG docker $USER
# Verify installation
docker --version
docker run hello-world
Basic Commands
# Pull an image from Docker Hub
docker pull nginx:latest
# Run a container
docker run -d --name webserver -p 8080:80 nginx:latest
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# View container logs
docker logs webserver
# Execute a command inside a running container
docker exec -it webserver /bin/bash
# Stop and remove a container
docker stop webserver
docker rm webserver
# List images
docker images
# Remove an image
docker rmi nginx:latest
# System cleanup
docker system prune -a
Images vs Containers
An image is a read-only template with instructions for creating a container. A container is a runnable instance of an image. You can create many containers from the same image.
An image is like a class in OOP, and a container is like an object (instance) of that class. The image is the blueprint; the container is the running thing.
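The analogy is easy to demonstrate, assuming the public alpine image:

```shell
# One image...
docker pull alpine

# ...many independent containers
docker run --name inst1 alpine echo "hello from inst1"
docker run --name inst2 alpine echo "hello from inst2"

# Both containers came from the same blueprint
docker ps -a --filter ancestor=alpine

# Clean up
docker rm inst1 inst2
```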
Dockerfile Basics
A Dockerfile is a text file with instructions to build a Docker image.
# Use official Node.js as base image
FROM node:20-alpine
# Set working directory
WORKDIR /app
# Copy dependency files first (for caching)
COPY package*.json ./
# Install dependencies
RUN npm ci --omit=dev
# Copy application source
COPY . .
# Expose port
EXPOSE 3000
# Define the command to run
CMD ["node", "server.js"]
Build and run the image:
# Build the image
docker build -t myapp:v1 .
# Run a container from the image
docker run -d -p 3000:3000 --name myapp myapp:v1
Common Dockerfile Instructions
| Instruction | Purpose |
|---|---|
| FROM | Base image to build upon |
| WORKDIR | Set working directory inside container |
| COPY | Copy files from host to container |
| RUN | Execute command during build |
| EXPOSE | Document which port the app uses |
| ENV | Set environment variables |
| CMD | Default command when container starts |
| ENTRYPOINT | Configure container as an executable |
| HEALTHCHECK | Check container health |
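CMD and ENTRYPOINT are easily confused: ENTRYPOINT fixes the executable, while CMD supplies default arguments that docker run can override. A minimal illustration:

```dockerfile
FROM alpine
# ENTRYPOINT is always run; CMD provides default arguments to it
ENTRYPOINT ["ping"]
CMD ["-c", "3", "localhost"]

# docker run img              -> ping -c 3 localhost
# docker run img example.com  -> ping example.com  (CMD is replaced)
```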
Docker Networking
Docker provides several network drivers to control how containers communicate.
Network Types
- bridge — Default network. Containers on the same bridge can communicate.
- host — Container shares the host's network stack.
- none — No networking.
- overlay — Multi-host networking for Docker Swarm.
# Create a custom network
docker network create mynetwork
# Run containers on the same network
docker run -d --name api --network mynetwork myapi:v1
docker run -d --name db --network mynetwork postgres:15
# Now 'api' can reach 'db' by hostname
# Example: postgres://db:5432/mydb
# List networks
docker network ls
# Inspect a network
docker network inspect mynetwork
Volumes & Storage
Volumes persist data beyond the container lifecycle.
# Create a named volume
docker volume create pgdata
# Use volume with a container
docker run -d \
--name postgres \
-v pgdata:/var/lib/postgresql/data \
-e POSTGRES_PASSWORD=secret \
postgres:15
# Bind mount (map host directory)
docker run -d \
--name devserver \
-v "$(pwd)/src":/app/src \
-p 3000:3000 \
myapp:dev
# List volumes
docker volume ls
# Inspect a volume
docker volume inspect pgdata
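A common pattern worth knowing (not a built-in Docker command) is backing up a named volume by mounting it into a throwaway container:

```shell
# Archive the contents of the pgdata volume into the current directory
docker run --rm \
  -v pgdata:/data \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/pgdata.tar.gz -C /data .

# Restore works the same way, in reverse
docker run --rm \
  -v pgdata:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/pgdata.tar.gz -C /data
```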
Docker Compose
Docker Compose defines and runs multi-container applications using a YAML file.
# docker-compose.yml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
      - REDIS_URL=redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    volumes:
      - ./src:/app/src
    restart: unless-stopped
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 10s
      timeout: 5s
      retries: 5
  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"
volumes:
  pgdata:
# Start all services
docker compose up -d
# View logs
docker compose logs -f app
# Stop all services
docker compose down
# Stop and remove volumes
docker compose down -v
Multi-Stage Builds
Multi-stage builds dramatically reduce final image size by separating the build environment from the runtime environment.
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine AS production
RUN addgroup -g 1001 -S appgroup && \
adduser -S appuser -u 1001
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s \
CMD wget -q --spider http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]
Multi-stage builds can reduce image sizes by 90%+. A typical Node.js app image goes from ~1GB to under 100MB.
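You can verify the savings yourself by building each stage separately and comparing sizes (the image names here are illustrative):

```shell
# Build only the first stage
docker build --target builder -t myapp:builder .

# Build the full multi-stage image (final stage)
docker build -t myapp:prod .

# Compare the two sizes
docker image ls myapp
```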
Container Security
Security is critical for production containers. Follow these best practices:
Security Best Practices
- Don't run as root — Always create and use a non-root user
- Use minimal base images — Prefer alpine or distroless images
- Scan for vulnerabilities — Use docker scout, Trivy, or Snyk
- Pin image versions — Never use :latest in production
- Use multi-stage builds — Don't ship build tools in production
- Sign your images — Use Docker Content Trust
- Limit capabilities — Drop all capabilities and add only what's needed
# Scan an image for vulnerabilities
docker scout cves myapp:v1
# Run with limited capabilities
docker run -d \
--cap-drop=ALL \
--cap-add=NET_BIND_SERVICE \
--read-only \
--security-opt=no-new-privileges \
--tmpfs /tmp \
myapp:v1
# Use distroless base image
FROM gcr.io/distroless/nodejs20-debian12
COPY --from=builder /app/dist /app
WORKDIR /app
CMD ["server.js"]
Image Optimization
Layer Caching Strategy
Docker caches each layer. Order your Dockerfile instructions from least to most frequently changed:
# Good: Dependencies change less often than source code
COPY package*.json ./
RUN npm ci
COPY . .
# Bad: Invalidates npm cache on every code change
COPY . .
RUN npm ci
.dockerignore
Always include a .dockerignore file to exclude unnecessary files:
node_modules
.git
.env
*.md
docker-compose*.yml
.github
coverage
.nyc_output
tests
Private Registries
# Login to AWS ECR
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
# Tag and push image
docker tag myapp:v1 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
# Pull from private registry
docker pull 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
Production Best Practices
Before deploying containers to production, ensure you've covered all items in this checklist.
- Health checks — Always define HEALTHCHECK in your Dockerfile
- Resource limits — Set memory and CPU limits with --memory and --cpus
- Logging — Use a logging driver (json-file, fluentd, awslogs)
- Restart policies — Use --restart unless-stopped or always
- Read-only filesystem — Mount the root filesystem as read-only
- No hardcoded secrets — Use Docker secrets or environment variable injection
- Image scanning — Scan images in your CI/CD pipeline before deployment
- Graceful shutdown — Handle SIGTERM in your application
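The graceful-shutdown item deserves a note: PID 1 in a container gets no default signal handlers, so if your app is started through a shell script, that script must forward SIGTERM explicitly or 'docker stop' will fall back to SIGKILL after its timeout. A minimal sketch (file names are illustrative):

```shell
#!/bin/sh
# entrypoint.sh — run the app as a child and forward SIGTERM to it,
# so 'docker stop' triggers a clean shutdown instead of a delayed SIGKILL
trap 'kill -TERM "$pid"; wait "$pid"' TERM
node server.js &
pid=$!
wait "$pid"
```

Using the exec form directly (CMD ["node", "server.js"]) or running with docker run --init avoids the wrapper entirely; the script is only needed when a shell must run first.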
# Production-ready container run
docker run -d \
--name webapp \
--restart unless-stopped \
--memory 512m \
--cpus 0.5 \
--read-only \
--tmpfs /tmp \
--cap-drop ALL \
--security-opt no-new-privileges \
-p 3000:3000 \
--log-driver json-file \
--log-opt max-size=10m \
--log-opt max-file=3 \
myapp:v1.2.0