What is Kubernetes?
Kubernetes (K8s) is an open-source container orchestration platform originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF). It automates the deployment, scaling, and management of containerized applications across clusters of machines.
While Docker manages individual containers, Kubernetes manages clusters of containers — handling load balancing, self-healing, rolling updates, secret management, and much more.
Kubernetes Architecture
Control Plane Components
- kube-apiserver — The frontend for the Kubernetes control plane
- etcd — Key-value store for all cluster data
- kube-scheduler — Assigns pods to nodes based on resource availability
- kube-controller-manager — Runs controller loops (ReplicaSet, Deployment, etc.)
Node Components
- kubelet — Agent on each node that ensures containers are running
- kube-proxy — Network proxy maintaining network rules
- Container Runtime — Software for running containers (containerd, CRI-O)
Local Setup with Minikube
```bash
# Install minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start a local cluster
minikube start --driver=docker --cpus=2 --memory=4096

# Verify the cluster is running
kubectl cluster-info
kubectl get nodes

# Enable useful addons
minikube addons enable ingress
minikube addons enable metrics-server
minikube addons enable dashboard

# Open the Kubernetes dashboard
minikube dashboard
```
Pods
A Pod is the smallest deployable unit in Kubernetes. It can contain one or more containers that share networking and storage.
```yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      ports:
        - containerPort: 80
      resources:
        requests:
          memory: "64Mi"
          cpu: "50m"
        limits:
          memory: "128Mi"
          cpu: "100m"
```
```bash
# Create the pod
kubectl apply -f pod.yaml

# List pods
kubectl get pods

# Describe a pod (detailed info and events)
kubectl describe pod nginx-pod

# View logs
kubectl logs nginx-pod

# Open a shell inside the pod
kubectl exec -it nginx-pod -- /bin/bash

# Delete the pod
kubectl delete pod nginx-pod
```
Deployments
A Deployment manages a set of identical pods, keeping the desired number of replicas running and handling rolling updates.
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: myregistry/webapp:v1.0
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: "production"
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "200m"
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 20
```
```bash
# Apply the deployment
kubectl apply -f deployment.yaml

# Check rollout status
kubectl rollout status deployment/webapp

# Scale the deployment
kubectl scale deployment webapp --replicas=5

# Update the image (triggers a rolling update)
kubectl set image deployment/webapp webapp=myregistry/webapp:v2.0

# Roll back to the previous version
kubectl rollout undo deployment/webapp

# View rollout history
kubectl rollout history deployment/webapp
```
Services
A Service exposes a set of pods as a network service with a stable DNS name and IP address.
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  type: ClusterIP        # Internal only
  ports:
    - protocol: TCP
      port: 80           # Service port
      targetPort: 3000   # Container port
---
# LoadBalancer service (cloud providers)
apiVersion: v1
kind: Service
metadata:
  name: webapp-lb
spec:
  selector:
    app: webapp
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
```
Service Types
| Type | Description | Use Case |
|---|---|---|
| ClusterIP | Internal cluster IP only | Internal services |
| NodePort | Exposes a port on each node's IP | Development/testing |
| LoadBalancer | Cloud provider load balancer | Production external access |
| ExternalName | Maps to an external DNS name | External service references |
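NodePort is the only type above without an example elsewhere on this page. A minimal sketch, reusing the `webapp` labels from the Deployment section (the port number 30080 is an arbitrary choice from the allowed range):

```yaml
# NodePort service: reachable at <any-node-ip>:30080
apiVersion: v1
kind: Service
metadata:
  name: webapp-nodeport
spec:
  selector:
    app: webapp
  type: NodePort
  ports:
    - protocol: TCP
      port: 80           # Service port inside the cluster
      targetPort: 3000   # Container port
      nodePort: 30080    # Must be in 30000-32767; omit to auto-assign
```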
ConfigMaps & Secrets
```yaml
# ConfigMap for application configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres-service"
  DATABASE_PORT: "5432"
  LOG_LEVEL: "info"
  APP_NAME: "DevOps Mastery"
---
# Secret for sensitive data
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  DATABASE_PASSWORD: "my-super-secret-password"
  API_KEY: "sk-1234567890abcdef"
```
```yaml
# Using the ConfigMap and Secret in a Deployment's pod spec
spec:
  containers:
    - name: webapp
      image: myapp:v1
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets
      # Or reference individual keys:
      env:
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: DATABASE_PASSWORD
```
Namespaces
Namespaces provide a mechanism for isolating groups of resources within a single cluster.
```bash
# Create namespaces for different environments
kubectl create namespace development
kubectl create namespace staging
kubectl create namespace production

# Deploy to a specific namespace
kubectl apply -f deployment.yaml -n production

# Set the default namespace for your current context
kubectl config set-context --current --namespace=production

# List resources in a namespace
kubectl get all -n production
```
```yaml
# Resource quotas per namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
    pods: "50"
```
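A ResourceQuota caps the namespace total, but it says nothing about containers that omit requests and limits entirely. A LimitRange (sketched here with illustrative values) fills in per-container defaults so the quota can be enforced:

```yaml
# Default requests/limits for containers that don't set their own
apiVersion: v1
kind: LimitRange
metadata:
  name: production-defaults
  namespace: production
spec:
  limits:
    - type: Container
      default:            # applied as the container's limits
        cpu: "200m"
        memory: "256Mi"
      defaultRequest:     # applied as the container's requests
        cpu: "100m"
        memory: "128Mi"
```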
Ingress
Ingress manages external HTTP/HTTPS access to services with routing rules, TLS termination, and virtual hosting.
```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
        - api.example.com
      secretName: tls-secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-service
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
```
Persistent Storage
```yaml
# PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2   # AWS EBS
  resources:
    requests:
      storage: 20Gi
---
# StatefulSet with volumeClaimTemplates (creates one PVC per replica)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
```
Helm Charts
Helm is the package manager for Kubernetes, allowing you to define, install, and upgrade complex applications.
```bash
# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add a chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search for charts
helm search repo nginx

# Install a chart
helm install my-nginx bitnami/nginx --namespace production

# Install with custom values
helm install my-app ./my-chart \
  --namespace production \
  --values production-values.yaml \
  --set image.tag=v2.0

# List releases
helm list -n production

# Upgrade a release
helm upgrade my-app ./my-chart --values production-values.yaml

# Roll back to revision 1
helm rollback my-app 1
```
Creating Your Own Chart
```bash
# Create a new chart scaffold
helm create my-webapp

# Chart structure:
# my-webapp/
# ├── Chart.yaml          # Chart metadata
# ├── values.yaml         # Default configuration values
# ├── templates/
# │   ├── deployment.yaml
# │   ├── service.yaml
# │   ├── ingress.yaml
# │   ├── hpa.yaml
# │   ├── _helpers.tpl    # Template helpers
# │   └── NOTES.txt       # Post-install notes
# └── charts/             # Sub-chart dependencies
```
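To see how `values.yaml` flows into the files under `templates/`, here is a simplified sketch of Helm's Go-template syntax. The field names are illustrative, not the exact output of `helm create`:

```yaml
# values.yaml (excerpt)
replicaCount: 2
image:
  repository: myregistry/webapp
  tag: v1.0

# templates/deployment.yaml (excerpt) -- Helm renders the {{ ... }} expressions
# spec:
#   replicas: {{ .Values.replicaCount }}
#   ...
#   containers:
#     - name: {{ .Chart.Name }}
#       image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Running `helm template my-webapp` renders these templates locally so you can inspect the final manifests before installing.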
RBAC & Security
```yaml
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
  namespace: production
---
# Role with specific permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-role
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update"]
---
# Bind the Role to the ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-role-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: app-service-account
    namespace: production
roleRef:
  kind: Role
  name: app-role
  apiGroup: rbac.authorization.k8s.io
```
Horizontal Pod Autoscaler (HPA)
```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 25
          periodSeconds: 60
```
Operators
Kubernetes Operators extend the K8s API to manage complex, stateful applications. They encode operational knowledge into software.
- Prometheus Operator — Manages Prometheus monitoring instances
- Cert-Manager — Automates TLS certificate management
- Strimzi — Manages Apache Kafka on Kubernetes
- CloudNativePG — Manages PostgreSQL clusters
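Under the hood, an operator registers a CustomResourceDefinition (CRD) and runs a controller that reconciles resources of that kind. A minimal CRD sketch — the `example.com` group and `CronTab` kind are placeholder names, not any real operator's API:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                replicas:
                  type: integer
```

Once applied, `kubectl get crontabs` works like any built-in resource, and the operator's controller watches those objects and drives the cluster toward the declared state.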
Production Patterns
Running Kubernetes in production requires careful attention to security, monitoring, backup, and disaster recovery.
- Use managed Kubernetes — EKS, GKE, or AKS to reduce operational burden
- Implement NetworkPolicies — Control pod-to-pod communication
- Set resource requests and limits — Prevent resource contention
- Pod Disruption Budgets — Ensure availability during maintenance
- Multi-AZ deployments — Spread pods across availability zones
- GitOps with ArgoCD — Declarative, version-controlled deployments
- Monitoring stack — Prometheus + Grafana + Alertmanager
- Backup etcd — Regular backups of cluster state
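Two of the items above can be expressed entirely in YAML. Sketches reusing the `webapp` labels from earlier sections (the `role: frontend` label is illustrative):

```yaml
# NetworkPolicy: only pods labeled role=frontend may reach webapp pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: webapp
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 3000
---
# PodDisruptionBudget: keep at least 2 webapp pods up during voluntary
# disruptions such as node drains
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: webapp-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: webapp
```

Note that NetworkPolicies only take effect if the cluster's CNI plugin (e.g. Calico or Cilium) enforces them.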