Kubernetes has become the standard for container orchestration. If you're interviewing for backend, DevOps, or full-stack roles, expect questions about how Kubernetes works and why certain patterns exist.
This guide covers the core Kubernetes concepts that come up in interviews—with practical YAML examples you can actually use.
Pods: The Foundation
Q: What is a Pod in Kubernetes?
This is the starting point for most Kubernetes interviews.
Weak answer: "A pod is a container."
Strong answer:
A Pod is the smallest deployable unit in Kubernetes. It's a wrapper around one or more containers that:
- Share a network namespace - All containers in a pod have the same IP address and can reach each other via localhost
- Share storage volumes - Containers can access the same mounted volumes
- Are scheduled together - Always run on the same node
- Have a shared lifecycle - Created and destroyed as a unit
```yaml
# Simple pod definition
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: my-app:1.0
      ports:
        - containerPort: 3000
```

Key insight for interviews: Pods are ephemeral. When a pod dies, it's replaced with a new pod (new IP, new identity)—not restarted. This is why you need Services for stable networking and Deployments for reliability.
When to use multi-container pods:
- Sidecar patterns (logging, monitoring agents)
- Init containers (database migrations, config fetching)
- Tightly coupled processes that must share resources
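As an illustrative sketch of the sidecar pattern, here is a hypothetical pod where an app container and a logging agent share a volume (the `log-agent` name and image are assumptions, not a specific product):

```yaml
# Sketch: app container plus a logging sidecar sharing an emptyDir volume
# (container and image names are hypothetical)
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-sidecar
spec:
  containers:
    - name: app
      image: my-app:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-agent          # sidecar reads the same log directory
      image: log-agent:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}             # shared, pod-lifetime scratch volume
```

Because both containers share the pod's network namespace and the `logs` volume, the sidecar can ship logs without the app knowing it exists.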
Deployments: Managing Pods at Scale
Q: What's the difference between a Pod and a Deployment?
Understanding this distinction is crucial.
A Pod is a single instance. A Deployment is a controller that:
- Maintains a desired number of pod replicas
- Handles rolling updates and rollbacks
- Replaces failed pods automatically
- Provides declarative updates
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
```

What interviewers want to hear:
- You rarely create pods directly—you create Deployments
- Deployments create ReplicaSets, which create Pods
- The hierarchy: Deployment → ReplicaSet → Pods
```bash
# Common deployment commands
kubectl apply -f deployment.yaml
kubectl get deployments
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app
kubectl scale deployment/my-app --replicas=5
```

Rolling Updates and Rollbacks
Q: How do rolling updates work in Kubernetes?
Rolling updates let you update pods without downtime.
```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # Max pods above desired count
      maxUnavailable: 0   # Max pods that can be unavailable
```

The process:
- New ReplicaSet created with updated pod template
- New pods start coming up
- Old pods terminate as new pods become ready
- Traffic shifts gradually to new pods
```bash
# Update image (triggers rolling update)
kubectl set image deployment/my-app app=my-app:2.0

# Watch the rollout
kubectl rollout status deployment/my-app

# Something wrong? Roll back
kubectl rollout undo deployment/my-app

# Roll back to specific revision
kubectl rollout undo deployment/my-app --to-revision=2

# View history
kubectl rollout history deployment/my-app
```

Interview follow-up: "How would you do a blue-green deployment?"
Kubernetes doesn't have built-in blue-green, but you can:
- Create two deployments (blue and green)
- Switch the Service selector between them
- Or use Ingress rules to control traffic routing
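The selector switch can be sketched as a single Service whose selector targets whichever Deployment is "live". The `color` label is an illustrative convention, not a Kubernetes built-in:

```yaml
# Sketch: blue-green cutover via a Service selector (label values are a convention)
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
    color: blue    # change to "green" to shift all traffic at once
  ports:
    - port: 80
      targetPort: 3000
```

The cutover can then be a single patch, e.g. `kubectl patch service my-app-service -p '{"spec":{"selector":{"app":"my-app","color":"green"}}}'`, and rolling back is just patching the selector back to `blue`.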
Services: Stable Networking
Q: Why do we need Services? How do they work?
Since pods are ephemeral with changing IPs, Services provide stable endpoints.
Service types:
| Type | Use Case | Access |
|---|---|---|
| ClusterIP | Internal communication | Inside cluster only |
| NodePort | Development/testing | node-ip:port |
| LoadBalancer | Production external access | Cloud LB → pods |
| ExternalName | External service alias | DNS CNAME |
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: ClusterIP
  selector:
    app: my-app        # Routes to pods with this label
  ports:
    - port: 80         # Service port
      targetPort: 3000 # Container port
```

How it works:
- Service gets a stable ClusterIP
- Label selector finds matching pods
- kube-proxy configures iptables/IPVS rules
- Traffic load-balanced across healthy pods
```bash
# Inside cluster, access via:
curl http://my-app-service                            # Same namespace
curl http://my-app-service.default                    # Cross-namespace
curl http://my-app-service.default.svc.cluster.local  # FQDN
```

Key point: Services use label selectors, not pod names. Add or remove pods dynamically, and the Service automatically routes to them.
ConfigMaps and Secrets
Q: How do you handle configuration in Kubernetes?
Configuration should be separate from container images. Kubernetes provides two resources:
ConfigMap - Non-sensitive configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres.default.svc.cluster.local"
  LOG_LEVEL: "info"
  config.json: |
    {
      "feature_flags": {
        "new_ui": true
      }
    }
```

Secret - Sensitive data (base64 encoded):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DATABASE_PASSWORD: cGFzc3dvcmQxMjM= # base64 encoded
  API_KEY: c2VjcmV0LWtleQ==
```

Using them in pods:
```yaml
spec:
  containers:
    - name: app
      image: my-app:1.0
      # As environment variables
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets
      # Or mount as files
      volumeMounts:
        - name: config-volume
          mountPath: /app/config
  volumes:
    - name: config-volume
      configMap:
        name: app-config
```

Interview tip: Know that Secrets are only base64 encoded, not encrypted by default. For real security, enable encryption at rest and use external secret managers (Vault, AWS Secrets Manager) with operators.
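To see why base64 is not a security boundary, you can reproduce the encoded values from the Secret above locally, and decode them just as easily:

```shell
# base64 is encoding, not encryption: anyone can reverse it
printf '%s' 'password123' | base64
# -> cGFzc3dvcmQxMjM=
printf '%s' 'cGFzc3dvcmQxMjM=' | base64 -d
# -> password123

# kubectl can generate the encoded manifest for you:
# kubectl create secret generic app-secrets \
#   --from-literal=DATABASE_PASSWORD=password123 --dry-run=client -o yaml
```

This is a good one-liner to mention in interviews when asked how you would inspect a Secret's contents.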
Namespaces: Logical Isolation
Q: What are Namespaces used for?
Namespaces divide cluster resources between multiple teams or environments.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

Benefits:
- Name scoping - Same resource names in different namespaces
- Resource quotas - Limit CPU/memory per namespace
- Access control - RBAC policies per namespace
- Organization - Logical grouping of resources
```bash
# Work in a specific namespace
kubectl get pods -n production
kubectl apply -f deployment.yaml -n staging

# Set default namespace for context
kubectl config set-context --current --namespace=production
```

Default namespaces:
- `default` - Where resources go if no namespace is specified
- `kube-system` - Kubernetes system components
- `kube-public` - Publicly accessible resources
When NOT to use namespaces: For versioning (use labels), for separating unrelated applications (use separate clusters for strong isolation).
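The resource-quota benefit mentioned above can be sketched with a ResourceQuota object scoped to one namespace (the numbers are illustrative, not a recommendation):

```yaml
# Sketch: cap total requested/limited resources in the staging namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"       # sum of all pod CPU requests
    requests.memory: 8Gi
    limits.cpu: "8"         # sum of all pod CPU limits
    limits.memory: 16Gi
    pods: "20"              # max pod count in the namespace
```

Once the quota is in place, pods in that namespace must declare requests/limits, or the API server rejects them.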
Labels and Selectors
Q: How do labels work in Kubernetes?
Labels are key-value pairs attached to objects. Selectors query objects by their labels.
```yaml
metadata:
  labels:
    app: my-app
    environment: production
    version: v1.2.0
    team: backend
```

Why they matter:
- Services find pods via label selectors
- Deployments manage pods via label selectors
- You can query and filter resources
```bash
# Filter by label
kubectl get pods -l app=my-app
kubectl get pods -l 'environment in (staging, production)'
kubectl get pods -l app=my-app,version=v1.2.0

# Delete by label
kubectl delete pods -l environment=test
```

Best practices:
- Use consistent naming conventions
- Include `app`, `environment`, `version`, and `team`
- Labels are for selection; annotations are for metadata
Resource Requests and Limits
Q: What's the difference between requests and limits?
This affects scheduling and resource management.
```yaml
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"   # 100 millicores = 0.1 CPU
  limits:
    memory: "256Mi"
    cpu: "500m"
```

Requests:
- Guaranteed resources
- Used for scheduling decisions
- Pod won't be scheduled if node can't provide requested resources
Limits:
- Maximum resources allowed
- Exceeding memory limit → pod killed (OOMKilled)
- Exceeding CPU limit → throttled (not killed)
What interviewers want to hear:
- Always set requests—otherwise pods might get scheduled on overloaded nodes
- Limits prevent runaway containers from affecting others
- `requests` affect scheduling; `limits` affect runtime behavior
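One way to back up the "always set requests" advice is a LimitRange that injects defaults into containers that omit them (values here are illustrative):

```yaml
# Sketch: namespace-wide defaults for containers without explicit resources
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources
spec:
  limits:
    - type: Container
      defaultRequest:     # applied when a container sets no requests
        cpu: 100m
        memory: 128Mi
      default:            # applied when a container sets no limits
        cpu: 500m
        memory: 256Mi
```

This catches workloads that would otherwise be scheduled with no requests at all.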
```bash
# See resource usage
kubectl top pods
kubectl top nodes

# Describe to see limits and current usage
kubectl describe pod my-app-xxx
```

Liveness and Readiness Probes
Q: What's the difference between liveness and readiness probes?
Both check container health, but have different purposes.
Liveness probe: "Is the container alive?"
- Failure → Kubernetes restarts the container
- Use for: detecting deadlocks, hung processes
Readiness probe: "Can this container serve traffic?"
- Failure → Removed from Service endpoints
- Use for: warming caches, waiting for dependencies
```yaml
spec:
  containers:
    - name: app
      livenessProbe:
        httpGet:
          path: /healthz
          port: 3000
        initialDelaySeconds: 10
        periodSeconds: 5
        failureThreshold: 3
      readinessProbe:
        httpGet:
          path: /ready
          port: 3000
        initialDelaySeconds: 5
        periodSeconds: 3
        failureThreshold: 3
```

Probe types:
- `httpGet` - HTTP request (most common for web apps)
- `tcpSocket` - TCP connection check
- `exec` - Run a command in the container

Common mistake: Setting `initialDelaySeconds` too low, so the container is killed before it finishes starting.
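For non-HTTP workloads, the other two probe types look like this (the port and command are illustrative, sketched for a Postgres-style container):

```yaml
# Sketch: tcpSocket and exec probes for a database container
spec:
  containers:
    - name: db
      livenessProbe:
        tcpSocket:
          port: 5432          # alive if the port accepts TCP connections
        initialDelaySeconds: 15
        periodSeconds: 10
      readinessProbe:
        exec:
          command: ["pg_isready", "-U", "postgres"]  # ready if exit code is 0
        periodSeconds: 5
```

An `exec` probe passes when the command exits with status 0, which makes existing health-check CLIs easy to reuse.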
Essential kubectl Commands
Interviewers often expect fluency with kubectl.
```bash
# Get resources
kubectl get pods
kubectl get pods -o wide   # More details (IP, node)
kubectl get pods -o yaml   # Full YAML output
kubectl get all            # Pods, services, deployments

# Describe (detailed info + events)
kubectl describe pod my-app-xxx
kubectl describe node node-1

# Logs
kubectl logs my-app-xxx
kubectl logs my-app-xxx -c sidecar   # Specific container
kubectl logs -f my-app-xxx           # Follow/stream
kubectl logs --previous my-app-xxx   # Previous crashed container

# Execute commands
kubectl exec -it my-app-xxx -- /bin/sh
kubectl exec my-app-xxx -- env

# Apply/delete
kubectl apply -f manifest.yaml
kubectl delete -f manifest.yaml
kubectl delete pod my-app-xxx

# Debug
kubectl get events --sort-by='.lastTimestamp'
kubectl describe pod my-app-xxx | grep -A 10 Events
```

Common Interview Scenarios
"A pod is stuck in Pending state. How do you troubleshoot?"
```bash
kubectl describe pod my-app-xxx
# Look for the Events section:
# - Insufficient CPU/memory → scale cluster or reduce requests
# - No nodes match selectors → check nodeSelector/affinity
# - PVC pending → check storage class, PV availability
```

"A pod is in CrashLoopBackOff. What do you do?"
```bash
# Check logs from the crashed container
kubectl logs my-app-xxx --previous

# Common causes:
# - Application error on startup
# - Missing config/secrets
# - Liveness probe failing too quickly
# - OOMKilled (check resource limits)
```

"How would you expose an application externally?"
- Development: NodePort service
- Production: LoadBalancer service (cloud) or Ingress controller
- Ingress for HTTP routing, SSL termination, path-based routing
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```

Quick Reference
| Concept | Purpose | Key Point |
|---|---|---|
| Pod | Run containers | Ephemeral, shared network/storage |
| Deployment | Manage pod replicas | Rolling updates, self-healing |
| Service | Stable networking | Label selectors, load balancing |
| ConfigMap | Non-sensitive config | Env vars or mounted files |
| Secret | Sensitive data | Base64, enable encryption at rest |
| Namespace | Logical isolation | Quotas, RBAC, name scoping |
| Labels | Organize/select | Key-value pairs for querying |
| Probes | Health checks | Liveness (restart), Readiness (traffic) |
Related Articles
If you found this helpful, check out these related guides:
- Complete DevOps Engineer Interview Guide - comprehensive preparation guide for DevOps interviews
- Docker Interview Guide - Container fundamentals before orchestration
- System Design Interview Guide - Architecture patterns where K8s fits in
- Node.js Advanced Interview Guide - Building the apps that run in your clusters
- Linux Commands Interview Guide - Essential commands for debugging containers
- CI/CD & GitHub Actions Interview Guide - Deploying to Kubernetes from pipelines
What's Next?
These core concepts cover what most developers need for interviews. As you go deeper, explore:
- Helm - Package management for Kubernetes
- Horizontal Pod Autoscaler - Automatic scaling based on metrics
- Network Policies - Controlling pod-to-pod traffic
- RBAC - Role-based access control
The developers who stand out understand not just the "what" but the "why"—why pods are ephemeral, why Services use selectors, why you separate config from code.
