Kubernetes has become the standard for container orchestration. If you're interviewing for backend, DevOps, or full-stack roles, expect questions about how Kubernetes works and why certain patterns exist.
This guide covers the core Kubernetes concepts that come up in interviews—with practical YAML examples you can actually use.
Table of Contents
- Pod Questions
- Deployment Questions
- Rolling Update Questions
- Service Questions
- ConfigMap and Secret Questions
- Namespace Questions
- Label and Selector Questions
- Resource Management Questions
- Health Check Questions
- kubectl Command Questions
- Troubleshooting Questions
Pod Questions
Pods are the fundamental building blocks of Kubernetes applications.
What is a Pod in Kubernetes?
A Pod is the smallest deployable unit in Kubernetes. It's not just a container - it's a wrapper around one or more containers that share resources. Understanding this distinction is the starting point for most Kubernetes interviews.
Containers within a pod share a network namespace, meaning they have the same IP address and can communicate via localhost. They also share storage volumes and are always scheduled together on the same node. This tight coupling is intentional - pods group containers that need to work as a single unit.
```yaml
# Simple pod definition
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: my-app:1.0
      ports:
        - containerPort: 3000
```

Key characteristics:
- Share a network namespace - All containers have the same IP and can reach each other via localhost
- Share storage volumes - Containers can access the same mounted volumes
- Scheduled together - Always run on the same node
- Shared lifecycle - Created and destroyed as a unit
Why are Pods ephemeral and what does that mean?
Pods are designed to be disposable. When a pod dies, Kubernetes doesn't try to heal it - instead, it creates a brand new pod with a new IP address and identity. This is fundamentally different from traditional VMs where you'd troubleshoot and fix a failing instance.
This ephemeral nature is why you need Services for stable networking (pods get new IPs when replaced) and Deployments for reliability (to ensure replacements are created). Understanding this principle helps you design applications that embrace, rather than fight, Kubernetes patterns.
When should you use multi-container pods?
Most pods should contain a single container - the "one container per pod" pattern. Multi-container pods are for tightly coupled processes that genuinely need to share resources and can't function independently.
The most common patterns are sidecar containers (logging agents, monitoring, proxies), init containers (database migrations, config fetching), and ambassador containers (proxy connections to external services). If containers could reasonably run as separate services, they should be in separate pods.
Deployment Questions
Deployments are how you run applications in production.
What is the difference between a Pod and a Deployment?
A Pod is a single instance of your application running on one node. A Deployment is a higher-level controller that manages multiple pod replicas across your cluster. This distinction is crucial - you almost never create pods directly in production.
Deployments provide features that bare pods don't have: self-healing (replacing failed pods), scaling (running multiple replicas), rolling updates (zero-downtime deployments), and rollback capability. The hierarchy is Deployment → ReplicaSet → Pods, with each level managing the one below.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
```

How does a Deployment manage pods through ReplicaSets?
When you create a Deployment, it creates a ReplicaSet which in turn creates the pods. The Deployment manages the ReplicaSet, and the ReplicaSet manages the pods. This extra layer exists to enable rolling updates - when you update a Deployment, it creates a new ReplicaSet for the new version while scaling down the old one.
You can observe this hierarchy with kubectl get all - you'll see the Deployment, its ReplicaSet(s), and the pods. Each ReplicaSet has a template hash in its name corresponding to a specific pod configuration. This design allows Kubernetes to keep old ReplicaSets around for rollbacks.
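To build intuition for why each template revision gets its own ReplicaSet, here is a toy sketch of the hash-the-template idea. Kubernetes actually uses an FNV hash of the PodTemplateSpec; this illustration substitutes a truncated SHA-256, and the `replicaset_name` helper and manifests are invented for the example.

```python
import hashlib
import json

def replicaset_name(deployment_name: str, pod_template: dict) -> str:
    """Illustrative only: derive a ReplicaSet-style name from a pod template.

    Real Kubernetes hashes the PodTemplateSpec (with FNV); a truncated
    sha256 is used here just to show why every distinct template
    revision maps to its own ReplicaSet."""
    canonical = json.dumps(pod_template, sort_keys=True).encode()
    template_hash = hashlib.sha256(canonical).hexdigest()[:10]
    return f"{deployment_name}-{template_hash}"

v1 = {"containers": [{"name": "app", "image": "my-app:1.0"}]}
v2 = {"containers": [{"name": "app", "image": "my-app:2.0"}]}

# A changed template produces a different hash, hence a new ReplicaSet...
assert replicaset_name("my-app", v1) != replicaset_name("my-app", v2)
# ...while an unchanged template keeps reusing the existing one.
assert replicaset_name("my-app", v1) == replicaset_name("my-app", v1)
```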
```bash
# Common deployment commands
kubectl apply -f deployment.yaml
kubectl get deployments
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app
kubectl scale deployment/my-app --replicas=5
```

Rolling Update Questions
Rolling updates enable zero-downtime deployments.
How do rolling updates work in Kubernetes?
Rolling updates gradually replace old pods with new ones, ensuring your application remains available throughout the deployment. Kubernetes creates pods with the new configuration while terminating old pods, maintaining a minimum number of available pods at all times.
The maxSurge setting controls how many extra pods can exist during the update (above the desired count), while maxUnavailable controls how many pods can be unavailable. Setting maxUnavailable: 0 ensures zero downtime but requires extra capacity during updates.
```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # Max pods above desired count
      maxUnavailable: 0  # Max pods that can be unavailable
```

The process:
- New ReplicaSet created with updated pod template
- New pods start coming up
- Old pods terminate as new pods become ready
- Traffic shifts gradually to new pods
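The surge math above can be sketched in a few lines. This is a simplified model (the real fields also accept percentages, which Kubernetes resolves against the replica count); the helper name is invented for illustration:

```python
def rolling_update_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Compute the pod-count envelope a RollingUpdate must stay within.

    Simplified: maxSurge/maxUnavailable are taken as absolute counts,
    though Kubernetes also accepts percentages."""
    max_total = replicas + max_surge             # most pods that may exist at once
    min_available = replicas - max_unavailable   # fewest pods that must stay ready
    return max_total, min_available

# replicas=3 with maxSurge=1, maxUnavailable=0 (the zero-downtime config above):
# at most 4 pods exist during the rollout, and 3 must always be available.
assert rolling_update_bounds(3, 1, 0) == (4, 3)
assert rolling_update_bounds(4, 1, 1) == (5, 3)
```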
How do you perform and monitor a rolling update?
Triggering a rolling update is as simple as changing the pod template in your Deployment - typically by updating the container image. Kubernetes automatically detects the change and begins the rollout process. You can monitor progress with the rollout status command.
If something goes wrong, rollback is immediate with the undo command. Kubernetes keeps rollout history, so you can even roll back to specific previous versions if needed.
```bash
# Update image (triggers rolling update)
kubectl set image deployment/my-app app=my-app:2.0

# Watch the rollout
kubectl rollout status deployment/my-app

# Something wrong? Roll back
kubectl rollout undo deployment/my-app

# Roll back to specific revision
kubectl rollout undo deployment/my-app --to-revision=2

# View history
kubectl rollout history deployment/my-app
```

How would you implement a blue-green deployment in Kubernetes?
Kubernetes doesn't have built-in blue-green deployments, but you can implement them using Services and multiple Deployments. The idea is to run two complete environments (blue and green), then switch traffic instantly by changing which deployment the Service points to.
Create two separate deployments with different labels (version: blue, version: green). The Service selector points to one version at a time. To switch, update the Service's selector to point to the other deployment. This gives you instant cutover and easy rollback by switching back.
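The cutover mechanics are easy to model in code. This hedged sketch (pod names and labels are invented) mimics how a Service's label selector decides which pods receive traffic, and how patching one selector field flips traffic from blue to green:

```python
def select_endpoints(pods, selector):
    """Return names of pods whose labels contain every key/value in the selector,
    which is how a Service picks its backing endpoints."""
    return [p["name"] for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

pods = [
    {"name": "web-blue-1",  "labels": {"app": "web", "version": "blue"}},
    {"name": "web-green-1", "labels": {"app": "web", "version": "green"}},
]

service_selector = {"app": "web", "version": "blue"}
assert select_endpoints(pods, service_selector) == ["web-blue-1"]

# Cutover: patch the Service selector to point at the green deployment.
service_selector["version"] = "green"
assert select_endpoints(pods, service_selector) == ["web-green-1"]
```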
Service Questions
Services provide stable networking for ephemeral pods.
Why do we need Services and how do they work?
Since pods are ephemeral with constantly changing IP addresses, applications need a stable way to find and communicate with each other. Services solve this by providing a consistent endpoint that automatically routes traffic to healthy pods matching a label selector.
When you create a Service, it gets a stable ClusterIP that never changes. kube-proxy on each node configures iptables or IPVS rules to forward traffic to backend pods. As pods come and go, the Service automatically updates its endpoints - no configuration changes needed.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: ClusterIP
  selector:
    app: my-app        # Routes to pods with this label
  ports:
    - port: 80         # Service port
      targetPort: 3000 # Container port
```

What are the different Service types and when should you use each?
Kubernetes offers four Service types, each suited for different access patterns. ClusterIP is the default and most common, providing internal-only access. NodePort exposes your service on a specific port on every node, useful for development but not production. LoadBalancer integrates with cloud providers to provision external load balancers. ExternalName creates a DNS alias to external services.
The choice depends on who needs access: internal services use ClusterIP, external production traffic uses LoadBalancer (or Ingress), and development/testing might use NodePort for quick external access.
| Type | Use Case | Access |
|---|---|---|
| ClusterIP | Internal communication | Inside cluster only |
| NodePort | Development/testing | node-ip:port |
| LoadBalancer | Production external access | Cloud LB → pods |
| ExternalName | External service alias | DNS CNAME |
How does DNS work for Services in Kubernetes?
Kubernetes provides built-in DNS for service discovery. Every Service gets a DNS entry that pods can use to find it. Within the same namespace, you can simply use the service name. For cross-namespace communication, use the format service-name.namespace.
The full DNS name follows the pattern service-name.namespace.svc.cluster.local, though you rarely need the full form. This DNS-based discovery means your application code doesn't need to know pod IPs or even service IPs - just use the service name.
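As a quick sketch, the naming pattern is mechanical enough to express as a helper (the function name is invented; `cluster.local` is the conventional default cluster domain, though clusters can configure a different one):

```python
def service_dns(name: str, namespace: str = "default",
                cluster_domain: str = "cluster.local") -> str:
    """Build the fully qualified in-cluster DNS name for a Service."""
    return f"{name}.{namespace}.svc.{cluster_domain}"

assert service_dns("my-app-service") == "my-app-service.default.svc.cluster.local"
assert service_dns("db", "production") == "db.production.svc.cluster.local"
```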
```bash
# Inside cluster, access via:
curl http://my-app-service                            # Same namespace
curl http://my-app-service.default                    # Cross-namespace
curl http://my-app-service.default.svc.cluster.local  # FQDN
```

ConfigMap and Secret Questions
Configuration management is essential for twelve-factor applications.
How do you handle configuration in Kubernetes?
Configuration should be separate from container images so you can use the same image across environments. Kubernetes provides ConfigMaps for non-sensitive configuration and Secrets for sensitive data. Both can be consumed as environment variables or mounted as files.
This separation allows you to change configuration without rebuilding images, and keeps sensitive data out of your container registry. The same Deployment manifest can reference different ConfigMaps in different namespaces for dev/staging/prod configurations.
ConfigMap for non-sensitive configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres.default.svc.cluster.local"
  LOG_LEVEL: "info"
  config.json: |
    {
      "feature_flags": {
        "new_ui": true
      }
    }
```

Secret for sensitive data (base64 encoded):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DATABASE_PASSWORD: cGFzc3dvcmQxMjM= # base64 encoded
  API_KEY: c2VjcmV0LWtleQ==
```

How do you inject ConfigMaps and Secrets into pods?
There are two primary ways to use ConfigMaps and Secrets: as environment variables or as mounted files. Environment variables are simpler but can't be updated without restarting the pod. Mounted files can be updated dynamically (though your application must watch for changes).
The envFrom directive loads all keys as environment variables, while volumeMounts makes them available as files in a directory. Choose based on how your application expects configuration - some frameworks prefer environment variables, others expect config files.
```yaml
spec:
  containers:
    - name: app
      image: my-app:1.0
      # As environment variables
      envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-secrets
      # Or mount as files
      volumeMounts:
        - name: config-volume
          mountPath: /app/config
  volumes:
    - name: config-volume
      configMap:
        name: app-config
```

What is the security concern with Kubernetes Secrets?
Secrets are only base64-encoded by default, not encrypted. Anyone with access to the API server or etcd can read them. Base64 is encoding, not encryption - it's trivially reversible. This is a common source of confusion and security issues.
For real security, enable encryption at rest in etcd, use RBAC to restrict Secret access, and consider external secret management solutions like HashiCorp Vault, AWS Secrets Manager, or the External Secrets Operator. These integrate with Kubernetes to inject secrets without storing them in etcd.
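To make the "encoding, not encryption" point concrete: the values from the Secret manifest earlier in this section decode with nothing more than the standard library.

```python
import base64

# The Secret values above are plain base64, trivially reversible:
assert base64.b64decode("cGFzc3dvcmQxMjM=").decode() == "password123"
assert base64.b64decode("c2VjcmV0LWtleQ==").decode() == "secret-key"

# Encoding a value for a Secret manifest is equally trivial:
assert base64.b64encode(b"password123").decode() == "cGFzc3dvcmQxMjM="
```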
Namespace Questions
Namespaces organize and isolate cluster resources.
What are Namespaces and what are they used for?
Namespaces provide logical isolation within a single cluster, dividing resources between teams, projects, or environments. They're like virtual clusters within your physical cluster. Resources in different namespaces can have the same names without conflict.
Beyond organization, namespaces enable resource quotas (limiting CPU/memory per namespace), RBAC policies (controlling who can access what), and network policies (controlling pod-to-pod communication). They're essential for multi-tenant clusters.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

What are the default namespaces and when should you create new ones?
Kubernetes comes with three default namespaces: default (where resources go if unspecified), kube-system (for Kubernetes components), and kube-public (for publicly accessible resources). Most clusters also have kube-node-lease for node heartbeats.
Create namespaces for logical separation: one per environment (dev, staging, prod), per team, or per application suite. Don't use namespaces for versioning - use labels instead. For truly separate workloads requiring strong isolation, consider separate clusters rather than namespaces.
```bash
# Work in a specific namespace
kubectl get pods -n production
kubectl apply -f deployment.yaml -n staging

# Set default namespace for context
kubectl config set-context --current --namespace=production
```

Label and Selector Questions
Labels are the foundation of Kubernetes' loose coupling.
How do labels and selectors work in Kubernetes?
Labels are key-value pairs attached to Kubernetes objects. They have no semantic meaning to Kubernetes itself - their power comes from selectors that query objects by their labels. This loose coupling is what makes Kubernetes flexible.
Services find pods via label selectors. Deployments manage pods via label selectors. You can query any objects by their labels using kubectl. This pattern means you can add or remove pods from a Service simply by changing labels, without touching the Service definition.
```yaml
metadata:
  labels:
    app: my-app
    environment: production
    version: v1.2.0
    team: backend
```

What are best practices for labeling Kubernetes resources?
Consistent labeling makes your cluster manageable. At minimum, include app (application name), environment (dev/staging/prod), and version labels. Team ownership and component labels help with debugging and access control.
Use labels for selection and grouping; use annotations for non-identifying metadata like build information, documentation links, or tool-specific configuration. Labels have character restrictions (63 chars, alphanumeric, dash, underscore, dot) while annotations are more flexible.
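The two kubectl selector flavors shown below (equality-based and set-based) boil down to simple dictionary checks. This toy evaluator, with an invented `matches` helper, is only a mental model of the semantics, not the real implementation:

```python
def matches(labels, equality=None, in_sets=None):
    """Toy evaluation of kubectl-style label selectors.

    equality: {"app": "my-app"}                        -> -l app=my-app
    in_sets:  {"environment": {"staging", "production"}}
              -> -l 'environment in (staging, production)'
    """
    equality = equality or {}
    in_sets = in_sets or {}
    return (all(labels.get(k) == v for k, v in equality.items())
            and all(labels.get(k) in vals for k, vals in in_sets.items()))

pod = {"app": "my-app", "environment": "production", "version": "v1.2.0"}
assert matches(pod, equality={"app": "my-app", "version": "v1.2.0"})
assert matches(pod, in_sets={"environment": {"staging", "production"}})
assert not matches(pod, equality={"environment": "dev"})
```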
```bash
# Filter by label
kubectl get pods -l app=my-app
kubectl get pods -l 'environment in (staging, production)'
kubectl get pods -l app=my-app,version=v1.2.0

# Delete by label
kubectl delete pods -l environment=test
```

Resource Management Questions
Resource management ensures fair scheduling and prevents resource starvation.
What is the difference between resource requests and limits?
Requests are the guaranteed resources a container needs - Kubernetes uses these for scheduling decisions. If a node can't provide the requested resources, the pod won't be scheduled there. Limits are the maximum resources a container can use.
The difference matters for behavior: exceeding memory limits causes the container to be OOMKilled, while exceeding CPU limits causes throttling (the container just runs slower). Always set requests to ensure predictable scheduling; set limits to prevent runaway containers from affecting others.
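The quantity notation is worth being fluent in: "100m" is 100 millicores (0.1 CPU), and "128Mi" is 128 mebibytes. A minimal sketch of the conversion, handling only the suffixes used in this guide (the helper names are invented):

```python
def parse_cpu(q: str) -> float:
    """Convert a Kubernetes CPU quantity to cores ("100m" -> 0.1)."""
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_memory(q: str) -> int:
    """Convert a memory quantity to bytes (Mi/Gi only, for brevity)."""
    units = {"Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * factor
    return int(q)  # plain bytes

assert parse_cpu("100m") == 0.1
assert parse_cpu("2") == 2.0
assert parse_memory("128Mi") == 128 * 1024 * 1024
```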
```yaml
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"   # 100 millicores = 0.1 CPU
  limits:
    memory: "256Mi"
    cpu: "500m"
```

What happens if you don't set resource requests?
Without requests, pods can be scheduled on any node regardless of actual resource availability. This leads to overcommitment - nodes running more workloads than they can handle. When resources get tight, pods without requests are the first to be evicted.
Setting appropriate requests also affects Quality of Service (QoS) classes. Pods with requests equal to limits get "Guaranteed" QoS and are least likely to be evicted. Pods with no requests or limits get "BestEffort" and are evicted first under pressure. "Burstable" is in between.
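The QoS rules reduce to a short decision: a simplified sketch for a single-container pod (the real classification compares every container's requests and limits per resource; the function name is invented):

```python
def qos_class(requests: dict, limits: dict) -> str:
    """Simplified QoS classification for a single-container pod.

    Guaranteed: requests set and equal to limits for every resource.
    BestEffort: no requests and no limits at all.
    Burstable:  everything in between."""
    if not requests and not limits:
        return "BestEffort"
    if requests and requests == limits:
        return "Guaranteed"
    return "Burstable"

assert qos_class({}, {}) == "BestEffort"
assert qos_class({"cpu": "100m", "memory": "128Mi"},
                 {"cpu": "100m", "memory": "128Mi"}) == "Guaranteed"
assert qos_class({"cpu": "100m"}, {"cpu": "500m"}) == "Burstable"
```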
```bash
# See resource usage
kubectl top pods
kubectl top nodes

# Describe to see limits and current usage
kubectl describe pod my-app-xxx
```

Health Check Questions
Probes ensure traffic only goes to healthy containers.
What is the difference between liveness and readiness probes?
Both probes check container health, but they trigger different actions. A failing liveness probe tells Kubernetes the container is broken and should be restarted. A failing readiness probe tells Kubernetes the container can't serve traffic right now, removing it from Service endpoints.
Liveness probes detect deadlocks or hung processes that need a restart. Readiness probes handle temporary conditions like warming caches or waiting for dependencies - situations where restarting wouldn't help but the container shouldn't receive traffic.
Liveness probe: "Is the container alive?"
- Failure → Kubernetes restarts the container
- Use for: detecting deadlocks, hung processes
Readiness probe: "Can this container serve traffic?"
- Failure → Removed from Service endpoints
- Use for: warming caches, waiting for dependencies
How do you configure health probes in Kubernetes?
Probes support three check types: HTTP GET (returns 200-399), TCP socket (connection succeeds), and exec (command returns 0). For web applications, HTTP probes to dedicated health endpoints are most common. TCP probes work for non-HTTP services.
Key settings include initialDelaySeconds (time before first probe), periodSeconds (how often to probe), and failureThreshold (consecutive failures before action). A common mistake is setting initialDelaySeconds too low, causing containers to be killed before they finish starting.
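To show how failureThreshold behaves, here is a toy simulation of the consecutive-failure counter (invented helper; a single success resets the count, which is why transient blips don't trigger restarts):

```python
def probe_decision(results, failure_threshold=3):
    """Decide the outcome after a sequence of probe results (True = success).

    A probe only triggers its action after `failure_threshold` consecutive
    failures; any success resets the counter. For a liveness probe the
    action is a container restart; for readiness it is removal from
    Service endpoints."""
    consecutive = 0
    for ok in results:
        consecutive = 0 if ok else consecutive + 1
        if consecutive >= failure_threshold:
            return "act"
    return "healthy"

# Two failures then a success: the counter resets, no action taken.
assert probe_decision([False, False, True, False]) == "healthy"
# Three consecutive failures trip the threshold.
assert probe_decision([True, False, False, False]) == "act"
```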
```yaml
spec:
  containers:
    - name: app
      livenessProbe:
        httpGet:
          path: /healthz
          port: 3000
        initialDelaySeconds: 10
        periodSeconds: 5
        failureThreshold: 3
      readinessProbe:
        httpGet:
          path: /ready
          port: 3000
        initialDelaySeconds: 5
        periodSeconds: 3
        failureThreshold: 3
```

Probe types:
- httpGet - HTTP request (most common for web apps)
- tcpSocket - TCP connection check
- exec - Run a command in container
kubectl Command Questions
Interviewers expect fluency with kubectl.
What are the essential kubectl commands every developer should know?
The most common commands fall into a few categories: getting information (get, describe), viewing logs, executing commands in containers, and applying/deleting resources. The -o wide and -o yaml flags are particularly useful for debugging.
Mastering these commands shows you can actually work with Kubernetes, not just talk about it. Practice until they're muscle memory - interviewers notice when candidates fumble with basic kubectl operations.
```bash
# Get resources
kubectl get pods
kubectl get pods -o wide   # More details (IP, node)
kubectl get pods -o yaml   # Full YAML output
kubectl get all            # Pods, services, deployments

# Describe (detailed info + events)
kubectl describe pod my-app-xxx
kubectl describe node node-1

# Logs
kubectl logs my-app-xxx
kubectl logs my-app-xxx -c sidecar   # Specific container
kubectl logs -f my-app-xxx           # Follow/stream
kubectl logs --previous my-app-xxx   # Previous crashed container

# Execute commands
kubectl exec -it my-app-xxx -- /bin/sh
kubectl exec my-app-xxx -- env

# Apply/delete
kubectl apply -f manifest.yaml
kubectl delete -f manifest.yaml
kubectl delete pod my-app-xxx

# Debug
kubectl get events --sort-by='.lastTimestamp'
kubectl describe pod my-app-xxx | grep -A 10 Events
```

Troubleshooting Questions
Troubleshooting skills separate experienced practitioners from beginners.
How do you troubleshoot a pod stuck in Pending state?
A pod stuck in Pending means the scheduler can't find a suitable node. The Events section in kubectl describe pod tells you why. Common causes include insufficient resources (no node has enough CPU/memory), unsatisfied node selectors or affinity rules, and PersistentVolumeClaims that can't be bound.
Start by examining the events, then check if any nodes have available resources with kubectl describe nodes. The solution depends on the cause: scale up your cluster, reduce resource requests, fix selectors, or provision required storage.
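The scheduler's basic fit check is simple to model. This sketch (invented helper; the real scheduler also weighs taints, affinity, volume topology, and more) shows why a pod whose requests exceed every node's free capacity sits in Pending:

```python
def fits(node_allocatable, node_used, pod_requests):
    """Can a pod's resource requests fit on this node?"""
    return all(node_used.get(r, 0) + need <= node_allocatable.get(r, 0)
               for r, need in pod_requests.items())

node_allocatable = {"cpu": 2000, "memory": 4096}  # millicores / Mi
node_used        = {"cpu": 1800, "memory": 1024}

# Requesting 500m CPU overshoots the node's remaining 200m -> stays Pending.
assert not fits(node_allocatable, node_used, {"cpu": 500, "memory": 256})
# A smaller request fits, so the pod can be scheduled here.
assert fits(node_allocatable, node_used, {"cpu": 200, "memory": 256})
```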
```bash
kubectl describe pod my-app-xxx
# Look for Events section:
# - Insufficient CPU/memory → scale cluster or reduce requests
# - No nodes match selectors → check nodeSelector/affinity
# - PVC pending → check storage class, PV availability
```

How do you debug a pod in CrashLoopBackOff?
CrashLoopBackOff means the container keeps crashing and Kubernetes is backing off on restart attempts. The --previous flag shows logs from the crashed container instance, which usually reveals the problem.
Common causes include application errors on startup (missing dependencies, configuration issues), missing or incorrect ConfigMaps/Secrets, liveness probes failing too quickly (container killed before ready), and OOMKilled (memory limit exceeded). Fix the underlying issue rather than just increasing restart delays.
```bash
# Check logs from crashed container
kubectl logs my-app-xxx --previous

# Common causes:
# - Application error on startup
# - Missing config/secrets
# - Liveness probe failing too quickly
# - OOMKilled (check resource limits)
```

How would you expose an application externally?
The approach depends on your environment and requirements. For development, NodePort services provide quick external access. For production cloud environments, LoadBalancer services provision cloud load balancers. For HTTP applications needing path-based routing, SSL termination, or virtual hosts, use an Ingress controller.
Ingress is the most flexible option for web applications, allowing multiple services behind a single load balancer with sophisticated routing rules. You'll need an Ingress controller (nginx-ingress, traefik, etc.) installed in your cluster.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```

Options by use case:
- Development: NodePort service
- Production: LoadBalancer service (cloud) or Ingress controller
- HTTP routing: Ingress for path-based routing, SSL termination
Quick Reference
| Concept | Purpose | Key Point |
|---|---|---|
| Pod | Run containers | Ephemeral, shared network/storage |
| Deployment | Manage pod replicas | Rolling updates, self-healing |
| Service | Stable networking | Label selectors, load balancing |
| ConfigMap | Non-sensitive config | Env vars or mounted files |
| Secret | Sensitive data | Base64, enable encryption at rest |
| Namespace | Logical isolation | Quotas, RBAC, name scoping |
| Labels | Organize/select | Key-value pairs for querying |
| Probes | Health checks | Liveness (restart), Readiness (traffic) |
Related Resources
- Docker Interview Guide - Container fundamentals before orchestration
- Linux Commands Interview Guide - Essential commands for debugging containers
- CI/CD & GitHub Actions Interview Guide - Deploying to Kubernetes from pipelines
- Complete DevOps Engineer Interview Guide - Comprehensive DevOps preparation
- System Design Interview Guide - Architecture patterns where Kubernetes fits in
