Kubernetes Interview Questions: Core Concepts for Developers


Kubernetes has become the standard for container orchestration. If you're interviewing for backend, DevOps, or full-stack roles, expect questions about how Kubernetes works and why certain patterns exist.

This guide covers the core Kubernetes concepts that come up in interviews—with practical YAML examples you can actually use.

Pods: The Foundation

Q: What is a Pod in Kubernetes?

This is the starting point for most Kubernetes interviews.

Weak answer: "A pod is a container."

Strong answer:

A Pod is the smallest deployable unit in Kubernetes. It's a wrapper around one or more containers that:

  • Share a network namespace - All containers in a pod have the same IP address and can reach each other via localhost
  • Share storage volumes - Containers can access the same mounted volumes
  • Are scheduled together - Always run on the same node
  • Have a shared lifecycle - Created and destroyed as a unit

# Simple pod definition
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  containers:
  - name: app
    image: my-app:1.0
    ports:
    - containerPort: 3000

Key insight for interviews: Pods are ephemeral. When a pod dies, it isn't resurrected; a controller (such as a Deployment) creates a replacement pod with a new IP and a new identity rather than restarting the old one. This is why you need Services for stable networking and Deployments for reliability.

When to use multi-container pods:

  • Sidecar patterns (logging, monitoring agents)
  • Init containers (database migrations, config fetching)
  • Tightly coupled processes that must share resources
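
For the sidecar and init-container cases above, a minimal sketch (the fluent-bit image and the migration command are illustrative placeholders):

# Illustrative multi-container pod: init container plus a log-shipping sidecar
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-sidecar
spec:
  initContainers:
  - name: run-migrations            # completes before the app containers start
    image: my-app:1.0
    command: ["sh", "-c", "echo 'running migrations...'"]
  containers:
  - name: app
    image: my-app:1.0
    volumeMounts:
    - name: logs
      mountPath: /var/log/app       # the app writes logs here
  - name: log-shipper               # sidecar reads the same shared volume
    image: fluent/fluent-bit:2.2
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}                    # shared, pod-lifetime scratch volume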

Deployments: Managing Pods at Scale

Q: What's the difference between a Pod and a Deployment?

Understanding this distinction is crucial.

A Pod is a single instance. A Deployment is a controller that:

  • Maintains a desired number of pod replicas
  • Handles rolling updates and rollbacks
  • Replaces failed pods automatically
  • Provides declarative updates

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:1.0
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "500m"

What interviewers want to hear:

  1. You rarely create pods directly—you create Deployments
  2. Deployments create ReplicaSets, which create Pods
  3. The hierarchy: Deployment → ReplicaSet → Pods

# Common deployment commands
kubectl apply -f deployment.yaml
kubectl get deployments
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app
kubectl scale deployment/my-app --replicas=5

Rolling Updates and Rollbacks

Q: How do rolling updates work in Kubernetes?

Rolling updates let you update pods without downtime.

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # Max pods above desired count
      maxUnavailable: 0  # Max pods that can be unavailable

The process:

  1. New ReplicaSet created with updated pod template
  2. New pods start coming up
  3. Old pods terminate as new pods become ready
  4. Traffic shifts gradually to new pods

# Update image (triggers rolling update)
kubectl set image deployment/my-app app=my-app:2.0
 
# Watch the rollout
kubectl rollout status deployment/my-app
 
# Something wrong? Roll back
kubectl rollout undo deployment/my-app
 
# Roll back to specific revision
kubectl rollout undo deployment/my-app --to-revision=2
 
# View history
kubectl rollout history deployment/my-app

Interview follow-up: "How would you do a blue-green deployment?"

Kubernetes doesn't have built-in blue-green, but you can:

  1. Create two deployments (blue and green)
  2. Switch the Service selector between them
  3. Or use Ingress rules to control traffic routing
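
One way to sketch option 2, switching the Service selector between the two deployments (the version labels and names are illustrative):

# The Service points at whichever color is live
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
    version: blue        # change to "green" to cut traffic over
  ports:
  - port: 80
    targetPort: 3000

# Cut over by patching the selector
kubectl patch service my-app-service \
  -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'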

Services: Stable Networking

Q: Why do we need Services? How do they work?

Since pods are ephemeral with changing IPs, Services provide stable endpoints.

Service types:

Type          Use Case                     Access
ClusterIP     Internal communication       Inside cluster only
NodePort      Development/testing          <node-ip>:<node-port>
LoadBalancer  Production external access   Cloud LB → pods
ExternalName  External service alias       DNS CNAME

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: ClusterIP
  selector:
    app: my-app  # Routes to pods with this label
  ports:
  - port: 80        # Service port
    targetPort: 3000 # Container port

How it works:

  1. Service gets a stable ClusterIP
  2. Label selector finds matching pods
  3. kube-proxy configures iptables/IPVS rules
  4. Traffic load-balanced across healthy pods

# Inside cluster, access via:
curl http://my-app-service          # Same namespace
curl http://my-app-service.default  # Cross-namespace
curl http://my-app-service.default.svc.cluster.local  # FQDN

Key point: Services use label selectors, not pod names. Add/remove pods dynamically, and the Service automatically routes to them.
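
To confirm the selector is matching the pods you expect, compare the Service's endpoints with a label query (names follow the earlier example):

# Pod IPs currently backing the Service
kubectl get endpoints my-app-service

# Pods the selector should be matching
kubectl get pods -l app=my-app -o wide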


ConfigMaps and Secrets

Q: How do you handle configuration in Kubernetes?

Configuration should be separate from container images. Kubernetes provides two resources:

ConfigMap - Non-sensitive configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: "postgres.default.svc.cluster.local"
  LOG_LEVEL: "info"
  config.json: |
    {
      "feature_flags": {
        "new_ui": true
      }
    }

Secret - Sensitive data (base64 encoded):

apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DATABASE_PASSWORD: cGFzc3dvcmQxMjM=  # base64 encoded
  API_KEY: c2VjcmV0LWtleQ==
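
You don't have to hand-encode values; the same Secret can be created imperatively (values mirror the example above):

# Base64-encode a single value
echo -n 'password123' | base64

# Let kubectl handle the encoding (stringData in a manifest also accepts plain text)
kubectl create secret generic app-secrets \
  --from-literal=DATABASE_PASSWORD=password123 \
  --from-literal=API_KEY=secret-key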

Using them in pods:

spec:
  containers:
  - name: app
    image: my-app:1.0
    # As environment variables
    envFrom:
    - configMapRef:
        name: app-config
    - secretRef:
        name: app-secrets
    # Or mount as files
    volumeMounts:
    - name: config-volume
      mountPath: /app/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config

Interview tip: Know that Secrets are only base64 encoded, not encrypted by default. For real security, enable encryption at rest and use external secret managers (Vault, AWS Secrets Manager) with operators.


Namespaces: Logical Isolation

Q: What are Namespaces used for?

Namespaces divide cluster resources between multiple teams or environments.

apiVersion: v1
kind: Namespace
metadata:
  name: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging

Benefits:

  1. Name scoping - Same resource names in different namespaces
  2. Resource quotas - Limit CPU/memory per namespace (example below)
  3. Access control - RBAC policies per namespace
  4. Organization - Logical grouping of resources

# Work in a specific namespace
kubectl get pods -n production
kubectl apply -f deployment.yaml -n staging
 
# Set default namespace for context
kubectl config set-context --current --namespace=production
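
A sketch of the resource-quota idea from the list above (the numbers are illustrative):

# Cap total resource consumption in the staging namespace
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"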

Default namespaces:

  • default - Where resources go if no namespace specified
  • kube-system - Kubernetes system components
  • kube-public - Readable by all clients, including unauthenticated users (mostly cluster bootstrap info)

When NOT to use namespaces: For versioning (use labels), for separating unrelated applications (use separate clusters for strong isolation).


Labels and Selectors

Q: How do labels work in Kubernetes?

Labels are key-value pairs attached to objects. Selectors query objects by their labels.

metadata:
  labels:
    app: my-app
    environment: production
    version: v1.2.0
    team: backend

Why they matter:

  • Services find pods via label selectors
  • Deployments manage pods via label selectors
  • You can query and filter resources

# Filter by label
kubectl get pods -l app=my-app
kubectl get pods -l 'environment in (staging, production)'
kubectl get pods -l app=my-app,version=v1.2.0
 
# Delete by label
kubectl delete pods -l environment=test

Best practices:

  • Use consistent naming conventions
  • Include: app, environment, version, team
  • Labels are for identifying and selecting objects; annotations are for non-identifying metadata (build info, tooling hints)
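
Labels can also be added or changed on live objects; a few commands using the label keys from above:

# Add or update a label on a running pod
kubectl label pod my-app-xxx team=backend

# Overwrite an existing label value
kubectl label pod my-app-xxx environment=staging --overwrite

# Annotations hold non-identifying metadata
kubectl annotate pod my-app-xxx build-commit=abc123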

Resource Requests and Limits

Q: What's the difference between requests and limits?

This affects scheduling and resource management.

resources:
  requests:
    memory: "128Mi"
    cpu: "100m"      # 100 millicores = 0.1 CPU
  limits:
    memory: "256Mi"
    cpu: "500m"

Requests:

  • The minimum resources reserved for the container
  • Used for scheduling decisions
  • The pod won't be scheduled if no node can provide the requested resources

Limits:

  • Maximum resources allowed
  • Exceeding memory limit → pod killed (OOMKilled)
  • Exceeding CPU limit → throttled (not killed)

What interviewers want to hear:

  1. Always set requests—otherwise pods might get scheduled on overloaded nodes
  2. Limits prevent runaway containers from affecting others
  3. requests affect scheduling; limits affect runtime behavior

# See resource usage
kubectl top pods
kubectl top nodes
 
# Describe to see limits and current usage
kubectl describe pod my-app-xxx

Liveness and Readiness Probes

Q: What's the difference between liveness and readiness probes?

Both check container health, but have different purposes.

Liveness probe: "Is the container alive?"

  • Failure → Kubernetes restarts the container
  • Use for: detecting deadlocks, hung processes

Readiness probe: "Can this container serve traffic?"

  • Failure → Removed from Service endpoints
  • Use for: warming caches, waiting for dependencies

spec:
  containers:
  - name: app
    livenessProbe:
      httpGet:
        path: /healthz
        port: 3000
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /ready
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 3

Probe types:

  • httpGet - HTTP request (most common for web apps)
  • tcpSocket - TCP connection check
  • exec - Run a command in container
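
The same timing fields work with the other probe types; a short sketch of the non-HTTP variants (the port and command are illustrative):

livenessProbe:
  tcpSocket:            # passes if the port accepts a TCP connection
    port: 5432
  periodSeconds: 10
readinessProbe:
  exec:                 # passes if the command exits with status 0
    command: ["sh", "-c", "test -f /tmp/ready"]
  periodSeconds: 5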

Common mistake: Setting initialDelaySeconds too low on the liveness probe, so the container is killed before the application has finished starting.


Essential kubectl Commands

Interviewers often expect fluency with kubectl.

# Get resources
kubectl get pods
kubectl get pods -o wide              # More details (IP, node)
kubectl get pods -o yaml              # Full YAML output
kubectl get all                       # Pods, services, deployments
 
# Describe (detailed info + events)
kubectl describe pod my-app-xxx
kubectl describe node node-1
 
# Logs
kubectl logs my-app-xxx
kubectl logs my-app-xxx -c sidecar    # Specific container
kubectl logs -f my-app-xxx            # Follow/stream
kubectl logs --previous my-app-xxx    # Previous crashed container
 
# Execute commands
kubectl exec -it my-app-xxx -- /bin/sh
kubectl exec my-app-xxx -- env
 
# Apply/delete
kubectl apply -f manifest.yaml
kubectl delete -f manifest.yaml
kubectl delete pod my-app-xxx
 
# Debug
kubectl get events --sort-by='.lastTimestamp'
kubectl describe pod my-app-xxx | grep -A 10 Events

Common Interview Scenarios

"A pod is stuck in Pending state. How do you troubleshoot?"

kubectl describe pod my-app-xxx
# Look for Events section:
# - Insufficient CPU/memory → scale cluster or reduce requests
# - No nodes match selectors → check nodeSelector/affinity
# - PVC pending → check storage class, PV availability

"A pod is in CrashLoopBackOff. What do you do?"

# Check logs from crashed container
kubectl logs my-app-xxx --previous
 
# Common causes:
# - Application error on startup
# - Missing config/secrets
# - Liveness probe failing too quickly
# - OOMKilled (check resource limits)

"How would you expose an application externally?"

  1. Development: NodePort service
  2. Production: LoadBalancer service (cloud) or Ingress controller (both sketched below)
  3. Ingress for HTTP routing, SSL termination, path-based routing

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80
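
For option 2 on a cloud provider, a minimal LoadBalancer Service sketch (the name my-app-public is illustrative; the cloud provisions the external IP):

apiVersion: v1
kind: Service
metadata:
  name: my-app-public
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 3000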

Quick Reference

Concept      Purpose                 Key Point
Pod          Run containers          Ephemeral, shared network/storage
Deployment   Manage pod replicas     Rolling updates, self-healing
Service      Stable networking       Label selectors, load balancing
ConfigMap    Non-sensitive config    Env vars or mounted files
Secret       Sensitive data          Base64 only, enable encryption at rest
Namespace    Logical isolation       Quotas, RBAC, name scoping
Labels       Organize/select         Key-value pairs for querying
Probes       Health checks           Liveness (restart), Readiness (traffic)

What's Next?

These core concepts cover what most developers need for interviews. As you go deeper, explore:

  • Helm - Package management for Kubernetes
  • Horizontal Pod Autoscaler - Automatic scaling based on metrics
  • Network Policies - Controlling pod-to-pod traffic
  • RBAC - Role-based access control

The developers who stand out understand not just the "what" but the "why"—why pods are ephemeral, why Services use selectors, why you separate config from code.

Ready to ace your interview?

Get 550+ interview questions with detailed answers in our comprehensive PDF guides.
