Docker has become essential knowledge for developers. Whether you're deploying applications, setting up local development environments, or working with CI/CD pipelines, interviewers expect you to understand containerization fundamentals.
This guide covers the Docker questions that actually come up in interviews—from basic concepts to production-ready Dockerfile patterns.
Images vs Containers: The Foundation
Q: What's the difference between a Docker image and a container?
This is often the first Docker question in an interview. Many candidates give vague answers.
Weak answer: "An image is like a template and a container is running."
Strong answer:
An image is a read-only, layered filesystem containing everything needed to run an application: code, runtime, libraries, environment variables, and configuration. Images are built from Dockerfiles and can be shared via registries.
A container is a running instance of an image. When you start a container, Docker adds a thin writable layer on top of the image layers. This is where runtime changes (logs, temp files, state) are stored.
Key distinction: You can create multiple containers from one image, each with its own isolated writable layer. The underlying image remains unchanged.
# Image: the blueprint
docker pull node:20-alpine
# Container: running instance
docker run -d --name my-app node:20-alpine
# Multiple containers from same image
docker run -d --name my-app-2 node:20-alpine
docker run -d --name my-app-3 node:20-alpine
What interviewers want to hear: Understanding that images are immutable and containers add a writable layer. Bonus points for mentioning copy-on-write and how this enables fast container startup.
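To make the shared-layer point concrete, `docker ps -s` shows each container's writable-layer size next to the "virtual" size that includes the shared image layers (exact numbers will vary on your machine):

# SIZE = per-container writable layer; "virtual" = writable layer + shared image layers
docker ps -s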
Dockerfile Instructions: CMD vs ENTRYPOINT
Q: What's the difference between CMD and ENTRYPOINT?
This trips up many developers because both seem to "run a command."
CMD provides default arguments that can be completely overridden:
FROM node:20-alpine
CMD ["npm", "start"]# Uses CMD default
docker run my-app
# Overrides CMD entirely
docker run my-app npm test
ENTRYPOINT defines the main executable:
FROM node:20-alpine
ENTRYPOINT ["node"]
CMD ["app.js"]# Runs: node app.js
docker run my-app
# Runs: node server.js (CMD overridden, ENTRYPOINT stays)
docker run my-app server.js
Best practice pattern:
ENTRYPOINT ["node"]
CMD ["app.js"]This lets users change the script file while keeping node as the process. For production apps, you might use:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["start"]Where the entrypoint script handles initialization (migrations, env validation) before executing the CMD.
Layer Caching: Build Performance
Q: How does Docker layer caching work? How do you optimize for it?
Understanding layer caching separates junior from senior Docker users.
Docker executes each Dockerfile instruction and caches the result as a layer. On subsequent builds, if an instruction and all previous layers haven't changed, Docker reuses the cached layer.
The cache invalidation rule: When any layer changes, all subsequent layers must rebuild.
Bad Dockerfile (cache busted on every code change):
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]Every code change copies new files, invalidating npm install cache.
Optimized Dockerfile:
FROM node:20-alpine
WORKDIR /app
# Dependencies change less often than code
COPY package*.json ./
RUN npm ci
# Code changes frequently - this layer rebuilds
COPY . .
CMD ["npm", "start"]Now npm ci only reruns when package.json or package-lock.json change.
Pro tips interviewers appreciate:
- Use `npm ci` instead of `npm install` for reproducible builds
- Order instructions from least to most frequently changed
- Use `.dockerignore` to exclude `node_modules`, `.git`, logs (example below)
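A typical .dockerignore for a Node project might look like this (adjust to your repo layout):

# .dockerignore
node_modules
.git
*.log
dist
.env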
Multi-Stage Builds: Production Images
Q: What are multi-stage builds and why use them?
Multi-stage builds are essential for production-ready images. They let you use full build toolchains without shipping them in your final image.
Single-stage problem:
FROM node:20
WORKDIR /app
COPY . .
RUN npm ci && npm run build
CMD ["node", "dist/index.js"]
# Image: 1.2GB (includes npm, dev dependencies, source)
Multi-stage solution:
# Stage 1: Build
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
# Image: 150MB
Even smaller with production dependencies only:
# Stage 1: Build
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production deps only
FROM node:20-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
# Image: 80MB
What interviewers want to hear: Multi-stage builds reduce attack surface (fewer packages = fewer vulnerabilities), speed up deployments (smaller images), and separate concerns (build tools vs runtime).
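To verify sizes yourself, tag each variant and compare them side by side (tag names here are just examples):

docker build -t myapp:multi .
docker images myapp   # lists each tag with its SIZE column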
COPY vs ADD
Q: What's the difference between COPY and ADD?
COPY does exactly what it says—copies files from build context to the image:
COPY package.json ./
COPY src/ ./src/
ADD has two extra features:
- Auto-extracts tar archives
- Can download from URLs (not recommended)
# Extracts the tar into /app
ADD app.tar.gz /app/
# Downloads file (avoid this - not cached, no checksum)
ADD https://example.com/file.txt /app/
Best practice: Always use COPY unless you specifically need tar extraction. It's more explicit and predictable. For downloads, use curl or wget in a RUN instruction so you can verify checksums.
# Better than ADD for downloads
RUN curl -fsSL https://example.com/file.txt -o /app/file.txt \
&& echo "expected-checksum /app/file.txt" | sha256sum -c -Docker Compose for Development
Q: When would you use Docker Compose?
Docker Compose defines multi-container applications in a single YAML file. It's essential for local development environments that mirror production.
# docker-compose.yml
version: '3.8'
services:
app:
build: .
ports:
- "3000:3000"
volumes:
- .:/app
- /app/node_modules
environment:
- NODE_ENV=development
- DATABASE_URL=postgres://user:pass@db:5432/myapp
depends_on:
- db
- redis
db:
image: postgres:15-alpine
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_USER=user
- POSTGRES_PASSWORD=pass
- POSTGRES_DB=myapp
redis:
image: redis:7-alpine
volumes:
postgres_data:
Key patterns to know:
- Bind mounts for live reload: `.:/app` syncs code changes
- Anonymous volume for node_modules: `/app/node_modules` prevents overwriting the container's modules with the host's
- Named volumes for persistence: `postgres_data` survives container restarts
- Service discovery: services reach each other by name (`db:5432`)
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f app
# Rebuild after Dockerfile changes
docker-compose up -d --build
# Clean shutdown
docker-compose down
# Remove volumes too
docker-compose down -v
Volumes vs Bind Mounts
Q: What's the difference between volumes and bind mounts?
Both persist data outside the container's writable layer, but they work differently.
Bind mounts link a host path directly:
docker run -v /host/path:/container/path myapp
# or explicitly
docker run --mount type=bind,source=/host/path,target=/container/path myapp
- Depends on host filesystem structure
- Great for development (live code changes)
- Host files accessible in container immediately
Volumes are managed by Docker:
docker volume create mydata
docker run -v mydata:/container/path myapp
# or explicitly
docker run --mount type=volume,source=mydata,target=/container/path myapp
- Stored in Docker's area (`/var/lib/docker/volumes/`)
- Portable between hosts
- Easier to back up and migrate (example below)
- Can use volume drivers for cloud storage
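"Easier to back up" deserves a concrete example. One common pattern (not the only one) is a throwaway container that mounts the volume and writes a tarball to the host:

# Back up the 'mydata' volume into the current directory
docker run --rm -v mydata:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/mydata-backup.tar.gz -C /data .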
When to use each:
| Use Case | Choice |
|---|---|
| Development (code sync) | Bind mount |
| Database data | Volume |
| Shared config files | Bind mount |
| Production data persistence | Volume |
| CI/CD caching | Volume |
Networking Basics
Q: How do containers communicate with each other?
Docker provides several network modes:
Bridge (default): Containers on same bridge network can communicate by name.
# Create custom network
docker network create mynet
# Containers can reach each other by name
docker run -d --name api --network mynet myapi
docker run -d --name web --network mynet myweb
# From 'web', can reach: http://api:3000
Host: Container shares host's network stack. No port mapping needed, but no isolation.
docker run --network host myapp
# App on port 3000 is directly on host:3000
None: No networking. Complete isolation.
Docker Compose default: Creates a network per project. Services communicate by service name.
services:
api:
# reachable at 'api:3000' from other services
ports:
- "3000:3000" # Also exposed to host
worker:
# Can reach http://api:3000
# Not exposed to host (no ports mapping)
Environment Variables and Secrets
Q: How do you handle configuration and secrets in Docker?
Environment variables for non-sensitive config:
# Default in Dockerfile
ENV NODE_ENV=production
ENV PORT=3000
# Override at runtime
docker run -e NODE_ENV=development -e PORT=8080 myapp
# From file
docker run --env-file .env myapp
Secrets should never be in images or environment variables in production.
Development: .env files are fine
# docker-compose.yml
services:
app:
env_file:
- .env
Production approaches:
- Docker Swarm secrets: `echo "mysecret" | docker secret create db_password -` (see the Compose sketch below)
- Mount from a secret manager:
  volumes:
    - /run/secrets/db_password:/run/secrets/db_password:ro
- Inject at runtime via an orchestrator (Kubernetes secrets, AWS Secrets Manager, HashiCorp Vault)
Key point for interviews: Never bake secrets into images. Even if you delete them in a later layer, they exist in earlier layers and can be extracted.
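You can demonstrate this in an interview: every layer is preserved in the image, and standard commands expose them:

# Each layer's command is visible, including any COPY or RUN that handled a secret
docker history myapp
# The layer filesystems themselves can be exported and inspected as plain tarballs
docker save myapp -o myapp.tar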
Common Commands Cheat Sheet
Interviewers sometimes ask you to explain commands or troubleshoot scenarios.
# Images
docker build -t myapp:v1 .
docker images
docker rmi myapp:v1
docker image prune # Remove unused images
# Containers
docker run -d -p 3000:3000 --name app myapp
docker ps # Running containers
docker ps -a # All containers
docker stop app
docker rm app
docker logs app
docker logs -f app # Follow logs
# Debugging
docker exec -it app sh # Shell into container
docker inspect app # Full container details
docker stats # Resource usage
# Cleanup
docker system prune # Remove unused data
docker system prune -a # Including unused images
Troubleshooting questions to expect:
- Container exits immediately? Check `docker logs` and ensure the process runs in the foreground (diagnostic commands below)
- Can't connect to a port? Verify the port mapping with `docker ps`, and check the app binds to `0.0.0.0`, not `localhost`
- Build slow? Check `.dockerignore`, optimize layer order
- Image too large? Use multi-stage builds and alpine base images
Quick Reference: Dockerfile Best Practices
| Practice | Example |
|---|---|
| Use specific base image tags | node:20-alpine not node:latest |
| Non-root user | USER node |
| Multi-stage builds | Separate build and runtime stages |
| Order for cache | Dependencies before source code |
| Combine RUN commands | Reduce layers with && |
| Use .dockerignore | Exclude node_modules, .git, logs |
| COPY over ADD | Unless you need tar extraction |
| Explicit WORKDIR | WORKDIR /app |
| Health checks | HEALTHCHECK CMD curl -f http://localhost/health |
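The health check from the table, written out in a Dockerfile (assumes the image includes curl and the app serves /health on port 3000):

HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1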
Related Articles
If you found this helpful, check out these related guides:
- Complete DevOps Engineer Interview Guide - comprehensive preparation guide for DevOps interviews
- Node.js Advanced Interview Guide - Cluster mode, worker threads, and production patterns
- System Design Interview Guide - Architecture decisions where containers play a key role
- Express Middleware Interview Guide - Building the APIs that run in your containers
- Kubernetes Interview Guide - Container orchestration at scale
- Linux Commands Interview Guide - Essential commands for containers and servers
- CI/CD & GitHub Actions Interview Guide - Building and deploying Docker images in pipelines
What's Next?
Docker is the foundation, but modern deployments go further. Once you're comfortable with these concepts, explore:
- Kubernetes - Orchestrating containers at scale
- CI/CD pipelines - Automated building and deploying images
- Container security - Scanning, signing, runtime protection
The developers who stand out in interviews can explain not just how to use Docker, but why certain patterns exist and what problems they solve.
