Linux skills are expected for backend, DevOps, and full-stack roles. Even if you develop on macOS or Windows, your code likely runs on Linux servers. Interviewers want to see you're comfortable navigating, troubleshooting, and automating in a Linux environment.
This guide covers the essential Linux commands and concepts that come up in developer interviews.
File System Navigation
Q: What are the essential commands for navigating the Linux file system?
These are the basics, but interviewers expect fluency:
# Navigation
pwd # Print working directory
cd /path/to/dir # Change directory
cd .. # Go up one level
cd ~ # Go to home directory
cd - # Go to previous directory
# Listing files
ls # List files
ls -la # Long format, include hidden files
ls -lh # Human-readable sizes
ls -lt # Sort by modification time
# File info
file document.pdf # Determine file type
stat file.txt # Detailed file information
du -sh directory/ # Directory size
df -h # Disk space usage

Directory structure to know:
| Path | Purpose |
|---|---|
| / | Root directory |
| /home | User home directories |
| /etc | Configuration files |
| /var | Variable data (logs, databases) |
| /tmp | Temporary files |
| /usr | User programs and data |
| /opt | Optional/third-party software |
File Operations
Q: How do you create, copy, move, and delete files?
# Create
touch file.txt # Create empty file
mkdir directory # Create directory
mkdir -p path/to/nested # Create nested directories
# Copy
cp source dest # Copy file
cp -r source/ dest/ # Copy directory recursively
cp -p source dest # Preserve permissions/timestamps
# Move/rename
mv oldname newname # Rename file
mv file /path/to/dest # Move file
# Delete
rm file.txt # Delete file
rm -r directory/ # Delete directory recursively
rm -rf directory/ # Force delete (careful!)
rmdir empty_directory/ # Delete empty directory only

Interview tip: Always mention being careful with rm -rf, especially with variables:
# DANGEROUS - if $DIR is empty, deletes everything!
rm -rf $DIR/*
# SAFER - quotes and checks
if [ -n "$DIR" ]; then
rm -rf "$DIR"/*
fi

File Permissions
Q: How do Linux file permissions work?
This is a common interview topic. Understand both symbolic and numeric notation.
# View permissions
ls -la
# -rw-r--r-- 1 user group 1234 Jan 1 12:00 file.txt
# drwxr-xr-x 2 user group 4096 Jan 1 12:00 directory/

Breaking down -rw-r--r--:
| Position | Meaning |
|---|---|
| 1 | File type (- file, d directory, l link) |
| 2-4 | Owner permissions (rw-) |
| 5-7 | Group permissions (r--) |
| 8-10 | Others permissions (r--) |
Permission values:
r (read) = 4, w (write) = 2, x (execute) = 1
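You can verify the octal/symbolic mapping on any file. A quick sketch using GNU `stat` (the `-c` format flags are GNU coreutils; BSD/macOS `stat` uses different syntax):

```shell
# %a prints the octal permissions, %A the symbolic string (GNU stat)
touch /tmp/perm_demo.txt
chmod 644 /tmp/perm_demo.txt
stat -c '%a %A' /tmp/perm_demo.txt   # prints: 644 -rw-r--r--
rm /tmp/perm_demo.txt
```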
# Numeric notation
chmod 755 script.sh # rwxr-xr-x (owner: full, others: read+execute)
chmod 644 file.txt # rw-r--r-- (owner: read+write, others: read)
chmod 600 secrets.txt # rw------- (owner only)
# Symbolic notation
chmod u+x script.sh # Add execute for owner
chmod g-w file.txt # Remove write for group
chmod o=r file.txt # Set others to read only
chmod a+r file.txt # Add read for all
# Change ownership
chown user:group file.txt
chown -R user:group directory/ # Recursive

Special permissions to know:
- SUID (4): Execute as the file's owner (e.g., the passwd command)
- SGID (2): Execute as the file's group; new files inherit the directory's group
- Sticky bit (1): Only a file's owner can delete it (e.g., /tmp)
chmod 4755 file # SUID
chmod 2755 directory # SGID
chmod 1777 directory # Sticky bit

Hard Links vs Soft Links
Q: What's the difference between hard links and soft links?
# Create soft link (symlink)
ln -s /path/to/original link_name
# Create hard link
ln /path/to/original link_name

| Feature | Hard Link | Soft Link (Symlink) |
|---|---|---|
| Points to | Inode directly | File path |
| Original deleted | Still works | Broken link |
| Cross filesystem | No | Yes |
| Link to directory | No | Yes |
| File size | Same as original | Small (path length) |
Use cases:
- Soft links: Shortcuts, pointing to executables, config file alternatives
- Hard links: Backups, ensuring file persists while any link exists
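The "original deleted" row is easy to demonstrate with a short experiment in a throwaway temp directory:

```shell
# A hard link survives deletion of the original name; a symlink does not
cd "$(mktemp -d)"
echo "hello" > original.txt
ln original.txt hard.txt        # hard link: a second name for the same inode
ln -s original.txt soft.txt     # symlink: stores the path as text
rm original.txt
cat hard.txt                    # prints: hello (data still reachable via inode)
cat soft.txt 2>/dev/null || echo "broken"   # prints: broken (dangling symlink)
```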
# Check if files are hard linked (same inode)
ls -li file1 file2
# 123456 -rw-r--r-- 2 user group 100 Jan 1 12:00 file1
# 123456 -rw-r--r-- 2 user group 100 Jan 1 12:00 file2

Text Processing: grep, sed, awk
Q: How do you search and manipulate text in Linux?
grep - Search for patterns:
grep "error" logfile.log # Find lines containing "error"
grep -i "error" logfile.log # Case insensitive
grep -r "TODO" ./src # Recursive search in directory
grep -n "error" logfile.log # Show line numbers
grep -v "debug" logfile.log # Invert match (exclude debug)
grep -c "error" logfile.log # Count matches
grep -E "error|warning" log.log # Extended regex (OR)
grep -A 3 "error" log.log # Show 3 lines after match
grep -B 2 "error" log.log # Show 2 lines before match

sed - Stream editor:
# Replace first occurrence per line
sed 's/old/new/' file.txt
# Replace all occurrences
sed 's/old/new/g' file.txt
# Edit file in-place
sed -i 's/old/new/g' file.txt
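A quick worked example of the in-place edit on a throwaway file (GNU sed syntax; BSD/macOS sed requires `-i ''`):

```shell
# Replace text in-place, then confirm the change
printf 'old value\n' > /tmp/sed_demo.txt
sed -i 's/old/new/g' /tmp/sed_demo.txt
cat /tmp/sed_demo.txt   # prints: new value
```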
# Delete lines matching pattern
sed '/pattern/d' file.txt
# Print specific lines
sed -n '5,10p' file.txt # Lines 5-10

awk - Pattern scanning:
# Print specific columns
awk '{print $1, $3}' file.txt
# With delimiter
awk -F',' '{print $2}' data.csv
# Conditional printing
awk '$3 > 100 {print $1}' data.txt
# Sum a column
awk '{sum += $1} END {print sum}' numbers.txt
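To make the column-sum idiom concrete, here it is run against inline data: awk accumulates `$1` on every line, and the `END` block fires once after the last line.

```shell
# Sum a stream of numbers: 10 + 20 + 30
printf '10\n20\n30\n' | awk '{sum += $1} END {print sum}'   # prints: 60
```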
# Common log parsing
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head
# ^ Count requests by IP, sorted by frequency

Process Management
Q: How do you find and manage processes?
# View processes
ps aux # All processes, detailed
ps aux | grep node # Find specific process
pgrep -a node # Find by name (PID and command)
top # Interactive process viewer
htop # Better interactive viewer
# Process details
ps aux --sort=-%mem | head # Top memory consumers
ps aux --sort=-%cpu | head # Top CPU consumers
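A common interview follow-up is trimming that output to just the useful columns. A sketch (`--sort` is procps/GNU ps; column numbers assume the default `aux` layout, where $2 is PID, $4 is %MEM, and $11 is the command):

```shell
# Top 3 memory consumers as "PID %MEM COMMAND", skipping the header row
ps aux --sort=-%mem | awk 'NR > 1 {print $2, $4, $11}' | head -3
```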
# Background/foreground
command & # Run in background
jobs # List background jobs
fg %1 # Bring job 1 to foreground
bg %1 # Resume job 1 in background
nohup command & # Run immune to hangups

Killing processes:
kill PID # SIGTERM (graceful shutdown)
kill -9 PID # SIGKILL (force kill)
kill -15 PID # SIGTERM (same as kill PID)
pkill process_name # Kill by name
pkill -f "node server" # Kill by full command match
killall node # Kill all processes with name
# Find what's using a port
lsof -i :3000
# Kill it
kill $(lsof -t -i :3000)

Common signals:
| Signal | Number | Purpose |
|---|---|---|
| SIGHUP | 1 | Hangup (reload config) |
| SIGINT | 2 | Interrupt (Ctrl+C) |
| SIGKILL | 9 | Force kill (can't be caught) |
| SIGTERM | 15 | Graceful termination |
| SIGSTOP | 19 | Pause process |
| SIGCONT | 18 | Resume process |
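The table explains why SIGTERM is preferred over SIGKILL: a process can trap SIGTERM and shut down cleanly, while SIGKILL can never be caught. A minimal sketch:

```shell
#!/bin/bash
# Trap SIGTERM for a graceful shutdown (SIGKILL/9 would bypass this)
cleanup() {
  echo "Caught SIGTERM, cleaning up..."
  exit 0
}
trap cleanup TERM

echo "Running with PID $$"
while true; do
  sleep 1   # the trap handler runs once the current command finishes
done
```

Run it, then `kill <PID>` from another terminal: the cleanup message prints and the script exits 0. `kill -9 <PID>` would terminate it with no chance to clean up.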
Networking Commands
Q: What networking commands should every developer know?
# Test connectivity
ping google.com
ping -c 4 google.com # Only 4 packets
# DNS lookup
nslookup example.com
dig example.com
# HTTP requests
curl https://api.example.com
curl -X POST -d '{"key":"value"}' -H "Content-Type: application/json" url
curl -I https://example.com # Headers only
curl -o file.zip https://url # Download to file
wget https://url/file.zip # Download file
# View network connections
netstat -tulpn # Listening ports with PIDs
ss -tulpn # Modern replacement for netstat
lsof -i :3000 # What's using port 3000
# Network interfaces
ip addr # Show IP addresses
ifconfig # Legacy command
ip route # Show routing table
# Trace route
traceroute google.com

Port checking:
# Check if port is open locally
nc -zv localhost 3000
# Check remote port
nc -zv example.com 443
# Common ports to know
# 22 - SSH
# 80 - HTTP
# 443 - HTTPS
# 3000 - Node.js default
# 5432 - PostgreSQL
# 3306 - MySQL
# 6379 - Redis
# 27017 - MongoDB

Log Analysis
Q: How do you analyze logs in Linux?
# View logs
cat /var/log/syslog # Full file
tail /var/log/syslog # Last 10 lines
tail -f /var/log/syslog # Follow (live updates)
tail -n 100 app.log # Last 100 lines
head -n 50 app.log # First 50 lines
less /var/log/syslog # Paginated viewing
# Search logs
grep "ERROR" app.log
grep -i "error\|warning" app.log
zgrep "ERROR" app.log.gz # Search compressed logs
# Count occurrences
grep -c "ERROR" app.log
# Time-based filtering (if logs have timestamps)
awk '$0 >= "2024-01-15 10:00" && $0 <= "2024-01-15 11:00"' app.log
# Analyze log patterns
cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head
# ^ Count occurrences of first field (e.g., IPs in access log)

journalctl (systemd logs):
journalctl # All logs
journalctl -u nginx # Logs for specific service
journalctl -f # Follow mode
journalctl --since "1 hour ago"
journalctl --since "2024-01-15" --until "2024-01-16"
journalctl -p err # Only errors

find and locate
Q: How do you find files in Linux?
# find - search in real-time
find /path -name "*.log" # By name pattern
find /path -name "*.log" -mtime -7 # Modified in last 7 days
find /path -size +100M # Larger than 100MB
find /path -type f -name "*.tmp" -delete # Find and delete
find /path -type d -name "node_modules" # Find directories
find /path -user john # By owner
find /path -perm 755 # By permissions
# Execute command on results
find . -name "*.js" -exec grep "TODO" {} \;
find . -name "*.log" -exec rm {} \;
# Using xargs (often faster)
find . -name "*.js" | xargs grep "TODO"
find . -name "*.log" -print0 | xargs -0 rm # Handle spaces in names
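The `-print0`/`-0` pairing matters as soon as a filename contains a space; without it, xargs would split "app 2.log" into two bogus arguments. A quick check in a temp directory:

```shell
# Null-delimited pipeline deletes both files, including the one with a space
dir=$(mktemp -d)
touch "$dir/app.log" "$dir/app 2.log"
find "$dir" -name "*.log" -print0 | xargs -0 rm
find "$dir" -name "*.log" | wc -l   # prints: 0
```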
# locate - uses database (faster but may be outdated)
locate filename
updatedb # Update the locate database

Environment Variables
Q: How do environment variables work in Linux?
# View variables
env # All environment variables
echo $PATH # Specific variable
printenv HOME
# Set temporarily (current session)
export MY_VAR="value"
export PATH="$PATH:/new/path"
# Set for single command
MY_VAR="value" ./script.sh
# Persistent variables
# Add to ~/.bashrc or ~/.bash_profile:
export MY_VAR="value"
# Then reload:
source ~/.bashrc

Important variables:
| Variable | Purpose |
|---|---|
| PATH | Directories to search for executables |
| HOME | User's home directory |
| USER | Current username |
| SHELL | Current shell |
| PWD | Current working directory |
| LANG | Language/locale settings |
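A point interviewers probe: exported variables are inherited by child processes, and a per-command assignment overrides the value for that one invocation only. A short sketch (`GREETING` is a made-up variable for this demo):

```shell
# Children inherit exported variables; prefix assignment is one-shot
export GREETING="hello"
bash -c 'echo "$GREETING"'                       # prints: hello
GREETING="override" bash -c 'echo "$GREETING"'   # prints: override
echo "$GREETING"                                 # prints: hello (parent unchanged)
```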
Shell Scripting Basics
Q: Can you write a basic shell script?
Interviewers often ask for simple scripts on the spot.
#!/bin/bash
# Variables
NAME="World"
echo "Hello, $NAME"
# Command line arguments
echo "Script name: $0"
echo "First argument: $1"
echo "All arguments: $@"
echo "Number of arguments: $#"
# Conditionals
if [ -f "$1" ]; then
echo "File exists"
elif [ -d "$1" ]; then
echo "It's a directory"
else
echo "Not found"
fi
# Loops
for file in *.txt; do
echo "Processing $file"
done
for i in {1..5}; do
echo "Number $i"
done
while read -r line; do
echo "$line"
done < file.txt
# Functions
greet() {
local name=$1
echo "Hello, $name"
}
greet "Developer"
# Exit codes
command
if [ $? -eq 0 ]; then
echo "Success"
else
echo "Failed"
fi
# More idiomatic: test the exit status directly
# if command; then echo "Success"; fi

Common test operators:
| Operator | Meaning |
|---|---|
| -f file | File exists and is a regular file |
| -d dir | Directory exists |
| -e path | Path exists |
| -r file | File is readable |
| -w file | File is writable |
| -x file | File is executable |
| -z string | String is empty |
| -n string | String is not empty |
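These operators combine naturally in a pre-flight check, the kind of snippet interviewers ask for on the spot. A sketch (`CONFIG` is a hypothetical path used only for this demo):

```shell
#!/bin/bash
# Pre-flight check combining file and string test operators
CONFIG="/tmp/preflight_demo.conf"
touch "$CONFIG"

if [ -f "$CONFIG" ] && [ -r "$CONFIG" ]; then
  echo "config ok"            # file exists and is readable
fi

if [ -z "$UNSET_VAR" ]; then
  echo "UNSET_VAR is empty"   # unset variables test as empty strings
fi

rm "$CONFIG"
```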
I/O Redirection and Pipes
Q: Explain input/output redirection in Linux.
# Output redirection
command > file.txt # Redirect stdout (overwrite)
command >> file.txt # Redirect stdout (append)
command 2> error.log # Redirect stderr
command > out.log 2>&1 # Redirect both stdout and stderr
command &> all.log # Shorthand for above (bash)
# Input redirection
command < input.txt # Read from file
command << EOF # Here document
line 1
line 2
EOF
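Here-documents pair naturally with output redirection; a quick round-trip shows the document's lines landing in a file:

```shell
# Write two lines via a here-document, then count them back
cat << EOF > /tmp/heredoc_demo.txt
line 1
line 2
EOF
wc -l < /tmp/heredoc_demo.txt   # prints: 2
```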
# Pipes
command1 | command2 # Pipe stdout to next command
command1 |& command2 # Pipe stdout and stderr
# Common pipe patterns
cat file | grep "pattern" | sort | uniq -c
ps aux | grep node | awk '{print $2}' | xargs kill

File descriptors:
0 = stdin, 1 = stdout, 2 = stderr
# Discard output
command > /dev/null # Discard stdout
command 2> /dev/null # Discard stderr
command > /dev/null 2>&1 # Discard all output

Quick Reference: Most Common Commands
| Category | Commands |
|---|---|
| Navigation | cd, ls, pwd, find, locate |
| Files | cp, mv, rm, mkdir, touch, cat, less |
| Permissions | chmod, chown, ls -l |
| Text | grep, sed, awk, cut, sort, uniq |
| Processes | ps, top, kill, pkill, bg, fg |
| Network | curl, ping, netstat, ss, lsof |
| Disk | df, du, mount |
| Archives | tar, gzip, unzip |
Related Articles
If you found this helpful, check out these related guides:
- Complete DevOps Engineer Interview Guide - comprehensive preparation guide for DevOps interviews
- Docker Interview Guide - Containers run on Linux
- Kubernetes Interview Guide - Orchestrating Linux containers
- Node.js Advanced Interview Guide - Server-side JavaScript on Linux
- System Design Interview Guide - Architecture where Linux servers power everything
- CI/CD & GitHub Actions Interview Guide - Automation pipelines using shell commands
What's Next?
These commands cover what most developers need day-to-day. As you go deeper, explore:
- systemd - Service management
- cron - Scheduled tasks
- iptables/nftables - Firewall rules
- strace/ltrace - System call debugging
The developers who stand out can navigate, troubleshoot, and automate on Linux without hesitation. Practice these commands until they're muscle memory.
