Linux skills are expected for backend, DevOps, and full-stack roles. Even if you develop on macOS or Windows, your code likely runs on Linux servers. Interviewers want to see you're comfortable navigating, troubleshooting, and automating in a Linux environment.
This guide covers the essential Linux commands and concepts that come up in developer interviews.
Table of Contents
- File System Navigation Questions
- File Operations Questions
- File Permissions Questions
- Hard Links and Soft Links Questions
- Text Processing Questions
- Process Management Questions
- Networking Questions
- Log Analysis Questions
- Find and Locate Questions
- Environment Variables Questions
- Shell Scripting Questions
- I/O Redirection Questions
File System Navigation Questions
Navigating the Linux file system is fundamental to every Linux-related interview.
What are the essential commands for navigating the Linux file system?
The core navigation commands are pwd (print working directory), cd (change directory), and ls (list files). These are basic commands, but interviewers expect fluency and knowledge of useful flags. Understanding how to efficiently move around the file system shows practical Linux experience.
The cd command has several shortcuts worth knowing: cd ~ goes to your home directory, cd - returns to the previous directory (useful for toggling between two locations), and cd .. moves up one level.
# Navigation
pwd # Print working directory
cd /path/to/dir # Change directory
cd .. # Go up one level
cd ~ # Go to home directory
cd - # Go to previous directory
# Listing files
ls # List files
ls -la # Long format, include hidden files
ls -lh # Human-readable sizes
ls -lt # Sort by modification time
# File info
file document.pdf # Determine file type
stat file.txt # Detailed file information
du -sh directory/ # Directory size
df -h # Disk space usage
What are the important directories in the Linux file system?
The Linux file system follows a hierarchical structure with a root directory (/) at the top. Each directory has a specific purpose defined by the Filesystem Hierarchy Standard (FHS). Understanding this structure helps you know where to find configuration files, logs, and user data.
Knowing these paths is essential for troubleshooting and configuring Linux systems. Configuration files live in /etc, logs in /var/log, and user home directories under /home.
| Path | Purpose |
|---|---|
| / | Root directory |
| /home | User home directories |
| /etc | Configuration files |
| /var | Variable data (logs, databases) |
| /tmp | Temporary files |
| /usr | User programs and data |
| /opt | Optional/third-party software |
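A quick way to verify this layout on a live system (the paths shown are standard FHS locations; `man hier` documents them in more detail):

```shell
# Confirm the standard top-level layout exists
ls -d / /home /etc /var /tmp /usr

# Spot-check the purposes from the table above
ls /etc | head -5        # Configuration files
ls /var | head -5        # Variable data (logs live under /var/log)
command -v bash          # Executables resolve under /usr/bin or /bin
```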
File Operations Questions
File manipulation is a daily task for any Linux user.
How do you create, copy, move, and delete files in Linux?
These fundamental file operations form the backbone of Linux administration. The touch command creates empty files (or updates the timestamps of existing ones), mkdir creates directories, cp copies, mv moves or renames, and rm removes files. Each command has flags that modify its behavior for different use cases.
The -r (recursive) flag is particularly important for commands that operate on directories. Without it, commands like cp and rm won't work on directories. The -p flag for cp preserves file attributes like timestamps and permissions.
# Create
touch file.txt # Create empty file
mkdir directory # Create directory
mkdir -p path/to/nested # Create nested directories
# Copy
cp source dest # Copy file
cp -r source/ dest/ # Copy directory recursively
cp -p source dest # Preserve permissions/timestamps
# Move/rename
mv oldname newname # Rename file
mv file /path/to/dest # Move file
# Delete
rm file.txt # Delete file
rm -r directory/ # Delete directory recursively
rm -rf directory/ # Force delete (careful!)
rmdir empty_directory/ # Delete empty directory only
Why is rm -rf dangerous and how do you use it safely?
The rm -rf command combines recursive deletion with force mode, meaning it will delete directories and their contents without any confirmation prompts. This is powerful but dangerous because there's no undo in Linux - deleted files are gone permanently unless you have backups.
The danger is amplified when using variables. If a variable is empty or undefined, rm -rf $DIR/* could expand to rm -rf /*, attempting to delete everything on the system. Always validate variables before using them in destructive commands.
# DANGEROUS - if $DIR is empty, deletes everything!
rm -rf $DIR/*
# SAFER - quotes and checks
if [ -n "$DIR" ]; then
rm -rf "$DIR"/*
fi
File Permissions Questions
File permissions are one of the most common Linux interview topics.
How do Linux file permissions work?
Linux uses a permission system with three sets of permissions (owner, group, others) and three permission types (read, write, execute). When you run ls -l, the first column shows these permissions. The system uses this to control who can access files and what actions they can perform.
Understanding both symbolic notation (rwx) and numeric notation (755) is essential. In numeric notation, read=4, write=2, execute=1, and you add them together for each permission set. So 755 means owner has full access (4+2+1=7), while group and others can read and execute (4+1=5).
# View permissions
ls -la
# -rw-r--r-- 1 user group 1234 Jan 1 12:00 file.txt
# drwxr-xr-x 2 user group 4096 Jan 1 12:00 directory/
Breaking down -rw-r--r--:
| Position | Meaning |
|---|---|
| 1 | File type (- file, d directory, l link) |
| 2-4 | Owner permissions (rw-) |
| 5-7 | Group permissions (r--) |
| 8-10 | Others permissions (r--) |
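The `stat` command can print both notations at once; the `-c` format strings below are GNU coreutils specific:

```shell
# Create a file and read back its permissions in both notations
touch demo.txt
chmod 644 demo.txt

stat -c '%A' demo.txt     # Symbolic: -rw-r--r--
stat -c '%a' demo.txt     # Octal: 644
stat -c '%U:%G' demo.txt  # Owner and group

rm demo.txt
```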
How do you change file permissions with chmod?
The chmod command changes file permissions using either numeric or symbolic notation. Numeric notation specifies all permissions at once (chmod 755), while symbolic notation allows incremental changes (chmod u+x adds execute for the user). Both approaches are valid; choose based on whether you're setting permissions from scratch or modifying existing ones.
For ownership changes, use chown to change the owner and group. The -R flag makes changes recursive, applying to all files and subdirectories.
# Numeric notation
chmod 755 script.sh # rwxr-xr-x (owner: full, others: read+execute)
chmod 644 file.txt # rw-r--r-- (owner: read+write, others: read)
chmod 600 secrets.txt # rw------- (owner only)
# Symbolic notation
chmod u+x script.sh # Add execute for owner
chmod g-w file.txt # Remove write for group
chmod o=r file.txt # Set others to read only
chmod a+r file.txt # Add read for all
# Change ownership
chown user:group file.txt
chown -R user:group directory/
What are SUID, SGID, and the sticky bit?
These are special permissions that extend beyond basic read/write/execute. SUID (Set User ID) makes an executable run with the owner's permissions instead of the caller's - this is how regular users can run passwd to change their own passwords even though the file is owned by root.
SGID (Set Group ID) on a directory makes new files inherit the directory's group, useful for shared project directories. The sticky bit on a directory means only the file owner can delete files, even if others have write permission - this is why users can't delete each other's files in /tmp.
chmod 4755 file # SUID
chmod 2755 directory # SGID
chmod 1777 directory # Sticky bit
Hard Links and Soft Links Questions
Understanding links is a common interview topic that tests deeper Linux knowledge.
What is the difference between hard links and soft links?
A hard link is another name for an existing file - it points directly to the file's inode (the data structure storing file metadata and data location). Multiple hard links to the same inode mean the file persists until all links are deleted. This makes hard links useful for backup scenarios.
A soft link (symbolic link or symlink) is a special file that contains a path to another file. It's like a shortcut - if the original file is moved or deleted, the symlink breaks. However, symlinks can span filesystems and link to directories, which hard links cannot do.
# Create soft link (symlink)
ln -s /path/to/original link_name
# Create hard link
ln /path/to/original link_name
| Feature | Hard Link | Soft Link (Symlink) |
|---|---|---|
| Points to | Inode directly | File path |
| Original deleted | Still works | Broken link |
| Cross filesystem | No | Yes |
| Link to directory | No | Yes |
| File size | Same as original | Small (path length) |
How can you tell if two files are hard linked?
Files that are hard linked share the same inode number. You can check this using ls -li, which displays the inode number in the first column. If two files have the same inode, they're hard links to the same data on disk.
The link count (shown in ls -l output) indicates how many hard links point to the inode. When this count reaches zero, the file's data is actually deleted.
# Check if files are hard linked (same inode)
ls -li file1 file2
# 123456 -rw-r--r-- 2 user group 100 Jan 1 12:00 file1
# 123456 -rw-r--r-- 2 user group 100 Jan 1 12:00 file2
Use cases:
- Soft links: Shortcuts, pointing to executables, config file alternatives
- Hard links: Backups, ensuring file persists while any link exists
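The behavior difference is easy to demonstrate in a scratch directory:

```shell
# Hard links survive deletion of the original; symlinks break
echo "data" > original.txt
ln original.txt hard.txt       # Hard link: same inode
ln -s original.txt soft.txt    # Symlink: stores only the path

rm original.txt

cat hard.txt                   # Prints "data" - the inode still has one link
cat soft.txt 2>/dev/null || echo "symlink is broken"

rm hard.txt soft.txt
```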
Text Processing Questions
Text processing commands are essential for log analysis and data manipulation.
How do you search for patterns in files with grep?
grep (Global Regular Expression Print) searches for patterns in files or piped input. It's one of the most frequently used commands for log analysis, code searching, and data filtering. The basic syntax is grep pattern file, and it returns all lines containing the pattern.
Common flags enhance grep's functionality: -i for case-insensitive search, -r for recursive directory search, -n for line numbers, -v for inverting the match (showing lines that don't match), and -E for extended regular expressions.
grep "error" logfile.log # Find lines containing "error"
grep -i "error" logfile.log # Case insensitive
grep -r "TODO" ./src # Recursive search in directory
grep -n "error" logfile.log # Show line numbers
grep -v "debug" logfile.log # Invert match (exclude debug)
grep -c "error" logfile.log # Count matches
grep -E "error|warning" log.log # Extended regex (OR)
grep -A 3 "error" log.log # Show 3 lines after match
grep -B 2 "error" log.log # Show 2 lines before match
How do you use sed for text substitution?
sed (Stream Editor) processes text line by line, most commonly used for find-and-replace operations. The substitution syntax is s/old/new/ where s means substitute. By default, sed only replaces the first occurrence on each line; add g for global replacement.
The -i flag edits files in place (modifying the original file). Without it, sed outputs to stdout, leaving the original file unchanged. This is useful for testing your sed command before applying it permanently.
# Replace first occurrence per line
sed 's/old/new/' file.txt
# Replace all occurrences
sed 's/old/new/g' file.txt
# Edit file in-place
sed -i 's/old/new/g' file.txt
# Delete lines matching pattern
sed '/pattern/d' file.txt
# Print specific lines
sed -n '5,10p' file.txt # Lines 5-10
How do you use awk for column-based text processing?
awk is a powerful pattern-scanning language that excels at processing columnar data. By default, it splits input on whitespace, making columns accessible as $1, $2, etc. ($0 is the entire line). This makes it perfect for parsing logs, CSVs, and command output.
The -F flag changes the field separator, essential for processing CSV files or other delimited data. awk also supports conditions and calculations, making it useful for filtering and aggregating data.
# Print specific columns
awk '{print $1, $3}' file.txt
# With delimiter
awk -F',' '{print $2}' data.csv
# Conditional printing
awk '$3 > 100 {print $1}' data.txt
# Sum a column
awk '{sum += $1} END {print sum}' numbers.txt
# Common log parsing
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head
# ^ Count requests by IP, sorted by frequency
Process Management Questions
Process management is crucial for system administration and troubleshooting.
How do you view and find running processes?
The ps command shows process information, with ps aux being the most common form that shows all processes with detailed information. The output includes PID (process ID), CPU/memory usage, start time, and the command that started the process.
For interactive monitoring, top or htop (an enhanced version) provides a real-time view of processes sorted by resource usage. These tools are essential for identifying performance bottlenecks.
# View processes
ps aux # All processes, detailed
ps aux | grep node # Find specific process
pgrep -a node # Find by name (PID and command)
top # Interactive process viewer
htop # Better interactive viewer
# Process details
ps aux --sort=-%mem | head # Top memory consumers
ps aux --sort=-%cpu | head # Top CPU consumers
# Background/foreground
command & # Run in background
jobs # List background jobs
fg %1 # Bring job 1 to foreground
bg %1 # Resume job 1 in background
nohup command & # Run immune to hangups
How do you kill processes in Linux?
The kill command sends signals to processes. By default, it sends SIGTERM (signal 15), which asks the process to terminate gracefully. If a process doesn't respond, SIGKILL (signal 9) forces immediate termination but doesn't allow cleanup.
For killing by name instead of PID, use pkill or killall. The lsof -i :port command is particularly useful for finding what process is using a specific port - a common issue when a port is already in use.
kill PID # SIGTERM (graceful shutdown)
kill -9 PID # SIGKILL (force kill)
kill -15 PID # SIGTERM (same as kill PID)
pkill process_name # Kill by name
pkill -f "node server" # Kill by full command match
killall node # Kill all processes with name
# Find what's using a port
lsof -i :3000
# Kill it
kill $(lsof -t -i :3000)
What are the common Linux signals and their purposes?
Signals are software interrupts sent to processes to notify them of events. Understanding signals is important for writing robust applications and for system administration. Each signal has a number and a name, and processes can handle most signals (except SIGKILL and SIGSTOP).
SIGTERM is the polite termination request that allows cleanup. SIGKILL is the forced termination that can't be caught or ignored. SIGHUP is often used to tell daemons to reload their configuration files. Note that signal numbers can vary by CPU architecture (the values below are for x86/ARM Linux); the names are portable, so prefer them in scripts.
| Signal | Number | Purpose |
|---|---|---|
| SIGHUP | 1 | Hangup (reload config) |
| SIGINT | 2 | Interrupt (Ctrl+C) |
| SIGKILL | 9 | Force kill (can't be caught) |
| SIGTERM | 15 | Graceful termination |
| SIGSTOP | 19 | Pause process |
| SIGCONT | 18 | Resume process |
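A script can handle catchable signals with the `trap` builtin, which is how graceful shutdown is implemented in practice. A minimal sketch (the lock-file path is illustrative):

```shell
#!/bin/bash
# Run cleanup on SIGTERM or SIGINT; SIGKILL can never be trapped
cleanup() {
    echo "Caught signal, cleaning up..."
    rm -f /tmp/myapp.lock       # Illustrative lock file
    exit 0
}
trap cleanup TERM INT

touch /tmp/myapp.lock
echo "Running as PID $$ - try: kill $$"
while true; do
    sleep 1
done
```

Sending `kill <PID>` now runs cleanup before exiting; `kill -9 <PID>` still terminates immediately with no cleanup.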
Networking Questions
Networking commands are essential for troubleshooting and API development.
What networking commands should every developer know?
Network troubleshooting starts with basic connectivity tests. ping checks if a host is reachable, nslookup and dig perform DNS lookups, and traceroute shows the path packets take to reach a destination. These commands help identify where network problems occur.
For HTTP requests from the command line, curl is the standard tool. It supports all HTTP methods, custom headers, and request bodies, making it invaluable for testing APIs. The wget command is simpler and focused on downloading files.
# Test connectivity
ping google.com
ping -c 4 google.com # Only 4 packets
# DNS lookup
nslookup example.com
dig example.com
# HTTP requests
curl https://api.example.com
curl -X POST -d '{"key":"value"}' -H "Content-Type: application/json" url
curl -I https://example.com # Headers only
curl -o file.zip https://url # Download to file
wget https://url/file.zip # Download file
# View network connections
netstat -tulpn # Listening ports with PIDs
ss -tulpn # Modern replacement for netstat
lsof -i :3000 # What's using port 3000
# Network interfaces
ip addr # Show IP addresses
ifconfig # Legacy command
ip route # Show routing table
# Trace route
traceroute google.com
How do you check if a port is open?
The nc (netcat) command with -zv flags tests if a port is open without sending data. This is useful for verifying that services are listening and accessible. The -z flag means zero-I/O mode (just check), and -v provides verbose output.
Knowing common port numbers is expected in interviews. Memorizing the defaults for databases (PostgreSQL 5432, MySQL 3306, MongoDB 27017, Redis 6379) and web services (HTTP 80, HTTPS 443, SSH 22) demonstrates practical experience.
# Check if port is open locally
nc -zv localhost 3000
# Check remote port
nc -zv example.com 443
Common ports to know:
- 22 - SSH
- 80 - HTTP
- 443 - HTTPS
- 3000 - Node.js default
- 5432 - PostgreSQL
- 3306 - MySQL
- 6379 - Redis
- 27017 - MongoDB
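When nc isn't installed, bash can attempt a TCP connection itself through its /dev/tcp pseudo-device (a bash-only feature, not a real file). A sketch looping over some of the ports above:

```shell
# Port check without netcat: bash's /dev/tcp pseudo-device
# A failed connect makes the redirection (and the subshell) return non-zero
for port in 22 80 443 5432; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/localhost/$port" 2>/dev/null; then
        echo "Port $port: open"
    else
        echo "Port $port: closed"
    fi
done
```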
Log Analysis Questions
Log analysis is a critical skill for debugging production issues.
How do you view and analyze logs in Linux?
Log files are typically stored in /var/log/ on Linux systems. The tail command is most commonly used for log analysis - tail -f follows the log in real-time, essential for watching logs as events happen. For large files, less provides paginated viewing with search capabilities.
The combination of grep, awk, sort, and uniq forms a powerful log analysis toolkit. You can extract patterns, count occurrences, and identify trends without loading entire files into memory.
# View logs
cat /var/log/syslog # Full file
tail /var/log/syslog # Last 10 lines
tail -f /var/log/syslog # Follow (live updates)
tail -n 100 app.log # Last 100 lines
head -n 50 app.log # First 50 lines
less /var/log/syslog # Paginated viewing
# Search logs
grep "ERROR" app.log
grep -i "error\|warning" app.log
zgrep "ERROR" app.log.gz # Search compressed logs
# Count occurrences
grep -c "ERROR" app.log
# Time-based filtering (if logs have timestamps)
awk '$0 >= "2024-01-15 10:00" && $0 <= "2024-01-15 11:00"' app.log
# Analyze log patterns
cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head
# ^ Count occurrences of first field (e.g., IPs in access log)
How do you use journalctl for systemd logs?
On modern Linux distributions using systemd, journalctl provides access to the system journal. It offers powerful filtering by service, time range, and priority level. Unlike traditional log files, the journal is binary and provides structured querying.
The -u flag filters by service unit, -f follows in real-time (like tail -f), and --since/--until filter by time. Priority filtering with -p can show only errors, helping focus on problems.
journalctl # All logs
journalctl -u nginx # Logs for specific service
journalctl -f # Follow mode
journalctl --since "1 hour ago"
journalctl --since "2024-01-15" --until "2024-01-16"
journalctl -p err # Only errors
Find and Locate Questions
Finding files efficiently is essential for Linux administration.
How do you find files in Linux?
The find command searches for files in real-time by traversing the directory tree. It's flexible and powerful, supporting searches by name, type, size, modification time, permissions, and owner. The -exec flag allows running commands on found files.
While find searches in real-time, locate uses a pre-built database that's much faster but may not include recently created files. Run updatedb to refresh the locate database.
# find - search in real-time
find /path -name "*.log" # By name pattern
find /path -name "*.log" -mtime -7 # Modified in last 7 days
find /path -size +100M # Larger than 100MB
find /path -type f -name "*.tmp" -delete # Find and delete
find /path -type d -name "node_modules" # Find directories
find /path -user john # By owner
find /path -perm 755 # By permissions
# Execute command on results
find . -name "*.js" -exec grep "TODO" {} \;
find . -name "*.log" -exec rm {} \;
# Using xargs (often faster)
find . -name "*.js" | xargs grep "TODO"
find . -name "*.log" -print0 | xargs -0 rm # Handle spaces in names
# locate - uses database (faster but may be outdated)
locate filename
updatedb # Update the locate database
Environment Variables Questions
Environment variables configure the shell and applications.
How do environment variables work in Linux?
Environment variables are key-value pairs that configure the behavior of processes. They're inherited by child processes, so setting a variable in your shell makes it available to programs you run. The export command makes a variable available to child processes.
Variables set in the current session are temporary. For persistence, add them to shell configuration files like ~/.bashrc (for interactive non-login shells) or ~/.bash_profile (for login shells). After modifying these files, run source ~/.bashrc to apply changes.
# View variables
env # All environment variables
echo $PATH # Specific variable
printenv HOME
# Set temporarily (current session)
export MY_VAR="value"
export PATH="$PATH:/new/path"
# Set for single command
MY_VAR="value" ./script.sh
# Persistent variables
# Add to ~/.bashrc or ~/.bash_profile:
export MY_VAR="value"
# Then reload:
source ~/.bashrc
What are the important Linux environment variables?
Certain environment variables are fundamental to Linux operation. PATH determines where the shell looks for executables - if a command isn't found, it's often because the directory isn't in PATH. HOME points to the user's home directory, and SHELL indicates the current shell.
Understanding these variables helps troubleshoot issues like "command not found" errors (PATH problem) or scripts that behave differently under cron (different environment).
| Variable | Purpose |
|---|---|
| PATH | Directories to search for executables |
| HOME | User's home directory |
| USER | Current username |
| SHELL | Current shell |
| PWD | Current working directory |
| LANG | Language/locale settings |
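A typical debugging sequence for a "command not found" error, plus a way to reproduce cron's minimal environment:

```shell
# Inspect the search path, one directory per line
echo "$PATH" | tr ':' '\n'

# Where (and whether) the shell finds a command
command -v git       # Prints the resolved path, or nothing if not found
type cd              # Also reports builtins and aliases

# Simulate cron's sparse environment: env -i starts with no variables
env -i PATH=/usr/bin:/bin sh -c 'echo "PATH=$PATH"; command -v git || echo "git not found"'
```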
Shell Scripting Questions
Basic shell scripting knowledge is expected for developer positions.
Can you write a basic shell script?
Shell scripts automate command sequences and are essential for system administration and CI/CD pipelines. A script starts with a shebang (#!/bin/bash) that specifies the interpreter. Variables don't need declaration - just assign values and access them with a $ prefix.
Interviewers often ask you to write simple scripts on the spot. Understanding conditionals, loops, and command-line arguments demonstrates practical shell proficiency.
#!/bin/bash
# Variables
NAME="World"
echo "Hello, $NAME"
# Command line arguments
echo "Script name: $0"
echo "First argument: $1"
echo "All arguments: $@"
echo "Number of arguments: $#"
# Conditionals
if [ -f "$1" ]; then
echo "File exists"
elif [ -d "$1" ]; then
echo "It's a directory"
else
echo "Not found"
fi
# Loops
for file in *.txt; do
echo "Processing $file"
done
for i in {1..5}; do
echo "Number $i"
done
while read -r line; do
echo "$line"
done < file.txt
# Functions
greet() {
local name=$1
echo "Hello, $name"
}
greet "Developer"
# Exit codes
command
if [ $? -eq 0 ]; then
echo "Success"
else
echo "Failed"
fi
What are the common test operators in shell scripts?
Test operators are used in conditional statements to check file properties, compare strings, and evaluate numeric expressions. The [ ] syntax (or test command) evaluates these conditions. Understanding these operators is essential for writing robust scripts.
File tests check existence, type, and permissions. String tests compare or check if strings are empty. These form the foundation of conditional logic in shell scripts.
| Operator | Meaning |
|---|---|
| -f file | File exists and is regular file |
| -d dir | Directory exists |
| -e path | Path exists |
| -r file | File is readable |
| -w file | File is writable |
| -x file | File is executable |
| -z string | String is empty |
| -n string | String is not empty |
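A short script exercising these operators (/etc/hosts is just a file that exists on virtually every Linux system):

```shell
#!/bin/bash
# File tests
target="/etc/hosts"
if [ -e "$target" ]; then
    [ -f "$target" ] && echo "$target is a regular file"
    [ -r "$target" ] && echo "$target is readable"
    [ -w "$target" ] || echo "$target is not writable by this user"
fi

# String tests
name=""
[ -z "$name" ] && echo "name is empty"
name="dev"
[ -n "$name" ] && echo "name is now '$name'"
```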
I/O Redirection Questions
Understanding I/O redirection is fundamental to shell mastery.
How does input/output redirection work in Linux?
Every process has three standard streams: stdin (0), stdout (1), and stderr (2). Redirection allows you to change where these streams go - sending output to files, combining streams, or using files as input. This is essential for logging, error handling, and command pipelines.
The > operator redirects stdout to a file (overwriting), while >> appends. 2> redirects stderr specifically. To combine both stdout and stderr into one stream, use 2>&1 (redirect stderr to where stdout is going).
# Output redirection
command > file.txt # Redirect stdout (overwrite)
command >> file.txt # Redirect stdout (append)
command 2> error.log # Redirect stderr
command > out.log 2>&1 # Redirect both stdout and stderr
command &> all.log # Shorthand for above (bash)
# Input redirection
command < input.txt # Read from file
command << EOF # Here document
line 1
line 2
EOF
# Pipes
command1 | command2 # Pipe stdout to next command
command1 |& command2 # Pipe stdout and stderr
# Common pipe patterns
cat file | grep "pattern" | sort | uniq -c
ps aux | grep node | awk '{print $2}' | xargs kill
What is /dev/null and when do you use it?
/dev/null is a special device file that discards all data written to it. It's commonly used to suppress output you don't care about, such as discarding error messages or hiding verbose command output.
Redirecting to /dev/null is useful in scripts where you only care about the exit code, not the output. It's also used to prevent commands from blocking while waiting for input.
# Discard output
command > /dev/null # Discard stdout
command 2> /dev/null # Discard stderr
command > /dev/null 2>&1 # Discard all output
File descriptors:
- 0 = stdin
- 1 = stdout
- 2 = stderr
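A short demonstration that the descriptor numbers are real and individually addressable:

```shell
# Write to each stream explicitly by number
echo "to stdout" >&1      # Same as a plain echo
echo "to stderr" >&2      # Goes to stderr instead

# Capture stderr in a file while stdout flows normally
ls /nonexistent 2>errors.log
grep -c "nonexistent" errors.log   # The error text landed in the file

rm errors.log
```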
Quick Reference
| Category | Commands |
|---|---|
| Navigation | cd, ls, pwd, find, locate |
| Files | cp, mv, rm, mkdir, touch, cat, less |
| Permissions | chmod, chown, ls -l |
| Text | grep, sed, awk, cut, sort, uniq |
| Processes | ps, top, kill, pkill, bg, fg |
| Network | curl, ping, netstat, ss, lsof |
| Disk | df, du, mount |
| Archives | tar, gzip, unzip |
Related Resources
- Docker Interview Guide - Containers run on Linux
- Kubernetes Interview Guide - Orchestrating Linux containers
- CI/CD & GitHub Actions Interview Guide - Automation pipelines using shell commands
- Complete DevOps Engineer Interview Guide - Comprehensive DevOps preparation
