Node.js handles enormous numbers of concurrent connections on a single thread—a workload that would exhaust a traditional thread-per-connection server. How? The answer lies in the event loop, libuv, and a runtime architecture that most Node developers use daily without truly understanding. These advanced questions test the runtime-level knowledge that separates senior backend developers from those who've just used Express.
Table of Contents
- Event Loop Questions
- Async Timing Questions
- Libuv Questions
- Streams Questions
- Scaling Questions
- Module System Questions
- Error Handling Questions
- Security Questions
- Quick Reference
Event Loop Questions
These questions test your understanding of Node.js's core async mechanism.
How does the Node.js event loop work?
The event loop is Node.js's mechanism for handling asynchronous operations on a single thread. It continuously cycles through phases—timers, pending callbacks, poll (for I/O), check (for setImmediate), and close callbacks—executing queued callbacks in each phase. This design lets Node.js handle thousands of concurrent connections without creating a thread for each one.
Think of the event loop like a theme park with different queues for different rides. Each "phase" is a different ride, and the event loop is the attendant who moves through each queue in order, letting people onto the ride before moving to the next.
flowchart TB
T["timers<br/>← setTimeout, setInterval"] --> PC["pending callbacks<br/>← I/O callbacks deferred"]
PC --> IP["idle, prepare<br/>← internal use"]
IP --> P["poll<br/>← retrieve new I/O events"]
P --> C["check<br/>← setImmediate callbacks"]
C --> CC["close callbacks<br/>← socket.on('close')"]
CC --> T
Here's how different async operations execute:
console.log('1: Start');
setTimeout(() => console.log('2: setTimeout'), 0);
setImmediate(() => console.log('3: setImmediate'));
process.nextTick(() => console.log('4: nextTick'));
Promise.resolve().then(() => console.log('5: Promise'));
console.log('6: End');
The output is:
1: Start
6: End
4: nextTick
5: Promise
2: setTimeout
3: setImmediate
process.nextTick() and Promise callbacks aren't part of the event loop phases at all—they're part of the "microtask queue" that runs between phases. That's why they execute before setTimeout and setImmediate even though they were scheduled after.
Async Timing Questions
These questions reveal understanding of callback scheduling priorities.
What is the difference between process.nextTick() and setImmediate()?
process.nextTick() runs callbacks immediately after the current operation completes, before the event loop continues to the next phase. setImmediate() runs callbacks in the check phase of the next event loop iteration. Despite the name, nextTick is more "immediate" than setImmediate.
The naming is admittedly confusing—you'd think "immediate" would be faster than "next tick." But think of it this way: nextTick means "right now, before anything else" while setImmediate means "as soon as the current phase is done."
setImmediate(() => console.log('1: setImmediate'));
process.nextTick(() => console.log('2: nextTick'));
console.log('3: synchronous');
// Output:
// 3: synchronous
// 2: nextTick
// 1: setImmediate
The key insight is that process.nextTick() doesn't wait for the event loop at all. It queues the callback in a special "nextTick queue" that's processed after the current JavaScript execution completes but before the event loop moves on.
This has a dangerous implication:
// DON'T DO THIS - it starves the event loop
function recursiveNextTick() {
process.nextTick(recursiveNextTick);
}
recursiveNextTick();
// The event loop never gets to process I/O!
The recursive nextTick keeps the nextTick queue full, so the event loop never moves to the poll phase where it would handle I/O. Your server becomes unresponsive.
When should you use setImmediate() to break up long operations?
Use setImmediate() for breaking up long-running operations so the event loop can process other callbacks in between. This pattern prevents blocking the event loop while processing large datasets.
function processLargeArray(array, callback) {
let index = 0;
function processChunk() {
const chunkEnd = Math.min(index + 1000, array.length);
while (index < chunkEnd) {
// Process item
index++;
}
if (index < array.length) {
setImmediate(processChunk); // Yield to event loop
} else {
callback();
}
}
processChunk();
}
Libuv Questions
These questions reveal whether you understand Node.js at the C layer.
What is libuv and why is it critical to Node.js?
libuv is the C library that provides Node.js with its event loop, thread pool, and asynchronous I/O capabilities. It abstracts platform-specific async operations (epoll on Linux, kqueue on macOS, IOCP on Windows) into a consistent API. The thread pool handles operations that can't be done asynchronously at the OS level, like file I/O and DNS lookups.
Node.js's famous non-blocking I/O isn't actually JavaScript magic—it's libuv doing the heavy lifting. Think of libuv as the engine under Node.js's hood.
flowchart TB
App["Node.js Application Code<br/>(JavaScript)"]
App --> Bindings["Node.js Bindings<br/>(C++)"]
Bindings --> Libuv["libuv (C)"]
Libuv --> EL["Event Loop"]
Libuv --> TP["Thread Pool<br/>(4 threads default)"]
EL --> Net["Network I/O<br/>(truly async)"]
TP --> FS["File System<br/>DNS Lookups<br/>Crypto<br/>Compression"]
The key insight is that not all async operations are created equal. Network I/O is truly asynchronous at the OS level—libuv just registers callbacks with the OS. But file I/O on most systems is blocking, so libuv uses its thread pool to simulate async behavior.
// This uses the thread pool (file I/O)
const fs = require('fs');
fs.readFile('large-file.txt', (err, data) => {
// One of the 4 thread pool threads handled this
});
// This is truly async (network I/O)
const http = require('http');
http.get('http://example.com', (res) => {
// No thread pool needed - OS handles async
});
You can increase the thread pool size for I/O-heavy applications:
process.env.UV_THREADPOOL_SIZE = 8; // Must be set before the thread pool is first used
Streams Questions
These questions separate developers who've processed large files from those who haven't.
How do Node.js streams work and when should you use them?
Streams process data in chunks rather than loading everything into memory at once. There are four types: Readable (data source), Writable (data destination), Duplex (both read and write), and Transform (modify data as it passes through). Streams use backpressure to prevent fast producers from overwhelming slow consumers.
Think of streams like a factory assembly line versus a warehouse. Without streams, you'd load an entire file into a warehouse (memory), then move it all at once. With streams, items move through the factory piece by piece—you only need space for what's currently being processed.
// Without streams - loads entire file into memory
const fs = require('fs');
fs.readFile('huge-file.txt', (err, data) => {
// 'data' is the ENTIRE file - could be gigabytes
fs.writeFile('copy.txt', data, (err) => {
console.log('Done');
});
});
// With streams - processes in chunks
const readStream = fs.createReadStream('huge-file.txt');
const writeStream = fs.createWriteStream('copy.txt');
readStream.pipe(writeStream);
// Data flows in ~64KB chunks - memory stays constant
The real power comes from piping and transforming:
const zlib = require('zlib');
const crypto = require('crypto');
const key = crypto.scryptSync('secret', 'salt', 32); // Derive a 256-bit key
const iv = crypto.randomBytes(16);
fs.createReadStream('data.txt')
.pipe(zlib.createGzip()) // Compress
.pipe(crypto.createCipheriv('aes-256-cbc', key, iv)) // Encrypt
.pipe(fs.createWriteStream('data.txt.gz.enc'))
.on('finish', () => console.log('Done'));
Each chunk flows through the entire pipeline before the next chunk starts. Memory usage stays constant regardless of file size.
How does backpressure work in Node.js streams?
Backpressure occurs when your writable stream can't keep up with your readable stream. Without handling it, you'd buffer unlimited data in memory. The .write() method returns false when the internal buffer is full, signaling you should pause.
const readable = fs.createReadStream('huge-file.txt');
const writable = fs.createWriteStream('output.txt');
readable.on('data', (chunk) => {
const canContinue = writable.write(chunk);
if (!canContinue) {
readable.pause(); // Stop reading until drain
writable.once('drain', () => readable.resume());
}
});
The .pipe() method handles backpressure automatically—that's why it's preferred over manual event handling.
Scaling Questions
These questions test your understanding of Node.js's single-threaded limitation.
How do you scale Node.js across multiple CPU cores?
Node.js runs on a single thread by default, but you can scale using the cluster module (fork multiple processes that share a port), worker threads (spawn threads for CPU-intensive tasks), or process managers like PM2. The cluster module is best for web servers; worker threads are best for CPU-intensive computation within a single request.
The cluster module forks your process multiple times, and all children share the same server port:
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;
if (cluster.isPrimary) { // cluster.isMaster before Node 16
console.log(`Primary ${process.pid} is running`);
// Fork workers for each CPU
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
cluster.on('exit', (worker, code, signal) => {
console.log(`Worker ${worker.process.pid} died`);
cluster.fork(); // Restart dead workers
});
} else {
// Workers share the TCP connection
http.createServer((req, res) => {
res.writeHead(200);
res.end(`Handled by worker ${process.pid}\n`);
}).listen(8000);
console.log(`Worker ${process.pid} started`);
}
By default, the primary process accepts incoming connections and distributes them to workers round-robin (except on Windows, where connection scheduling is left to the OS).
When should you use worker threads instead of the cluster module?
Worker threads are for CPU-intensive operations that would otherwise block the event loop within a single request. Unlike cluster (which creates separate processes with separate memory), worker threads share memory and can transfer data more efficiently.
const { Worker, isMainThread, parentPort } = require('worker_threads');
if (isMainThread) {
// Main thread
const worker = new Worker(__filename);
worker.on('message', (result) => {
console.log('Fibonacci result:', result);
});
worker.postMessage(45); // Calculate fib(45)
// Event loop is free to handle other requests!
} else {
// Worker thread
parentPort.on('message', (n) => {
// CPU-intensive work happens here
const result = fibonacci(n);
parentPort.postMessage(result);
});
}
function fibonacci(n) {
if (n < 2) return n;
return fibonacci(n - 1) + fibonacci(n - 2);
}
The key difference: cluster creates separate processes with isolated memory, while worker threads live in the same process and can share memory via SharedArrayBuffer or transfer ArrayBuffers without copying.
Module System Questions
These questions test your knowledge of Node.js's module evolution.
What is the difference between CommonJS and ES Modules?
CommonJS uses require() and module.exports with synchronous loading at runtime. ES Modules use import/export with asynchronous loading at parse time, enabling static analysis and tree shaking. ES Modules support top-level await; CommonJS doesn't. Node.js now supports both, with ES Modules being the modern standard.
CommonJS was Node.js's original module system, designed before JavaScript had native modules:
// math.js (CommonJS)
function add(a, b) {
return a + b;
}
module.exports = { add };
// app.js
const { add } = require('./math');
console.log(add(2, 3));
ES Modules are the JavaScript standard, added to Node.js later:
// math.mjs (ES Module)
export function add(a, b) {
return a + b;
}
// app.mjs
import { add } from './math.mjs';
console.log(add(2, 3));
The critical difference is timing. CommonJS loads synchronously at runtime—the code is evaluated when require() is called. ES Modules load asynchronously at parse time—the imports are resolved before any code runs.
How do conditional imports differ between CommonJS and ES Modules?
CommonJS allows conditional imports anywhere because require() is just a function call. ES Modules require dynamic import() for conditional loading.
// CommonJS - conditional imports work
if (process.env.NODE_ENV === 'production') {
const analytics = require('./analytics');
analytics.track('startup');
}
// ES Modules - conditional imports need dynamic import()
if (process.env.NODE_ENV === 'production') {
const { track } = await import('./analytics.mjs');
track('startup');
}
ES Modules also enable tree shaking—bundlers can analyze imports statically and remove unused exports. CommonJS can't be statically analyzed because require() can be called anywhere with dynamic strings.
To use ES Modules in Node.js, add to package.json:
{
"type": "module"
}
Or use the .mjs extension for individual files.
Error Handling Questions
These questions reveal whether you've debugged production Node.js applications.
How do you handle errors in async code?
For callbacks, follow the error-first pattern and always check the error parameter. For Promises, use .catch() or try/catch with async/await. For event emitters, listen to the 'error' event—unhandled errors crash the process. Use process.on('unhandledRejection') and process.on('uncaughtException') as safety nets, but fix the root cause.
Node.js has multiple async patterns, each with its own error handling approach. Missing any of them causes silent failures or crashes.
For callbacks, always check the error first:
fs.readFile('file.txt', (err, data) => {
if (err) {
console.error('Failed to read file:', err);
return; // Don't continue with undefined data!
}
processData(data);
});
For Promises, errors propagate through the chain:
fetchUser(userId)
.then(user => fetchOrders(user.id))
.then(orders => processOrders(orders))
.catch(err => {
// Catches errors from ANY step above
console.error('Pipeline failed:', err);
});
With async/await, use try/catch:
async function handleRequest() {
try {
const user = await fetchUser(userId);
const orders = await fetchOrders(user.id);
return processOrders(orders);
} catch (err) {
console.error('Request failed:', err);
throw err; // Re-throw if caller should handle it
}
}
Why do unhandled event emitter errors crash the process?
Event emitters are designed to emit an 'error' event when something goes wrong. If no listener is attached, Node.js treats it as an unhandled exception and crashes the process to prevent silent failures.
const stream = fs.createReadStream('missing-file.txt');
// Without this, the process crashes
stream.on('error', (err) => {
console.error('Stream error:', err);
});
For safety nets, add global handlers:
process.on('unhandledRejection', (reason, promise) => {
console.error('Unhandled Rejection:', reason);
// Log it, but don't exit - investigate and fix the root cause
});
process.on('uncaughtException', (err) => {
console.error('Uncaught Exception:', err);
// Exit after logging - the process is in unknown state
process.exit(1);
});
Security Questions
These questions test production readiness.
What are the security best practices for Node.js applications?
Key practices include validating all user input, using parameterized queries to prevent SQL injection, sanitizing output to prevent XSS, using helmet middleware for security headers, keeping dependencies updated and audited, running with minimal privileges, and using environment variables for secrets. Rate limiting and proper authentication are essential for APIs.
Start with security headers using helmet:
const helmet = require('helmet');
app.use(helmet()); // Sets a collection of security headers by default
Validate and sanitize all input:
const { body, validationResult } = require('express-validator');
app.post('/user',
body('email').isEmail().normalizeEmail(),
body('name').trim().escape(),
(req, res) => {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
// Process validated input
}
);
Prevent SQL injection with parameterized queries:
// NEVER do this
const query = `SELECT * FROM users WHERE id = ${userId}`;
// Always do this
const query = 'SELECT * FROM users WHERE id = $1';
await pool.query(query, [userId]);
How should you manage secrets in Node.js applications?
Never commit secrets to git. Use environment variables for configuration and consider a secret manager for production deployments.
// Never commit secrets to git
// Use environment variables
const dbPassword = process.env.DB_PASSWORD;
// Or use a secret manager in production
const secrets = await secretManager.getSecret('db-credentials');
Keep dependencies secure:
npm audit # Check for vulnerabilities
npm audit fix # Auto-fix where possible
npm outdated # Check for updates
Rate limiting prevents abuse:
const rateLimit = require('express-rate-limit');
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100 // 100 requests per window
});
app.use('/api', limiter);
Quick Reference
| Topic | Key Points |
|---|---|
| Event Loop | Phases: timers → pending → poll → check → close. Microtasks run between phases. |
| nextTick vs setImmediate | nextTick runs before event loop continues; setImmediate runs in check phase |
| libuv | C library providing event loop, thread pool (4 threads default), cross-platform I/O |
| Streams | Readable, Writable, Duplex, Transform. Use .pipe() for automatic backpressure |
| Cluster | Multiple processes sharing a port. Best for web servers. |
| Worker Threads | Shared memory threads. Best for CPU-intensive computation. |
| CommonJS vs ESM | require() is sync at runtime; import is async at parse time |
| Error Handling | Callbacks: error-first. Promises: .catch(). Events: 'error' listener required |
Related Articles
- Complete Node.js Backend Developer Interview Guide - comprehensive preparation guide for backend interviews
- JavaScript Event Loop Interview Guide - How async JavaScript really works under the hood
- WebSockets & Socket.IO Interview Guide - Real-time communication with Socket.IO
- PostgreSQL & Node.js Interview Guide - Database connections, pooling, and transactions
- MongoDB Interview Guide - Mongoose schemas, aggregation pipelines, and NoSQL patterns
- Express.js Middleware Interview Guide - Middleware patterns and request lifecycle
- REST API Interview Guide - API design principles and best practices
- Docker Interview Guide - Containerizing Node.js applications for production
- Kubernetes Interview Guide - Orchestrating Node.js containers at scale
- Linux Commands Interview Guide - Essential commands for server management
