8 Node.js Advanced Interview Questions for 2026 (ANSWERED)


Node.js handles millions of concurrent connections on a single thread—something that would crash traditional threaded servers. How? The answer lies in the event loop, libuv, and a runtime architecture that 90% of Node developers use daily without truly understanding.

Here's a revealing interview question: "What happens when you call setTimeout(() => {}, 0)?" If you answered "it runs after zero milliseconds," you've already failed. These eight questions test the runtime-level knowledge that separates senior backend developers from those who've just used Express.

Question 1: How Does the Node.js Event Loop Work?

This is the foundational question that sets the tone for any serious Node.js interview.

The 30-Second Answer

The event loop is Node.js's mechanism for handling asynchronous operations on a single thread. It continuously cycles through phases—timers, pending callbacks, poll (for I/O), check (for setImmediate), and close callbacks—executing queued callbacks in each phase. This design lets Node.js handle thousands of concurrent connections without creating a thread for each one.

The 2-Minute Answer (If They Want More)

Think of the event loop like a theme park with different queues for different rides. Each "phase" is a different ride, and the event loop is the attendant who moves through each queue in order, letting people onto the ride before moving to the next.

Here's the actual phase order:

   ┌───────────────────────────┐
┌─>│           timers          │  ← setTimeout, setInterval
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │     pending callbacks     │  ← I/O callbacks deferred
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │       idle, prepare       │  ← internal use
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │           poll            │  ← retrieve new I/O events
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
│  │           check           │  ← setImmediate callbacks
│  └─────────────┬─────────────┘
│  ┌─────────────┴─────────────┐
└──┤      close callbacks      │  ← socket.on('close')
   └───────────────────────────┘

Let me show you with code:

console.log('1: Start');
 
setTimeout(() => console.log('2: setTimeout'), 0);
setImmediate(() => console.log('3: setImmediate'));
process.nextTick(() => console.log('4: nextTick'));
Promise.resolve().then(() => console.log('5: Promise'));
 
console.log('6: End');

The output is:

1: Start
6: End
4: nextTick
5: Promise
2: setTimeout
3: setImmediate

Here's where it gets interesting. process.nextTick() and Promise callbacks aren't part of the event loop phases at all—they sit in the nextTick and microtask queues, which are drained as soon as the currently running JavaScript finishes, before the event loop moves on (nextTick first, then Promise callbacks). That's why they execute before setTimeout and setImmediate even though they were scheduled after. One caveat: when scheduled from the main module like this, the relative order of setTimeout(..., 0) and setImmediate() isn't actually guaranteed—it depends on how long process startup takes.
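The ordering becomes deterministic once you move inside an I/O callback—after the poll phase finishes, the loop reaches the check phase (setImmediate) before it wraps back around to timers. A quick sketch:

const fs = require('fs');

fs.readFile(__filename, () => {
    setTimeout(() => console.log('setTimeout inside I/O'), 0);
    setImmediate(() => console.log('setImmediate inside I/O'));
});

// setImmediate always logs first here: the check phase runs
// before the loop cycles back to the timers phase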

What interviewers are looking for: Understanding of the phase order and awareness that nextTick and Promises have special priority. The candidates who impress me explain that the event loop is what enables Node.js's non-blocking I/O, not just a mechanism for scheduling callbacks.

Common follow-up: "What happens if you put a very long-running operation in a callback?" It blocks the event loop, preventing all other callbacks from executing. This is why you should never do CPU-intensive work on the main thread.
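A minimal illustration of that blocking (the timings are arbitrary):

setTimeout(() => console.log('timer fired'), 100);

// A synchronous busy-wait holds the main thread for ~2 seconds,
// so the 100ms timer can't fire until it finishes
const end = Date.now() + 2000;
while (Date.now() < end) {
    // burning CPU on the main thread
}

console.log('blocking work done');
// Output: 'blocking work done' first, then 'timer fired' roughly 2 seconds late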

Question 2: What's the Difference Between process.nextTick() and setImmediate()?

This question trips up experienced developers because the names are counterintuitive.

The 30-Second Answer

process.nextTick() runs callbacks immediately after the current operation completes, before the event loop continues to the next phase. setImmediate() runs callbacks in the check phase of the next event loop iteration. Despite the name, nextTick is more "immediate" than setImmediate.

The 2-Minute Answer (If They Want More)

The naming is admittedly confusing—you'd think "immediate" would be faster than "next tick." But think of it this way: nextTick means "right now, before anything else" while setImmediate means "as soon as the current phase is done."

setImmediate(() => console.log('1: setImmediate'));
process.nextTick(() => console.log('2: nextTick'));
console.log('3: synchronous');
 
// Output:
// 3: synchronous
// 2: nextTick
// 1: setImmediate

The key insight is that process.nextTick() doesn't wait for the event loop at all. It queues the callback in a special "nextTick queue" that's processed after the current JavaScript execution completes but before the event loop moves on.

This has a dangerous implication:

// DON'T DO THIS - it starves the event loop
function recursiveNextTick() {
    process.nextTick(recursiveNextTick);
}
recursiveNextTick();
// The event loop never gets to process I/O!

The recursive nextTick keeps the microtask queue full, so the event loop never moves to the poll phase where it would handle I/O. Your server becomes unresponsive.

A pattern that's served me well: use setImmediate() for breaking up long-running operations so the event loop can process other callbacks in between:

function processLargeArray(array, callback) {
    let index = 0;
 
    function processChunk() {
        const chunkEnd = Math.min(index + 1000, array.length);
 
        while (index < chunkEnd) {
            // Process item
            index++;
        }
 
        if (index < array.length) {
            setImmediate(processChunk); // Yield to event loop
        } else {
            callback();
        }
    }
 
    processChunk();
}

What interviewers are looking for: Understanding that nextTick has higher priority and can starve the event loop. Knowledge of when to use each.

Question 3: What Is libuv and Why Is It Critical to Node.js?

This question reveals whether you understand Node.js at the C layer.

The 30-Second Answer

libuv is the C library that provides Node.js with its event loop, thread pool, and asynchronous I/O capabilities. It abstracts platform-specific async operations (epoll on Linux, kqueue on macOS, IOCP on Windows) into a consistent API. The thread pool handles operations that can't be done asynchronously at the OS level, like file I/O and DNS lookups.

The 2-Minute Answer (If They Want More)

Node.js's famous non-blocking I/O isn't actually JavaScript magic—it's libuv doing the heavy lifting. Think of libuv as the engine under Node.js's hood.

Here's what libuv provides:

Node.js Application Code (JavaScript)
           │
           ▼
    Node.js Bindings (C++)
           │
           ▼
         libuv (C)
    ┌──────┴──────┐
    │             │
Event Loop    Thread Pool
    │         (4 threads default)
    │             │
    ▼             ▼
 Network I/O   File System
 (truly async) DNS Lookups
               Crypto
               Compression

The key insight is that not all async operations are created equal. Network I/O is truly asynchronous at the OS level—libuv just registers callbacks with the OS. But file I/O on most systems is blocking, so libuv uses its thread pool to simulate async behavior.

// This uses the thread pool (file I/O)
const fs = require('fs');
fs.readFile('large-file.txt', (err, data) => {
    // One of the 4 thread pool threads handled this
});
 
// This is truly async (network I/O)
const http = require('http');
http.get('http://example.com', (res) => {
    // No thread pool needed - OS handles async
});

You can increase the thread pool size for I/O-heavy applications:

// Alternatively, set UV_THREADPOOL_SIZE=8 in the environment before launching Node
process.env.UV_THREADPOOL_SIZE = 8; // Must be set before any thread-pool work (fs, dns, crypto) is queued
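One way to see the pool limit for yourself: crypto.pbkdf2() runs on the thread pool, so queuing more calls than there are threads makes the extras wait for a free thread. A quick sketch (timings will vary by machine):

const crypto = require('crypto');

const start = Date.now();
for (let i = 1; i <= 6; i++) {
    crypto.pbkdf2('password', 'salt', 100000, 64, 'sha512', () => {
        console.log(`pbkdf2 #${i} finished after ${Date.now() - start}ms`);
    });
}
// With the default pool of 4 threads, calls 5 and 6 finish noticeably later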

What interviewers are looking for: Awareness that libuv provides the event loop, not JavaScript. Understanding the distinction between truly async operations and thread pool operations. Knowing about UV_THREADPOOL_SIZE is a bonus.

Question 4: How Do Node.js Streams Work?

This question separates developers who've processed large files from those who haven't.

The 30-Second Answer

Streams process data in chunks rather than loading everything into memory at once. There are four types: Readable (data source), Writable (data destination), Duplex (both read and write), and Transform (modify data as it passes through). Streams use backpressure to prevent fast producers from overwhelming slow consumers.

The 2-Minute Answer (If They Want More)

Think of streams like a factory assembly line versus a warehouse. Without streams, you'd load an entire file into a warehouse (memory), then move it all at once. With streams, items move through the factory piece by piece—you only need space for what's currently being processed.

Here's the difference in practice:

// Without streams - loads entire file into memory
const fs = require('fs');
 
fs.readFile('huge-file.txt', (err, data) => {
    // 'data' is the ENTIRE file - could be gigabytes
    fs.writeFile('copy.txt', data, (err) => {
        console.log('Done');
    });
});
 
// With streams - processes in chunks
const readStream = fs.createReadStream('huge-file.txt');
const writeStream = fs.createWriteStream('copy.txt');
 
readStream.pipe(writeStream);
// Data flows in ~64KB chunks - memory stays constant

The real power comes from piping and transforming:

const zlib = require('zlib');
const crypto = require('crypto');
 
// crypto.createCipher is deprecated (and removed in newer Node versions),
// so derive an explicit key and IV for createCipheriv
const key = crypto.scryptSync('secret', 'salt', 32);
const iv = crypto.randomBytes(16);
 
fs.createReadStream('data.txt')
    .pipe(zlib.createGzip())                              // Compress
    .pipe(crypto.createCipheriv('aes-256-cbc', key, iv))  // Encrypt
    .pipe(fs.createWriteStream('data.txt.gz.enc'))
    .on('finish', () => console.log('Done'));

Data moves through the pipeline chunk by chunk, so memory usage stays roughly constant regardless of file size.

Here's where streams get tricky—backpressure. If your writable stream can't keep up with your readable stream, you need to handle it:

const readable = fs.createReadStream('huge-file.txt');
const writable = fs.createWriteStream('output.txt');
 
readable.on('data', (chunk) => {
    const canContinue = writable.write(chunk);
 
    if (!canContinue) {
        readable.pause(); // Stop reading until drain
        writable.once('drain', () => readable.resume());
    }
});

The .pipe() method handles backpressure automatically—that's why it's preferred over manual event handling.
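One caveat: .pipe() doesn't propagate errors from one stream to the next, so a failure mid-pipeline can go unnoticed or leak file descriptors. Modern code generally reaches for stream.pipeline(), which handles backpressure, forwards every error to a single callback, and destroys all the streams on failure. A minimal sketch:

const fs = require('fs');
const zlib = require('zlib');
const { pipeline } = require('stream');

pipeline(
    fs.createReadStream('huge-file.txt'),
    zlib.createGzip(),
    fs.createWriteStream('huge-file.txt.gz'),
    (err) => {
        if (err) {
            console.error('Pipeline failed:', err);
        } else {
            console.log('Pipeline succeeded');
        }
    }
);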

What interviewers are looking for: Understanding of why streams matter for memory efficiency. Knowledge of the four stream types. Awareness of backpressure.

Common follow-up: "When would you choose streams over reading the whole file?" Any time the file could be larger than available memory, when you want to start processing before the entire file is read, or when you're proxying data between sources.
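For the proxying case, streams let you forward the request body upstream and the response back down without buffering either one. A sketch, assuming a hypothetical upstream service on port 9000:

const http = require('http');

http.createServer((clientReq, clientRes) => {
    const upstream = http.request({
        host: 'localhost',          // hypothetical upstream service
        port: 9000,
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers
    }, (upstreamRes) => {
        clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.pipe(clientRes);   // stream the response back to the client
    });

    clientReq.pipe(upstream);          // stream the request body upstream
}).listen(8080);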

Question 5: How Do You Scale Node.js Across Multiple CPU Cores?

This question tests your understanding of Node.js's single-threaded limitation.

The 30-Second Answer

Node.js runs on a single thread by default, but you can scale using the cluster module (fork multiple processes that share a port), worker threads (spawn threads for CPU-intensive tasks), or process managers like PM2. The cluster module is best for web servers; worker threads are best for CPU-intensive computation within a single request.

The 2-Minute Answer (If They Want More)

A common misconception is that Node.js can't use multiple cores. It absolutely can—you just need to do it explicitly.

The cluster module forks your process multiple times, and all children share the same server port:

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;
 
if (cluster.isPrimary) { // isMaster is the deprecated pre-Node-16 name
    console.log(`Primary ${process.pid} is running`);
 
    // Fork workers for each CPU
    for (let i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
 
    cluster.on('exit', (worker, code, signal) => {
        console.log(`Worker ${worker.process.pid} died`);
        cluster.fork(); // Restart dead workers
    });
} else {
    // Workers share the TCP connection
    http.createServer((req, res) => {
        res.writeHead(200);
        res.end(`Handled by worker ${process.pid}\n`);
    }).listen(8000);
 
    console.log(`Worker ${process.pid} started`);
}

By default, the primary process accepts incoming connections and distributes them to workers round-robin (except on Windows, where distribution is left to the OS).
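The strategy is configurable through cluster.schedulingPolicy, which must be set before the first fork(). A small sketch:

const cluster = require('cluster');

// Must be set before calling cluster.fork()
cluster.schedulingPolicy = cluster.SCHED_NONE; // hand connection distribution to the OS
// The default on most platforms is cluster.SCHED_RR (round-robin in the primary).
// It can also be set via the NODE_CLUSTER_SCHED_POLICY env var ('rr' or 'none').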

Worker threads are different—they're for CPU-intensive operations that would otherwise block the event loop:

const { Worker, isMainThread, parentPort } = require('worker_threads');
 
if (isMainThread) {
    // Main thread
    const worker = new Worker(__filename);
 
    worker.on('message', (result) => {
        console.log('Fibonacci result:', result);
    });
 
    worker.postMessage(45); // Calculate fib(45)
 
    // Event loop is free to handle other requests!
} else {
    // Worker thread
    parentPort.on('message', (n) => {
        // CPU-intensive work happens here
        const result = fibonacci(n);
        parentPort.postMessage(result);
    });
}
 
function fibonacci(n) {
    if (n < 2) return n;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

The key difference: cluster creates separate processes (separate memory, no shared state), while worker threads share memory and can transfer data more efficiently.
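A small sketch of that memory-sharing difference—a SharedArrayBuffer passed to a worker via workerData is visible to both threads without copying, which separate cluster processes can't do:

const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
    const shared = new SharedArrayBuffer(4);   // 4 bytes visible to both threads
    const counter = new Int32Array(shared);

    const worker = new Worker(__filename, { workerData: shared });
    worker.on('exit', () => {
        console.log('Counter after worker:', Atomics.load(counter, 0)); // 1
    });
} else {
    const counter = new Int32Array(workerData);
    Atomics.add(counter, 0, 1); // the main thread sees this change without any copying
}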

What interviewers are looking for: Understanding of when to use cluster vs worker threads. Knowledge that each cluster worker has its own event loop and memory. Awareness that PM2 provides cluster mode out of the box.

Question 6: What's the Difference Between CommonJS and ES Modules?

This question tests your knowledge of Node.js's module evolution.

The 30-Second Answer

CommonJS uses require() and module.exports, loading modules synchronously at runtime. ES Modules use import/export; imports are resolved statically at parse time and modules load asynchronously, which enables static analysis and tree shaking. ES Modules support top-level await; CommonJS doesn't. Node.js now supports both, with ES Modules being the modern standard.

The 2-Minute Answer (If They Want More)

CommonJS was Node.js's original module system, designed before JavaScript had native modules:

// math.js (CommonJS)
function add(a, b) {
    return a + b;
}
module.exports = { add };
 
// app.js
const { add } = require('./math');
console.log(add(2, 3));

ES Modules are the JavaScript standard, added to Node.js later:

// math.mjs (ES Module)
export function add(a, b) {
    return a + b;
}
 
// app.mjs
import { add } from './math.mjs';
console.log(add(2, 3));

The critical difference is timing. CommonJS loads synchronously at runtime—a module's code is evaluated the moment require() is called. ES Module imports are resolved statically before any code runs, and the modules themselves load asynchronously.

This timing difference has real implications:

// CommonJS - conditional imports work
if (process.env.NODE_ENV === 'production') {
    const analytics = require('./analytics');
    analytics.track('startup');
}
 
// ES Modules - conditional imports need dynamic import()
if (process.env.NODE_ENV === 'production') {
    const { track } = await import('./analytics.mjs');
    track('startup');
}

ES Modules also enable tree shaking—bundlers can analyze imports statically and remove unused exports. CommonJS can't be statically analyzed because require() can be called anywhere with dynamic strings.

To use ES Modules in Node.js:

// package.json
{
    "type": "module"
}

Or use the .mjs extension for individual files.

What interviewers are looking for: Understanding of synchronous vs asynchronous loading. Knowledge of how to enable ES Modules. Awareness of the tree shaking advantage.

Question 7: How Do You Handle Errors in Async Code?

This question reveals whether you've debugged production Node.js applications.

The 30-Second Answer

For callbacks, follow the error-first pattern and always check the error parameter. For Promises, use .catch() or try/catch with async/await. For event emitters, listen to the 'error' event—unhandled errors crash the process. Use process.on('unhandledRejection') and process.on('uncaughtException') as safety nets, but fix the root cause.

The 2-Minute Answer (If They Want More)

Node.js has multiple async patterns, each with its own error handling approach. Missing any of them causes silent failures or crashes.

For callbacks, always check the error first:

fs.readFile('file.txt', (err, data) => {
    if (err) {
        console.error('Failed to read file:', err);
        return; // Don't continue with undefined data!
    }
    processData(data);
});

For Promises, errors propagate through the chain:

fetchUser(userId)
    .then(user => fetchOrders(user.id))
    .then(orders => processOrders(orders))
    .catch(err => {
        // Catches errors from ANY step above
        console.error('Pipeline failed:', err);
    });

With async/await, use try/catch:

async function handleRequest() {
    try {
        const user = await fetchUser(userId);
        const orders = await fetchOrders(user.id);
        return processOrders(orders);
    } catch (err) {
        console.error('Request failed:', err);
        throw err; // Re-throw if caller should handle it
    }
}

Event emitters are trickier—an unhandled 'error' event crashes the process:

const stream = fs.createReadStream('missing-file.txt');
 
// Without this, the process crashes
stream.on('error', (err) => {
    console.error('Stream error:', err);
});

For safety nets, add global handlers:

process.on('unhandledRejection', (reason, promise) => {
    console.error('Unhandled Rejection:', reason);
    // Log it, but don't exit - investigate and fix the root cause
});
 
process.on('uncaughtException', (err) => {
    console.error('Uncaught Exception:', err);
    // Exit after logging - the process is in unknown state
    process.exit(1);
});

What interviewers are looking for: Knowledge of all three patterns (callbacks, Promises, events). Awareness that unhandled event emitter errors crash the process. Understanding that global handlers are safety nets, not solutions.

Question 8: What Are the Security Best Practices for Node.js Applications?

This question tests production readiness.

The 30-Second Answer

Key practices include validating all user input, using parameterized queries to prevent SQL injection, sanitizing output to prevent XSS, using helmet middleware for security headers, keeping dependencies updated and audited, running with minimal privileges, and using environment variables for secrets. Rate limiting and proper authentication are essential for APIs.

The 2-Minute Answer (If They Want More)

Security in Node.js comes down to not trusting any input and following the principle of least privilege.

Start with security headers using helmet:

const helmet = require('helmet');
app.use(helmet()); // Sets a suite of security headers by default

Validate and sanitize all input:

const { body, validationResult } = require('express-validator');
 
app.post('/user',
    body('email').isEmail().normalizeEmail(),
    body('name').trim().escape(),
    (req, res) => {
        const errors = validationResult(req);
        if (!errors.isEmpty()) {
            return res.status(400).json({ errors: errors.array() });
        }
        // Process validated input
    }
);

Prevent SQL injection with parameterized queries:

// NEVER do this
const query = `SELECT * FROM users WHERE id = ${userId}`;
 
// Always do this
const query = 'SELECT * FROM users WHERE id = $1';
await pool.query(query, [userId]);

Manage secrets properly:

// Never commit secrets to git
// Use environment variables
const dbPassword = process.env.DB_PASSWORD;
 
// Or use a secret manager in production
const secrets = await secretManager.getSecret('db-credentials');

Keep dependencies secure:

npm audit                    # Check for vulnerabilities
npm audit fix               # Auto-fix where possible
npm outdated                # Check for updates

Rate limiting prevents abuse:

const rateLimit = require('express-rate-limit');
 
const limiter = rateLimit({
    windowMs: 15 * 60 * 1000, // 15 minutes
    max: 100                   // 100 requests per window
});
 
app.use('/api', limiter);

What interviewers are looking for: Practical security knowledge, not just theoretical awareness. Experience with real tools like helmet, express-validator, and rate limiting. Understanding of common vulnerabilities.

What These Questions Reveal

After walking through all eight questions, you'll notice they test three areas. First, understanding of Node.js's runtime—the event loop, libuv, and how async actually works. Second, ability to handle real production challenges—scaling, streams, error handling. Third, security and operational awareness.

The candidates who impress me most don't just recite facts—they explain trade-offs. "Cluster gives you process isolation but worker threads share memory more efficiently." "nextTick has higher priority but can starve the event loop." That's the thinking senior developers bring to architecture decisions.

Wrapping Up

These eight questions represent the Node.js knowledge that separates senior backend developers from those who've just built Express APIs. They're not about memorizing APIs—they're about understanding how Node.js works at the runtime level and making informed decisions about patterns and architecture.

If you want to practice more questions like these, our collection includes 800+ interview questions covering Node.js, JavaScript, React, TypeScript, and more—each with detailed explanations that help you understand the why, not just the what.

