GitHub made GraphQL the foundation of its v4 API in 2016. Shopify processes billions of GraphQL queries daily. Netflix runs a federated GraphQL platform across its internal APIs. According to the State of JavaScript 2024 survey, GraphQL adoption continues to climb, and "explain how you'd solve the N+1 problem" has become a standard senior developer interview question. Here's how to demonstrate you actually understand GraphQL, not just use it.
The 30-Second Answer
When the interviewer asks "What is GraphQL?", here's your concise answer:
"GraphQL is a query language for APIs developed by Facebook that lets clients request exactly the data they need in a single request. Unlike REST with multiple endpoints returning fixed data structures, GraphQL has one endpoint where clients specify their requirements declaratively. This eliminates over-fetching and under-fetching, makes APIs self-documenting through the schema, and enables frontend teams to work independently without waiting for backend changes."
Wait for follow-up questions. Don't launch into resolvers and DataLoader unprompted.
The 2-Minute Answer (If They Want More)
If they ask you to elaborate:
"GraphQL was created by Facebook in 2012 and open-sourced in 2015 to solve real problems with REST APIs in their mobile apps. The core concepts are:
Schema - A strongly-typed contract defining all available types, queries, and mutations. It enables validation, auto-completion, and serves as documentation.
Queries - Read operations where clients specify exactly which fields they want, including nested relationships, all in one request.
Mutations - Write operations for creating, updating, or deleting data. They execute sequentially to ensure predictable results.
Resolvers - Functions that actually fetch the data for each field. They bridge the schema to your data sources.
The main benefits are eliminating round trips to gather related data, strong typing for better tooling, and a self-documenting API. The tradeoffs include implementation complexity, caching challenges since everything goes through one endpoint, and the need to prevent expensive queries from overwhelming your server."
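The four concepts above fit together mechanically. Here's a toy model of field resolution in plain JavaScript - not a real executor, just the core idea that every field is either resolved by a function or read off the parent object (all names are illustrative):

```javascript
// Toy model of GraphQL execution, to make "resolver" concrete.
const db = { users: { '1': { id: '1', name: 'Alice' } } };

const resolvers = {
  Query: {
    user: (parent, args, context) => context.db.users[args.id],
  },
  User: {
    // Computed field: no column in the "database"
    shoutName: (parent) => parent.name.toUpperCase(),
  },
};

function execute(rootField, args, selections, context) {
  const parent = resolvers.Query[rootField](null, args, context);
  const result = {};
  for (const field of selections) {
    const resolver = resolvers.User[field];
    // Default behavior when no resolver is defined: read the property.
    result[field] = resolver ? resolver(parent, {}, context) : parent[field];
  }
  return result;
}

const data = execute('user', { id: '1' }, ['name', 'shoutName'], { db });
// data → { name: 'Alice', shoutName: 'ALICE' }
```

A real executor also validates the query against the schema and resolves fields recursively, but the resolver contract is exactly this shape.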
GraphQL vs REST: The Question Everyone Gets Wrong
Here's where candidates stumble: they list features instead of explaining the actual tradeoffs.
# REST approach requires multiple requests:
# GET /api/users/123
# GET /api/users/123/posts
# GET /api/posts/456/comments
# GraphQL accomplishes this in ONE request:
query GetUserDashboard {
user(id: "123") {
name
email
posts(limit: 5) {
title
commentCount
comments(limit: 3) {
text
author {
name
}
}
}
}
}
The candidates who impress me explain both sides:
GraphQL excels when: You have complex, nested data requirements, multiple clients (mobile, web, desktop) needing different data shapes, or teams that need to iterate quickly without backend dependencies. If you're building a dashboard that pulls data from multiple sources, GraphQL shines.
REST excels when: You have simple CRUD operations, need HTTP caching at the CDN level, handle file uploads/downloads, or have a small team without GraphQL experience. For a basic blog API, REST's simplicity wins.
The answer isn't "GraphQL is better than REST." It's "use the right tool for your use case."
Schema Design: The Foundation
Understanding the schema separates developers who copy-paste GraphQL from those who design APIs.
# Schema Definition Language (SDL)
type Query {
user(id: ID!): User
posts(limit: Int = 10, offset: Int = 0): [Post!]!
searchContent(query: String!): [SearchResult!]!
}
type Mutation {
createPost(input: CreatePostInput!): CreatePostResult!
updateUser(id: ID!, input: UpdateUserInput!): User
deletePost(id: ID!): Boolean!
}
type User {
id: ID!
name: String!
email: String!
posts: [Post!]!
createdAt: DateTime!
}
type Post {
id: ID!
title: String!
content: String!
author: User!
comments: [Comment!]!
publishedAt: DateTime
}
# Input types for mutations (can't use regular types)
input CreatePostInput {
title: String!
content: String!
authorId: ID!
}
# Union type for polymorphic returns
union SearchResult = User | Post | Comment
# Union for error handling (modern pattern)
union CreatePostResult = Post | ValidationError | AuthorizationError
type ValidationError {
message: String!
field: String!
}
type AuthorizationError {
message: String!
}
Let me show you the nullability patterns that trip up most developers:
type User {
# [String]! - Non-null list, nullable items
# Valid: [], ["tag1", null, "tag2"]
# Invalid: null
tags: [String]!
# [String!] - Nullable list, non-null items
# Valid: null, [], ["tag1", "tag2"]
# Invalid: ["tag1", null]
middleNames: [String!]
# [String!]! - Non-null list, non-null items
# Valid: [], ["role1", "role2"]
# Invalid: null, ["role1", null]
roles: [String!]!
}
The interview question: "What's the difference between [Post]!, [Post!], and [Post!]!?" Nail the nullability semantics and you've shown schema design maturity.
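These rules can be captured in a few lines. A toy validator for the list shapes above - illustrative only, since the real checks run inside the GraphQL execution engine:

```javascript
// Toy validator for GraphQL list nullability.
function validList(value, { listNonNull, itemNonNull }) {
  if (value === null) return !listNonNull;                     // [X]! rejects a null list
  return value.every((item) => item !== null || !itemNonNull); // [X!] rejects null items
}

validList(null, { listNonNull: true, itemNonNull: false });        // false: list itself can't be null
validList(['a', null], { listNonNull: true, itemNonNull: false }); // true:  items may be null
validList(['a', null], { listNonNull: false, itemNonNull: true }); // false: items can't be null
validList([], { listNonNull: true, itemNonNull: true });           // true:  empty list is always valid
```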
Resolvers: Where the Magic Happens
Resolvers connect your schema to actual data. Understanding the four arguments is essential:
const resolvers = {
Query: {
user: async (parent, args, context, info) => {
// parent: undefined for root queries
// args: { id: "123" } from query arguments
// context: shared per-request data (auth, DB, loaders)
// info: query metadata (rarely used directly)
return context.db.users.findById(args.id);
},
posts: async (parent, { limit, offset }, context) => {
// Destructure args for cleaner code
return context.db.posts.findMany({
take: limit,
skip: offset,
orderBy: { createdAt: 'desc' }
});
}
},
User: {
// This resolver runs for the 'posts' field on User type
posts: async (parent, args, context) => {
// parent is the User object from the parent resolver
return context.db.posts.findMany({
where: { authorId: parent.id }
});
},
// Computed field - doesn't exist in database
fullName: (parent) => {
return `${parent.firstName} ${parent.lastName}`;
}
},
Post: {
author: async (parent, args, context) => {
// parent is the Post object
return context.db.users.findById(parent.authorId);
}
}
};
Here's the pattern that shows experience: resolvers should be thin. Business logic belongs in service layers, not resolvers.
// BAD: Business logic in resolver
const resolvers = {
Mutation: {
createPost: async (parent, { input }, context) => {
// Don't do all this in the resolver
if (!context.currentUser) throw new Error('Not authenticated');
if (input.title.length < 5) throw new Error('Title too short');
const post = await context.db.posts.create({
data: { ...input, authorId: context.currentUser.id }
});
await sendNotification(post);
return post;
}
}
};
// GOOD: Delegate to service layer
const resolvers = {
Mutation: {
createPost: async (parent, { input }, context) => {
return context.services.posts.create(input, context.currentUser);
}
}
};
The N+1 Problem: The Senior Developer Question
This is the question that separates juniors from seniors. If you can explain and solve the N+1 problem, you've demonstrated real GraphQL expertise.
Here's the problem:
// BAD: N+1 queries - 101 database calls for 100 users
const resolvers = {
Query: {
users: () => db.users.findMany() // Query 1: Get 100 users
},
User: {
posts: (user) => db.posts.findMany({
where: { authorId: user.id }
}) // Queries 2-101: One for EACH user!
}
};
When you query users { name posts { title } }, here's what happens:
- One query fetches 100 users
- For each of the 100 users, a separate query fetches their posts
- Total: 101 database queries instead of 2
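The fix is batching: collect every requested ID during one resolution pass, then issue a single query for all of them. Here's a stripped-down illustration with an explicit flush() - DataLoader, covered next, does this scheduling automatically on the microtask queue:

```javascript
// Batching in miniature: collect keys, run ONE batch query, hand results back.
function createBatchLoader(batchFn) {
  const pending = [];        // keys requested since last flush
  const results = new Map(); // key → result, filled on flush
  return {
    load(key) { pending.push(key); },
    flush() {
      const values = batchFn(pending.slice());
      pending.forEach((key, i) => results.set(key, values[i]));
      pending.length = 0;
    },
    get(key) { return results.get(key); },
  };
}

let batchCalls = 0; // counts "database queries"
const postsByUser = createBatchLoader((userIds) => {
  batchCalls += 1; // one query regardless of how many keys were requested
  return userIds.map((id) => [`post-for-${id}`]);
});

['u1', 'u2', 'u3'].forEach((id) => postsByUser.load(id));
postsByUser.flush();
// batchCalls is 1: three load() calls, a single batch query
```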
The solution is DataLoader, a utility that batches and caches requests:
const DataLoader = require('dataloader');
// Batch function: receives array of IDs, returns array of results
// Results MUST be in same order as input IDs
const batchGetPostsByUserIds = async (userIds) => {
// ONE query for all users
const posts = await db.posts.findMany({
where: { authorId: { in: userIds } }
});
// Group posts by authorId
const postsByUser = {};
posts.forEach(post => {
if (!postsByUser[post.authorId]) {
postsByUser[post.authorId] = [];
}
postsByUser[post.authorId].push(post);
});
// Return in same order as input IDs
return userIds.map(id => postsByUser[id] || []);
};
// Create loaders per request (important!)
const createLoaders = () => ({
postsByUser: new DataLoader(batchGetPostsByUserIds),
users: new DataLoader(async (ids) => {
const users = await db.users.findMany({
where: { id: { in: ids } }
});
return ids.map(id => users.find(u => u.id === id));
})
});
// Context setup - new loaders per request
const context = ({ req }) => ({
currentUser: getUserFromToken(req.headers.authorization),
db,
loaders: createLoaders() // Fresh loaders prevent cross-request caching
});
// Resolvers using DataLoader
const resolvers = {
User: {
posts: (user, args, { loaders }) => {
// DataLoader batches all calls in one event loop tick
return loaders.postsByUser.load(user.id);
}
},
Post: {
author: (post, args, { loaders }) => {
return loaders.users.load(post.authorId);
}
}
};
Now instead of 101 queries, you get 2: one for users, one for all their posts.
The interview follow-up: "Why create new DataLoader instances per request?" Answer: DataLoader caches results within its instance. Using the same instance across requests would return stale data and leak information between users.
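You can see the staleness problem without any GraphQL machinery. A minimal sketch - the in-memory "database" is hypothetical - contrasting a loader reused across requests with a fresh per-request one:

```javascript
// A cached loader never re-reads its source. Sharing one across requests
// serves stale data; creating one per request does not.
const db = new Map([['u1', { id: 'u1', role: 'USER' }]]);

function createCachedLoader() {
  const cache = new Map();
  return (id) => {
    if (!cache.has(id)) cache.set(id, db.get(id));
    return cache.get(id);
  };
}

// Request A reads the user through its own loader:
const loaderA = createCachedLoader();
const first = loaderA('u1').role; // 'USER'

// The user is promoted between requests:
db.set('u1', { id: 'u1', role: 'ADMIN' });

// Reusing request A's loader returns the stale cached value...
const stale = loaderA('u1').role; // still 'USER'
// ...while a fresh per-request loader sees the update.
const fresh = createCachedLoader()('u1').role; // 'ADMIN'
```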
Authentication and Authorization
Authentication (who are you?) happens outside GraphQL. Authorization (what can you do?) happens inside resolvers.
// Context setup: authentication
const context = async ({ req }) => {
const token = req.headers.authorization?.replace('Bearer ', '');
let currentUser = null;
if (token) {
try {
const decoded = jwt.verify(token, process.env.JWT_SECRET);
currentUser = await db.users.findById(decoded.userId);
} catch (err) {
// Invalid token - user stays null
}
}
return {
currentUser,
db,
loaders: createLoaders()
};
};
// Authorization in resolvers
const resolvers = {
Query: {
me: (parent, args, context) => {
if (!context.currentUser) {
throw new AuthenticationError('Must be logged in');
}
return context.currentUser;
},
adminDashboard: (parent, args, context) => {
if (!context.currentUser) {
throw new AuthenticationError('Must be logged in');
}
if (context.currentUser.role !== 'ADMIN') {
throw new ForbiddenError('Admin access required');
}
return getDashboardStats();
}
},
Mutation: {
deletePost: async (parent, { id }, context) => {
if (!context.currentUser) {
throw new AuthenticationError('Must be logged in');
}
const post = await context.db.posts.findById(id);
// Field-level authorization
if (post.authorId !== context.currentUser.id &&
context.currentUser.role !== 'ADMIN') {
throw new ForbiddenError('Not authorized to delete this post');
}
await context.db.posts.delete(id);
return true;
}
}
};
A pattern that's served me well is directive-based authorization:
# Schema with auth directives
directive @auth(requires: Role = USER) on FIELD_DEFINITION
enum Role {
USER
ADMIN
}
type Query {
publicPosts: [Post!]!
me: User! @auth
adminDashboard: Dashboard! @auth(requires: ADMIN)
}
Error Handling: The Professional Approach
GraphQL returns partial data with errors - a fundamentally different model from REST:
// GraphQL can return BOTH data and errors
{
"data": {
"user": {
"name": "Alice",
"posts": null // This field failed
}
},
"errors": [
{
"message": "Database connection failed",
"path": ["user", "posts"],
"extensions": {
"code": "INTERNAL_SERVER_ERROR"
}
}
]
}
The modern pattern uses union types for expected errors:
union CreateUserResult = User | EmailTakenError | ValidationError
type EmailTakenError {
message: String!
suggestedEmail: String
}
type ValidationError {
message: String!
field: String!
}
type Mutation {
createUser(input: CreateUserInput!): CreateUserResult!
}
// Resolver returning typed errors
const resolvers = {
Mutation: {
createUser: async (parent, { input }, context) => {
// Check for existing email
const existing = await context.db.users.findByEmail(input.email);
if (existing) {
return {
__typename: 'EmailTakenError',
message: 'Email already registered',
suggestedEmail: suggestAlternative(input.email)
};
}
// Validate input
const validation = validateUserInput(input);
if (!validation.valid) {
return {
__typename: 'ValidationError',
message: validation.error,
field: validation.field
};
}
// Success case
const user = await context.db.users.create(input);
return { __typename: 'User', ...user };
}
}
};
This pattern makes errors type-safe and forces clients to handle them explicitly.
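On the client, this usually becomes a switch on __typename. A sketch against the CreateUserResult union above (the function name is illustrative):

```javascript
// Branching on __typename to handle a union result.
function describeResult(result) {
  switch (result.__typename) {
    case 'User':
      return `Created user ${result.id}`;
    case 'EmailTakenError':
      return `Email taken, try ${result.suggestedEmail}`;
    case 'ValidationError':
      return `Invalid ${result.field}: ${result.message}`;
    default:
      return 'Unknown result type';
  }
}

describeResult({ __typename: 'User', id: '42' }); // 'Created user 42'
```

Typed clients can generate an exhaustive switch from the schema, so a new error type in the union becomes a compile-time reminder to handle it.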
Performance and Security
Interviewers love asking about GraphQL's attack surface:
// PROBLEM: Malicious deeply nested query
query EvilQuery {
users { # Level 1
posts { # Level 2
comments { # Level 3
author { # Level 4
posts { # Level 5
comments { # Level 6 - recursion continues...
...
}
}
}
}
}
}
}
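Before the countermeasures, it's worth seeing what "depth" means mechanically. A naive estimator that counts brace nesting - real validators such as graphql-depth-limit walk the parsed AST, and this sketch ignores strings and comments:

```javascript
// Naive query-depth estimate by counting brace nesting.
function estimateDepth(query) {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === '{') { depth += 1; max = Math.max(max, depth); }
    if (ch === '}') { depth -= 1; }
  }
  return max;
}

const evil = '{ users { posts { comments { author { posts { title } } } } } }';
estimateDepth(evil); // 6 - deep enough for a depthLimit(5) rule to reject
```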
// SOLUTION 1: Query depth limiting
const depthLimit = require('graphql-depth-limit');
const server = new ApolloServer({
typeDefs,
resolvers,
validationRules: [depthLimit(5)] // Max 5 levels deep
});
// SOLUTION 2: Query complexity analysis
const { createComplexityLimitRule } = require('graphql-validation-complexity');
const complexityRule = createComplexityLimitRule(1000, {
scalarCost: 1,
objectCost: 10,
listFactor: 20 // Lists multiply cost
});
// SOLUTION 3: Disable introspection in production
const server = new ApolloServer({
typeDefs,
resolvers,
introspection: process.env.NODE_ENV !== 'production'
});
// SOLUTION 4: Persisted queries
// Apollo's automatic persisted queries (APQ) let clients send a query hash
// instead of the full text - mainly a bandwidth optimization. A true
// allowlist of pre-registered operations requires a build-time operation
// registry on top of this.
const server = new ApolloServer({
typeDefs,
resolvers,
persistedQueries: {
cache: new RedisCache({ host: 'localhost' })
}
});
Pagination: Cursor vs Offset
The Relay Connection specification has become the standard for GraphQL pagination:
type Query {
# Cursor-based pagination (recommended)
posts(first: Int, after: String, last: Int, before: String): PostConnection!
}
type PostConnection {
edges: [PostEdge!]!
pageInfo: PageInfo!
totalCount: Int!
}
type PostEdge {
node: Post!
cursor: String!
}
type PageInfo {
hasNextPage: Boolean!
hasPreviousPage: Boolean!
startCursor: String
endCursor: String
}
// Cursor-based pagination resolver
const resolvers = {
Query: {
posts: async (parent, { first = 10, after }, context) => {
// Decode cursor (base64 encoded ID)
const afterId = after ? Buffer.from(after, 'base64').toString() : null;
// Fetch one extra to check hasNextPage
const posts = await context.db.posts.findMany({
take: first + 1,
cursor: afterId ? { id: afterId } : undefined,
skip: afterId ? 1 : 0,
orderBy: { createdAt: 'desc' }
});
const hasNextPage = posts.length > first;
const edges = posts.slice(0, first).map(post => ({
node: post,
cursor: Buffer.from(post.id).toString('base64')
}));
return {
edges,
pageInfo: {
hasNextPage,
hasPreviousPage: !!after,
startCursor: edges[0]?.cursor,
endCursor: edges[edges.length - 1]?.cursor
},
totalCount: await context.db.posts.count()
};
}
}
};
Why cursors over offsets? Offset pagination breaks when items are added or removed during pagination. Cursor pagination is stable because it references specific items, not positions.
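You can demonstrate the difference with a plain array. If an item is deleted between page requests, offset-based slicing silently skips an item, while anchoring on the last seen cursor does not:

```javascript
// Five posts, newest first; the client reads two per page.
const posts = ['p1', 'p2', 'p3', 'p4', 'p5'];

// Offset page 1:
const offsetPage1 = posts.slice(0, 2); // ['p1', 'p2']

// 'p1' is deleted before page 2 is requested...
const afterDelete = posts.filter((p) => p !== 'p1');

// ...so offset 2 now skips 'p3' entirely:
const offsetPage2 = afterDelete.slice(2, 4); // ['p4', 'p5'] - 'p3' was never shown

// Cursor pagination anchors on the last item the client saw ('p2'):
const anchor = afterDelete.indexOf('p2');
const cursorPage2 = afterDelete.slice(anchor + 1, anchor + 3); // ['p3', 'p4']
```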
Subscriptions: Real-Time Data
type Subscription {
postAdded: Post!
commentAdded(postId: ID!): Comment!
}
const { PubSub } = require('graphql-subscriptions');
const pubsub = new PubSub();
const resolvers = {
Mutation: {
createPost: async (parent, { input }, context) => {
const post = await context.db.posts.create(input);
// Publish to subscribers
pubsub.publish('POST_ADDED', { postAdded: post });
return post;
},
addComment: async (parent, { postId, text }, context) => {
const comment = await context.db.comments.create({
postId,
text,
authorId: context.currentUser.id
});
pubsub.publish(`COMMENT_ADDED_${postId}`, {
commentAdded: comment
});
return comment;
}
},
Subscription: {
postAdded: {
subscribe: () => pubsub.asyncIterator(['POST_ADDED'])
},
commentAdded: {
subscribe: (parent, { postId }) => {
return pubsub.asyncIterator([`COMMENT_ADDED_${postId}`]);
}
}
}
};
When should you use subscriptions vs polling? Subscriptions for high-frequency updates (chat, live feeds), polling for low-frequency updates where complexity isn't justified.
Apollo Client: The Frontend Perspective
Understanding the client side shows full-stack awareness:
import { ApolloClient, InMemoryCache, gql, useQuery } from '@apollo/client';
// Client setup
const client = new ApolloClient({
uri: 'https://api.example.com/graphql',
cache: new InMemoryCache({
typePolicies: {
Query: {
fields: {
posts: {
// Merge pagination results
keyArgs: false,
merge(existing = { edges: [] }, incoming) {
return {
...incoming,
edges: [...existing.edges, ...incoming.edges]
};
}
}
}
}
}
})
});
// React hook usage
const GET_USER = gql`
query GetUser($id: ID!) {
user(id: $id) {
id
name
posts {
id
title
}
}
}
`;
function UserProfile({ userId }) {
const { loading, error, data } = useQuery(GET_USER, {
variables: { id: userId }
});
if (loading) return <Spinner />;
if (error) return <Error message={error.message} />;
return (
<div>
<h1>{data.user.name}</h1>
{data.user.posts.map(post => (
<PostCard key={post.id} post={post} />
))}
</div>
);
}
Quick Reference: GraphQL Concepts
| Concept | Purpose | Example |
|---|---|---|
| Query | Read data | query { user(id: "1") { name } } |
| Mutation | Write data | mutation { createUser(name: "Alice") { id } } |
| Subscription | Real-time updates | subscription { postAdded { title } } |
| Resolver | Fetch data for field | user: (parent, args, ctx) => db.findUser(args.id) |
| Schema | Type definitions | type User { id: ID! name: String! } |
| DataLoader | Batch/cache queries | new DataLoader(batchFn) |
| Fragment | Reusable field sets | fragment UserFields on User { id name } |
| Directive | Field behavior | @deprecated, @auth(requires: ADMIN) |
Common Interview Scenarios
Scenario: "Design a GraphQL API for a social media feed"
"I'd start with the core types: User, Post, Comment, and Like. The Query type would have feed(first: Int, after: String) returning a PostConnection for cursor-based pagination. I'd use DataLoader for the author relationship to prevent N+1 queries. For real-time updates, I'd add a postAdded subscription filtered by followed users. Authentication would be JWT-based through the context, with field-level authorization for sensitive data like email."
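That answer could be sketched in SDL - all names are illustrative, and the connection type follows the Relay pattern from the pagination section:

```graphql
type Query {
  feed(first: Int = 10, after: String): PostConnection!
}

type Subscription {
  # Server filters to posts from users the viewer follows
  postAdded: Post!
}

type Like {
  user: User!
  post: Post!
  createdAt: DateTime!
}
```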
Scenario: "How would you migrate from REST to GraphQL?"
"I'd take an incremental approach rather than a big-bang migration. First, I'd create a GraphQL layer that wraps existing REST endpoints - resolvers would call the REST API internally. This lets us validate the schema design with real usage. Then we'd gradually move data fetching directly to the database, replacing REST calls with direct queries. The REST API could run in parallel during migration, eventually becoming deprecated."
Scenario: "Your GraphQL API is slow - how do you debug it?"
"I'd start with Apollo Studio or similar tools to trace resolver execution times. Common culprits are N+1 queries (add DataLoader), missing database indexes (add indexes on foreign keys), or over-fetching at the database level (use projections). I'd also check query complexity - maybe clients are requesting too much nested data. Solutions include query complexity limits, depth limiting, and persisted queries to whitelist allowed operations."
What Interviewers Really Want to Hear
- Understanding tradeoffs - Don't say GraphQL is better than REST. Explain when each excels.
- N+1 awareness - If you mention DataLoader before being asked, you've shown senior-level thinking.
- Schema design sense - Understanding nullability, input types vs output types, and union types for errors.
- Security consciousness - Mentioning query depth limiting, complexity analysis, and introspection disabling.
- Full-stack awareness - Understanding both server implementation and client caching/state management.
GraphQL interviews test whether you've just used a GraphQL library or actually understand the architectural decisions behind API design. The N+1 problem, schema evolution, and security concerns are where senior candidates shine.
Related Articles
If you found this helpful, check out these related guides:
- Complete Node.js Backend Developer Interview Guide - comprehensive preparation guide for backend interviews
- REST API Interview Guide - API design principles and best practices
- Node.js Advanced Interview Guide - Event loop, streams, and Node.js internals
- TypeScript Type vs Interface Interview Guide - Type definitions for GraphQL schemas
Deepen your API knowledge with our REST API Design Guide and Node.js Advanced Questions.
