40+ Spring Boot Advanced Interview Questions 2025: Auto-Configuration, Cloud & WebFlux

28 min read
spring-boot · spring-cloud · java · architecture · senior · microservices · interview-preparation

Senior Java developer interviews go beyond basic Spring Boot usage. Interviewers expect you to understand how Spring Boot works under the hood, architect production-ready systems, and make informed decisions about reactive vs imperative programming, microservices patterns, and performance optimization.

This guide covers the advanced Spring Boot topics that distinguish senior engineers: internals, custom starters, Spring Cloud, WebFlux, and production architecture patterns.

Table of Contents

  1. Auto-Configuration Questions
  2. Conditional Annotation Questions
  3. Bean Lifecycle Questions
  4. Custom Starter Questions
  5. Configuration Properties Questions
  6. Spring Cloud Config Questions
  7. Service Discovery Questions
  8. API Gateway Questions
  9. Distributed Tracing Questions
  10. WebFlux Questions
  11. Production Actuator Questions
  12. Performance Tuning Questions

Auto-Configuration Questions

Understanding how Spring Boot works enables better debugging and custom solutions.

How does Spring Boot auto-configuration actually work?

Spring Boot auto-configuration is the mechanism that automatically configures beans based on classpath dependencies and existing bean definitions. When you add @SpringBootApplication to your main class, it implicitly includes @EnableAutoConfiguration, which triggers the entire auto-configuration process. The AutoConfigurationImportSelector loads candidate configuration classes from META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports (or spring.factories in older versions).

Each auto-configuration class is evaluated against conditional annotations to determine if it should be activated. This conditional evaluation happens at startup, checking classpath contents, existing beans, and property values. The key insight is that auto-configuration provides sensible defaults that back off when you define your own beans.

flowchart TB
    subgraph BOOT["@SpringBootApplication"]
        ENABLE["@EnableAutoConfiguration"]
        IMPORT["@Import(AutoConfigurationImportSelector.class)"]
        ENABLE --> IMPORT
    end
 
    subgraph SELECTOR["AutoConfigurationImportSelector"]
        S1["1. Load META-INF/spring/...AutoConfiguration.imports"]
        S2["2. Filter by @Conditional annotations"]
        S3["3. Order by @AutoConfigureOrder, Before, After"]
        S4["4. Return matching configuration classes"]
        S1 --> S2 --> S3 --> S4
    end
 
    subgraph CONDITIONAL["Conditional Evaluation"]
        C1["@ConditionalOnClass<br/>Class exists on classpath?"]
        C2["@ConditionalOnMissingBean<br/>Bean not already defined?"]
        C3["@ConditionalOnProperty<br/>Property set to expected value?"]
        C4["@ConditionalOnWebApplication<br/>Web app context?"]
    end
 
    BOOT --> SELECTOR
    SELECTOR --> CONDITIONAL
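The "back off" behavior can be modeled in a few lines of plain Java: a default is registered only when no user-supplied bean of the same type already exists. This is a toy model of what @ConditionalOnMissingBean achieves, not Spring's actual registry:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of @ConditionalOnMissingBean: the auto-configured default is
// only registered when the user has not already supplied a bean of that type.
class BeanRegistry {
    private final Map<Class<?>, Object> beans = new HashMap<>();

    // User-defined beans register unconditionally
    <T> void register(Class<T> type, T bean) {
        beans.put(type, bean);
    }

    // Auto-configuration registers only if the slot is still empty
    <T> void registerIfMissing(Class<T> type, T defaultBean) {
        beans.putIfAbsent(type, defaultBean);
    }

    Object get(Class<?> type) {
        return beans.get(type);
    }
}

public class AutoConfigBackOffDemo {
    public static void main(String[] args) {
        BeanRegistry registry = new BeanRegistry();
        registry.register(CharSequence.class, "user-defined");      // user's bean, registered first
        registry.registerIfMissing(CharSequence.class, "default");  // auto-config backs off
        System.out.println(registry.get(CharSequence.class));       // prints user-defined
    }
}
```

The real mechanism is richer (condition evaluation order, bean definition phases), but the core contract is the same: your bean wins, the default yields.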

What does a real auto-configuration class look like?

Auto-configuration classes combine multiple conditional annotations to create sophisticated activation logic. The DataSourceAutoConfiguration class demonstrates this pattern well—it checks for required classes on the classpath, verifies that the user hasn't defined their own DataSource bean, and delegates to nested configuration classes for embedded versus pooled data sources.

The proxyBeanMethods = false setting is a performance optimization that skips CGLIB proxying when configuration methods don't call each other, reducing startup time and memory usage.

// DataSourceAutoConfiguration (simplified)
@AutoConfiguration(before = SqlInitializationAutoConfiguration.class)
@ConditionalOnClass({ DataSource.class, EmbeddedDatabaseType.class })
@ConditionalOnMissingBean(type = "io.r2dbc.spi.ConnectionFactory")
@EnableConfigurationProperties(DataSourceProperties.class)
public class DataSourceAutoConfiguration {
 
    @Configuration(proxyBeanMethods = false)
    @Conditional(EmbeddedDatabaseCondition.class)
    @ConditionalOnMissingBean({ DataSource.class, XADataSource.class })
    @Import(EmbeddedDataSourceConfiguration.class)
    protected static class EmbeddedDatabaseConfiguration {
    }
 
    @Configuration(proxyBeanMethods = false)
    @Conditional(PooledDataSourceCondition.class)
    @ConditionalOnMissingBean({ DataSource.class, XADataSource.class })
    @Import({ DataSourceConfiguration.Hikari.class,
              DataSourceConfiguration.Tomcat.class,
              DataSourceConfiguration.Dbcp2.class })
    protected static class PooledDataSourceConfiguration {
    }
}

Conditional Annotation Questions

Conditional annotations control when beans and configurations are activated.

What are the built-in @Conditional annotations in Spring Boot?

Spring Boot provides a rich set of conditional annotations that evaluate various aspects of the application context. These annotations can be combined on the same class or method—all conditions must match for the bean to be created. Understanding these conditions is essential for debugging why certain auto-configurations activate or don't activate.

The most commonly used conditions check for class presence (@ConditionalOnClass), bean absence (@ConditionalOnMissingBean), and property values (@ConditionalOnProperty). Less common but equally useful are resource conditions, expression conditions, and web application type conditions.

// Built-in conditions
@ConditionalOnClass(DataSource.class)           // Class on classpath
@ConditionalOnMissingClass("com.example.Foo")   // Class NOT on classpath
@ConditionalOnBean(DataSource.class)            // Bean exists
@ConditionalOnMissingBean(DataSource.class)     // Bean doesn't exist
@ConditionalOnProperty(                         // Property matches
    prefix = "app.feature",
    name = "enabled",
    havingValue = "true",
    matchIfMissing = false
)
@ConditionalOnResource(resources = "classpath:schema.sql")
@ConditionalOnWebApplication(type = Type.SERVLET)
@ConditionalOnExpression("${app.advanced:false} and ${app.experimental:false}")
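The matching rule for @ConditionalOnProperty is worth spelling out, since matchIfMissing is easy to misremember. A plain-Java sketch of the semantics (an illustration, not Spring's implementation; the real condition also handles relaxed binding and a missing havingValue):

```java
import java.util.Map;

public class PropertyConditionDemo {
    // Mirrors @ConditionalOnProperty semantics: if the property is absent,
    // the condition matches only when matchIfMissing is true; otherwise the
    // value must equal havingValue.
    static boolean matches(Map<String, String> env, String property,
                           String havingValue, boolean matchIfMissing) {
        String value = env.get(property);
        if (value == null) {
            return matchIfMissing;
        }
        return value.equalsIgnoreCase(havingValue);
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("app.feature.enabled", "true");
        System.out.println(matches(env, "app.feature.enabled", "true", false)); // true: value matches
        System.out.println(matches(env, "app.other.enabled", "true", false));   // false: property absent
        System.out.println(matches(env, "app.other.enabled", "true", true));    // true: matchIfMissing
    }
}
```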

How do you create a custom condition?

Custom conditions implement the Condition interface and provide arbitrary logic for determining whether a configuration should activate. This is useful when built-in conditions don't cover your requirements—for example, checking multiple environment variables, validating external service availability, or implementing feature flag logic.

The ConditionContext provides access to the environment, bean factory, class loader, and resource loader, giving you full flexibility in what you can evaluate.

// Custom condition
public class OnProductionEnvironmentCondition implements Condition {
    @Override
    public boolean matches(ConditionContext context,
                          AnnotatedTypeMetadata metadata) {
        Environment env = context.getEnvironment();
        String[] activeProfiles = env.getActiveProfiles();
        return Arrays.asList(activeProfiles).contains("production");
    }
}
 
@Configuration
@Conditional(OnProductionEnvironmentCondition.class)
public class ProductionOnlyConfiguration {
    // Only loaded in production
}

Bean Lifecycle Questions

Understanding the bean lifecycle helps with initialization logic and debugging.

What is the complete Spring bean lifecycle?

The Spring bean lifecycle consists of eight distinct phases from instantiation to destruction. After the constructor is called, Spring injects dependencies through setter methods and field injection. Then Aware interfaces are invoked, giving beans access to framework components like BeanFactory and ApplicationContext. Pre-initialization involves BeanPostProcessors and @PostConstruct methods, followed by initialization callbacks from InitializingBean or custom init methods.

Post-initialization is when AOP proxies are created, wrapping the original bean. The bean is then ready for use until the application shuts down, when destruction callbacks clean up resources.

flowchart TB
    P1["1. Instantiation<br/>• Constructor called<br/>• Dependencies injected"]
    P2["2. Population<br/>• @Autowired fields/setters<br/>• @Value properties resolved"]
    P3["3. Aware Interfaces<br/>• BeanNameAware<br/>• BeanFactoryAware<br/>• ApplicationContextAware"]
    P4["4. Pre-Initialization<br/>• BeanPostProcessor.postProcessBefore<br/>• @PostConstruct methods"]
    P5["5. Initialization<br/>• InitializingBean.afterPropertiesSet<br/>• Custom init-method"]
    P6["6. Post-Initialization<br/>• BeanPostProcessor.postProcessAfter<br/>• AOP proxies created"]
    P7["7. Ready for Use"]
    P8["8. Destruction<br/>• @PreDestroy methods<br/>• DisposableBean.destroy<br/>• Custom destroy-method"]
 
    P1 --> P2 --> P3 --> P4 --> P5 --> P6 --> P7 --> P8

How do you implement lifecycle callbacks in a Spring bean?

You can implement lifecycle callbacks through annotations (@PostConstruct, @PreDestroy), interfaces (InitializingBean, DisposableBean), or Aware interfaces for accessing framework components. The execution order is predictable: Aware methods run first, then @PostConstruct, then afterPropertiesSet(). For destruction, @PreDestroy runs before destroy().

Modern Spring applications typically prefer annotations over interfaces for cleaner code, but interfaces are useful when you need guaranteed invocation order or when working with beans that might not support annotation processing.

@Component
public class LifecycleDemoBean implements BeanNameAware, InitializingBean,
        DisposableBean, ApplicationContextAware {
 
    private String beanName;
    private ApplicationContext context;
 
    public LifecycleDemoBean() {
        System.out.println("1. Constructor");
    }
 
    @Autowired
    public void setDependency(SomeDependency dep) {
        System.out.println("2. Dependency injection");
    }
 
    @Override
    public void setBeanName(String name) {
        this.beanName = name;
        System.out.println("3. BeanNameAware: " + name);
    }
 
    @Override
    public void setApplicationContext(ApplicationContext ctx) {
        this.context = ctx;
        System.out.println("3. ApplicationContextAware");
    }
 
    @PostConstruct
    public void postConstruct() {
        System.out.println("4. @PostConstruct");
    }
 
    @Override
    public void afterPropertiesSet() {
        System.out.println("5. InitializingBean.afterPropertiesSet");
    }
 
    @PreDestroy
    public void preDestroy() {
        System.out.println("8. @PreDestroy");
    }
 
    @Override
    public void destroy() {
        System.out.println("8. DisposableBean.destroy");
    }
}

Custom Starter Questions

Creating custom starters is a senior-level skill for building reusable infrastructure.

When should you create a custom Spring Boot starter?

Custom starters are appropriate when you have configuration that needs to be reused across multiple projects. Common scenarios include company-wide standards for logging, security, and observability; integrations with internal APIs or proprietary databases; and infrastructure setup for messaging or caching systems. A starter bundles auto-configuration, dependencies, and sensible defaults into a single dependency that other projects can include.

The standard structure uses two modules: an autoconfigure module containing the configuration logic, and a starter module that pulls in the autoconfigure module plus any required dependencies. This separation allows projects to use just the autoconfigure module if they want to manage dependencies themselves.

my-company-spring-boot-starter/
├── my-company-spring-boot-autoconfigure/     # Auto-configuration module
│   ├── src/main/java/
│   │   └── com/company/autoconfigure/
│   │       ├── MyServiceAutoConfiguration.java
│   │       ├── MyServiceProperties.java
│   │       └── MyService.java
│   ├── src/main/resources/
│   │   └── META-INF/
│   │       └── spring/
│   │           └── org.springframework.boot.autoconfigure.AutoConfiguration.imports
│   └── pom.xml
│
└── my-company-spring-boot-starter/           # Starter module (dependencies only)
    └── pom.xml

How do you build an auto-configuration class for a custom starter?

Building an auto-configuration requires three components: configuration properties that bind external configuration, the service class being configured, and the auto-configuration class that wires everything together. The auto-configuration class uses conditional annotations to ensure it only activates when appropriate—typically when the service class is on the classpath, the feature is enabled, and the user hasn't defined their own bean.

The @ConditionalOnMissingBean annotation is particularly important—it allows users to override your default configuration by defining their own bean of the same type.

// 1. Configuration properties
@ConfigurationProperties(prefix = "company.service")
public class MyServiceProperties {
 
    private boolean enabled = true;
    private String endpoint = "https://api.company.com";
    private Duration timeout = Duration.ofSeconds(30);
    private RetryConfig retry = new RetryConfig();
 
    // Nested configuration
    public static class RetryConfig {
        private int maxAttempts = 3;
        private Duration backoff = Duration.ofMillis(100);
 
        // Getters and setters
    }
 
    // Getters and setters
}
 
// 2. The service being auto-configured
public class MyService {
 
    private final MyServiceProperties properties;
    private final RestClient restClient;
 
    public MyService(MyServiceProperties properties, RestClient restClient) {
        this.properties = properties;
        this.restClient = restClient;
    }
 
    public Response callApi(Request request) {
        return restClient.post()
            .uri(properties.getEndpoint())
            .body(request)
            .retrieve()
            .body(Response.class);
    }
}
 
// 3. Auto-configuration class
@AutoConfiguration
@ConditionalOnClass(MyService.class)
@ConditionalOnProperty(
    prefix = "company.service",
    name = "enabled",
    havingValue = "true",
    matchIfMissing = true
)
@EnableConfigurationProperties(MyServiceProperties.class)
public class MyServiceAutoConfiguration {
 
    @Bean
    @ConditionalOnMissingBean
    public MyService myService(MyServiceProperties properties,
                               ObjectProvider<RestClient.Builder> restClientBuilder) {
        RestClient restClient = restClientBuilder
            .getIfAvailable(RestClient::builder)
            .requestFactory(new JdkClientHttpRequestFactory())
            .build();
 
        return new MyService(properties, restClient);
    }
 
    @Bean
    @ConditionalOnMissingBean
    @ConditionalOnProperty(prefix = "company.service.retry", name = "enabled",
                          havingValue = "true", matchIfMissing = true)
    public RetryTemplate myServiceRetryTemplate(MyServiceProperties properties) {
        return RetryTemplate.builder()
            .maxAttempts(properties.getRetry().getMaxAttempts())
            .exponentialBackoff(
                properties.getRetry().getBackoff().toMillis(),
                2.0,
                30000)
            .build();
    }
}
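To sanity-check the exponentialBackoff settings above, the resulting delay sequence can be computed directly. This is a sketch of the standard formula delay(n) = min(initial * multiplier^n, max), not spring-retry's internal code:

```java
public class RetryBackoffDemo {
    // delay(n) = min(initial * multiplier^n, max) for attempt n = 0, 1, 2, ...
    static long[] delays(long initialMillis, double multiplier, long maxMillis, int attempts) {
        long[] result = new long[attempts];
        double delay = initialMillis;
        for (int i = 0; i < attempts; i++) {
            result[i] = (long) Math.min(delay, maxMillis);
            delay *= multiplier;
        }
        return result;
    }

    public static void main(String[] args) {
        // 100ms initial interval, factor 2.0, capped at 30s, as in the builder above
        for (long d : delays(100, 2.0, 30_000, 10)) {
            System.out.print(d + " ");
        }
        // 100 200 400 800 1600 3200 6400 12800 25600 30000
    }
}
```

The cap matters: without maxInterval, attempt 10 would already wait over 51 seconds.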

How do you register an auto-configuration class?

Auto-configuration classes must be registered in a special file so Spring Boot discovers them at startup. In Spring Boot 3.x, this file is located at META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports. Each line contains the fully qualified class name of an auto-configuration class.

The starter POM pulls in the autoconfigure module and any required runtime dependencies, making it a single dependency for consuming projects.

# META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports
com.company.autoconfigure.MyServiceAutoConfiguration

<!-- my-company-spring-boot-starter/pom.xml -->
<project>
    <artifactId>my-company-spring-boot-starter</artifactId>
 
    <dependencies>
        <!-- Pull in the autoconfigure module -->
        <dependency>
            <groupId>com.company</groupId>
            <artifactId>my-company-spring-boot-autoconfigure</artifactId>
            <version>${project.version}</version>
        </dependency>
 
        <!-- Required dependencies for users -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>
</project>

How do you provide IDE auto-completion for custom configuration properties?

Configuration metadata enables IDE auto-completion and documentation for your custom properties. Add the spring-boot-configuration-processor annotation processor dependency, and document properties using Javadoc comments. For additional hints like valid values or deprecation warnings, create an additional-spring-configuration-metadata.json file.

This metadata improves the developer experience significantly, making your starter feel as polished as official Spring Boot starters.

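The processor itself is a compile-time-only dependency; in Maven it is typically declared optional so it is not passed on to consumers of the starter:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>
```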
// Enable IDE auto-completion for properties
// Add spring-boot-configuration-processor dependency
 
@ConfigurationProperties(prefix = "company.service")
public class MyServiceProperties {
 
    /**
     * Enable or disable the company service integration.
     */
    private boolean enabled = true;
 
    /**
     * Base URL for the company API.
     */
    private String endpoint = "https://api.company.com";
}

// META-INF/additional-spring-configuration-metadata.json
{
  "properties": [
    {
      "name": "company.service.endpoint",
      "type": "java.lang.String",
      "description": "Base URL for the company API.",
      "defaultValue": "https://api.company.com"
    }
  ],
  "hints": [
    {
      "name": "company.service.endpoint",
      "values": [
        {
          "value": "https://api.company.com",
          "description": "Production endpoint"
        },
        {
          "value": "https://sandbox.company.com",
          "description": "Sandbox endpoint"
        }
      ]
    }
  ]
}

Configuration Properties Questions

Advanced configuration management is essential for production applications.

What is the order of property source precedence in Spring Boot?

Spring Boot loads properties from multiple sources with a defined precedence order—higher sources override lower ones. Command line arguments have the highest precedence, followed by inline JSON, servlet parameters, JNDI, system properties, environment variables, profile-specific files, application files, @PropertySource annotations, and finally default properties.

This hierarchy enables flexible configuration: use application.yml for defaults, profile-specific files for environment differences, environment variables for containerized deployments, and command line arguments for one-off overrides.

flowchart TB
    subgraph PRECEDENCE["Property Source Precedence (highest to lowest)"]
        P1["1. Command line arguments<br/>--server.port=8081"]
        P2["2. SPRING_APPLICATION_JSON<br/>(inline JSON)"]
        P3["3. ServletConfig/ServletContext<br/>parameters"]
        P4["4. JNDI attributes"]
        P5["5. Java System properties<br/>-Dserver.port=8081"]
        P6["6. OS environment variables<br/>SERVER_PORT=8081"]
        P7["7. Profile-specific properties<br/>application-{profile}.yml"]
        P8["8. Application properties<br/>application.yml"]
        P9["9. @PropertySource annotations"]
        P10["10. Default properties<br/>SpringApplication.setDefaultProps"]
    end
 
    P1 --> P2 --> P3 --> P4 --> P5
    P5 --> P6 --> P7 --> P8 --> P9 --> P10
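The "higher source wins" rule boils down to checking sources in priority order and returning the first match. A plain-Java sketch (a simplification of Spring's Environment/PropertySource resolution):

```java
import java.util.List;
import java.util.Map;

public class PrecedenceDemo {
    // Sources listed highest-precedence first; the first one containing
    // the key wins, mirroring Spring's PropertySource ordering.
    static String resolve(List<Map<String, String>> sources, String key) {
        for (Map<String, String> source : sources) {
            if (source.containsKey(key)) {
                return source.get(key);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> commandLine = Map.of("server.port", "8081");
        Map<String, String> applicationYml = Map.of(
            "server.port", "8080",
            "spring.application.name", "my-service");

        // Command line outranks application.yml
        System.out.println(resolve(List.of(commandLine, applicationYml), "server.port")); // 8081
        // Falls through to application.yml when higher sources lack the key
        System.out.println(resolve(List.of(commandLine, applicationYml),
                                   "spring.application.name")); // my-service
    }
}
```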

How do you use profile-based configuration?

Profile-based configuration allows different settings for different environments. You can create separate files like application-local.yml and application-production.yml, or use document separators within a single file. Beans can be conditionally created using @Profile annotations, and profiles can be activated programmatically based on runtime conditions.

The spring.config.activate.on-profile property in YAML indicates which profile a configuration section belongs to. Multiple profiles can be active simultaneously, with later profiles overriding earlier ones.

# application.yml - Common settings
spring:
  application:
    name: my-service
 
---
# application-local.yml
spring:
  config:
    activate:
      on-profile: local
  datasource:
    url: jdbc:h2:mem:testdb
logging:
  level:
    com.company: DEBUG
 
---
# application-production.yml
spring:
  config:
    activate:
      on-profile: production
  datasource:
    url: jdbc:postgresql://prod-db:5432/myapp
    hikari:
      maximum-pool-size: 20
logging:
  level:
    root: WARN
    com.company: INFO

// Profile-specific beans
@Configuration
public class DataSourceConfig {
 
    @Bean
    @Profile("local")
    public DataSource h2DataSource() {
        return new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.H2)
            .build();
    }
 
    @Bean
    @Profile("production")
    public DataSource productionDataSource(DataSourceProperties properties) {
        return properties.initializeDataSourceBuilder()
            .type(HikariDataSource.class)
            .build();
    }
}
 
// Programmatic profile activation
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(Application.class);
 
        if (System.getenv("KUBERNETES_SERVICE_HOST") != null) {
            app.setAdditionalProfiles("kubernetes");
        }
 
        app.run(args);
    }
}

How do you create type-safe configuration with validation?

Type-safe configuration uses @ConfigurationProperties to bind properties to strongly-typed Java objects. Adding @Validated enables JSR-303 validation annotations like @NotNull, @Min, and @Max. Nested configuration classes organize related properties, and @DurationUnit specifies the time unit for Duration properties.

This approach catches configuration errors at startup rather than runtime, provides IDE auto-completion, and makes configuration usage self-documenting through the class structure.

@ConfigurationProperties(prefix = "app.features")
@Validated
public class FeatureProperties {
 
    @NotNull
    private Map<String, FeatureFlag> flags = new HashMap<>();
 
    @Valid
    private RateLimiting rateLimiting = new RateLimiting();
 
    public static class FeatureFlag {
        private boolean enabled = false;
        private Set<String> allowedUsers = new HashSet<>();
        private LocalDateTime enabledUntil;
 
        // Getters and setters
    }
 
    public static class RateLimiting {
        @Min(1)
        @Max(10000)
        private int requestsPerMinute = 100;
 
        @DurationUnit(ChronoUnit.SECONDS)
        private Duration window = Duration.ofMinutes(1);
 
        // Getters and setters
    }
}
 
// Usage
@Service
@RequiredArgsConstructor
public class FeatureService {
    private final FeatureProperties features;
 
    public boolean isFeatureEnabled(String featureName, String userId) {
        FeatureFlag flag = features.getFlags().get(featureName);
        if (flag == null || !flag.isEnabled()) {
            return false;
        }
        if (!flag.getAllowedUsers().isEmpty() &&
            !flag.getAllowedUsers().contains(userId)) {
            return false;
        }
        if (flag.getEnabledUntil() != null &&
            LocalDateTime.now().isAfter(flag.getEnabledUntil())) {
            return false;
        }
        return true;
    }
}

Spring Cloud Config Questions

Spring Cloud provides tools for distributed systems patterns.

How do you set up Spring Cloud Config Server?

Spring Cloud Config Server provides centralized configuration management for microservices. The server can be backed by Git repositories, HashiCorp Vault, or databases, serving configuration to client applications over HTTP. Git-backed configuration provides versioning, audit trails, and the ability to use branches for different environments.

Encryption support allows sensitive properties to be stored encrypted in the repository and decrypted on the server before being sent to clients. The search-paths configuration supports placeholders like {application} for organizing configurations by service name.

# Config Server application.yml
spring:
  application:
    name: config-server
  cloud:
    config:
      server:
        git:
          uri: https://github.com/company/config-repo
          search-paths: '{application}'
          default-label: main
        encrypt:
          enabled: true
 
encrypt:
  key: ${ENCRYPT_KEY}  # Or use keystore for asymmetric
 
server:
  port: 8888

# Client application.yml
spring:
  application:
    name: order-service
  config:
    import: "configserver:http://config-server:8888"
  cloud:
    config:
      fail-fast: true
      retry:
        max-attempts: 6
        initial-interval: 1000
        multiplier: 1.5
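In the Git-backed repository, encrypted values carry a {cipher} prefix; the Config Server decrypts them with the configured key before serving them to clients. The ciphertext below is a made-up placeholder; a real value comes from POSTing the plaintext to the server's /encrypt endpoint:

```yaml
# order-service.yml in the config repo (ciphertext is illustrative)
spring:
  datasource:
    password: '{cipher}AQICAHhQp2a3...'
```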

How do you refresh configuration at runtime without restarting?

The @RefreshScope annotation marks beans that should be recreated when configuration changes. When you trigger a refresh via the /actuator/refresh endpoint, Spring destroys these beans and creates new instances with updated configuration values. For cluster-wide refresh, Spring Cloud Bus broadcasts refresh events to all service instances through a message broker.

This enables dynamic configuration changes like adjusting discount percentages or feature flags without service restarts, though you should be careful about which beans are refresh-scoped—stateful beans may lose important state on refresh.

// Refresh configuration at runtime
@RefreshScope
@Service
public class PricingService {
 
    @Value("${pricing.discount-percentage:0}")
    private double discountPercentage;
 
    public BigDecimal calculatePrice(BigDecimal basePrice) {
        BigDecimal discount = basePrice.multiply(
            BigDecimal.valueOf(discountPercentage / 100));
        return basePrice.subtract(discount);
    }
}
 
// Trigger refresh via actuator
// POST /actuator/refresh
 
// Or use Spring Cloud Bus for cluster-wide refresh
// POST /actuator/busrefresh

Service Discovery Questions

Service discovery enables dynamic microservices communication.

How do you configure Eureka for service discovery?

Eureka provides client-side service discovery where services register themselves and discover others through a central registry. The Eureka server maintains the registry and handles heartbeats from registered services. Client services use @LoadBalanced on their HTTP clients to automatically resolve service names to actual instances.

Self-preservation mode prevents Eureka from removing instances during network partitions—disable it in development but keep it enabled in production to maintain availability during temporary network issues.

# Eureka Server
spring:
  application:
    name: eureka-server
 
eureka:
  client:
    register-with-eureka: false
    fetch-registry: false
  server:
    enable-self-preservation: false  # Disable in dev
 
---
# Service Client
spring:
  application:
    name: order-service
 
eureka:
  client:
    service-url:
      defaultZone: http://eureka:8761/eureka/
  instance:
    prefer-ip-address: true
    lease-renewal-interval-in-seconds: 10

How do you create a load-balanced REST client with service discovery?

Load-balanced clients resolve service names to actual instance URLs using the service registry. The @LoadBalanced annotation on a RestClient.Builder bean enables this resolution. When you make requests using the service name as the host (like http://user-service), the load balancer intercepts the request, looks up available instances, and routes to one of them.

This abstraction means your code doesn't need to know about individual service instances or their locations—just the logical service name.

// Load-balanced RestClient
@Configuration
public class RestClientConfig {
 
    @Bean
    @LoadBalanced
    public RestClient.Builder loadBalancedRestClientBuilder() {
        return RestClient.builder();
    }
}
 
@Service
public class UserClient {
 
    private final RestClient restClient;
 
    public UserClient(RestClient.Builder builder) {
        this.restClient = builder
            .baseUrl("http://user-service")  // Service name, not URL
            .build();
    }
 
    public User getUser(Long id) {
        return restClient.get()
            .uri("/api/users/{id}", id)
            .retrieve()
            .body(User.class);
    }
}

API Gateway Questions

API gateways provide a single entry point for microservices.

How do you configure Spring Cloud Gateway routes?

Spring Cloud Gateway routes requests to backend services based on predicates like path patterns, headers, or query parameters. Routes can include filters for request/response modification, rate limiting, circuit breaking, and retry logic. The lb:// prefix indicates load-balanced routing through service discovery.

Gateway configuration is declarative in YAML, making it easy to understand the routing topology at a glance. Filters can strip path prefixes, add headers, apply rate limits, or implement custom logic.

spring:
  cloud:
    gateway:
      routes:
        - id: order-service
          uri: lb://order-service
          predicates:
            - Path=/api/orders/**
          filters:
            - StripPrefix=1
            - name: CircuitBreaker
              args:
                name: orderServiceCB
                fallbackUri: forward:/fallback/orders
            - name: Retry
              args:
                retries: 3
                statuses: BAD_GATEWAY,SERVICE_UNAVAILABLE
                methods: GET
                backoff:
                  firstBackoff: 50ms
                  maxBackoff: 500ms
                  factor: 2
 
        - id: user-service
          uri: lb://user-service
          predicates:
            - Path=/api/users/**
          filters:
            - StripPrefix=1
            - name: RequestRateLimiter
              args:
                redis-rate-limiter.replenishRate: 10
                redis-rate-limiter.burstCapacity: 20
                key-resolver: "#{@userKeyResolver}"

How do you implement a custom global filter for authentication?

Global filters apply to all routes and are useful for cross-cutting concerns like authentication, logging, or request correlation. Implement GlobalFilter and Ordered interfaces, with lower order values executing first. The filter chain is reactive, using Mono<Void> for non-blocking execution.

You can modify requests by mutating headers, validate tokens and reject unauthorized requests, or enrich requests with user context before forwarding to downstream services.

// Custom filters
@Component
public class AuthenticationFilter implements GlobalFilter, Ordered {
 
    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        String token = exchange.getRequest().getHeaders()
            .getFirst(HttpHeaders.AUTHORIZATION);
 
        if (token == null || !token.startsWith("Bearer ")) {
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }
 
        // Validate token and add user info to headers
        String userId = validateAndExtractUserId(token);
        ServerHttpRequest modifiedRequest = exchange.getRequest().mutate()
            .header("X-User-Id", userId)
            .build();
 
        return chain.filter(exchange.mutate().request(modifiedRequest).build());
    }

    // Stub for the example: a real implementation would verify the JWT
    // signature and expiry, then return its subject claim
    private String validateAndExtractUserId(String token) {
        return token.substring("Bearer ".length());
    }
 
    @Override
    public int getOrder() {
        return -100;  // Run early
    }
}

Distributed Tracing Questions

Distributed tracing tracks requests across service boundaries.

How do you configure distributed tracing with Spring Boot?

Spring Boot 3.x uses Micrometer Tracing for distributed tracing, supporting backends like Zipkin, Jaeger, and OTLP. Configuration involves setting the sampling probability (1.0 samples all requests, lower values reduce overhead in production) and the exporter endpoint. Trace context propagates automatically through instrumented HTTP clients, message brokers, and async methods.

The sampling probability should be tuned based on traffic volume—high-traffic services might sample only 1-10% of requests to manage storage costs while still getting representative traces.

# application.yml
spring:
  application:
    name: order-service
 
management:
  tracing:
    sampling:
      probability: 1.0  # Sample all requests (reduce in production)
  zipkin:
    tracing:
      endpoint: http://zipkin:9411/api/v2/spans
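The YAML above only takes effect if a tracing bridge and a span exporter are on the classpath. With Maven and Spring Boot 3.x dependency management, the Zipkin combination documented in the Spring Boot reference is:

```xml
<!-- Bridges Micrometer's Observation API to the Brave tracer -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
<!-- Reports finished spans to the Zipkin endpoint configured above -->
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter-brave</artifactId>
</dependency>
```

Swap the bridge and reporter artifacts accordingly if you target OpenTelemetry/OTLP instead of Brave/Zipkin.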

How do you create custom spans for detailed tracing?

While automatic instrumentation covers most cases, custom spans provide visibility into specific business operations. Create spans using the Tracer API, adding tags for searchable attributes and events for notable occurrences. Always use try-with-resources or finally blocks to ensure spans are properly closed, even when exceptions occur.

Tags should be low-cardinality (don't use user IDs or order IDs as tag values—use them sparingly or as span fields instead) to keep tracing backend storage manageable.

// Traces propagate automatically through:
// - RestClient/WebClient (with instrumentation)
// - Kafka (with tracing headers)
// - @Async methods
 
// Manual span creation
@Service
@RequiredArgsConstructor
public class OrderService {
 
    private final Tracer tracer;
 
    public Order processOrder(Order order) {
        Span span = tracer.nextSpan().name("process-order").start();
        try (Tracer.SpanInScope ws = tracer.withSpan(span)) {
            span.tag("order.id", order.getId().toString());
            span.tag("order.amount", order.getAmount().toString());
 
            // Processing logic
            validateOrder(order);
            reserveInventory(order);
            processPayment(order);
 
            span.event("order-completed");
            return order;
        } catch (Exception e) {
            span.error(e);
            throw e;
        } finally {
            span.end();
        }
    }
}

WebFlux Questions

WebFlux enables non-blocking, reactive applications.

What is the difference between Spring MVC and Spring WebFlux?

Spring MVC uses a blocking, thread-per-request model where each incoming request occupies a thread from the pool until the response is sent. If you have 200 threads and 200 concurrent requests that each take 1 second, new requests must wait. Spring WebFlux uses a non-blocking, reactive model with event loops—a small number of threads (typically equal to CPU cores) handle thousands of concurrent requests by never blocking.

The key difference is in how I/O is handled. MVC threads block while waiting for database queries or HTTP calls to complete. WebFlux threads register callbacks and move on to handle other requests, getting notified when I/O completes. This makes WebFlux ideal for I/O-bound workloads with many concurrent connections.

flowchart LR
    subgraph MVC["Spring MVC (Blocking)"]
        direction LR
        REQ1["Request"] --> THREAD["Thread<br/>(blocked, waits)"]
        THREAD --> DB1["Database<br/>Query"]
        DB1 --> THREAD
        THREAD --> RES1["Response"]
    end
 
    subgraph WEBFLUX["Spring WebFlux (Non-Blocking)"]
        direction LR
        R1["Request 1"]
        R2["Request 2"]
        R3["Request N..."]
        LOOP["Event Loop<br/>(few threads)"]
        DB2["Database<br/>(async)"]
 
        R1 --> LOOP
        R2 --> LOOP
        R3 --> LOOP
        LOOP --> DB2
        DB2 --> LOOP
    end

MVC: Thread pool of 200 threads = 200 concurrent requests max

WebFlux: Few threads can handle thousands of concurrent requests
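The thread-occupancy difference can be sketched with plain JDK primitives (no Spring involved): the blocking variant parks the calling thread for the full I/O wait, while the CompletableFuture variant lets the caller continue and collect the result later. The sleep stands in for a database or HTTP call.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class BlockingVsNonBlocking {

    // Blocking style: the calling thread waits for the "database" result.
    static String fetchBlocking() throws Exception {
        TimeUnit.MILLISECONDS.sleep(50);   // thread is parked here, doing nothing
        return "order-42";
    }

    // Non-blocking style: the caller registers a continuation and returns
    // immediately; a worker thread completes the future later.
    static CompletableFuture<String> fetchAsync() {
        return CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "order-42";
        });
    }

    public static void main(String[] args) throws Exception {
        String blocking = fetchBlocking();            // caller thread blocked ~50ms

        CompletableFuture<String> async = fetchAsync();
        // The caller is free to do other work here while the fetch runs.
        String nonBlocking = async.thenApply(String::toUpperCase).join();

        System.out.println(blocking + " / " + nonBlocking);
    }
}
```

WebFlux generalizes this idea: instead of one CompletableFuture per call, an event loop multiplexes thousands of such pending operations over a handful of threads.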

How do you write reactive controllers in WebFlux?

Reactive controllers return Mono<T> for single values and Flux<T> for multiple values. The reactive types are lazy—processing doesn't start until something subscribes. WebFlux handles subscription automatically when returning from controller methods. For streaming responses, use MediaType.TEXT_EVENT_STREAM_VALUE to send data as Server-Sent Events.

Error handling uses reactive operators like switchIfEmpty, onErrorResume, and timeout. These compose into a declarative pipeline that describes how to handle various scenarios without nested try-catch blocks.

@RestController
@RequestMapping("/api/orders")
@RequiredArgsConstructor
@Slf4j
public class OrderController {
 
    private final OrderService orderService;
 
    // Return Mono for single value
    @GetMapping("/{id}")
    public Mono<Order> getOrder(@PathVariable String id) {
        return orderService.findById(id);
    }
 
    // Return Flux for multiple values (streaming)
    @GetMapping(produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Order> streamOrders() {
        return orderService.findAll()
            .delayElements(Duration.ofMillis(100));  // Simulate streaming
    }
 
    // Reactive request body
    @PostMapping
    public Mono<Order> createOrder(@RequestBody Mono<CreateOrderRequest> request) {
        return request
            .flatMap(orderService::create)
            .doOnSuccess(order -> log.info("Created order: {}", order.getId()));
    }
 
    // Error handling
    @GetMapping("/{id}/details")
    public Mono<OrderDetails> getOrderDetails(@PathVariable String id) {
        return orderService.findById(id)
            .switchIfEmpty(Mono.error(new OrderNotFoundException(id)))
            .flatMap(this::enrichWithDetails)
            .timeout(Duration.ofSeconds(5))
            .onErrorResume(TimeoutException.class,
                e -> Mono.error(new ServiceUnavailableException("Timeout")));
    }
}

How do you use R2DBC for reactive database access?

R2DBC provides non-blocking database access, essential for fully reactive applications. Traditional JDBC blocks threads waiting for database responses, negating WebFlux benefits. R2DBC repositories extend ReactiveCrudRepository and return Mono and Flux types. For transactions, TransactionalOperator offers programmatic control; the declarative @Transactional annotation also works with reactive return types.

The reactive transaction wraps the entire pipeline, ensuring atomicity across multiple database operations without blocking threads.

// Repository
public interface OrderRepository extends ReactiveCrudRepository<Order, String> {
 
    Flux<Order> findByCustomerId(String customerId);
 
    @Query("SELECT * FROM orders WHERE status = :status ORDER BY created_at DESC LIMIT :limit")
    Flux<Order> findRecentByStatus(String status, int limit);
}
 
// Service with transactions
@Service
@RequiredArgsConstructor
public class OrderService {
 
    private final OrderRepository orderRepository;
    private final TransactionalOperator transactionalOperator;
 
    public Mono<Order> createOrder(CreateOrderRequest request) {
        return Mono.just(request)
            .map(this::mapToOrder)
            .flatMap(orderRepository::save)
            .flatMap(this::reserveInventory)
            .as(transactionalOperator::transactional);  // Reactive transaction
    }
}
 
// Configuration
@Configuration
@EnableR2dbcRepositories
public class R2dbcConfig extends AbstractR2dbcConfiguration {
 
    @Override
    @Bean
    public ConnectionFactory connectionFactory() {
        // DRIVER, HOST, etc. are static imports from ConnectionFactoryOptions
        return ConnectionFactories.get(ConnectionFactoryOptions.builder()
            .option(DRIVER, "postgresql")
            .option(HOST, "localhost")
            .option(PORT, 5432)
            .option(DATABASE, "orders")
            .option(USER, "user")
            .option(PASSWORD, "password")
            .build());
    }
}

How do you use WebClient for reactive HTTP calls?

WebClient is the reactive alternative to RestTemplate, providing non-blocking HTTP requests with a fluent API. Cross-cutting concerns like logging, authentication, or metrics are added as exchange filters (ExchangeFilterFunction). Response handling uses reactive operators—onStatus for error mapping, retryWhen for resilient retries with backoff.

For parallel calls, use Mono.zip to execute multiple requests concurrently and combine results. This is significantly more efficient than sequential blocking calls.

@Service
@Slf4j
public class ExternalApiClient {
 
    private final WebClient webClient;
 
    public ExternalApiClient(WebClient.Builder builder) {
        this.webClient = builder
            .baseUrl("https://api.external.com")
            .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
            .filter(ExchangeFilterFunction.ofRequestProcessor(request -> {
                log.debug("Request: {} {}", request.method(), request.url());
                return Mono.just(request);
            }))
            .build();
    }
 
    public Mono<ExternalData> fetchData(String id) {
        return webClient.get()
            .uri("/data/{id}", id)
            .retrieve()
            .onStatus(HttpStatusCode::is4xxClientError,
                response -> Mono.error(new ClientException("Client error")))
            .onStatus(HttpStatusCode::is5xxServerError,
                response -> Mono.error(new ServerException("Server error")))
            .bodyToMono(ExternalData.class)
            .retryWhen(Retry.backoff(3, Duration.ofMillis(100))
                .filter(e -> e instanceof ServerException));
    }
 
    // Parallel calls
    public Mono<AggregatedData> fetchAggregated(String userId) {
        Mono<UserProfile> profileMono = fetchUserProfile(userId);
        Mono<List<Order>> ordersMono = fetchUserOrders(userId).collectList();
        Mono<Preferences> prefsMono = fetchPreferences(userId);
 
        return Mono.zip(profileMono, ordersMono, prefsMono)
            .map(tuple -> new AggregatedData(
                tuple.getT1(),
                tuple.getT2(),
                tuple.getT3()
            ));
    }
}
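The concurrency behind Mono.zip can be sketched with the JDK alone—this is an analogy using CompletableFuture, not WebFlux itself: all three "calls" start concurrently, and the combination completes only when every input has.

```java
import java.util.concurrent.CompletableFuture;

public class ZipAnalogy {
    public static void main(String[] args) {
        // Each supplyAsync starts immediately on the common pool
        CompletableFuture<String> profile = CompletableFuture.supplyAsync(() -> "profile");
        CompletableFuture<String> orders  = CompletableFuture.supplyAsync(() -> "orders");
        CompletableFuture<String> prefs   = CompletableFuture.supplyAsync(() -> "prefs");

        // All three run concurrently; the combination waits for all of them,
        // just as Mono.zip emits once every source Mono has emitted
        String aggregated = profile
            .thenCombine(orders, (p, o) -> p + "," + o)
            .thenCombine(prefs, (po, pr) -> po + "," + pr)
            .join();

        System.out.println(aggregated);
    }
}
```

The reactive version adds lazy subscription and backpressure on top, but the fan-out/fan-in shape is the same.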

Production Actuator Questions

Production applications require robust monitoring and health checks.

How do you configure Actuator endpoints for production?

Production Actuator configuration balances visibility with security. Expose only necessary endpoints, secure sensitive ones behind authentication, and configure health indicators for Kubernetes probes. The show-details setting controls whether health check details are visible—use when_authorized to require authentication for detailed health information.

Custom base paths like /management separate operational endpoints from application endpoints, and disk space thresholds alert you before storage issues cause failures.

management:
  endpoints:
    web:
      exposure:
        include: health,info,metrics,prometheus
      base-path: /management
  endpoint:
    health:
      show-details: when_authorized
      probes:
        enabled: true  # Kubernetes probes
  health:
    diskspace:
      threshold: 10GB
  info:
    env:
      enabled: true
    git:
      mode: full
 
# Custom info
info:
  app:
    name: ${spring.application.name}
    version: @project.version@
    encoding: @project.build.sourceEncoding@

How do you create custom health indicators?

Custom health indicators check business-critical dependencies that aren't covered by auto-configured indicators. Implement HealthIndicator and return Health.up() or Health.down() with descriptive details. Health indicators run on every health check request, so keep them fast and handle timeouts appropriately.

Include relevant details like latency measurements and error reasons to aid troubleshooting when health checks fail.

// Custom health indicator
@Component
@RequiredArgsConstructor
public class PaymentGatewayHealthIndicator implements HealthIndicator {
 
    private final PaymentGatewayClient client;
 
    @Override
    public Health health() {
        try {
            HealthCheckResponse response = client.healthCheck();
            if (response.isHealthy()) {
                return Health.up()
                    .withDetail("gateway", "Payment gateway is responsive")
                    .withDetail("latency", response.getLatencyMs() + "ms")
                    .build();
            } else {
                return Health.down()
                    .withDetail("gateway", "Payment gateway reports unhealthy")
                    .withDetail("reason", response.getReason())
                    .build();
            }
        } catch (Exception e) {
            return Health.down()
                .withDetail("gateway", "Cannot reach payment gateway")
                .withException(e)
                .build();
        }
    }
}
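With this indicator registered, the health endpoint exposes it under a key derived from the bean name (the HealthIndicator suffix is stripped). Assuming the /management base path from the earlier Actuator configuration and an illustrative latency value, a healthy response would look roughly like:

```json
{
  "status": "UP",
  "components": {
    "paymentGateway": {
      "status": "UP",
      "details": {
        "gateway": "Payment gateway is responsive",
        "latency": "42ms"
      }
    }
  }
}
```

Details only appear when show-details permits it for the caller (when_authorized requires authentication).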

How do you add custom business metrics?

Custom metrics track business KPIs beyond technical metrics. Use MeterRegistry to create counters for events, gauges for current values, and timers for durations. Tag metrics with dimensions like order type and region for detailed analysis.

Register gauges in @PostConstruct to track values that change over time, like active order count. Use counters for monotonically increasing values and timers for measuring operation durations with automatic histogram generation.

// Custom metrics
@Component
@RequiredArgsConstructor
public class OrderMetrics {
 
    private final MeterRegistry registry;
    private final AtomicLong activeOrders = new AtomicLong(0);
 
    @PostConstruct
    public void init() {
        Gauge.builder("orders.active", activeOrders, AtomicLong::get)
            .description("Number of orders being processed")
            .register(registry);
    }
 
    public void recordOrderCreated(Order order) {
        registry.counter("orders.created",
            "type", order.getType().name(),
            "region", order.getRegion()
        ).increment();
 
        activeOrders.incrementAndGet();
    }
 
    public void recordOrderCompleted(Order order, long processingTimeMs) {
        registry.timer("orders.processing.time",
            "type", order.getType().name(),
            "status", order.getStatus().name()
        ).record(Duration.ofMillis(processingTimeMs));
 
        activeOrders.decrementAndGet();
    }
}

How do you implement graceful shutdown?

Graceful shutdown allows in-flight requests to complete before the application terminates. Enable it with server.shutdown=graceful and configure a timeout. For complex cleanup, implement SmartLifecycle to control shutdown order—beans with higher phase values shut down first.

The shutdown handler should stop accepting new work, wait for in-flight operations to complete, and then signal readiness for container termination. This prevents request failures during deployments.

server:
  shutdown: graceful
 
spring:
  lifecycle:
    timeout-per-shutdown-phase: 30s

@Component
@RequiredArgsConstructor
@Slf4j
public class GracefulShutdownHandler implements SmartLifecycle {
 
    private final OrderProcessor orderProcessor;
    private volatile boolean running = false;  // read/written from different lifecycle threads
 
    @Override
    public void start() {
        running = true;
    }
 
    @Override
    public void stop(Runnable callback) {
        log.info("Initiating graceful shutdown...");
 
        // Stop accepting new work
        orderProcessor.stopAcceptingOrders();
 
        // Wait for in-flight orders to complete
        try {
            orderProcessor.awaitCompletion(Duration.ofSeconds(25));
            log.info("All orders processed, shutting down");
        } catch (InterruptedException e) {
            log.warn("Shutdown interrupted, some orders may be incomplete");
            Thread.currentThread().interrupt();
        }
 
        running = false;
        callback.run();
    }
 
    @Override
    public boolean isRunning() {
        return running;
    }
 
    @Override
    public int getPhase() {
        return Integer.MAX_VALUE;  // Highest phase: starts last, stops first
    }
}

Performance Tuning Questions

Production applications require careful tuning for performance and scalability.

How do you configure thread pools for high throughput?

Thread pool configuration significantly impacts application performance. Tomcat's thread pool handles incoming requests—size it based on expected concurrency and available CPU cores. Async task executors handle @Async methods and should be sized separately from the request handling pool.

The CallerRunsPolicy rejection handler provides backpressure—when the queue is full, the calling thread executes the task, naturally slowing down producers. Configure graceful shutdown to wait for tasks to complete.

server:
  tomcat:
    threads:
      max: 200
      min-spare: 20
    accept-count: 100
    connection-timeout: 10s
 
spring:
  task:
    execution:
      pool:
        core-size: 8
        max-size: 50
        queue-capacity: 100
        keep-alive: 60s
      thread-name-prefix: async-
    scheduling:
      pool:
        size: 5
      thread-name-prefix: scheduled-

// Custom async executor
@Configuration
@EnableAsync
@Slf4j
public class AsyncConfig implements AsyncConfigurer {
 
    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(50);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("custom-async-");
        executor.setRejectedExecutionHandler(new CallerRunsPolicy());
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.setAwaitTerminationSeconds(30);
        executor.initialize();
        return executor;
    }
 
    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return (throwable, method, params) -> {
            log.error("Async method {} threw exception: {}",
                method.getName(), throwable.getMessage(), throwable);
        };
    }
}
 
// Usage
@Service
@RequiredArgsConstructor
public class NotificationService {
 
    private final EmailService emailService;
    private final PushService pushService;
 
    @Async
    public CompletableFuture<Void> sendNotificationAsync(Notification notification) {
        // Already runs on the configured async executor, so do the work
        // directly; wrapping it in CompletableFuture.runAsync would
        // dispatch it again to the common ForkJoinPool
        emailService.send(notification);
        pushService.send(notification);
        return CompletableFuture.completedFuture(null);
    }
}
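The backpressure effect of CallerRunsPolicy is easy to demonstrate with the JDK's ThreadPoolExecutor alone, independent of Spring: with one worker and a one-slot queue, the third submission overflows and executes on the submitting thread, which naturally throttles the producer.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CallerRunsDemo {
    public static void main(String[] args) throws Exception {
        AtomicInteger ranOnCaller = new AtomicInteger();
        String mainThread = Thread.currentThread().getName();

        // Tiny pool: 1 worker, queue of 1 — the third task cannot be accepted
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(1),
            new ThreadPoolExecutor.CallerRunsPolicy());

        Runnable task = () -> {
            if (Thread.currentThread().getName().equals(mainThread)) {
                ranOnCaller.incrementAndGet();   // executed by the submitter itself
            }
            try {
                TimeUnit.MILLISECONDS.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        for (int i = 0; i < 3; i++) {
            pool.execute(task);   // 1st → worker, 2nd → queue, 3rd → caller runs
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        System.out.println("tasks run on caller: " + ranOnCaller.get());
    }
}
```

While the caller is busy running the rejected task, it cannot submit more work—that pause is the backpressure.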

How do you tune HikariCP connection pools?

Connection pool sizing directly affects database performance. Too few connections cause request queuing; too many waste resources and can overwhelm the database. HikariCP's leak detection helps identify connections that aren't being returned to the pool. The max-lifetime setting ensures connections are recycled before database-side timeouts close them unexpectedly.

Monitor pool metrics to understand actual usage patterns—idle connections consuming memory, threads waiting for connections, and connection acquisition times.

spring:
  datasource:
    hikari:
      maximum-pool-size: 20
      minimum-idle: 5
      connection-timeout: 30000
      idle-timeout: 600000
      max-lifetime: 1800000
      leak-detection-threshold: 60000
      pool-name: OrderServicePool

// Monitor connection pool
@Component
@RequiredArgsConstructor
public class ConnectionPoolMonitor {
 
    private final HikariDataSource dataSource;
    private final MeterRegistry registry;
 
    @PostConstruct
    public void registerPoolMetrics() {
        // Gauges are polled by the registry on demand, so register them
        // once at startup rather than re-registering on a schedule
        HikariPoolMXBean pool = dataSource.getHikariPoolMXBean();
 
        registry.gauge("hikari.connections.active",
            pool, HikariPoolMXBean::getActiveConnections);
        registry.gauge("hikari.connections.idle",
            pool, HikariPoolMXBean::getIdleConnections);
        registry.gauge("hikari.connections.pending",
            pool, HikariPoolMXBean::getThreadsAwaitingConnection);
    }
}
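As a concrete starting point for sizing, the HikariCP wiki's rule of thumb—connections ≈ (core count × 2) + effective spindle count—can be expressed directly. The inputs here are illustrative; measure under real load before settling on a value.

```java
public class PoolSizing {

    // HikariCP wiki rule of thumb: pool size = (cores * 2) + effective spindles.
    // For SSD-backed databases the spindle term is typically small (~1).
    static int recommendedPoolSize(int coreCount, int effectiveSpindles) {
        return coreCount * 2 + effectiveSpindles;
    }

    public static void main(String[] args) {
        // e.g. an 8-core database host with SSD storage
        System.out.println(recommendedPoolSize(8, 1));
    }
}
```

Note the formula sizes against the database server's cores, not the application's, and a fixed-size pool (minimum-idle = maximum-pool-size) is often simpler to reason about than an elastic one.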

What JVM settings should you use for production Spring Boot applications?

JVM tuning starts with garbage collector selection. G1GC provides balanced throughput and latency for most workloads. ZGC (Java 21+) offers sub-millisecond pause times for latency-sensitive applications. Setting -Xms equal to -Xmx prevents heap resizing overhead during runtime.

Enable heap dumps on OutOfMemoryError for post-mortem analysis, and configure GC logging for performance troubleshooting. Metaspace sizing prevents class metadata from causing unexpected OOM errors.

# Production JVM settings
java -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=200 \
     -XX:+UseStringDeduplication \
     -Xms2g -Xmx2g \
     -XX:MetaspaceSize=256m \
     -XX:MaxMetaspaceSize=512m \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/log/app/heapdump.hprof \
     -Xlog:gc*:file=/var/log/app/gc.log:time,uptime:filecount=5,filesize=100m \
     -jar app.jar
 
# For low-latency (Java 21+)
java -XX:+UseZGC \
     -XX:+ZGenerational \
     -Xms4g -Xmx4g \
     -jar app.jar

Quick Reference

| Topic | Key Points |
|-------|------------|
| Auto-configuration | spring.factories/imports, @Conditional, ordering |
| Custom starters | Two-module structure, ConfigurationProperties, metadata |
| Bean lifecycle | Instantiation → Population → Aware → Init → Destroy |
| Config precedence | CLI args > env vars > profile properties > application.yml |
| Spring Cloud Config | Config server, @RefreshScope, encryption |
| Service Discovery | Eureka, @LoadBalanced, health checks |
| Gateway | Routes, filters, rate limiting, circuit breakers |
| WebFlux vs MVC | Non-blocking vs blocking, Mono/Flux, backpressure |
| Actuator | Health indicators, custom metrics, securing endpoints |
| Performance | Thread pools, connection pools, async processing |
