Java 24 shipped with 24 JEPs—the most of any release since the six-month cadence began—yet most developers stopped tracking Java releases after version 8 or 11. The result is a widening knowledge gap between "Java developers" and developers who actually know modern Java. Stream Gatherers, virtual threads without pinning, quantum-resistant cryptography—these aren't experimental features anymore. They're production-ready capabilities that interviewers expect senior developers to know.
This guide covers the Java 24 features that actually appear in technical interviews, from Stream Gatherers to AOT Class Loading to the new quantum-resistant cryptography APIs.
Table of Contents
- Java 24 Overview Questions
- Stream Gatherers Questions
- Virtual Threads Questions
- AOT Class Loading Questions
- Pattern Matching Questions
- Class-File API Questions
- JVM Performance Questions
- Quantum Cryptography Questions
- Security and Deprecation Questions
Java 24 Overview Questions
Understanding the big picture of Java 24 helps frame answers to more specific questions.
What are the major features in Java 24?
Java 24 shipped March 18, 2025 with 24 JEPs—a record number. The headline features are Stream Gatherers (JEP 485) for custom stream operations, Virtual Threads without pinning (JEP 491) for true scalability, and Ahead-of-Time Class Loading (JEP 483) for dramatically faster startup.
Pattern Matching now works with primitive types (JEP 488). The Security Manager is permanently disabled (JEP 486). And Java added quantum-resistant cryptography (JEPs 496 and 497) to prepare for post-quantum security threats.
The theme across these features is maturity—Java 24 finalizes capabilities that were previewed over multiple releases while delivering significant performance optimizations.
Is Java 24 an LTS release?
Java 24 is not an LTS release—that comes with Java 25 in September 2025. Java 24 bridges the gap between Java 21 LTS and the upcoming Java 25 LTS, finalizing features that have been previewed across multiple releases. Many features previewed in Java 24—Scoped Values, Structured Concurrency, Flexible Constructor Bodies—will likely become final in Java 25.
Stream Gatherers Questions
Stream Gatherers are one of the most significant additions to the Stream API since Java 8.
What are Stream Gatherers and why were they added?
Stream Gatherers (JEP 485) are custom intermediate operations for streams—not terminal operations like collectors. They address a fundamental limitation in the original Stream API: you couldn't write your own intermediate operations. The built-in ones—map, filter, flatMap, distinct, sorted—had to cover all your needs, or you'd break out of the stream pipeline entirely.
Gatherers enable three capabilities that weren't possible before: stateful transformations where each element's processing depends on previous elements, one-to-many or many-to-one mappings within the stream, and short-circuiting based on arbitrary conditions.
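Java 24 also ships ready-made gatherers in java.util.stream.Gatherers that demonstrate these capabilities; a quick sketch (the class name is just for illustration):

```java
import java.util.List;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class GathererDemo {
    public static void main(String[] args) {
        // windowSliding: a stateful many-to-one intermediate operation
        List<List<Integer>> windows = Stream.of(1, 2, 3, 4, 5)
                .gather(Gatherers.windowSliding(3))
                .toList();
        System.out.println(windows); // [[1, 2, 3], [2, 3, 4], [3, 4, 5]]

        // scan: running state carried across elements (prefix sums here)
        List<Integer> runningSum = Stream.of(1, 2, 3, 4)
                .gather(Gatherers.scan(() -> 0, Integer::sum))
                .toList();
        System.out.println(runningSum); // [1, 3, 6, 10]
    }
}
```

Both operations are intermediate, so you can keep chaining filter, map, or further gather calls after them.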
How do you implement a custom Stream Gatherer?
Let's say you need to group consecutive elements by some condition—a "runs" operation. Before Java 24, you'd break out of the stream:
// Before Java 24: Break out of stream, process manually
List<Integer> numbers = List.of(1, 1, 2, 2, 2, 3, 1, 1);
List<List<Integer>> runs = new ArrayList<>();
List<Integer> currentRun = new ArrayList<>();
Integer lastValue = null;
for (Integer num : numbers) {
    if (lastValue != null && !num.equals(lastValue)) {
        runs.add(currentRun);
        currentRun = new ArrayList<>();
    }
    currentRun.add(num);
    lastValue = num;
}
if (!currentRun.isEmpty()) {
    runs.add(currentRun);
}
// Result: [[1,1], [2,2,2], [3], [1,1]]
With gatherers, you can implement this as a proper intermediate operation:
// Java 24: Custom gatherer for consecutive runs
Gatherer<Integer, ?, List<Integer>> consecutiveRuns() {
    return Gatherer.ofSequential(
        // Initializer: create state holder
        () -> new Object() {
            List<Integer> currentRun = new ArrayList<>();
            Integer lastValue = null;
        },
        // Integrator: process each element
        (state, element, downstream) -> {
            if (state.lastValue != null && !element.equals(state.lastValue)) {
                downstream.push(state.currentRun);
                state.currentRun = new ArrayList<>();
            }
            state.currentRun.add(element);
            state.lastValue = element;
            return true; // Continue processing
        },
        // Finisher: emit final run
        (state, downstream) -> {
            if (!state.currentRun.isEmpty()) {
                downstream.push(state.currentRun);
            }
        }
    );
}
// Usage in stream pipeline
List<List<Integer>> runs = numbers.stream()
    .gather(consecutiveRuns())
    .toList();
The gatherer integrates cleanly with other stream operations. You can chain it with filters, maps, and other gatherers. It can also participate in parallel execution if you build it with Gatherer.of() and supply a combiner instead of using ofSequential().
When would you use a Gatherer vs a Collector?
Gatherers produce intermediate results within the stream; collectors produce terminal results. Use gatherers when you need to continue processing after the transformation. Use collectors when you're aggregating to a final result.
For example, if you want to group consecutive elements and then filter those groups, use a gatherer. If you want to group elements by a key and return a Map, use a collector.
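To make the contrast concrete, here's a small sketch using the built-in Gatherers.windowFixed and Collectors.groupingBy (the class name is illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class GathererVsCollector {
    public static void main(String[] args) {
        // Gatherer: intermediate - group into fixed windows, then KEEP streaming
        List<List<Integer>> bigWindows = Stream.of(1, 2, 3, 4, 5, 6)
                .gather(Gatherers.windowFixed(2))   // [[1,2],[3,4],[5,6]]
                .filter(w -> w.stream().mapToInt(Integer::intValue).sum() > 5)
                .toList();
        System.out.println(bigWindows); // [[3, 4], [5, 6]]

        // Collector: terminal - aggregate to a final Map, pipeline ends here
        Map<Integer, List<Integer>> byParity = Stream.of(1, 2, 3, 4, 5, 6)
                .collect(Collectors.groupingBy(n -> n % 2));
        System.out.println(byParity); // {0=[2, 4, 6], 1=[1, 3, 5]}
    }
}
```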
Virtual Threads Questions
Virtual threads are transforming how Java handles concurrency at scale.
How did Java 24 fix Virtual Thread pinning?
Virtual threads, introduced in Java 21, are lightweight threads managed by the JVM rather than the operating system. They're designed to scale to millions of concurrent tasks. But they had a critical limitation: when a virtual thread executed code inside a synchronized block, it would "pin" to its carrier thread, preventing other virtual threads from using that carrier.
The problem is architectural. When a virtual thread blocks—say, on I/O—it should unmount from its carrier thread so another virtual thread can run. But synchronized blocks in the JVM use monitor locks tied to the carrier thread's stack. If a virtual thread unmounts while holding a monitor lock, another virtual thread mounting on that carrier could see inconsistent state.
Java 24's JEP 491 restructures how virtual threads interact with monitors. Instead of using carrier-thread-specific locking, the JVM now tracks monitors in a way that allows virtual threads to unmount during blocking operations even while holding synchronized locks.
What code patterns caused Virtual Thread pinning before Java 24?
The classic pinning pattern was synchronized blocks combined with blocking operations:
// Before Java 24: This pattern caused pinning
public class ConnectionPool {
    private final List<Connection> available = new ArrayList<>();

    public synchronized Connection acquire() throws InterruptedException {
        while (available.isEmpty()) {
            wait(); // Virtual thread PINS here!
        }
        return available.remove(0);
    }

    public synchronized void release(Connection conn) {
        available.add(conn);
        notify();
    }
}
With Java 21-23, a virtual thread calling acquire() would pin to its carrier while waiting for a connection. If you had 10,000 virtual threads and only 10 carrier threads, you'd quickly exhaust carriers—defeating the purpose of virtual threads entirely.
// Java 24: Same code, no pinning
// Virtual threads now properly unmount during wait()
// even though they're inside a synchronized block
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 100_000).forEach(i -> {
        executor.submit(() -> {
            Connection conn = pool.acquire(); // No pinning!
            try {
                // Use connection
            } finally {
                pool.release(conn);
            }
            return null; // Callable lambda, so acquire() may throw InterruptedException
        });
    });
}
The fix is transparent—existing code benefits automatically without changes.
Should you still prefer ReentrantLock over synchronized with Virtual Threads?
With Java 24, synchronized is fine for virtual threads. The main reasons to prefer ReentrantLock are features like tryLock(), interruptible locking, and condition variables—not pinning avoidance. If you don't need those features, synchronized is simpler and now works correctly with virtual threads.
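For instance, tryLock() lets a thread back off instead of blocking indefinitely, which synchronized cannot express; a minimal sketch (class name illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        // Attempt to acquire for at most 100ms, then give up gracefully.
        // synchronized offers no equivalent: it blocks until the monitor is free.
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("Got the lock");
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("Lock busy, doing something else");
        }
    }
}
```

ReentrantLock also provides lockInterruptibly() and multiple Condition objects per lock, neither of which synchronized/wait/notify can replicate.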
AOT Class Loading Questions
AOT Class Loading addresses one of Java's long-standing complaints: slow startup times.
What is Ahead-of-Time Class Loading and how does it work?
AOT Class Loading (JEP 483) is part of Project Leyden, Java's initiative to improve startup time and reduce resource consumption. It addresses a fundamental overhead in Java: every time an application starts, the JVM must load, link, and initialize classes from scratch. For large applications with thousands of classes, this takes significant time.
The solution works in two phases. First, you run your application with a special flag that records which classes are loaded and in what order. The JVM captures this information along with the linked state of those classes—resolved references, verified bytecode, prepared class structures—and stores it in a cache file.
On subsequent runs, the JVM loads this cache and instantly has all those classes in a loaded-and-linked state. It skips the disk I/O to read class files, the parsing of class file format, the verification passes, and the resolution of symbolic references.
# Training run: record class loading behavior
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
     -jar myapplication.jar
# The application runs normally, but the JVM records
# which classes are loaded, in what order, how they link

# Create the AOT cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -jar myapplication.jar

# Subsequent runs: load the cached classes
java -XX:AOTCache=app.aot -jar myapplication.jar
# Classes are instantly available - up to 42% faster startup
What are the limitations of AOT Class Loading?
There are important caveats to understand. The cache must match the exact classes your application uses—if dependencies change, you need a new training run. The cache is JVM-version-specific. And the improvement is most dramatic for applications with many classes and short runtimes (CLI tools, serverless functions) rather than long-running servers where startup is amortized.
How does AOT Class Loading compare to GraalVM native image?
AOT class loading keeps the full JVM—you retain dynamic class loading, reflection, and JIT compilation. Native image compiles to a standalone binary with ahead-of-time compilation, trading flexibility for smaller memory footprint and instant startup.
Choose AOT class loading when you need runtime flexibility (reflection, dynamic proxies, runtime class generation). Choose native image when you need the smallest possible footprint and fastest possible startup, and can accept the configuration overhead for reflection and resources.
Pattern Matching Questions
Pattern matching continues to evolve with each Java release.
What is Primitive Pattern Matching in Java 24?
JEP 488 extends pattern matching to primitive types (a second preview in Java 24), completing a logical gap in Java's pattern matching story. Since Java 16, you could write if (obj instanceof String s), but you couldn't write if (num instanceof byte b) to check whether a value fits in a smaller primitive type. This created an asymmetry between how reference types and primitive types are handled.
The feature works with instanceof, switch, and record patterns. The key concept is "safe casting"—the pattern matches only if the value can be converted to the target type without loss of information.
// Check if a value fits in a smaller type
int value = 127;
if (value instanceof byte b) {
    // Matches! 127 fits in a byte (-128 to 127)
    System.out.println("Byte value: " + b);
}

int largeValue = 128;
if (largeValue instanceof byte b) {
    // Does NOT match - 128 overflows byte
    System.out.println("This won't print");
} else {
    System.out.println("Value too large for byte");
}
How do you use Primitive Patterns in switch expressions?
In switch expressions, primitive patterns enable elegant handling of numeric ranges:
double score = 85.5;
String grade = switch (score) {
    case double d when d >= 90 -> "A";
    case double d when d >= 80 -> "B";
    case double d when d >= 70 -> "C";
    case double d when d >= 60 -> "D";
    default -> "F";
};
The real power emerges with record patterns. You can destructure records containing primitives with type refinement:
sealed interface Measurement permits Temperature, Pressure {}
record Temperature(double celsius) implements Measurement {}
record Pressure(int pascals) implements Measurement {}

void process(Measurement m) {
    switch (m) {
        case Temperature(double c) when c instanceof int freezing && freezing <= 0 ->
            System.out.println("Below freezing: " + freezing + "°C");
        case Temperature(double c) ->
            System.out.println("Temperature: " + c + "°C");
        case Pressure(int p) when p instanceof short lowPressure ->
            System.out.println("Low pressure: " + lowPressure + " Pa");
        case Pressure(int p) ->
            System.out.println("Pressure: " + p + " Pa");
    }
}
What constitutes a "safe" conversion in primitive patterns?
The value must be exactly representable in the target type. For floating point to integer, there must be no fractional part. For larger to smaller integers, the value must be in range. For integer to floating point, the value must not lose precision.
The pattern c instanceof int freezing only matches if the double c is exactly representable as an int—no fractional part, within int range. This is a safe narrowing check built into the language.
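You can approximate this rule on any JDK with a round-trip check. In the sketch below, fitsInInt is a hypothetical helper, and it deliberately ignores spec-level edge cases like -0.0 and NaN:

```java
public class ExactConversion {
    // Rough equivalent of the `d instanceof int` check on any JDK:
    // the double round-trips through int without changing.
    // (The real spec additionally handles -0.0 and NaN precisely.)
    static boolean fitsInInt(double d) {
        return (double) (int) d == d;
    }

    public static void main(String[] args) {
        System.out.println(fitsInInt(42.0)); // true
        System.out.println(fitsInInt(42.5)); // false - fractional part
        System.out.println(fitsInInt(1e18)); // false - outside int range
    }
}
```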
Class-File API Questions
The Class-File API addresses ecosystem-wide tooling concerns.
What is the Class-File API and why was it added?
The Class-File API (JEP 484) provides a standard Java SE API for reading, writing, and transforming compiled class files. This addresses a significant ecosystem problem: the ASM library.
ASM has been the de facto standard for bytecode manipulation for over 20 years. Spring, Hibernate, Mockito, and countless other frameworks depend on it. The problem? ASM is maintained separately from the JDK and must be updated after each Java release to understand new class file features. This creates a gap—you can use new Java features, but your bytecode manipulation tools break until ASM catches up.
The Class-File API solves this by being part of the JDK itself. It's always synchronized with the class file format of that JDK version. When Java 25 adds new class file features, the Class-File API in Java 25 already understands them.
How do you use the Class-File API?
// Reading a class file
ClassFile cf = ClassFile.of();
ClassModel model = cf.parse(Path.of("MyClass.class"));

// Examining the class
model.methods().forEach(method -> {
    System.out.println("Method: " + method.methodName().stringValue());
    method.code().ifPresent(code -> {
        code.forEach(instruction -> {
            System.out.println("  " + instruction);
        });
    });
});

// Transforming a class
byte[] transformed = cf.transformClass(model, (builder, element) -> {
    if (element instanceof MethodModel mm &&
            mm.methodName().stringValue().equals("oldName")) {
        // Skip this method - effectively removing it
    } else {
        builder.with(element);
    }
});
The API uses a "streaming" model—you process elements as they're encountered, which is memory-efficient for large class files. It's designed for transformation: you receive elements and can modify, remove, or add new ones as you build the output.
Will ASM be replaced by the Class-File API?
ASM is unlikely to go away in the near term—too much code depends on it. But new tools will likely prefer the standard API, and ASM may eventually become a compatibility wrapper around Class-File API. For new projects, prefer the standard Class-File API.
JVM Performance Questions
Java 24 includes several JVM-level optimizations.
What are Compact Object Headers and why do they matter?
Compact Object Headers (JEP 450), currently experimental in Java 24, reduces object header size from 96-128 bits to 64 bits on 64-bit JVMs. Every Java object has a header containing a mark word for locking and garbage collection metadata, a class pointer identifying the object's type, and optionally an array length for array objects.
In standard HotSpot on 64-bit JVMs, the mark word is 64 bits and the class pointer is 32 bits (compressed by default), totaling 96 bits or 12 bytes minimum. Compact headers compress this to 64 bits total by reducing the class pointer from 32 bits to 22 bits and reorganizing the mark word. This limits the number of unique classes to about 4 million—plenty for virtually any application—while saving 4-8 bytes per object.
// Consider an application with millions of small objects
record Coordinate(int x, int y) {} // 8 bytes of data
// Standard headers: 12 bytes header + 8 bytes data = 20 bytes
// With padding to 8-byte boundary: 24 bytes per object
// Compact headers: 8 bytes header + 8 bytes data = 16 bytes
// That's 33% memory reduction for this object!
// At scale:
// 10 million Coordinate objects:
// - Standard: 240 MB
// - Compact: 160 MB
// - Savings: 80 MB
How do you enable Compact Object Headers?
Because it's experimental, you enable it explicitly:
java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders \
     -jar myapplication.jar
The impact is most dramatic for applications with many small objects—caches, collections, graph structures. A large HashMap with millions of entries sees significant memory reduction because each Node object benefits from smaller headers.
What is Project Lilliput?
Lilliput is the umbrella project for object header reduction. Compact Object Headers is one output of that project. Lilliput continues to explore further reductions, potentially to 32 bits for some objects in future releases.
Quantum Cryptography Questions
Java 24 prepares applications for the post-quantum era.
What quantum-resistant cryptography does Java 24 add?
Java 24 adds two post-quantum cryptographic algorithms: ML-KEM (JEP 496) for key encapsulation and ML-DSA (JEP 497) for digital signatures. These are standardized by NIST as FIPS 203 and FIPS 204 respectively, and they're designed to resist attacks from quantum computers.
The threat model is "harvest now, decrypt later." Adversaries can capture encrypted communications today and store them. When sufficiently powerful quantum computers exist—estimates range from 10-30 years—they could use Shor's algorithm to break RSA and elliptic curve cryptography, decrypting those stored communications. Organizations with long-lived secrets (government, healthcare, financial) need to transition to quantum-resistant algorithms before quantum computers arrive.
How do you use ML-KEM for key encapsulation?
ML-KEM (Module-Lattice-Based Key Encapsulation Mechanism) replaces key exchange algorithms like Diffie-Hellman. It's based on the hardness of lattice problems, which remain difficult even for quantum computers:
// Generate an ML-KEM key pair
KeyPairGenerator kpg = KeyPairGenerator.getInstance("ML-KEM");
kpg.initialize(NamedParameterSpec.ML_KEM_768); // 768, 512, or 1024
KeyPair keyPair = kpg.generateKeyPair();
// Encapsulate a shared secret (sender side)
KEM kem = KEM.getInstance("ML-KEM");
KEM.Encapsulator encapsulator = kem.newEncapsulator(keyPair.getPublic());
KEM.Encapsulated encapsulated = encapsulator.encapsulate();
byte[] ciphertext = encapsulated.encapsulation(); // Send to recipient
SecretKey sharedSecret = encapsulated.key(); // Use for encryption
// Decapsulate (recipient side)
KEM.Decapsulator decapsulator = kem.newDecapsulator(keyPair.getPrivate());
SecretKey recoveredSecret = decapsulator.decapsulate(ciphertext);
// Both sides now have the same shared secret
How do you use ML-DSA for digital signatures?
ML-DSA (Module-Lattice-Based Digital Signature Algorithm) replaces signature algorithms like RSA and ECDSA:
// Generate ML-DSA key pair
KeyPairGenerator kpg = KeyPairGenerator.getInstance("ML-DSA");
kpg.initialize(NamedParameterSpec.ML_DSA_65);
KeyPair keyPair = kpg.generateKeyPair();
// Sign a message
Signature signer = Signature.getInstance("ML-DSA");
signer.initSign(keyPair.getPrivate());
signer.update("Important message".getBytes());
byte[] signature = signer.sign();
// Verify the signature
Signature verifier = Signature.getInstance("ML-DSA");
verifier.initVerify(keyPair.getPublic());
verifier.update("Important message".getBytes());
boolean valid = verifier.verify(signature);
The algorithms are larger than classical alternatives—ML-KEM public keys are around 1KB, compared to 32 bytes for X25519. This affects transmission and storage but not security.
Should you switch to ML-KEM and ML-DSA immediately?
For new systems with long-lived secrets, consider it. For most applications, a hybrid approach (classical + PQC) during transition is recommended. The immediate priority is inventory: know where you use cryptography so you can plan migration. These are NIST-standardized algorithms, not experimental—they're ready for production use.
Security and Deprecation Questions
Understanding what's being removed is as important as knowing what's new.
What happened to the Security Manager in Java 24?
Java 24 permanently disables the Security Manager (JEP 486). The Security Manager was Java's original sandboxing mechanism, designed when Java Applets ran untrusted code in browsers. It let you restrict what code could do—file access, network connections, reflection. But it was complex to configure correctly, imposed performance overhead, and was rarely used properly outside applet contexts.
With applets long dead and the Security Manager poorly suited to modern containerized deployments, Java has been deprecating it since JDK 17. Java 24 makes it non-functional—you cannot enable it at startup or runtime. A future release will remove the API entirely.
What should you do if your code uses Security Manager?
If you're maintaining legacy code that uses Security Manager for sandboxing, you need alternative approaches: containers, process isolation, or restricted class loaders. If your code just has boilerplate if (System.getSecurityManager() != null) checks, those can be removed—they're now dead code.
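For reference, the boilerplate in question looks like this. On Java 24, getSecurityManager() always returns null, so the guarded branch is dead code (class name illustrative):

```java
public class LegacyCheck {
    @SuppressWarnings("removal") // SecurityManager APIs are terminally deprecated
    public static void main(String[] args) {
        // Legacy boilerplate - safe to delete outright on Java 24
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            sm.checkRead("/etc/passwd"); // never reached on Java 24
        }
        System.out.println("sm == null: " + (sm == null)); // sm == null: true
    }
}
```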
Quick Reference
| Feature | JEP | Status | Interview Priority |
|---|---|---|---|
| Stream Gatherers | 485 | Final | High |
| Virtual Threads without Pinning | 491 | Final | High |
| AOT Class Loading | 483 | Final | Medium-High |
| Class-File API | 484 | Final | Medium |
| Primitive Pattern Matching | 488 | Preview (2nd) | Medium |
| Compact Object Headers | 450 | Experimental | Medium |
| Generational Shenandoah | 404 | Experimental | Low-Medium |
| Quantum Crypto (ML-KEM) | 496 | Final | Medium |
| Quantum Crypto (ML-DSA) | 497 | Final | Medium |
| Flexible Constructor Bodies | 492 | Preview (3rd) | Low-Medium |
| Structured Concurrency | 499 | Preview (4th) | Medium |
| Scoped Values | 487 | Preview (4th) | Medium |
| Security Manager Disabled | 486 | Final | Low (but know it) |
| Key Derivation Function API | 478 | Preview | Low |
Related Articles
If you found this helpful, check out these related guides:
- System Design Interview Guide - Scalability, reliability, and distributed systems
- REST API Interview Guide - API design principles and best practices
- Spring Boot Interview Guide - Spring framework essentials for Java developers
