Java 24 shipped with 24 JEPs, the most of any release since the six-month cadence began, yet most developers stopped tracking Java releases after version 8 or 11. The result? A widening knowledge gap between "Java developers" and developers who actually know modern Java. Stream Gatherers, virtual threads without pinning, quantum-resistant cryptography: these aren't experimental features anymore. They're production-ready capabilities that interviewers expect senior developers to know.
This guide covers the Java 24 features that actually appear in technical interviews. For each feature, I'll explain not just what it does, but why it matters and how to discuss it intelligently.
The 30-Second Overview
When an interviewer asks "What's new in Java 24?", here's what separates candidates who've kept current from those running on outdated knowledge:
Java 24 shipped March 18, 2025 with 24 JEPs, the most of any release under the six-month cadence. The headline features are Stream Gatherers for custom stream operations, virtual threads without pinning for true scalability, and Ahead-of-Time Class Loading for dramatically faster startup. Pattern matching now extends to primitive types (still in preview). The Security Manager is permanently disabled. And Java added quantum-resistant cryptography to prepare for post-quantum security threats.
The theme across these features is maturity—Java 24 finalizes capabilities that were previewed over multiple releases while delivering significant performance optimizations. If you're still explaining concurrent programming in terms of thread pools and ExecutorService, you're describing patterns these features make obsolete.
The 2-Minute Deep Dive
Java 24 represents a transition point in Java's evolution. It's not an LTS release—that comes with Java 25 in September 2025—but it finalizes several features that have been in preview for years and introduces optimizations that will carry forward.
The concurrency story is particularly compelling. Virtual threads arrived in Java 21, but they had a significant limitation: synchronized blocks would "pin" the virtual thread to its carrier thread, blocking other virtual threads from using that carrier. Netflix famously encountered this issue at scale. Java 24's JEP 491 eliminates pinning, making virtual threads truly lightweight in all scenarios.
Stream Gatherers (JEP 485) fill a gap that's existed since streams were introduced in Java 8. The built-in intermediate operations—map, filter, flatMap—cover common cases, but complex transformations required collecting to a list and processing separately. Gatherers let you write custom intermediate operations that maintain state, produce multiple outputs per input, or terminate early.
The performance improvements deserve attention too. Ahead-of-Time Class Loading (JEP 483) addresses a long-standing complaint about Java: slow startup times. By pre-loading and linking classes during a training run, subsequent application starts skip the expensive class loading phase entirely. The JDK developers report up to 42% faster startup for some applications.
Compact Object Headers (JEP 450) tackles memory efficiency at the JVM level. Every Java object carries header bytes for identity hash code and type information. Java 24 compresses these headers from 96-128 bits down to 64 bits—a significant reduction when you have millions of objects.
Let me walk through each of these in the depth interviewers expect.
Question 1: What Are Stream Gatherers and Why Were They Added?
This question immediately reveals whether a candidate understands streams beyond the basics.
Weak answer: "Gatherers are like custom collectors for streams."
Strong answer: Stream Gatherers (JEP 485) are custom intermediate operations for streams—not terminal operations like collectors. They address a fundamental limitation in the original Stream API: you couldn't write your own intermediate operations. The built-in ones—map, filter, flatMap, distinct, sorted—had to cover all your needs, or you'd break out of the stream pipeline entirely.
Gatherers enable three capabilities that weren't possible before. First, stateful transformations where each element's processing depends on previous elements. Second, one-to-many or many-to-one mappings within the stream. Third, short-circuiting based on arbitrary conditions.
Let me show you the difference with a real-world example. Say you need to group consecutive elements by some condition—a "runs" operation:
```java
// Before Java 24: break out of the stream, process manually
List<Integer> numbers = List.of(1, 1, 2, 2, 2, 3, 1, 1);
List<List<Integer>> runs = new ArrayList<>();
List<Integer> currentRun = new ArrayList<>();
Integer lastValue = null;
for (Integer num : numbers) {
    if (lastValue != null && !num.equals(lastValue)) {
        runs.add(currentRun);
        currentRun = new ArrayList<>();
    }
    currentRun.add(num);
    lastValue = num;
}
if (!currentRun.isEmpty()) {
    runs.add(currentRun);
}
// Result: [[1, 1], [2, 2, 2], [3], [1, 1]]
```

This imperative code works but breaks the stream paradigm. With gatherers, you can implement this as a proper intermediate operation:
```java
// Java 24: custom gatherer for consecutive runs
Gatherer<Integer, ?, List<Integer>> consecutiveRuns() {
    return Gatherer.ofSequential(
        // Initializer: create the state holder
        () -> new Object() {
            List<Integer> currentRun = new ArrayList<>();
            Integer lastValue = null;
        },
        // Integrator: process each element
        (state, element, downstream) -> {
            if (state.lastValue != null && !element.equals(state.lastValue)) {
                downstream.push(state.currentRun);
                state.currentRun = new ArrayList<>();
            }
            state.currentRun.add(element);
            state.lastValue = element;
            return true; // continue processing
        },
        // Finisher: emit the final run
        (state, downstream) -> {
            if (!state.currentRun.isEmpty()) {
                downstream.push(state.currentRun);
            }
        }
    );
}

// Usage in a stream pipeline
List<List<Integer>> runs = numbers.stream()
        .gather(consecutiveRuns())
        .toList();
```

The gatherer integrates cleanly with other stream operations. You can chain it with filters, maps, and other gatherers. It participates in parallel execution if you use Gatherer.of() instead of ofSequential(). And it's reusable across your codebase.
What interviewers are looking for: Understanding that gatherers are intermediate (not terminal) operations that enable stateful, many-to-many transformations. Candidates who can articulate why the existing API was insufficient demonstrate real stream expertise.
Common follow-up: "When would you use a gatherer vs. a collector?" Gatherers produce intermediate results within the stream; collectors produce terminal results. Use gatherers when you need to continue processing after the transformation. Use collectors when you're aggregating to a final result.
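To make that contrast concrete, here's a minimal sketch using the built-in Gatherers.windowFixed (finalized alongside JEP 485) next to a plain collector. The class name and sample data are my own; the point is that the gatherer's output feeds further pipeline stages, while the collector ends the stream:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Gatherers;

public class GathererVsCollector {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5, 6, 7);

        // Gatherer: intermediate - the pipeline continues after windowing
        List<Integer> windowSums = numbers.stream()
                .gather(Gatherers.windowFixed(3))   // [[1,2,3], [4,5,6], [7]]
                .map(w -> w.stream().mapToInt(Integer::intValue).sum())
                .toList();
        System.out.println(windowSums);             // [6, 15, 7]

        // Collector: terminal - the stream ends here with a final result
        int total = numbers.stream()
                .collect(Collectors.summingInt(Integer::intValue));
        System.out.println(total);                  // 28
    }
}
```

Note how the map after gather keeps working on stream elements; with a collector you'd have to open a second stream on the collected result.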
Question 2: How Did Java 24 Fix Virtual Thread Pinning?
This question tests understanding of both virtual threads and their practical limitations.
Weak answer: "Virtual threads don't pin to carriers anymore."
Strong answer: Virtual threads, introduced in Java 21, are lightweight threads managed by the JVM rather than the operating system. They're designed to scale to millions of concurrent tasks. But they had a critical limitation: when a virtual thread executed code inside a synchronized block, it would "pin" to its carrier thread, preventing other virtual threads from using that carrier.
The problem is architectural. When a virtual thread blocks—say, on I/O—it should unmount from its carrier thread so another virtual thread can run. But synchronized blocks in the JVM use monitor locks tied to the carrier thread's stack. If a virtual thread unmounts while holding a monitor lock, another virtual thread mounting on that carrier could see inconsistent state.
Java 24's JEP 491 restructures how virtual threads interact with monitors. Instead of using carrier-thread-specific locking, the JVM now tracks monitors in a way that allows virtual threads to unmount during blocking operations even while holding synchronized locks.
Here's where it matters in practice:
```java
// Before Java 24: this pattern caused pinning
public class ConnectionPool {
    private final List<Connection> available = new ArrayList<>();

    public synchronized Connection acquire() throws InterruptedException {
        while (available.isEmpty()) {
            wait(); // Virtual thread PINS here on Java 21-23!
        }
        return available.remove(0);
    }

    public synchronized void release(Connection conn) {
        available.add(conn);
        notify();
    }
}
```

With Java 21-23, a virtual thread calling acquire() would pin to its carrier while waiting for a connection. If you had 10,000 virtual threads and only 10 carrier threads, you'd quickly exhaust carriers—defeating the purpose of virtual threads entirely.
```java
// Java 24: same ConnectionPool code, no pinning.
// Virtual threads now properly unmount during wait(),
// even though they're inside a synchronized block.
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 100_000).forEach(i -> {
        executor.submit(() -> {
            Connection conn = pool.acquire(); // no pinning
            try {
                // use the connection
            } finally {
                pool.release(conn);
            }
            return null; // submit as a Callable so acquire() may throw InterruptedException
        });
    });
}
```

The fix is transparent—existing code benefits automatically without changes. Netflix, which reported significant production issues from pinning, can now use synchronized blocks freely with virtual threads.
What interviewers are looking for: Understanding why pinning was a problem (carrier thread exhaustion) and how it manifested (synchronized + blocking operations). Candidates who mention the Netflix case or can explain the monitor lock architecture show real-world awareness.
Common follow-up: "Should you still prefer ReentrantLock over synchronized?" With Java 24, synchronized is fine for virtual threads. The main reasons to prefer ReentrantLock are features like tryLock(), interruptible locking, and condition variables—not pinning avoidance.
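A minimal sketch of what tryLock buys you that synchronized cannot express: a thread that gives up after a timeout instead of blocking indefinitely. The class name, timeout, and scenario are illustrative, and Thread.ofVirtual requires Java 21+:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        lock.lock(); // main thread holds the lock for the whole demo
        Thread worker = Thread.ofVirtual().start(() -> {
            try {
                // tryLock bails out after a deadline - synchronized would block forever
                if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
                    try {
                        System.out.println("acquired");
                    } finally {
                        lock.unlock();
                    }
                } else {
                    System.out.println("gave up after 50ms");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.join(); // worker always times out, since main never releases first
        lock.unlock();
    }
}
```

Since main holds the lock until after join(), the worker deterministically prints "gave up after 50ms". That escape hatch, not pinning avoidance, is the remaining argument for ReentrantLock.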
Question 3: Explain Ahead-of-Time Class Loading and Linking
This question tests knowledge of JVM internals and startup optimization.
Weak answer: "It precompiles classes to make startup faster."
Strong answer: AOT Class Loading (JEP 483) is part of Project Leyden, Java's initiative to improve startup time and reduce resource consumption. It addresses a fundamental overhead in Java: every time an application starts, the JVM must load, link, and initialize classes from scratch. For large applications with thousands of classes, this takes significant time.
The solution works in two phases. First, you run your application with a special flag that records which classes are loaded and in what order. The JVM captures this information along with the linked state of those classes—resolved references, verified bytecode, prepared class structures—and stores it in a cache file.
On subsequent runs, the JVM loads this cache and instantly has all those classes in a loaded-and-linked state. It skips the disk I/O to read class files, the parsing of class file format, the verification passes, and the resolution of symbolic references. All that work was done once and reused.
```shell
# Step 1 - training run: record which classes are loaded,
# in what order, and how they link
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf \
     -jar myapplication.jar

# Step 2 - assemble the cache from the recorded configuration
# (this step processes the configuration; it does not run the application)
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf \
     -XX:AOTCache=app.aot -cp myapplication.jar

# Step 3 - production runs: start with the cache
java -XX:AOTCache=app.aot -jar myapplication.jar
# Classes are instantly available - up to 42% faster startup
```

The key insight is that class loading isn't just about reading bytes from disk. The JVM performs verification to ensure bytecode is valid, resolution to connect symbolic references to actual classes and methods, and preparation to allocate static fields and method tables. AOT caching preserves all of this work.
There are important caveats. The cache must match the exact classes your application uses—if dependencies change, you need a new training run. The cache is JVM-version-specific. And the improvement is most dramatic for applications with many classes and short runtimes (CLI tools, serverless functions) rather than long-running servers where startup is amortized.
What interviewers are looking for: Understanding the difference between this and GraalVM native images (this is still bytecode, not native code). Awareness of why class loading is expensive (verification, resolution, not just disk I/O). Practical sense of when this matters (short-lived processes more than long-running servers).
Common follow-up: "How does this compare to GraalVM native image?" AOT class loading keeps the full JVM—you retain dynamic class loading, reflection, and JIT compilation. Native image compiles to a standalone binary with ahead-of-time compilation, trading flexibility for smaller memory footprint and instant startup. Choose based on whether you need runtime flexibility.
Question 4: What's New with Pattern Matching for Primitives?
This question connects Java 24 to the ongoing pattern matching evolution.
Weak answer: "You can use instanceof with primitive types now."
Strong answer: JEP 488 extends pattern matching to primitive types, completing a logical gap in Java's pattern matching story. Since Java 16, you could write if (obj instanceof String s), but you couldn't write if (num instanceof byte b) to check if a value fits in a smaller primitive type. This created asymmetry between how we handle reference types and primitive types.
The feature works with instanceof, switch, and record patterns. The key concept is "safe casting"—the pattern matches only if the value can be converted to the target type without loss of information.
```java
// Check whether a value fits in a smaller type
int value = 127;
if (value instanceof byte b) {
    // Matches: 127 fits in a byte (-128 to 127)
    System.out.println("Byte value: " + b);
}

int largeValue = 128;
if (largeValue instanceof byte b) {
    // Does NOT match: 128 overflows byte
    System.out.println("This won't print");
} else {
    System.out.println("Value too large for byte");
}
```

In switch expressions, this enables elegant handling of numeric ranges:
```java
double score = 85.5;
String grade = switch (score) {
    case double d when d >= 90 -> "A";
    case double d when d >= 80 -> "B";
    case double d when d >= 70 -> "C";
    case double d when d >= 60 -> "D";
    default -> "F";
};
```

But the real power emerges with record patterns. You can now destructure records containing primitives with type refinement:
```java
sealed interface Measurement permits Temperature, Pressure {}
record Temperature(double celsius) implements Measurement {}
record Pressure(int pascals) implements Measurement {}

void process(Measurement m) {
    switch (m) {
        case Temperature(double c) when c instanceof int freezing && freezing <= 0 ->
            System.out.println("Below freezing: " + freezing + "°C");
        case Temperature(double c) ->
            System.out.println("Temperature: " + c + "°C");
        case Pressure(int p) when p instanceof short lowPressure ->
            System.out.println("Low pressure: " + lowPressure + " Pa");
        case Pressure(int p) ->
            System.out.println("Pressure: " + p + " Pa");
    }
}
```

The pattern c instanceof int freezing only matches if the double c is exactly representable as an int—no fractional part, within int range. This is a safe narrowing check built into the language.
What interviewers are looking for: Understanding that primitive patterns check for "safe" conversion without information loss. Knowing this is still a preview feature (second preview in Java 24). Ability to connect this to the broader pattern matching evolution (instanceof in 16, switch in 21, records in 21).
Common follow-up: "What constitutes a 'safe' conversion?" The value must be exactly representable in the target type. For floating point to integer, no fractional part. For larger to smaller integers, must be in range. For integer to floating point, must not lose precision.
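If you want to see the exactness rule in action without enabling the preview, it can be approximated by hand in any Java version. This sketch is my own approximation of the check the JEP describes (including treating -0.0 as lossy, since the sign would be lost converting to int), alongside the silent truncation a plain cast performs:

```java
public class SafeNarrowing {
    // Hand-rolled approximation of the representability test that
    // `value instanceof int` performs on a double under JEP 488
    static boolean fitsInInt(double d) {
        // exact round-trip through int; -0.0 is excluded because its sign is lost
        return (double) (int) d == d
                && Double.doubleToRawLongBits(d) != Double.doubleToRawLongBits(-0.0);
    }

    public static void main(String[] args) {
        System.out.println(fitsInInt(127.0)); // true  - exactly representable
        System.out.println(fitsInInt(3.5));   // false - fractional part
        System.out.println(fitsInInt(1e10));  // false - out of int range
        // Contrast: a plain cast silently truncates instead of failing to match
        System.out.println((byte) 130);       // -126
    }
}
```

The last line is the hazard pattern matching removes: (byte) 130 wraps to -126 with no warning, whereas 130 instanceof byte simply doesn't match.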
Question 5: What Is the Class-File API and Why Does It Matter?
This question tests awareness of Java tooling ecosystem changes.
Weak answer: "It's a new way to read class files."
Strong answer: The Class-File API (JEP 484) provides a standard Java SE API for reading, writing, and transforming compiled class files. This might sound mundane, but it addresses a significant ecosystem problem: the ASM library.
ASM has been the de facto standard for bytecode manipulation for over 20 years. Spring, Hibernate, Mockito, and countless other frameworks depend on it. The problem? ASM is maintained separately from the JDK and must be updated after each Java release to understand new class file features. This creates a gap—you can use new Java features, but your bytecode manipulation tools break until ASM catches up.
The Class-File API solves this by being part of the JDK itself. It's always synchronized with the class file format of that JDK version. When Java 25 adds new class file features, the Class-File API in Java 25 already understands them.
```java
import java.lang.classfile.ClassFile;
import java.lang.classfile.ClassModel;
import java.lang.classfile.MethodModel;
import java.nio.file.Path;

// Reading a class file
ClassFile cf = ClassFile.of();
ClassModel model = cf.parse(Path.of("MyClass.class"));

// Examining the class
model.methods().forEach(method -> {
    System.out.println("Method: " + method.methodName().stringValue());
    method.code().ifPresent(code ->
        code.forEach(instruction ->
            System.out.println("  " + instruction)));
});

// Transforming a class
byte[] transformed = cf.transform(model, (builder, element) -> {
    if (element instanceof MethodModel mm &&
            mm.methodName().stringValue().equals("oldName")) {
        // Skip this method - effectively removing it
    } else {
        builder.with(element);
    }
});
```

The API uses a "streaming" model—you process elements as they're encountered, which is memory-efficient for large class files. It's also designed for transformation: you receive elements and can modify, remove, or add new ones as you build the output.
For interview purposes, the key insight is that this API is primarily for framework and tooling authors. Application developers rarely manipulate bytecode directly. But understanding why it exists—breaking the ASM version-lag problem—shows awareness of the Java ecosystem beyond application code.
What interviewers are looking for: Understanding the problem it solves (ASM version lag) rather than just the API itself. Awareness that this is a tooling/framework concern, not typical application development. Knowing it's now final after preview in Java 22-23.
Common follow-up: "Will ASM go away?" Unlikely in the near term—too much code depends on it. But new tools will likely prefer the standard API, and ASM may eventually become a compatibility wrapper around Class-File API.
Question 6: Explain Compact Object Headers and Their Impact
This question tests understanding of JVM memory layout.
Weak answer: "Object headers are smaller now."
Strong answer: Compact Object Headers (JEP 450), currently experimental in Java 24, reduces object header size from 96-128 bits to 64 bits on 64-bit JVMs. To understand why this matters, you need to know what's in an object header.
Every Java object has a header containing three things: a mark word for locking and garbage collection metadata, a class pointer identifying the object's type, and optionally an array length for array objects. In standard HotSpot on 64-bit JVMs, the mark word is 64 bits and the class pointer is 32 bits (compressed by default), totaling 96 bits or 12 bytes minimum.
Compact headers compress this to 64 bits total by reducing the class pointer from 32 bits to 22 bits and reorganizing the mark word. This limits the number of unique classes to about 4 million—plenty for virtually any application—while saving 4-8 bytes per object.
```java
// Consider an application with millions of small objects
record Coordinate(int x, int y) {} // 8 bytes of data

// Standard headers: 12 bytes header + 8 bytes data = 20 bytes,
// padded to an 8-byte boundary: 24 bytes per object
//
// Compact headers:  8 bytes header + 8 bytes data = 16 bytes,
// a 33% memory reduction for this object
//
// At scale, 10 million Coordinate objects:
//   Standard: 240 MB
//   Compact:  160 MB
//   Savings:   80 MB
```

The impact is most dramatic for applications with many small objects—caches, collections, graph structures. A large HashMap with millions of entries sees significant memory reduction because each Node object benefits from smaller headers.
Because it's experimental, you enable it explicitly:
```shell
java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders \
     -jar myapplication.jar
```

The feature interacts with other JVM internals. Identity hash codes, locking, and GC metadata still need space in the header, so the implementation carefully reorganizes these fields. Some advanced locking scenarios may have different performance characteristics.
What interviewers are looking for: Knowing what's actually in object headers (mark word, class pointer). Understanding why small objects are disproportionately affected by header size. Awareness that this is experimental and requires explicit enabling.
Common follow-up: "What's Project Lilliput?" Lilliput is the umbrella project for object header reduction. Compact Object Headers is one output of that project. Lilliput continues to explore further reductions, potentially to 32 bits for some objects.
Question 7: What Quantum-Resistant Cryptography Does Java 24 Add?
This question tests awareness of security evolution.
Weak answer: "It adds encryption that quantum computers can't break."
Strong answer: Java 24 adds two post-quantum cryptographic algorithms: ML-KEM (JEP 496) for key encapsulation and ML-DSA (JEP 497) for digital signatures. These are standardized by NIST as FIPS 203 and FIPS 204 respectively, and they're designed to resist attacks from quantum computers.
The threat model is "harvest now, decrypt later." Adversaries can capture encrypted communications today and store them. When sufficiently powerful quantum computers exist—estimates range from 10-30 years—they could use Shor's algorithm to break RSA and elliptic curve cryptography, decrypting those stored communications. Organizations with long-lived secrets (government, healthcare, financial) need to transition to quantum-resistant algorithms before quantum computers arrive.
ML-KEM (Module-Lattice-Based Key Encapsulation Mechanism) replaces key exchange algorithms like Diffie-Hellman. It's based on the hardness of lattice problems, which remain difficult even for quantum computers:
```java
// Generate an ML-KEM key pair
KeyPairGenerator kpg = KeyPairGenerator.getInstance("ML-KEM");
kpg.initialize(NamedParameterSpec.ML_KEM_768); // also ML_KEM_512, ML_KEM_1024
KeyPair keyPair = kpg.generateKeyPair();

// Encapsulate a shared secret (sender side)
KEM kem = KEM.getInstance("ML-KEM");
KEM.Encapsulator encapsulator = kem.newEncapsulator(keyPair.getPublic());
KEM.Encapsulated encapsulated = encapsulator.encapsulate();
byte[] ciphertext = encapsulated.encapsulation(); // send to recipient
SecretKey sharedSecret = encapsulated.key();      // use for encryption

// Decapsulate (recipient side)
KEM.Decapsulator decapsulator = kem.newDecapsulator(keyPair.getPrivate());
SecretKey recoveredSecret = decapsulator.decapsulate(ciphertext);
// Both sides now hold the same shared secret
```

ML-DSA (Module-Lattice-Based Digital Signature Algorithm) replaces signature algorithms like RSA and ECDSA for signing and verification:
```java
// Generate an ML-DSA key pair
KeyPairGenerator kpg = KeyPairGenerator.getInstance("ML-DSA");
kpg.initialize(NamedParameterSpec.ML_DSA_65); // also ML_DSA_44, ML_DSA_87
KeyPair keyPair = kpg.generateKeyPair();

// Sign a message
Signature signer = Signature.getInstance("ML-DSA");
signer.initSign(keyPair.getPrivate());
signer.update("Important message".getBytes());
byte[] signature = signer.sign();

// Verify the signature
Signature verifier = Signature.getInstance("ML-DSA");
verifier.initVerify(keyPair.getPublic());
verifier.update("Important message".getBytes());
boolean valid = verifier.verify(signature);
```

The algorithms are larger than classical alternatives—ML-KEM public keys are around 1 KB, compared with 32 bytes for X25519. This affects transmission and storage but not security.
What interviewers are looking for: Understanding the "harvest now, decrypt later" threat model. Knowing these are NIST-standardized, not experimental. Awareness that migration to PQC is a multi-year enterprise concern, not a simple library swap.
Common follow-up: "Should we switch to ML-KEM immediately?" For new systems with long-lived secrets, consider it. For most applications, a hybrid approach (classical + PQC) during transition is recommended. The immediate priority is inventory: know where you use cryptography so you can plan migration.
What About the Security Manager Removal?
Java 24 permanently disables the Security Manager (JEP 486). This deserves mention because it's a significant architectural change that may come up in interviews.
The Security Manager was Java's original sandboxing mechanism, designed when Java Applets ran untrusted code in browsers. It let you restrict what code could do—file access, network connections, reflection. But it was complex to configure correctly, imposed performance overhead, and was rarely used properly outside applet contexts.
With applets long dead and the Security Manager poorly suited to modern containerized deployments, Java has been deprecating it since JDK 17. Java 24 makes it non-functional—you cannot enable it at startup or runtime. A future release will remove the API entirely.
The interview relevance: if you're maintaining legacy code that uses Security Manager for sandboxing, you need alternative approaches (containers, process isolation, restricted class loaders). If your code just has boilerplate if (System.getSecurityManager() != null) checks, those can be removed.
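For illustration, here is what that dead-branch boilerplate looks like on Java 24, where getSecurityManager() can only return null. The class name is mine, and the API is terminally deprecated, hence the suppression:

```java
public class LegacyCheck {
    @SuppressWarnings("removal") // System.getSecurityManager() is deprecated for removal
    public static void main(String[] args) {
        // Legacy guard: on Java 24 this is always null, so the branch is dead code
        SecurityManager sm = System.getSecurityManager();
        System.out.println(sm == null ? "no security manager" : "security manager active");
    }
}
```

On Java 24 this always prints "no security manager", which is exactly why such guards (and the privileged-action code behind them) can simply be deleted.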
Quick Reference Table
| Feature | JEP | Status | Interview Priority |
|---|---|---|---|
| Stream Gatherers | 485 | Final | High |
| Virtual Threads without Pinning | 491 | Final | High |
| AOT Class Loading | 483 | Final | Medium-High |
| Class-File API | 484 | Final | Medium |
| Primitive Pattern Matching | 488 | Preview (2nd) | Medium |
| Compact Object Headers | 450 | Experimental | Medium |
| Generational Shenandoah | 404 | Experimental | Low-Medium |
| Quantum Crypto (ML-KEM) | 496 | Final | Medium |
| Quantum Crypto (ML-DSA) | 497 | Final | Medium |
| Flexible Constructor Bodies | 492 | Preview (3rd) | Low-Medium |
| Structured Concurrency | 499 | Preview (4th) | Medium |
| Scoped Values | 487 | Preview (4th) | Medium |
| Security Manager Disabled | 486 | Final | Low (but know it) |
| Key Derivation Function API | 478 | Preview | Low |
Practice Questions with Answers
Question 1: Your application uses synchronized blocks extensively and you're migrating to virtual threads. What changed in Java 24 that affects this decision?
Answer: Java 24 (JEP 491) eliminates virtual thread pinning in synchronized blocks. In Java 21-23, virtual threads would pin to carrier threads during synchronized + blocking operations, defeating scalability. Java 24 allows virtual threads to properly unmount even within synchronized blocks. You can now migrate without rewriting synchronized blocks to use ReentrantLock.
Question 2: You need a stream operation that groups consecutive equal elements. The Stream API doesn't have this built-in. How would you implement it in Java 24?
Answer: Use Stream Gatherers (JEP 485). Create a custom gatherer with Gatherer.ofSequential() that maintains state (current group, last seen value), emits completed groups when values change, and finishes with the final group. The gatherer integrates with the stream pipeline via stream.gather(yourGatherer()) and can be chained with other operations.
Question 3: Your serverless function has slow cold starts. What Java 24 feature addresses this, and what are its limitations?
Answer: AOT Class Loading (JEP 483) addresses startup time by pre-loading and linking classes during a training run. Run once with -XX:AOTMode=record to capture a configuration, assemble the cache with -XX:AOTMode=create and -XX:AOTCache, then point production runs at the cache with -XX:AOTCache for fast starts. Limitations: the cache is JVM-version-specific, must match your actual class usage, and requires a representative training run. Benefits are most dramatic for short-lived processes with many classes.
Question 4: Explain why primitive pattern matching uses instanceof byte b rather than just a cast.
Answer: Primitive pattern matching (instanceof byte b) is a safe narrowing check—it only matches if the value is exactly representable as the target type without information loss. A cast (byte) value would silently truncate. Pattern matching expresses the question "does this value fit in a byte?" which is what you usually want when processing data of uncertain range.
Related Articles
If you found this helpful, check out these related guides:
- System Design Interview Guide - Scalability, reliability, and distributed systems
- REST API Interview Guide - API design principles and best practices
Next Steps
Java 24 bridges the gap between Java 21 LTS and the upcoming Java 25 LTS, finalizing features that have been previewed across multiple releases. The candidates who stand out in interviews aren't just memorizing features—they understand why these changes matter and can discuss when to apply them.
If you're preparing for senior Java interviews, our comprehensive question bank covers everything from core language features through Java 24's latest additions. We update continuously as Java evolves.
Java 25 LTS arrives in September 2025. The features previewed in Java 24—Scoped Values, Structured Concurrency, Flexible Constructor Bodies—will likely become final. Start learning them now so you're ahead when they become the standard.
