Mastering Java 21 Virtual Threads (Project Loom) – A Hands-On Journey

For years, Java developers have been told:
“If you want scalability, you must go reactive.”

That meant learning complex frameworks like Reactor or RxJava, wrapping our heads around Mono, Flux, callbacks, and sometimes losing the joy of writing simple, sequential code.

With Java 21, that story changes.

Enter Virtual Threads (Project Loom): lightweight threads you can create by the millions per JVM, letting blocking code scale like async/reactive code while keeping the simplicity of good old Java methods.

In this blog, I’ll walk you through a showcase project we built as a team to learn, practice, and adopt virtual threads in real services.


🎯 Our Goals

We set ourselves three simple objectives:

  1. Comparison – virtual threads vs. platform threads in small demos.
  2. Migration – move two existing services to virtual threads.
  3. Benchmarking – measure the performance difference.

🛠️ The Project Structure

We created a Gradle multi-project repo:

java21-virtual-threads-showcase/
│── comparison/    # small demos
│── services/      # order-service + payment-service
│── benchmarks/    # performance tests
│── docs/          # learnings + results

👉 You can download the full project here.


🧪 Step 1: Virtual Thread Workshop

We started simple:

public class SimpleVirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread.startVirtualThread(() -> {
            System.out.println("Hello from Virtual Thread! " + Thread.currentThread());
        });
        // Give the virtual thread a moment to print before main exits
        // (joining the returned Thread would be more deterministic).
        Thread.sleep(300);
    }
}

Running this was magical – we got a thread per task, without the overhead of traditional thread pools.
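Java 21 also offers a builder API for the same thing. As a small variant of the demo (our own sketch, with an illustrative name prefix), you can name virtual threads and join them instead of sleeping:

```java
import java.util.ArrayList;
import java.util.List;

public class NamedVirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        // Builder with an auto-incrementing name: worker-0, worker-1, ...
        Thread.Builder builder = Thread.ofVirtual().name("worker-", 0);
        for (int i = 0; i < 5; i++) {
            threads.add(builder.start(() ->
                    System.out.println("Hello from " + Thread.currentThread().getName())));
        }
        for (Thread t : threads) {
            t.join(); // deterministic: wait for each thread instead of sleeping
        }
    }
}
```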

We then compared:

  • Executors.newFixedThreadPool(100) vs
  • Executors.newVirtualThreadPerTaskExecutor()

And saw how 10,000 blocking tasks ran comfortably on virtual threads, while the platform thread pool started to choke.
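A quick way to see why: virtual threads are multiplexed over a small pool of carrier (platform) threads. This sketch of our own (not from the project) launches thousands of sleeping virtual threads and counts the distinct carriers underneath them:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CarrierThreadDemo {

    // Launches `tasks` sleeping virtual threads and returns how many distinct
    // carrier (platform) threads the JVM used to run them.
    static int countCarriers(int tasks, long sleepMillis) {
        Set<String> carriers = ConcurrentHashMap.newKeySet();
        try (ExecutorService vexec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                vexec.submit(() -> {
                    // toString() of a mounted virtual thread looks like
                    // "VirtualThread[#42]/runnable@ForkJoinPool-1-worker-3"
                    String s = Thread.currentThread().toString();
                    carriers.add(s.substring(s.indexOf('@') + 1));
                    try {
                        Thread.sleep(sleepMillis);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        } // close() waits for submitted tasks (ExecutorService is AutoCloseable)
        return carriers.size();
    }

    public static void main(String[] args) {
        System.out.println("10,000 virtual threads ran on "
                + countCarriers(10_000, 50) + " carrier threads");
    }
}
```

On a typical laptop the carrier count stays near the number of CPU cores, even with 10,000 virtual threads in flight.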


🏗️ Step 2: Migrating Real Services

We took two small Spring Boot services:

  • Payment Service → simulates DB + gateway calls.
  • Order Service → fetches an order and calls the payment service (HTTP).

Normally, we’d either:

  • Use a big thread pool, or
  • Go reactive with WebClient.

But with Loom, we simply wrapped blocking calls in a virtual-thread executor:

@Bean(destroyMethod = "close")
ExecutorService virtualExecutor() {
    return Executors.newVirtualThreadPerTaskExecutor();
}

And offloaded blocking calls:

Future<PaymentDto> paymentFuture = vexec.submit(() -> fetchPayment(id));

Suddenly, our old blocking-style code was scalable again.
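To make the pattern concrete, here is a self-contained sketch of that offloading style. The class and member names (`OrderService`, `PaymentDto`, `fetchPayment`) are illustrative stand-ins for the real service code, and the simulated latency replaces the actual DB/HTTP calls:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OrderService {

    record PaymentDto(String id, String status) {}
    record OrderDto(String id, PaymentDto payment) {}

    private final ExecutorService vexec = Executors.newVirtualThreadPerTaskExecutor();

    // Simulates a blocking call to the payment service.
    PaymentDto fetchPayment(String id) {
        try {
            Thread.sleep(50); // stand-in for network/DB latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new PaymentDto(id, "PAID");
    }

    // Each request gets its own cheap virtual thread; the call site stays
    // sequential and uses a plain blocking get() – no reactive operators.
    OrderDto getOrder(String id) {
        Future<PaymentDto> paymentFuture = vexec.submit(() -> fetchPayment(id));
        try {
            return new OrderDto(id, paymentFuture.get());
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}
```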


📊 Step 3: Benchmarks

We wrote a small benchmark to simulate 10,000 tasks sleeping for 50ms:

runTasks(Executors.newFixedThreadPool(100), 10_000, 50);  
runTasks(Executors.newVirtualThreadPerTaskExecutor(), 10_000, 50);
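`runTasks` is our own helper, not a JDK method. A minimal version (assuming the tasks just sleep to simulate blocking I/O) might look like this:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ExecutorService;

public class BenchmarkRunner {

    // Submits `count` tasks that block for `sleepMillis`, waits for all of
    // them to finish, and reports the wall-clock time. The try-with-resources
    // close() waits for submitted tasks (ExecutorService is AutoCloseable).
    static long runTasks(ExecutorService executor, int count, long sleepMillis) {
        Instant start = Instant.now();
        try (executor) {
            for (int i = 0; i < count; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(sleepMillis);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        }
        long elapsed = Duration.between(start, Instant.now()).toMillis();
        System.out.println(executor.getClass().getSimpleName() + ": " + elapsed + " ms");
        return elapsed;
    }
}
```

The fixed pool can only make progress 100 sleeps at a time (10,000 / 100 × 50 ms ≈ 5,000 ms), while virtual threads all sleep concurrently.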

Results on my laptop:

  • Platform Threads (100 pool) → ~5000ms
  • Virtual Threads (per task) → ~600ms

That’s an 8x boost – without touching RxJava or Reactor.


💡 Key Learnings

  • Virtual threads don’t make the CPU work faster – they shine for I/O-bound workloads.
  • Code remains sequential and readable: no callbacks, no reactive operators.
  • Observability is simpler: check /actuator/threaddump and see thousands of cheap threads.
  • We still need backpressure & rate-limiting – just not giant thread pools.
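That last point is worth a sketch. With virtual threads, a plain java.util.concurrent.Semaphore caps how much work hits a downstream system without capping the thread count. This is our own illustration (class and method names are hypothetical), not code from the project:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class RateLimitedClient {

    // Runs `tasks` jobs on virtual threads, but lets at most `limit` of them
    // execute the "downstream call" at once. Returns the observed peak
    // concurrency, which should never exceed `limit`.
    static int run(int tasks, int limit) {
        Semaphore permits = new Semaphore(limit);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        try (ExecutorService vexec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                vexec.submit(() -> {
                    try {
                        permits.acquire();       // blocks cheaply on a virtual thread
                        try {
                            int now = inFlight.incrementAndGet();
                            peak.accumulateAndGet(now, Math::max);
                            Thread.sleep(5);     // stand-in for the downstream call
                        } finally {
                            inFlight.decrementAndGet();
                            permits.release();
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        }
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println("Peak concurrency: " + run(1_000, 10));
    }
}
```

Waiting on the semaphore is cheap here because a blocked virtual thread releases its carrier; the same pattern on platform threads would tie up OS threads.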

🎉 Final Thoughts

After this exercise:

👉 We don’t have to “go reactive” for scalability anymore.
👉 Virtual threads let us write clean, blocking-style code and still handle thousands of concurrent requests.
👉 Migration effort is minimal – just switch executors, and you’re already gaining.

For us, Java 21 Virtual Threads = simplicity + scalability.
If you’ve been hesitant about Project Loom, now’s the time to dive in.

Check out GitHub Repository: java21-virtual-threads-showcase
