
Full Reactive Stack – Introduction

Build a full end-to-end reactive application with this practical guide that will take you through a real web application example.

This post is a blog version of one part of the Full Reactive Stack with Spring Boot, WebFlux, Angular, RxJS, and EventSource guide. The complete source code is on GitHub.

Guide Overview

This guide has four main chapters and an appendix:

  1. Introduction (this post) - Understanding reactive programming and its benefits
  2. The Backend - Building with Spring Boot, WebFlux and MongoDB
  3. The Angular Frontend - Creating a reactive client application
  4. Conclusions - Comparing reactive vs blocking approaches
  5. Appendix - Additional resources and references

All the source code (spring-boot, Angular, Docker) is available on GitHub: Full-Reactive Stack repository. If you find it useful, please give it a star!

What We’ll Build

This guide focuses on the capabilities of Spring WebFlux and Spring Boot 2 to create a Reactive Web Application. It is supported by a code example that you can build step by step.

To provide realistic examples where Spring is not just the client of the Reactive API, you will complete the stack with a client application in Angular 9. To make it reactive, you'll use Server-Sent Events (SSE) for backend-to-frontend communication. See the figure below for a quick view of the stack we'll build.

This Guide aims to teach Reactive Web development by building an application from scratch (or navigating through the code if you prefer). The chosen technologies are:

  • Spring Boot 2.3
  • Spring WebFlux (and Spring Web, for comparison)
  • Spring Data Reactive Repositories (with MongoDB)
  • Angular 9
  • RxJS
  • EventSource API
Full Reactive Stack – Quick View

Reactive Web Patterns

Reactive patterns and non-blocking programming techniques are widespread nowadays. However, when we focus on the web layer and exposed APIs, the vast majority of applications still use blocking strategies.

To clarify how this Reactive Web approach differs from existing ones, let's compare it with two established techniques:

  • The Servlet 3.0 specification introduced asynchronous support. That means you can optimize the usage of container threads by returning a Callable or Spring's DeferredResult from your controllers. However, from the client's perspective, these are still blocking calls. Besides, the imperative style of that API is far from developer-friendly, so it is seldom used.

  • From the web client's point of view, you can also rely on asynchronous patterns. This is the most popular way of performing a request to the server, using promises. Clients usually perform requests and wait for responses in different threads to avoid blocking the main flow. The response arrives in the background, and then a callback function processes the data and can update the HTML DOM. Again, even though the call is asynchronous, the additional thread still blocks, waiting for a response that might take a long time to complete, and it receives all the data at once.
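To make the "asynchronous but still blocking" point concrete, here is a minimal plain-Java sketch (not from the guide's code) using CompletableFuture. The method and item names are hypothetical; the point is that the background worker thread still parks until the entire response is assembled, and the callback only fires once, with all the data.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncButBlocking {
    // Hypothetical stand-in for a remote call: it only returns once the
    // *entire* response has been assembled.
    static List<String> fetchAllResults() {
        try {
            Thread.sleep(100); // simulate a slow query building the full payload
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return List.of("item-1", "item-2", "item-3");
    }

    public static void main(String[] args) {
        // The main flow is free, but a background thread is parked waiting
        // for the full payload; the callback fires once, with all the data.
        CompletableFuture
                .supplyAsync(AsyncButBlocking::fetchAllResults)
                .thenAccept(results -> results.forEach(System.out::println))
                .join(); // join only so this demo waits before exiting
    }
}
```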

In a Reactive Web approach, threads don't block until all the data is available. Instead, every layer can pass data along as soon as it processes it. Therefore, web clients can start doing their part sooner, which improves the user experience. It's also more efficient, since the different layers can process information continuously instead of sitting idle until a whole chunk of data arrives.

WebFlux and Project Reactor

Spring 5 provides a non-blocking, reactive alternative to the Spring Web MVC approach: Spring WebFlux. It is based on Project Reactor (also part of the Spring family), which follows the Reactive Streams specification. That means WebFlux uses the standard concepts of Reactive Programming: Publishers, Subscribers, Observables, etc.

In the next chapter, we’ll cover WebFlux and Project Reactor in more detail.

Reactive Web: Advantages

So, WebFlux implements a reactive, non-blocking API. That sounds pretty cool, but what exactly does that mean? Why should we care about non-blocking web calls?

Handling Slow Connections

The first reason is slow Internet connections. Having a slow connection is annoying, but it gets even worse with classic blocking HTTP calls. Suppose you search for the best mobile phones on your favorite online store app. You enter the query terms and press ‘Search.’ With a slow connection, you must wait 15 seconds staring at a spinner or blank page. Then, the entire first page with results appears all at once.

That happens because the web server doesn't care about your slow connection, so it responds with a (let's say) 300-kilobyte response of complete JSON with summaries, descriptions, and prices of the first 50 items that match your criteria. On your side (the client's side), the problem lies with classic web client approaches that use blocking HTTP calls (even in different threads), wait until the full HTTP response is received to process the data, and only then render the page.

A better alternative would be for the server, instead of returning the 50 items together, to send the items to the client one by one as soon as they’re found. The client could then adapt its interface to react to these individual item pushes and render them as they come. Switching only one of these two sides of the communication wouldn’t fix this situation; both would need to use a different approach.
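The contrast above can be sketched in a few lines of plain Java. Both methods and the item names are hypothetical; the point is only the shape of the two APIs: one returns everything at the end, the other pushes each item to a callback as soon as it is found.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class StreamingVsBatch {
    // Blocking style: nothing is visible until the whole list is ready.
    static List<String> findAllMatches() {
        List<String> results = new ArrayList<>();
        for (int i = 1; i <= 3; i++) {
            results.add("phone-" + i); // imagine each lookup takes time
        }
        return results; // the client can render only now
    }

    // Streaming style: push each item to the caller as soon as it is found.
    static void findMatches(Consumer<String> onItem) {
        for (int i = 1; i <= 3; i++) {
            onItem.accept("phone-" + i); // the client can render immediately
        }
    }

    public static void main(String[] args) {
        System.out.println("batch: " + findAllMatches());
        findMatches(item -> System.out.println("streamed: " + item));
    }
}
```

As the text notes, switching only one side is not enough: a streaming server paired with a client that waits for the full response (or vice versa) degrades back to the batch behavior.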

Handling Server Load

Something similar to the slow connection issue happens when the server is busy. Let's imagine many users searching for products on that online store simultaneously, while the database is not provisioned for that load. The database becomes slower, and the queries take a few seconds to fetch 50 results. Now, the bottleneck in the client-server interaction is not the web interface but the database connection. However, the result is the same: the user has to wait until the query fully completes to get the list of products.

A better outcome could be achieved if the database connection, instead of returning query results at once (blocking), would open streams with the database clients (another backend layer) and return results as it finds them.

Better User Experience

With a reactive web interface, the user sees three items on the screen per second instead of all at once after fifteen seconds. That’s the first non-blocking advantage: faster data availability, which in our example means better user experience. We, as developers, should aim for that and move away from blocking calls. According to a study from Google, up to 53% of mobile users abandon a site if it takes more than 3 seconds to load.

In a full-reactive stack scenario, the database retrieves results as soon as they’re available. The same applies to the web interface (let’s say HTTP). The Business logic layer should also be prepared to process items reactively, e.g., by using reactive libraries. Finally, to close the loop and benefit from a full-reactive stack, the client’s side should be able to process the data as soon as it arrives, following a subscriber pattern.

Improved Thread Usage

There is an extra advantage that reactive web interfaces share with other asynchronous approaches: better thread usage. In a non-blocking system, you don’t have threads waiting to be completed. Instead, the server parks the client thread quickly so fewer server threads are occupied. Following a Publish-Subscribe pattern, web clients will be notified when new data becomes available. This advantage is very relevant if you have a server application that may perform slowly sometimes, thus accumulating a lot of blocked threads and eventually being unable to process more.

Is Non-Blocking the same as Reactive?

It depends on whom you ask. 'Reactive' is a generic concept that describes an action triggered by a previous event. You could argue that programming techniques following a non-blocking approach are reactive programming styles, because you usually provide a callback function that reacts to the result. In that broad sense of the word 'reactive', using a CompletableFuture, or any other tool based on promises with callbacks, would be considered reactive programming. However, people commonly refer to these libraries and APIs as asynchronous programming.

It's more common to talk about reactive programming when our code uses one of the existing frameworks that follow the Reactive Streams specification. Some popular reactive frameworks are ReactiveX (RxJS, RxJava, etc.), Akka, and Project Reactor (the one used by WebFlux). Java 9 also brought the Reactive Streams interfaces into the standard library as the Flow API, so implementors could build their libraries on a standard Java API. However, that hasn't caught on so far, and most reactive libraries still use their own classes. In any case, all these frameworks implement (with slight differences in naming) patterns like Observables, Publishers, and Subscribers.
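As a minimal, framework-free illustration of those Publisher/Subscriber roles, here is a sketch using the standard-library Flow API (java.util.concurrent) mentioned above; the collect helper exists only for this demo.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    // Subscribes to a publisher, collects every item it emits, and
    // returns the items once the stream completes.
    static List<String> collect(List<String> input) {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                @Override public void onSubscribe(Flow.Subscription s) {
                    s.request(Long.MAX_VALUE); // unbounded demand: fine for a demo
                }
                @Override public void onNext(String item) { received.add(item); }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            input.forEach(publisher::submit);
        } // close() signals onComplete once all items are delivered

        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(collect(List.of("a", "b", "c"))); // prints [a, b, c]
    }
}
```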

Additionally, these reactive frameworks adopt the backpressure concept. The main idea of backpressure is that the subscriber controls the stream, so the consumer can signal the publisher to stop producing data instead of having to accumulate it in buffers. Project Reactor implements backpressure, as we’ll see and demonstrate later in this guide.
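A small sketch of that idea, again with the standard-library Flow API rather than Project Reactor: the subscriber requests exactly one item at a time, so the delivery pace is driven by consumer demand instead of by how fast the producer can emit.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.atomic.AtomicInteger;

public class BackpressureDemo {
    // The subscriber asks for one item at a time, so the publisher can
    // never get ahead of the demand the consumer has signalled.
    static int consumeOneByOne(int total) {
        AtomicInteger consumed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // initial demand: exactly one item
                }
                @Override public void onNext(Integer item) {
                    consumed.incrementAndGet();
                    subscription.request(1); // pull the next item only now
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            for (int i = 0; i < total; i++) {
                publisher.submit(i); // blocks if the subscriber's buffer fills up
            }
        }

        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return consumed.get();
    }

    public static void main(String[] args) {
        System.out.println(consumeOneByOne(5)); // prints 5
    }
}
```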

The Sample Application

To showcase the reactive capabilities, you’ll create a client web application that receives quotes from the book Don Quixote (yes, you can tell I’m Spanish). You’ll open a Server-Sent Events channel instead of asking for the quotes using a standard blocking call. If you know about WebSockets, you can see it as a similar technology where communication is unidirectional (server to client).
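For context on what actually travels over an SSE channel, here is a small sketch (not the guide's code) of the text/event-stream wire format: each event is a few "field: value" lines terminated by a blank line, pushed over a single long-lived HTTP response.

```java
public class SseFrame {
    // Builds one Server-Sent Events frame as it appears on the wire:
    // an optional "event:" line, a "data:" line, and a blank-line terminator.
    static String frame(String event, String data) {
        StringBuilder sb = new StringBuilder();
        if (event != null) {
            sb.append("event: ").append(event).append('\n');
        }
        sb.append("data: ").append(data).append("\n\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        // A hypothetical quote pushed from server to client over a response
        // with Content-Type: text/event-stream.
        System.out.print(frame("quote", "{\"book\":\"Don Quixote\"}"));
    }
}
```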

The backend application will send these events (quotes) once a subscriber (our client application) exists. This part is supported by WebFlux returning a Flux object (don't worry about the terms for now; we'll cover them). On the Angular app's end, you'll model the connection using an EventSource object, mapped to an RxJS Observable<Quote>.

The Angular component subscribes to the Observable, adds the new element to an array, and re-renders the UI.

While we build the Full Reactive Stack, we’ll also create a classic blocking approach. We’ll use it to compare both strategies from the implementation and runtime perspectives.

Next Steps

We’ll dive into the implementation details in the rest of the chapters of this guide.

Continue Reading This Series

This article is part of the Full Reactive Stack series:

  1. Part 1: Introduction (this post)
  2. Part 2: The Backend - Spring Boot, WebFlux and MongoDB
  3. Part 3: The Angular Frontend
  4. Part 4: Conclusions
