Reverse proxies are integral architectural components for running Java applications in enterprise environments, and most production Java deployments place one in front of the application server. Apache Tomcat has a long-standing reputation as a widely used open-source servlet engine and web container. This guide provides an end-to-end look at deploying Apache Tomcat applications behind a secure, high-performance reverse proxy.
Why Do We Need Reverse Proxies With Tomcat?
While Apache Tomcat excels at serving Java web applications, it lacks several enterprise-grade features for security, load distribution, and caching. Out of the box, Tomcat provides little or no support for:
- Authentication gateways for protocols like OAuth, LDAP, and Single Sign-On (SSO)
- Server healthchecks for auto-scaling
- Out-of-the-box DDoS protections
- Dynamic caching for optimized content delivery
Integrating Tomcat with a dedicated reverse proxy server enables these capabilities while simplifying application management.
Key Benefits of Using a Reverse Proxy
Here are some specific benefits of fronting Tomcat with a capable reverse proxy:
Security
- Whitelist source IP ranges
- Integrate firewall and WAF protections
- Terminate HTTPS connections
Load Distribution
- Scale horizontally across multiple Tomcat instances
- Add server health checks
- Prevent overloaded backends through queueing
Caching and Compression
- Cache static assets and API responses
- Reduce bandwidth usage through compression
- Serve minified resources like CSS and JS (via add-on modules) for faster loading
Observability
- Centralize access logs for all backends in one place
- Set up custom tracking and metrics collection
With capabilities like these, a reverse proxy is an essential component for running Java applications successfully in production environments.
Recommended Reverse Proxy Solutions
There are several open source and commercial solutions to choose from when selecting a reverse proxy for Tomcat:
Open Source
- Apache HTTP Server
- Nginx
- HAProxy
Cloud Services
- AWS Elastic Load Balancing (Application Load Balancer)
- Azure Application Gateway
- Cloudflare Spectrum
In this guide, we will focus on configuring Apache HTTPD, a high-performance web server with extensive reverse proxy capabilities.
Apache powers roughly a third of all websites on the Internet. With the worker or event Multi-Processing Modules (MPMs) and efficient connection pooling, Apache can handle thousands of concurrent proxied requests without breaking a sweat.
Let's get started with setting up Apache HTTPD as a flexible, production-ready reverse proxy for Tomcat!
Prerequisites
For this guide, we need:
- Ubuntu 20.04 LTS
- Apache Tomcat 10+ installed from apt/tarball
- Apache HTTPD 2.4+ from apt
Here is a quick setup:
# Install latest OpenJDK LTS
sudo apt install openjdk-17-jdk
# Download and extract Apache Tomcat
wget https://archive.apache.org/dist/tomcat/tomcat-10/v10.1.4/bin/apache-tomcat-10.1.4.tar.gz
tar -xzf apache-tomcat-10.1.4.tar.gz
# Start Tomcat on its default port 8080
apache-tomcat-10.1.4/bin/startup.sh
# Install Apache HTTPD
sudo apt install apache2
With the backend and proxy servers ready, let's configure Apache to forward requests to Tomcat.
Step 1 – Configure Apache for Reverse Proxy
The Apache mod_proxy module does all the heavy lifting for reverse proxy capabilities. Enable proxying in Apache:
sudo a2enmod proxy
sudo a2enmod proxy_http
sudo systemctl restart apache2
Next, open the default virtual host file:
sudo vim /etc/apache2/sites-available/000-default.conf
Add the following ProxyPass directive to forward requests:
<VirtualHost *:80>
    ServerName www.example.com
    # ... SSL, log configs

    ProxyRequests Off
    ProxyPreserveHost On

    <Proxy *>
        Require all granted
    </Proxy>

    # Replace with actual Tomcat address
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
This basic proxy configuration forwards every request from Apache to the Tomcat instance on port 8080.
Let's break down what's happening for clarity:
- A client sends a request to the Apache proxy on port 80
- Rules defined in ProxyPass route this request to Tomcat
- Tomcat handles the request and sends back the response
- The response passes through Apache again before reaching the client
So essentially, Apache is functioning as a middleman, receiving all requests initially and proxying them internally to the Tomcat application server.
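If the application is deployed under a context path rather than at the root, it is usually cleaner to proxy only that path. A minimal sketch, assuming a hypothetical context path /myapp:

```apache
# Forward only the /myapp context to Tomcat (/myapp is a hypothetical example)
ProxyPass        /myapp http://localhost:8080/myapp
ProxyPassReverse /myapp http://localhost:8080/myapp
```

Keeping the same path on both sides avoids having to rewrite URLs generated inside the application.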

With this reverse proxy architecture, Apache both insulates and enhances backend Tomcat servers with additional security, caching, and monitoring capabilities.
Step 2 – Set Up SSL Encryption
Traffic flowing via the reverse proxy can be encrypted using Transport Layer Security (TLS). This prevents sensitive data from being exposed or tampered with during transit.
Let's configure SSL using a free certificate from Let's Encrypt, obtained with the Certbot client. First install Certbot using snap:
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
Next, request an SSL certificate, specifying the proxy's domain:
sudo certbot --apache -d example.com
Certbot will automatically enable the required SSL configurations in Apache.
To enforce TLS connections, update the virtual host to listen on HTTPS (443) instead of HTTP (80):
<VirtualHost *:443>
    # ... SSL Certificate configs added by Certbot
    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
Clients can now securely access the application at https://example.com, with traffic encrypted between the user and the Apache proxy (Apache still forwards to Tomcat over plain HTTP on localhost). Certificates issued through Certbot are valid for 90 days; Certbot installs a scheduled task that renews them automatically before expiry.
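To keep port 80 from serving unencrypted content, it is common to redirect all HTTP traffic to the HTTPS virtual host. Certbot usually offers to add this redirect automatically; if configuring it by hand, a minimal sketch looks like:

```apache
<VirtualHost *:80>
    ServerName example.com
    # Permanently redirect all plain-HTTP requests to the TLS endpoint
    Redirect permanent / https://example.com/
</VirtualHost>
```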
Step 3 – Configure Load Balancing
In most real-world scenarios, we will be running multiple Tomcat instances to distribute workload. Apache can seamlessly proxy requests across a cluster of backends.
Enable the balancer modules first:
sudo a2enmod proxy_balancer
sudo a2enmod lbmethod_byrequests
sudo systemctl restart apache2
Then add the following load balancing configuration:
<Proxy "balancer://myappcluster">
    BalancerMember http://localhost:8080
    BalancerMember http://localhost:8081
    BalancerMember http://localhost:8082
</Proxy>
ProxyPass / balancer://myappcluster/
ProxyPassReverse / balancer://myappcluster/
We define a balancer pool named myappcluster consisting of the various Tomcat members, and update ProxyPass to point to this balancer (note the trailing slash, which keeps path mapping consistent).
Apache HTTPD will now share incoming requests evenly across the available servers using a default round-robin algorithm.
Additional members can be introduced without clients noticing any difference, exhibiting one of the key virtues of using reverse proxies!
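Round-robin balancing works best when unhealthy members are taken out of rotation automatically. On Apache 2.4.21 and later, mod_proxy_hcheck can probe each member; a sketch, assuming a hypothetical /health endpoint exposed by the application:

```apache
# Requires: sudo a2enmod proxy_hcheck (available in Apache 2.4.21+)
ProxyHCExpr ok {%{REQUEST_STATUS} =~ /^[234]/}

<Proxy "balancer://myappcluster">
    # Probe each member every 10s; /health is a hypothetical application endpoint
    BalancerMember http://localhost:8080 hcmethod=GET hcuri=/health hcinterval=10 hcexpr=ok
    BalancerMember http://localhost:8081 hcmethod=GET hcuri=/health hcinterval=10 hcexpr=ok
    BalancerMember http://localhost:8082 hcmethod=GET hcuri=/health hcinterval=10 hcexpr=ok
</Proxy>
```

Members that fail the check are marked out of service until they pass again.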
Implementing Session Replication
When scaling to multiple Tomcat nodes, replicating user sessions is vital for reliability. This can be achieved through solutions like:
- Tomcat Clustering – peers replicate sessions via multicast
- Shared Memcached Instance – sessions persisted in external distributed cache
- Sticky Sessions – requests from one user always routed to same backend
Choosing an optimal session replication strategy is crucial when architecting for scale and failover.
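Of the options above, sticky sessions are the simplest to wire up at the proxy. A sketch using mod_proxy_balancer routes, assuming each Tomcat instance sets a matching jvmRoute in its server.xml (tomcat1/tomcat2 are hypothetical names):

```apache
<Proxy "balancer://myappcluster">
    # route values must match jvmRoute in each Tomcat's server.xml, e.g.
    # <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
    BalancerMember http://localhost:8080 route=tomcat1
    BalancerMember http://localhost:8081 route=tomcat2
</Proxy>

# Tomcat appends ".jvmRoute" to JSESSIONID, letting Apache pin each user to a member
ProxyPass / balancer://myappcluster/ stickysession=JSESSIONID|jsessionid
```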
Step 4 – Enable Caching and Compression
To optimize proxy performance, we leverage Apache's native support for caching and content compression.
First, enable the required modules:
sudo a2enmod expires
sudo a2enmod cache
sudo a2enmod headers
sudo a2enmod deflate
sudo systemctl restart apache2
Add cache expiry rules for static resources:
<Location "/scripts">
    ExpiresActive on
    ExpiresDefault "access plus 1 hour"
</Location>

<Location "/images">
    ExpiresActive on
    ExpiresDefault "access plus 30 days"
</Location>
Matching Cache-Control headers are also set, telling clients to cache responses for the defined intervals.
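Beyond client-side expiry headers, Apache can also cache responses on the proxy itself with mod_cache_disk, so repeated requests for the same resource never reach Tomcat. A minimal sketch (paths and TTLs are illustrative):

```apache
# Requires: sudo a2enmod cache cache_disk
CacheQuickHandler off
CacheRoot /var/cache/apache2/mod_cache_disk
CacheEnable disk /images
# Fall back to a one-hour TTL when the response carries no expiry information
CacheDefaultExpire 3600
```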
Compression can likewise be enabled for text assets:
<Location "/css">
    AddOutputFilterByType DEFLATE text/css
    Header append Vary Accept-Encoding
</Location>
The reverse proxy layer provides a very effective means for reducing origin server load through advanced caching policies.
Diagnosing Issues with Proxy Traffic
While abstracting infrastructure complexity, reverse proxies also introduce an additional potential point of failure. Some common issues encountered:
| Issue | Indications | Mitigations |
|---|---|---|
| Connectivity Problems | HTTP 502/504 errors | Ensure firewall access, server health |
| Traffic Misrouting | 404 File Not Found | Validate ProxyPass rules |
| Overloaded Backends | Slow loading, timeouts | Scale up/out backends; use health checks |
| SSL Termination Problems | Mixed content warnings | Use ProxyPassReverse for full URL rewrite |
Analyzing key diagnostic sources helps uncover proxy bottlenecks:
1. Apache Error Log – records 500/502/504 backend failures
2. Apache Access Log – shows incoming traffic and request latencies
3. Tomcat Logs – confirm backend processing and application errors
Learning to read through these proxied traffic logs pays rich dividends when trying to pinpoint issues.
Here are some useful log directives for troubleshooting:
Apache Error Log
ErrorLog ${APACHE_LOG_DIR}/proxy_error.log
LogLevel warn
Apache Access Log
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog ${APACHE_LOG_DIR}/proxy_access.log combined
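Once these logs are in place, quick shell analysis often reveals problems faster than tracing individual requests. The sketch below tallies responses by status class from a combined-format access log; the log lines, IPs, and paths are fabricated samples for illustration:

```shell
# Create a small sample access log in combined format (fabricated entries)
cat > /tmp/proxy_access_sample.log <<'EOF'
10.0.0.5 - - [10/Feb/2023:10:01:22 +0000] "GET /app/home HTTP/1.1" 200 5123 "-" "curl/7.68.0"
10.0.0.6 - - [10/Feb/2023:10:01:25 +0000] "GET /app/api HTTP/1.1" 502 341 "-" "curl/7.68.0"
10.0.0.7 - - [10/Feb/2023:10:01:31 +0000] "POST /app/login HTTP/1.1" 504 198 "-" "curl/7.68.0"
EOF

# Field 9 of the combined format is the status code; tally by status class.
# A spike in 5xx counts usually points at backend (Tomcat) failures.
awk '{ class = substr($9, 1, 1) "xx"; count[class]++ }
     END { for (c in count) print c, count[c] }' /tmp/proxy_access_sample.log | sort
# → 2xx 1
#   5xx 2
```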
Detailed application-level tracing can also be enabled system-wide using frameworks like OpenTelemetry (the successor to OpenTracing).
High Availability Considerations
In production environments, adequate redundancy needs to be provisioned across the entire proxy pipeline, from the DNS layer to the database tier.
Some high availability (HA) capabilities to evaluate:

- Geographically distributed proxy servers
- Hot standbys for automatic failover
- DNS round robin or multi-CDN deployments
- Backup data stores with replication
Planning for HA avoids the proxy becoming a single point of failure.
Closing Thoughts
Implementing a performant, secure reverse proxy unlocks immense value when serving Java applications with Apache Tomcat. The proxy layer insulates clients from the backend, providing flexibility around choices of programming languages and databases.
Spending time to properly analyze logs and metrics for both the proxy and origin infrastructure provides invaluable visibility into deployed applications. Gradually incorporating best practices around high availability, DevOps processes, and infrastructure as code hardens the overall robustness of proxy-based architectures.