How to Use Locks in Multi-Threaded Java Programs

I once shipped a service that looked perfect in tests, then failed under real load because two threads updated the same cache entry at the same time. The bug was rare, it vanished under a debugger, and it only appeared when traffic spiked. That experience is why I’m meticulous about locks in Java today. You can build fast, readable systems with threads, but you have to guard shared state with care. Locks are the mechanism I reach for when I need clear, explicit control over which thread owns a critical section and when other threads must wait.

In this guide, I’ll show you how to use the Lock API in Java, when a plain synchronized block is not enough, and how to structure your code so it stays safe as your system grows. You’ll see runnable examples with ReentrantLock and ReadWriteLock, patterns for timeouts and interruption, and the pitfalls that cause deadlocks or slowdowns. I’ll also point out where I avoid locks entirely and use other concurrency tools. If you build services, batch jobs, or event-driven systems, you can apply these patterns immediately.

Why locks exist: protecting critical sections without magic

A lock is simply a rule: only one thread (or a controlled set of threads) can enter a critical section at a time. That sounds obvious, but it addresses two invisible problems: data races and memory visibility. A data race happens when two threads read and write the same data without coordination. Memory visibility issues happen when one thread updates a value but another thread keeps reading a stale version. Locks solve both by placing a hard boundary around access to shared state.

I like to explain it using a very simple analogy: imagine a whiteboard in an office where multiple people record delivery times. If everyone writes at once, the board becomes unreadable. A lock is the key to the room. You can step in, update the board, and step out. When you leave, everyone else knows they can see the latest value. That visibility part is critical; Java’s Lock implementations ensure happens-before ordering, so other threads observe your changes once the lock is released.

You should use locks when these conditions are true:

  • Multiple threads mutate the same data structure or object state.
  • You cannot make that state immutable or thread-confined.
  • You need finer control than synchronized gives you, such as timeouts, interruption, or fairness.

You should avoid locks when you can use lock-free tools like atomic variables, concurrent collections, or immutable data structures. Locks are powerful, but they demand discipline. In a large codebase, the explicitness of Lock is a strength because it makes ownership visible and reviewable.

The Lock interface and safe acquisition patterns

The Lock interface sits in java.util.concurrent.locks and gives you more control than synchronized. I think of it as a manual transmission for concurrency: more control, more responsibility. These are the methods I use most:

  • lock(): block until the lock is available.
  • unlock(): release the lock (always in a finally block).
  • tryLock(): attempt to acquire the lock without blocking.
  • tryLock(timeout, unit): wait for the lock for a bounded time.
  • lockInterruptibly(): allow a blocked thread to respond to interruption.
  • newCondition(): create wait/notify-style coordination tied to this lock.

The safest pattern is always: acquire, try, finally, release. It is not optional. I don’t allow code review to pass if unlock is not in a finally block. A lock is like a door that must be closed even if an exception happens.

Here is the smallest safe template I use:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SafeCounter {

    private final Lock lock = new ReentrantLock();
    private int value = 0;

    public void increment() {
        lock.lock();
        try {
            value++;
        } finally {
            lock.unlock();
        }
    }

    public int get() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}

When I want a bounded wait, I use tryLock with a timeout and handle the failure explicitly. This makes it easier to keep a system responsive under load:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ReservationStore {

    private final Lock lock = new ReentrantLock();
    private int reserved = 0;

    public boolean reserve(int count) throws InterruptedException {
        if (lock.tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                reserved += count;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;
    }
}

I treat timeouts as a design choice, not a band-aid. You should only use them if the caller can safely handle a failed acquisition, for example by retrying, deferring work, or switching to a different partition.

ReentrantLock: explicit control, reentry, and fairness

ReentrantLock is the workhorse of the Lock API. It lets the same thread acquire the lock multiple times without deadlocking itself. This is helpful when a public method calls another method that also uses the lock. I use it when I need explicit lock management or advanced features that synchronized does not provide.

Here is a full runnable example of two workers competing for a single lock:

import java.util.concurrent.locks.ReentrantLock;

class Worker implements Runnable {

    private final ReentrantLock lock;
    private final String name;

    Worker(ReentrantLock lock, String name) {
        this.lock = lock;
        this.name = name;
    }

    @Override
    public void run() {
        lock.lock();
        try {
            System.out.println(name + " acquired lock");
            Thread.sleep(1000); // Simulate work
            System.out.println(name + " finished work");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            lock.unlock();
        }
    }
}

public class LockDemo {

    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        Thread t1 = new Thread(new Worker(lock, "Thread-1"));
        Thread t2 = new Thread(new Worker(lock, "Thread-2"));
        t1.start();
        t2.start();
    }
}

Two details matter here. First, the finally block always releases the lock, even if the thread is interrupted. Second, the thread's interrupt status is preserved by calling Thread.currentThread().interrupt(), so callers can still observe the interruption.

ReentrantLock also supports a fairness policy. If you pass true to the constructor, threads acquire the lock in roughly FIFO order. This makes sense when you have many threads and you want to prevent starvation. The tradeoff is throughput: a fair lock can reduce peak performance by around 5–15% in typical JVM workloads because it prevents barging. I use fair locks when a long-lived service must ensure that no request waits forever, even under sustained load.

ReentrantLock fairLock = new ReentrantLock(true);

I recommend using unfair locks by default because they are faster, and switching to fair locks only when you see starvation in production traces.

ReadWriteLock: letting readers run without blocking each other

If your workload is read-heavy, a ReadWriteLock often gives better concurrency than a single exclusive lock. Multiple readers can enter at once, while writers still require exclusive access. This pattern shows up in configuration stores, cached data, and in-memory indexes.

Here is a runnable example using ReentrantReadWriteLock:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class SharedData {

    private final List<String> list = new ArrayList<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Lock readLock = rwLock.readLock();
    private final Lock writeLock = rwLock.writeLock();

    public void add(String value) {
        writeLock.lock();
        try {
            list.add(value);
            System.out.println(Thread.currentThread().getName() + " added: " + value);
        } finally {
            writeLock.unlock();
        }
    }

    public void read(int index) {
        readLock.lock();
        try {
            if (index < list.size()) {
                System.out.println(Thread.currentThread().getName() + " read: " + list.get(index));
            }
        } finally {
            readLock.unlock();
        }
    }
}

public class ReadWriteDemo {

    public static void main(String[] args) {
        SharedData data = new SharedData();
        Thread writer1 = new Thread(() -> data.add("Hi"), "Writer-1");
        Thread writer2 = new Thread(() -> data.add("Hello"), "Writer-2");
        Thread reader1 = new Thread(() -> data.read(0), "Reader-1");
        Thread reader2 = new Thread(() -> data.read(1), "Reader-2");
        writer1.start();
        writer2.start();
        reader1.start();
        reader2.start();
    }
}

The main decision with a ReadWriteLock is read/write ratio. If reads outweigh writes by at least 5–10x, you often get better throughput. If your workload writes frequently, the overhead of the read-write policy can make it slower than a single ReentrantLock. I also avoid it when writers must have low latency, because a swarm of readers can delay them.

A practical rule I use: choose ReadWriteLock for cached data that updates every few seconds but is read thousands of times per second. Stick with ReentrantLock for anything that writes often or must commit quickly.

Conditions: coordinating threads without busy waiting

Sometimes you don’t just need mutual exclusion; you need coordination. Conditions are the Lock-based equivalent of wait() and notify(), but they are safer because they are tied to a specific Lock instance. I use them when one thread must wait for a state change while holding the same lock that guards that state.

Here is a small queue example that blocks when empty and wakes when new data arrives:

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BlockingQueueExample {

    private final Queue<String> queue = new ArrayDeque<>();
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();

    public void put(String value) {
        lock.lock();
        try {
            queue.add(value);
            notEmpty.signal(); // Wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public String take() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                notEmpty.await();
            }
            return queue.remove();
        } finally {
            lock.unlock();
        }
    }
}

Two practices matter here. First, the waiting condition is always checked in a while loop, not if, because spurious wakeups can happen. Second, I signal only after the state is updated. If you signal first, the waiting thread can wake and still see old state.

You should reach for Conditions when you need controlled blocking and wakeup. If you just need a thread-safe queue, I recommend BlockingQueue from the standard library instead. It is tuned and well-tested. Conditions are best when you need custom coordination logic.
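For contrast, the same wait-when-empty behavior takes only a few lines with the standard library. This is a minimal sketch using ArrayBlockingQueue (the class name in the demo is mine), which implements the blocking logic internally:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class StandardQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // A bounded, thread-safe queue from the standard library:
        // put() blocks when the queue is full, take() blocks when it is empty.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
        queue.put("a");
        queue.put("b");
        System.out.println(queue.take()); // FIFO order: prints "a"
    }
}
```

Unless you need custom instrumentation around the transitions, this is the version I would ship.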

Deadlocks, ordering rules, and the mistakes I still watch for

Deadlocks are the failure mode everyone fears, and they show up in the same few patterns. My rule is simple: if you ever need more than one lock, define a strict order and never violate it. That order should be obvious from the code, not hidden in a wiki page.

Common deadlock patterns I see:

  • Nested locks acquired in different orders across threads.
  • Holding a lock while making a call that can re-enter your code.
  • Waiting on a Condition but forgetting that another thread needs the same lock to signal.

I enforce these countermeasures in every codebase I touch:

  • A lock acquisition order chart for any subsystem with more than two locks.
  • Small lock scopes: never hold a lock while doing I/O, network calls, or logging.
  • Time-bounded acquisition for cross-service code paths, typically 50–200 ms.
  • One lock per data structure unless a profiling pass proves that striping is worth it.

Here is a quick example of consistent ordering across two locks:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TransferService {

    private final Lock accountLockA = new ReentrantLock();
    private final Lock accountLockB = new ReentrantLock();

    public void transferAtoB(int amount) {
        accountLockA.lock();
        try {
            accountLockB.lock();
            try {
                // transfer logic
            } finally {
                accountLockB.unlock();
            }
        } finally {
            accountLockA.unlock();
        }
    }
}

If another method ever needs both locks, it must acquire accountLockA before accountLockB too. I document that in code comments or naming conventions, because that is where future readers will see it.
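When the two locks are chosen at run time (say, one per account object), a fixed A-before-B rule is not enough. One well-known way to impose a global order is to compare System.identityHashCode values, with a tie-breaker lock for the rare collision; this is a sketch, and the class and helper names are mine:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocking {
    // Tie-breaker for the rare case where both hash codes are equal
    private static final Lock TIE_BREAKER = new ReentrantLock();

    // Acquire two dynamically chosen locks in a globally consistent order
    public static void withBothLocks(Lock a, Lock b, Runnable action) {
        int ha = System.identityHashCode(a);
        int hb = System.identityHashCode(b);
        if (ha < hb) {
            a.lock();
            try { b.lock(); try { action.run(); } finally { b.unlock(); } }
            finally { a.unlock(); }
        } else if (ha > hb) {
            b.lock();
            try { a.lock(); try { action.run(); } finally { a.unlock(); } }
            finally { b.unlock(); }
        } else {
            // Collision: serialize through the tie-breaker so two threads
            // cannot acquire a and b in opposite orders
            TIE_BREAKER.lock();
            try {
                a.lock();
                try { b.lock(); try { action.run(); } finally { b.unlock(); } }
                finally { a.unlock(); }
            } finally { TIE_BREAKER.unlock(); }
        }
    }
}
```

Every code path that needs both locks then goes through this one helper, which makes the ordering rule enforceable rather than merely documented.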

Performance considerations and modern Java workflows

Locks are not just about correctness; they shape your performance profile. I look for three costs: contention, context switches, and blocking time. If you see many threads competing for a lock, throughput drops because threads spend time waiting instead of working. A single hot lock can reduce a service’s throughput by 30–60% in a high contention path.

My usual performance playbook looks like this:

  • Measure contention with Java Flight Recorder or async-profiler before changing anything.
  • Reduce the critical section to only the necessary state changes.
  • Consider partitioning: multiple locks for independent shards of data.
  • Use ReadWriteLock when reads overwhelm writes by a wide margin.

In modern Java (21+ and beyond), virtual threads change the calculus a bit. Virtual threads can make blocking cheaper because the JVM can park and resume them without tying up OS threads. That does not remove the need for locks, but it makes it safer to block in some places where you previously would avoid it. I still keep critical sections small and avoid blocking under a lock if I can. A virtual thread that blocks under a lock can still delay all other threads that need that same lock.

I also use AI-assisted tooling in 2026 to scan for lock order violations and to flag methods that hold locks across I/O boundaries. These tools are not a replacement for reviews, but they catch the “obvious in hindsight” patterns before they reach production.

Choosing between synchronized and Lock

The synchronized keyword is still a good choice. It is simple, safe, and optimized by the JVM. I use it when:

  • I need a single, straightforward lock with no advanced behavior.
  • The critical section is tiny and clearly scoped.
  • The class will be used by a small team who value minimal ceremony.

I switch to Lock when I need:

  • Timeout or try-based acquisition to keep latency predictable.
  • Interruptible lock acquisition for cancellation and shutdown.
  • More than one condition variable on the same lock.
  • Fairness, lock polling, or explicit monitoring.

In practice, I often start with synchronized and move to Lock when I hit real requirements. What I avoid is premature complexity. Concurrency is hard enough without adding features you don’t use.

Advanced acquisition patterns I rely on

Beyond the basic templates, there are a few patterns that keep systems safe under real-world load.

Interruptible locking for clean shutdowns

A service that never shuts down cleanly is a service you will eventually regret. If threads are blocked acquiring a lock during shutdown, lockInterruptibly() lets you respond to interrupts and exit quickly.

import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleWorker implements Runnable {

    private final ReentrantLock lock;

    public InterruptibleWorker(ReentrantLock lock) { this.lock = lock; }

    @Override
    public void run() {
        try {
            lock.lockInterruptibly();
            try {
                // do work
            } finally {
                lock.unlock();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            // exit cleanly
        }
    }
}

I use this pattern in batch workers and long-lived background jobs. It keeps deployments and restarts predictable, especially in containerized environments.

Timeouts with backoff

If you use tryLock in a hot path, add jitter and backoff. Otherwise, you create a thundering herd of threads that hammer the lock repeatedly.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class BackoffStore {

    private final ReentrantLock lock = new ReentrantLock();

    public boolean updateWithBackoff() throws InterruptedException {
        for (int i = 0; i < 5; i++) {
            if (lock.tryLock(20, TimeUnit.MILLISECONDS)) {
                try {
                    // update state
                    return true;
                } finally {
                    lock.unlock();
                }
            }
            Thread.sleep(5L * (i + 1)); // simple backoff
        }
        return false;
    }
}

I only do this when the caller has a safe fallback. If a failed lock means corrupted state or partial results, a retry loop can be worse than blocking.

Lock scope narrowing

If a critical section grows over time, I pause and refactor. A typical anti-pattern is locking around expensive work that doesn’t need to be synchronized. I make the data update atomic and move expensive work outside the lock.

import java.util.concurrent.locks.ReentrantLock;

public class MetricsStore {

    private final ReentrantLock lock = new ReentrantLock();
    private long count;

    public void record() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
        // expensive logging or batching outside lock
    }
}

This single change can reduce contention dramatically, especially when logging is slow under load.

Real-world scenario: thread-safe cache with TTL

Let me show a more realistic example: a small in-memory cache with expiration. You could use a concurrent map and scheduled cleanup, but I’ll show a lock-based approach that makes the critical section explicit.

import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

public class TtlCache<K, V> {

    private static class Entry<V> {
        final V value;
        final long expiresAt;

        Entry(V value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final ReentrantLock lock = new ReentrantLock();

    public void put(K key, V value, long ttlMillis) {
        long expiresAt = Instant.now().toEpochMilli() + ttlMillis;
        lock.lock();
        try {
            map.put(key, new Entry<>(value, expiresAt));
        } finally {
            lock.unlock();
        }
    }

    public V get(K key) {
        lock.lock();
        try {
            Entry<V> entry = map.get(key);
            if (entry == null) return null;
            if (entry.expiresAt < Instant.now().toEpochMilli()) {
                map.remove(key);
                return null;
            }
            return entry.value;
        } finally {
            lock.unlock();
        }
    }

    public int size() {
        lock.lock();
        try {
            return map.size();
        } finally {
            lock.unlock();
        }
    }
}

This is simple and safe, but I would also note the edges:

  • Instant.now() is relatively expensive; consider using a clock abstraction or caching the time for batch operations.
  • If cache hits are extremely hot, you may want to move to a concurrent map and per-entry atomic updates.
  • If many threads call get() frequently, a ReadWriteLock can reduce contention.

The point is not that this is the “best” cache. The point is that locks make your intention explicit and easy to review.

Edge cases that break lock-based designs

There are a few edge cases I see again and again.

Locking across callbacks

If you hold a lock and call user-provided code, you create a reentry risk. That code might call back into your locked code and deadlock itself. I prefer to copy the required state under lock, then release and call outside the lock.
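A minimal sketch of that copy-then-call pattern, with illustrative class and method names: the listener list is snapshotted under the lock, and user code runs only after the lock is released.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Consumer;

public class EventSource {
    private final ReentrantLock lock = new ReentrantLock();
    private final List<Consumer<String>> listeners = new ArrayList<>();
    private String state = "initial";

    public void addListener(Consumer<String> listener) {
        lock.lock();
        try { listeners.add(listener); } finally { lock.unlock(); }
    }

    public void update(String newState) {
        List<Consumer<String>> snapshot;
        lock.lock();
        try {
            state = newState;
            snapshot = new ArrayList<>(listeners); // copy under the lock
        } finally {
            lock.unlock();
        }
        // Call user-provided code outside the lock: no reentry or deadlock risk
        for (Consumer<String> listener : snapshot) {
            listener.accept(newState);
        }
    }
}
```

The tradeoff is that a listener added concurrently with an update may miss that one notification, which is usually acceptable and far better than a deadlock.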

Mixing intrinsic and explicit locks

Do not mix synchronized on this with ReentrantLock on the same data. You will create a false sense of safety. Pick one locking strategy per shared state.

Forgetting to release in error paths

I still see code where exceptions cause an early return and skip unlocking. The fix is always the same: unlock in finally.

Holding locks during blocking I/O

This is the easiest way to kill throughput. A blocked I/O call can hold the lock for seconds, which cascades to timeouts and backlogs. If you need I/O, capture state under lock, release it, then do the I/O.

Practical scenarios: when I choose each tool

I get asked “when do I use lock X?” constantly, so here is my rough decision guide:

  • I use synchronized for small, simple objects, or when there is no need for timeouts or interruptions.
  • I use ReentrantLock for services that need explicit control, timeouts, or interruption.
  • I use ReadWriteLock for cached data or registries where reads are dominant.
  • I use StampedLock (not shown above) when reads are extremely frequent and I can tolerate more complex code.
  • I use Atomic* classes when the state is a single value or can be updated with CAS.
  • I use concurrent collections (ConcurrentHashMap, ConcurrentLinkedQueue) whenever they solve the problem directly.
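As a quick illustration of the atomic options, here is a hedged sketch (names are mine) using LongAdder for a hot counter and AtomicReference for a swappable snapshot; neither requires a lock:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.atomic.LongAdder;

public class LockFreeExamples {
    // LongAdder: a contention-friendly counter that stripes internally
    private final LongAdder requests = new LongAdder();
    // AtomicReference: publish an immutable snapshot with one atomic swap
    private final AtomicReference<String> config = new AtomicReference<>("v1");

    public void onRequest()              { requests.increment(); }
    public long totalRequests()          { return requests.sum(); }

    public void updateConfig(String next) { config.set(next); }
    public String currentConfig()         { return config.get(); }
}
```

LongAdder trades a slightly stale sum() for much better write throughput under contention, which is exactly the right trade for metrics-style counters.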

The best design is the simplest one that meets correctness and performance goals. The worst design is a complex lock scheme that nobody understands.

Lock striping: scaling a hot lock without rewriting everything

If a single lock becomes hot, I sometimes use lock striping: a set of locks protecting different shards of the data. This spreads contention across multiple locks and can yield large throughput gains.

Here is a simple striped counter map:

import java.util.concurrent.locks.ReentrantLock;

public class StripedCounter {

    private static class Stripe {
        final ReentrantLock lock = new ReentrantLock();
        long value;
    }

    private final Stripe[] stripes;

    public StripedCounter(int stripeCount) {
        stripes = new Stripe[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = new Stripe();
        }
    }

    private Stripe stripeForKey(Object key) {
        int idx = (key.hashCode() & 0x7fffffff) % stripes.length;
        return stripes[idx];
    }

    public void increment(Object key) {
        Stripe stripe = stripeForKey(key);
        stripe.lock.lock();
        try {
            stripe.value++;
        } finally {
            stripe.lock.unlock();
        }
    }

    public long sum() {
        long total = 0;
        // lock all stripes in order to avoid deadlocks
        for (Stripe stripe : stripes) {
            stripe.lock.lock();
        }
        try {
            for (Stripe stripe : stripes) {
                total += stripe.value;
            }
        } finally {
            for (int i = stripes.length - 1; i >= 0; i--) {
                stripes[i].lock.unlock();
            }
        }
        return total;
    }
}

The tradeoff is complexity. You also need a strict lock order when aggregating across stripes. Still, for high-contention counters, this can reduce lock wait time by multiples.

ReadWriteLock pitfalls and how I mitigate them

ReadWriteLock is useful but easy to misuse.

  • Writer starvation: With a heavy stream of readers, writers can be delayed. I mitigate this with fair mode or by limiting reader concurrency in hot paths.
  • Upgrading locks: You cannot safely “upgrade” from a read lock to a write lock without releasing the read lock first. That creates a race window. I avoid upgrade patterns and design methods to acquire the right lock from the start.
  • Overhead: ReadWriteLock has more bookkeeping than a simple lock. I only use it if I’ve measured a read-heavy workload.
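Instead of upgrading, I release the read lock, take the write lock, and re-check the state, because another thread may have changed it in the gap. A sketch of that pattern (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Function;

public class ComputeCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public String computeIfAbsent(String key, Function<String, String> fn) {
        rw.readLock().lock();
        try {
            String v = map.get(key);
            if (v != null) return v; // fast path: shared read lock only
        } finally {
            rw.readLock().unlock(); // release BEFORE taking the write lock
        }
        rw.writeLock().lock();
        try {
            // Re-check: another thread may have filled the entry in the gap
            String v = map.get(key);
            if (v == null) {
                v = fn.apply(key);
                map.put(key, v);
            }
            return v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

The re-check under the write lock is what makes the release-then-reacquire window safe.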

If you want optimistic reads with a cheaper path, consider StampedLock (not shown here). It provides a non-blocking read that can be validated later. It’s powerful, but the API is more complex and not reentrant.
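For completeness, here is a minimal sketch of the optimistic-read pattern with StampedLock, close to the canonical example in its documentation: read without blocking, then validate the stamp and fall back to a real read lock if a writer intervened.

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticPoint {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = sl.writeLock();
        try { x += dx; y += dy; } finally { sl.unlockWrite(stamp); }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead(); // no blocking, just a stamp
        double cx = x, cy = y;               // read possibly racy values
        if (!sl.validate(stamp)) {           // a writer intervened: fall back
            stamp = sl.readLock();
            try { cx = x; cy = y; } finally { sl.unlockRead(stamp); }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }
}
```

Note that StampedLock is not reentrant and its stamps must be managed by hand, which is why I only reach for it after measuring.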

Conditions in real systems: bounded buffers and state machines

Conditions shine when you build bounded buffers or state machines. Here’s a more realistic bounded queue that blocks on full and empty states.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedQueue<T> {

    private final Queue<T> queue = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();

    public BoundedQueue(int capacity) {
        this.capacity = capacity;
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (queue.size() == capacity) {
                notFull.await();
            }
            queue.add(item);
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                notEmpty.await();
            }
            T item = queue.remove();
            notFull.signal();
            return item;
        } finally {
            lock.unlock();
        }
    }
}

This is correct, but if you don’t need custom behavior, the standard ArrayBlockingQueue is simpler and more optimized. I only reach for Condition-based designs when I need extra logic or instrumentation around state transitions.

Diagnosing lock issues in production

I treat lock problems as observability problems. If you can’t see them, you can’t fix them. Here are the signals I collect:

  • Lock hold time percentiles (p50/p95/p99) to detect long critical sections.
  • Lock contention counts to see hot spots.
  • Thread dumps during incidents to identify blocked stacks.
  • Slow request traces with annotations around lock acquisition.

If I suspect a lock-based bottleneck, I use a profiler to validate it. The common mistake is to guess and refactor prematurely. Measure first, then change.

Common pitfalls checklist (the ones I still review for)

I keep a mental checklist whenever I review concurrency code:

  • Is every lock() matched with unlock() in a finally block?
  • Is any lock held across I/O, logging, or external calls?
  • Are there nested locks without a documented ordering rule?
  • Is tryLock failure handled safely and clearly?
  • Are Condition.await() calls wrapped in while, not if?
  • Does any code mix intrinsic locks and explicit locks on the same state?

If any of these are violated, I pause and fix before moving on.

Alternatives to locks: when I deliberately avoid them

There are many cases where I refuse to use locks even if they would “work.”

  • Single-writer designs: A single thread owns the state; other threads communicate via queues. This eliminates shared state entirely.
  • Immutable objects: Replace an object with a new one rather than mutating it. Then swap a single reference atomically.
  • Concurrent collections: ConcurrentHashMap, CopyOnWriteArrayList, BlockingQueue are often all you need.
  • Atomics: AtomicInteger, LongAdder, AtomicReference are great for simple counters or state pointers.

If you can avoid locks, you should. But if you can’t, use them with intention and discipline.

Comparing approaches: classic vs modern in practice

Here’s how I frame the choice between traditional locking and modern alternatives:

  • Single counter updated by many threads: ReentrantLock traditionally; LongAdder or AtomicLong as the modern choice.
  • Shared config with heavy reads: ReadWriteLock traditionally; an immutable snapshot behind an AtomicReference as the modern choice.
  • Producer/consumer queue: a Condition-based queue traditionally; BlockingQueue as the modern choice.
  • Hot mutable map: striped locks traditionally; ConcurrentHashMap as the modern choice.
  • Complex state machine: Lock plus Conditions traditionally; an actor-style single-threaded loop as the modern choice.

I like to start simple and modern, then fall back to explicit locks when I need stronger guarantees or a tighter critical section.

A deeper look at memory visibility and happens-before

Locks aren’t only about mutual exclusion. They also provide memory visibility guarantees. When a thread releases a lock, all writes in that critical section become visible to any thread that later acquires the same lock. This is the happens-before relationship. Without it, one thread could read a stale value even if there is no data race.

This matters for:

  • Flags or state transitions (e.g., “ready” flags).
  • Caches where you build a value then publish it to readers.
  • Multi-step updates that must appear atomic.

When I read code, I always check that the state is both protected from races and visible under the Java Memory Model. Locks give you both if used correctly.
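A small sketch of safe publication under one lock (class and field names are mine): the writer completes every field before releasing, so a reader that acquires the same lock sees the whole update or none of it.

```java
import java.util.concurrent.locks.ReentrantLock;

public class PublishedConfig {
    private final ReentrantLock lock = new ReentrantLock();
    private String host;
    private int port;
    private boolean ready;

    // Writer: set all fields, then flip the flag, all inside one critical section
    public void publish(String host, int port) {
        lock.lock();
        try {
            this.host = host;
            this.port = port;
            this.ready = true; // becomes visible once the lock is released
        } finally {
            lock.unlock();
        }
    }

    // Reader: acquiring the same lock guarantees it observes the writer's fields
    public String endpointIfReady() {
        lock.lock();
        try {
            return ready ? host + ":" + port : null;
        } finally {
            lock.unlock();
        }
    }
}
```

Without the lock (or a volatile flag), a reader could see ready == true while host or port is still stale; the happens-before edge from unlock to lock rules that out.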

Handling lock contention with partitioning and sharding

If contention is high, I look for natural partitions in the data. For example, if a cache key space is large, I split it into shards and lock per shard. If I have per-customer state, I use a lock per customer or per bucket of customers. This keeps the critical sections independent and avoids global bottlenecks.

A practical rule: if 90% of contention comes from 10% of keys, sharding often pays off. It’s more code, but it can transform performance without a large redesign.

Testing lock-based code

Concurrency bugs are tricky because they are timing-dependent. I use three layers of testing:

  • Unit tests for correctness of small critical sections.
  • Stress tests that run many threads and randomized sequences.
  • Soak tests that run for minutes or hours to uncover rare races.

I also use tooling that can systematically explore interleavings where possible. Even a basic randomized test can expose hidden races if you run it enough times.
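A basic stress test can be as simple as many threads hammering one locked counter and asserting the exact total at the end. This is a sketch with illustrative names, sized small so it runs quickly:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class CounterStressTest {
    private final ReentrantLock lock = new ReentrantLock();
    private long count;

    void increment() {
        lock.lock();
        try { count++; } finally { lock.unlock(); }
    }

    // Run threads * perThread increments; the result must be exact
    public long run(int threads, int perThread) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (int i = 0; i < perThread; i++) increment();
                done.countDown();
            });
        }
        done.await(); // happens-before: count is fully visible after this
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return count;
    }
}
```

If you replace the locked increment with an unsynchronized count++ and rerun this a few times, the lost updates usually show up quickly, which makes it a good demonstration as well as a regression test.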

A practical checklist for adding a new lock

When I add a lock to a class, I run this checklist to avoid mistakes:

  • What exact state does this lock protect? I document it in the class or field comment.
  • Is this lock ever held across I/O or external calls? If yes, I refactor.
  • Will any method need multiple locks? If yes, I define ordering rules now.
  • Should a caller be able to time out or interrupt? If yes, I use tryLock or lockInterruptibly.
  • Is a concurrent collection or atomic type simpler? If yes, I choose that first.

This seems simple, but it prevents a surprising number of bugs.

Putting it all together: a realistic service scenario

Imagine a service that tracks user sessions with a last-access timestamp and a count of active sessions. You have frequent reads, occasional writes, and a need to expire old sessions. Here is a lock-based approach that is safe and easy to reason about.

import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SessionStore {

    private static class Session {
        long lastAccess;

        Session(long lastAccess) { this.lastAccess = lastAccess; }
    }

    private final Map<String, Session> sessions = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public void touch(String sessionId) {
        rw.writeLock().lock();
        try {
            Session s = sessions.get(sessionId);
            if (s == null) {
                sessions.put(sessionId, new Session(Instant.now().toEpochMilli()));
            } else {
                s.lastAccess = Instant.now().toEpochMilli();
            }
        } finally {
            rw.writeLock().unlock();
        }
    }

    public boolean isActive(String sessionId, long maxIdleMillis) {
        rw.readLock().lock();
        try {
            Session s = sessions.get(sessionId);
            if (s == null) return false;
            return (Instant.now().toEpochMilli() - s.lastAccess) <= maxIdleMillis;
        } finally {
            rw.readLock().unlock();
        }
    }

    public int size() {
        rw.readLock().lock();
        try {
            return sessions.size();
        } finally {
            rw.readLock().unlock();
        }
    }
}

This code is safe, but I would still consider alternatives: an immutable snapshot map updated periodically, or a concurrent map with atomic updates. I choose this design when I want explicit, easy-to-review locking semantics.

Summary: my guiding principles

If you take only a few ideas from this guide, let them be these:

  • Locks are about correctness and visibility, not just exclusivity.
  • Keep critical sections small and obvious.
  • Always release in finally and avoid blocking I/O under a lock.
  • Use timeouts and interrupts when you need responsive shutdowns or bounded latency.
  • Measure contention before optimizing; don’t guess.
  • Prefer simpler tools when they solve the problem.

I use locks regularly, but I treat them with respect. When done well, they make concurrency explicit and safe. When done poorly, they create hidden bottlenecks and rare, painful bugs. My goal is to make lock ownership and scope so clear that the next engineer can reason about it without reading my mind. That clarity is what keeps systems reliable under real-world load.
