I still see teams lose hours to “impossible” numeric bugs that turn out to be one line of comparison logic. The classic pattern is sorting prices, scores, or sensor readings and then discovering that a few values refuse to land where you expect—especially when NaN, infinities, or signed zero sneak in. Another common one: a comparator built from subtraction works in staging but breaks in production when values get large or oddly distributed.
When I’m comparing floating‑point numbers in Java and the result will influence ordering (sorting, min/max selection, priority queues, deduping, cache keys), I reach for Double.compare(d1, d2). It’s small, it’s fast, and it encodes a total ordering that stays consistent across your codebase.
I’ll walk you through what Double.compare() guarantees, where it behaves differently from naïve < and > checks, and how I use it in real systems—sorting domain objects, handling NaN, dealing with -0.0, and choosing when not to use it (like tolerance-based equality).
What Double.compare() Really Does (and Why I Prefer It)
The method signature is simple:
public static int compare(double d1, double d2)
The return value follows the standard comparator contract:
- Returns 0 when d1 is numerically equal to d2.
- Returns a negative number when d1 is numerically less than d2.
- Returns a positive number when d1 is numerically greater than d2.
So why not just write:
return d1 < d2 ? -1 : (d1 > d2 ? 1 : 0);
Because real floating‑point data includes edge cases that you will eventually hit:
- NaN (Not-a-Number) isn’t “less than”, “greater than”, or “equal to” anything using normal comparisons.
- -0.0 and 0.0 compare as equal with ==, but sometimes you actually want a stable, total ordering.
- Sorting and ordered collections depend on consistent comparison rules. If your comparator behaves strangely, collections behave strangely.
Double.compare() bakes in well-defined ordering rules that are consistent and safe to use anywhere you need a comparator.
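For intuition, here is a hedged sketch of how such a total ordering can be built. The class and method names are mine for illustration; the real JDK implementation follows a similar shape, trying the ordinary relational operators first and then falling back to the bit representation (via the real Double.doubleToLongBits) so -0.0 and NaN still get a defined place.

```java
// Sketch only: one way to build a total ordering over doubles.
// Double.doubleToLongBits canonicalizes NaNs and gives -0.0 a distinct
// bit pattern, so the fallback resolves the cases <, >, and == cannot.
public class TotalOrderSketch {
    static int totalCompare(double d1, double d2) {
        if (d1 < d2) return -1;  // neither operand is NaN if this is true
        if (d1 > d2) return 1;
        long b1 = Double.doubleToLongBits(d1);
        long b2 = Double.doubleToLongBits(d2);
        return Long.compare(b1, b2); // orders -0.0 < 0.0, puts NaN above +Infinity
    }

    public static void main(String[] args) {
        System.out.println(totalCompare(-0.0, 0.0) < 0);                            // true
        System.out.println(totalCompare(Double.NaN, Double.POSITIVE_INFINITY) > 0); // true
        System.out.println(totalCompare(Double.NaN, Double.NaN) == 0);              // true
    }
}
```

The fallback only runs when the relational operators cannot decide, which is exactly the NaN and signed-zero cases.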
Return Values and the Ordering Contract You Can Rely On
A comparator is more than “is A bigger than B?” It’s a contract that must be consistent, especially for sorting and for structures like TreeSet, TreeMap, heaps, and binary searches.
Here’s the practical contract I keep in mind:
- Anti-symmetry: sign(compare(a, b)) == -sign(compare(b, a)).
- Transitivity: if a > b and b > c, then a > c.
- Consistency: calling it multiple times doesn’t flip results.
For everyday numeric values, Double.compare(d1, d2) matches what you expect from math. The tricky part is: it also defines a stable ordering for values that don’t behave like normal numbers.
A basic runnable example
This mirrors the shape I use in quick sanity checks:
public class DoubleCompareBasic {
public static void main(String[] args) {
double d1 = 1023d;
double d2 = 1023d;
int result = Double.compare(d1, d2);
if (result == 0) {
System.out.println("d1 == d2");
} else if (result < 0) {
System.out.println("d1 < d2");
} else {
System.out.println("d1 > d2");
}
}
}
If you swap values, the sign flips exactly as you’d expect.
Why I avoid subtraction-based comparators
You’ll see code like this in the wild:
// Don't do this
return (int) (d1 - d2);
This is broken in multiple ways:
- It loses information due to truncation.
- It can overflow if you do the same trick with integers (people often copy/paste the pattern).
- For doubles, the subtraction can produce NaN, Infinity, or -Infinity, and your cast turns that into nonsense.
Even when it “works,” it can violate comparator transitivity, and that’s where sorting can throw exceptions or silently misorder elements.
Floating-Point Edge Cases: NaN, Infinities, and Signed Zero
This is the section that saves teams the most time, because it’s where “I didn’t know doubles could do that” shows up.
NaN: the value that breaks normal comparisons
In IEEE 754 floating point (what Java uses), NaN has special comparison behavior:
- NaN == NaN is false.
- NaN < anyNumber is false.
- NaN > anyNumber is false.
So if you write custom comparator logic using < and >, you can easily end up treating NaN as “equal” to everything (because neither < nor > is true), or you can end up with inconsistent outcomes.
Double.compare() gives you a deterministic place to put NaN in an ordering.
Infinities: real values, extreme behavior
Infinities actually do compare in the way your intuition wants:
- Double.NEGATIVE_INFINITY is less than every finite value.
- Double.POSITIVE_INFINITY is greater than every finite value.
The mistakes I see with infinities are usually indirect:
- Division by zero creates infinities in surprising places (1.0 / 0.0 gives positive infinity).
- Overflow in intermediate computations turns a large number into infinity, and then ordering logic “mysteriously” changes.
Double.compare() behaves consistently here too.
-0.0 vs 0.0: equal, but not identical
This one surprises people:
- 0.0 == -0.0 is true.
- But the bit patterns differ, and some numeric operations can preserve the sign.
When I need stable ordering (especially for sorting), having -0.0 and 0.0 land consistently matters. Double.compare() defines an order between them.
A runnable edge-case demo
Run this and look at the output. I keep a snippet like this around for debugging “why did sorting do that?” issues.
import java.util.Arrays;
public class DoubleCompareEdgeCases {
public static void main(String[] args) {
double[] values = {
3.5,
Double.NaN,
-0.0,
0.0,
Double.POSITIVE_INFINITY,
Double.NEGATIVE_INFINITY,
2.0
};
System.out.println("Original: " + Arrays.toString(values));
Double[] boxed = Arrays.stream(values).boxed().toArray(Double[]::new);
Arrays.sort(boxed, Double::compare);
System.out.println("Sorted: " + Arrays.toString(boxed));
System.out.println("Compare -0.0 vs 0.0: " + Double.compare(-0.0, 0.0));
System.out.println("Compare 0.0 vs -0.0: " + Double.compare(0.0, -0.0));
System.out.println("Compare NaN vs 1.0: " + Double.compare(Double.NaN, 1.0));
System.out.println("Compare 1.0 vs NaN: " + Double.compare(1.0, Double.NaN));
}
}
What I’m looking for:
- NaN should not behave like “equal to everything” in the sort.
- -0.0 and 0.0 should land in a consistent order.
If you’ve ever had a TreeSet that “loses” values or behaves inconsistently, odds are high you were feeding it a comparator that didn’t define a total order in the presence of NaN.
How Double.compare() Relates to ==, Double.equals(), and Double.compareTo()
A lot of confusion comes from mixing three different ideas:
1) Numeric equality (== for primitives)
2) Object equality (Double.equals() for boxed values)
3) Ordering (compare, compareTo, or a Comparator)
Here’s how I keep it straight.
Primitive == (numeric equality with special cases)
- 0.0 == -0.0 is true.
- NaN == NaN is false.
This makes == a poor fit for “are these values effectively the same?” when your values might contain NaN. It’s also a poor fit for ordered collections, because you don’t get an ordering—only equality.
Double.equals() (bit-level-ish semantics for boxed values)
For boxed Double, .equals() treats NaN as equal to NaN, and it distinguishes 0.0 from -0.0.
That’s often what you want for keys in hash-based structures (HashMap, HashSet) if you intentionally want those distinctions. But it’s not “math equality.” It’s “object equality with special floating-point rules.”
Practical implications I’ve seen:
- If you put 0.0 and -0.0 into a HashSet, you end up with two entries (because .equals() treats them as different).
- If you put multiple NaN values into a HashSet, they don’t explode the set size (because .equals() treats NaNs as equal).
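Both behaviors fit in one self-contained demo (the class name is mine): two zeros survive as separate entries, while the two NaNs collapse into one.

```java
import java.util.HashSet;
import java.util.Set;

public class BoxedEqualsDemo {
    public static void main(String[] args) {
        Set<Double> set = new HashSet<>();
        set.add(0.0);
        set.add(-0.0);       // kept: Double.equals() distinguishes signed zeros
        set.add(Double.NaN);
        set.add(Double.NaN); // collapsed: Double.equals() treats NaN as equal to NaN
        System.out.println(set.size()); // 3
    }
}
```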
Double.compareTo() vs Double.compare()
- Double.compare(d1, d2) works on primitives.
- Double.valueOf(a).compareTo(Double.valueOf(b)) works on boxed values (prefer valueOf; the new Double(a) constructor is deprecated).
In practice, when I’m sorting boxed Double values, I usually prefer Double::compare anyway because it’s clear, avoids surprises, and works nicely as a method reference.
Total Ordering: Why It Matters More Than You Think
When I say “total ordering,” I mean: every pair of values has a defined order, even the weird ones. That’s the property that keeps the following operations sane:
- Arrays.sort / Collections.sort
- PriorityQueue ordering
- TreeMap / TreeSet
- Collections.binarySearch
- dedupe-by-order patterns (sort then collapse)
If your comparator is not a total ordering, the failures are rarely obvious. Instead you see symptoms:
- Elements appear “missing” in TreeSet.
- TreeMap overwrites entries unexpectedly.
- Sorting seems nondeterministic across runs.
- Binary search returns inconsistent indices.
My rule: if a double is involved in ordering, I default to Double.compare() unless I can clearly justify a different ordering.
Sorting in Real Code: Comparators for Domain Objects
Comparing raw doubles is rarely the end goal. Usually you’re sorting objects: trades, products, telemetry events, ranking models, or shipping rates.
Here’s the pattern I recommend in modern Java:
- Use Comparator.comparingDouble(...) for readability. Under the hood, it uses a safe double comparison.
- Chain comparisons with .thenComparing(...) to break ties.
Example: sorting sensor readings by severity
Imagine each reading has a score (higher is worse) and you want stable ordering by score, then by timestamp.
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
public class SensorReadingSort {
public static void main(String[] args) {
List<SensorReading> readings = new ArrayList<>();
readings.add(new SensorReading("pump-7", Double.NaN, Instant.parse("2026-01-10T10:15:30Z")));
readings.add(new SensorReading("pump-7", 98.2, Instant.parse("2026-01-10T10:15:31Z")));
readings.add(new SensorReading("pump-7", 98.2, Instant.parse("2026-01-10T10:15:29Z")));
readings.add(new SensorReading("pump-7", -0.0, Instant.parse("2026-01-10T10:15:32Z")));
readings.add(new SensorReading("pump-7", 0.0, Instant.parse("2026-01-10T10:15:33Z")));
Comparator<SensorReading> byScoreThenTime =
Comparator.comparingDouble(SensorReading::score)
.thenComparing(SensorReading::timestamp);
readings.sort(byScoreThenTime);
for (SensorReading r : readings) {
System.out.println(r);
}
}
record SensorReading(String deviceId, double score, Instant timestamp) {}
}
If you decide you want NaN last (or first), you can make that explicit instead of relying on default ordering.
Explicit NaN placement: push unknown scores to the end
In ranking systems, I often treat NaN as “unknown” and place it last.
import java.util.Comparator;
public class NaNLastComparator {
public static Comparator<Double> nanLast() {
return (a, b) -> {
boolean aNaN = a.isNaN();
boolean bNaN = b.isNaN();
if (aNaN && bNaN) return 0;
if (aNaN) return 1; // a after b
if (bNaN) return -1; // a before b
return Double.compare(a, b);
};
}
}
This is one of those cases where I don’t just trust defaults. I encode the business rule: “unknown goes last.”
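Used on a list, the rule reads exactly like the sentence. The snippet below inlines the same NaN-last comparator (class name mine) so it runs on its own; note Arrays.asList rather than List.of, so the list is sortable in place.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class NaNLastDemo {
    public static void main(String[] args) {
        List<Double> scores = Arrays.asList(2.0, Double.NaN, 1.0, Double.NaN, 3.0);
        Comparator<Double> nanLast = (a, b) -> {
            if (a.isNaN() && b.isNaN()) return 0;
            if (a.isNaN()) return 1;   // a after b
            if (b.isNaN()) return -1;  // a before b
            return Double.compare(a, b);
        };
        scores.sort(nanLast);
        System.out.println(scores); // [1.0, 2.0, 3.0, NaN, NaN]
    }
}
```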
Traditional vs modern comparator style (what I’d choose)
Traditional approach → modern approach:
- Collections.sort(list, (a, b) -> a < b ? -1 : (a > b ? 1 : 0)) → list.sort(Double::compare)
- list.sort((x, y) -> Double.compare(x.value, y.value)) → list.sort(Comparator.comparingDouble(Type::value))
- Nested if blocks for tie-breaking → a .thenComparing(...) chain
- NaN handling usually forgotten → NaN placed explicitly
I still use Double.compare() directly inside custom comparators, but Comparator.comparingDouble makes intent obvious and keeps code tidy.
Practical Sorting Recipes I Use in Production
Once you’ve committed to a total ordering, the next step is making that ordering match the business meaning. Here are patterns I keep reusing.
Sorting descending (highest first)
For “top scores first,” I don’t negate the compare result (it’s easy to get wrong when you later add tie-breakers). I prefer reversed():
Comparator<SensorReading> highScoreFirst =
Comparator.comparingDouble(SensorReading::score).reversed()
.thenComparing(SensorReading::timestamp);
If I need custom NaN placement, I’ll wrap the comparator and then reverse the finite part (or define a separate “descending” rule explicitly).
Stable tie-breakers: always pick a deterministic second key
If two objects can have identical scores, I always add a tie-breaker that can’t be identical for distinct objects (timestamp, ID, sequence number). This prevents subtle nondeterminism in logs and tests.
Comparator<Order> byPriceThenId =
Comparator.comparingDouble(Order::price)
.thenComparing(Order::id);
Sorting “unknown” values last: null and NaN together
In domain models, “missing” often shows up as null (boxed Double) or as NaN (primitive result of computation). I like to handle both in one place:
import java.util.Comparator;
public final class DoubleOrdering {
private DoubleOrdering() {}
public static Comparator<Double> nullsAndNaNsLast() {
return Comparator.nullsLast((a, b) -> {
boolean aNaN = a.isNaN();
boolean bNaN = b.isNaN();
if (aNaN && bNaN) return 0;
if (aNaN) return 1;
if (bNaN) return -1;
return Double.compare(a, b);
});
}
}
Then usage becomes obvious:
list.sort(DoubleOrdering.nullsAndNaNsLast());
When teams struggle with numeric bugs, a big part of the fix is making this kind of ordering reusable and consistent.
PriorityQueue and “Top K” Selection with Double.compare()
Sorting the entire list is not always the best approach. If you only need the top 100 scores out of a million, a heap is typically a better fit.
Example: keep the top K highest scores
I implement this as a min-heap of size K. The root is the smallest among the top K; if a new value is larger, it replaces the root.
import java.util.Comparator;
import java.util.PriorityQueue;
public class TopK {
public static PriorityQueue<Double> topKHighest(int k) {
return new PriorityQueue<>(k, Double::compare); // min-heap
}
public static void offerTopK(PriorityQueue<Double> heap, double value, int k) {
if (heap.size() < k) {
heap.offer(value);
return;
}
if (Double.compare(value, heap.peek()) > 0) {
heap.poll();
heap.offer(value);
}
}
}
If NaN is possible and you want it treated as “unknown,” decide whether you want it ignored, last, or first. I often reject it before offering:
if (Double.isNaN(value)) return;
The key point: PriorityQueue assumes your comparator behaves well. Double.compare() keeps that assumption safe.
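Wiring those pieces together (class and method names are mine), with NaN rejected up front per the rule above:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

public class TopKDemo {
    static List<Double> topK(double[] values, int k) {
        PriorityQueue<Double> heap = new PriorityQueue<>(k, Double::compare); // min-heap
        for (double v : values) {
            if (Double.isNaN(v)) continue;              // treat NaN as "unknown": skip it
            if (heap.size() < k) {
                heap.offer(v);
            } else if (Double.compare(v, heap.peek()) > 0) {
                heap.poll();                            // evict the smallest of the top K
                heap.offer(v);
            }
        }
        List<Double> result = new ArrayList<>(heap);
        Collections.sort(result);
        return result;
    }

    public static void main(String[] args) {
        double[] data = {5.0, 1.0, Double.NaN, 9.0, 7.0, 3.0, 8.0};
        System.out.println(topK(data, 3)); // [7.0, 8.0, 9.0]
    }
}
```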
Using Double.compare() for Min/Max, Clamping, and Thresholds
Sorting isn’t the only place comparison matters.
Min/Max selection (without branching surprises)
For max selection:
double best = a;
if (Double.compare(b, best) > 0) best = b;
If NaN is in the mix and your semantics are “ignore NaN,” handle that explicitly:
static double maxIgnoringNaN(double a, double b) {
if (Double.isNaN(a)) return b;
if (Double.isNaN(b)) return a;
return Double.compare(a, b) >= 0 ? a : b;
}
Threshold comparisons: pick the rule up front
When I see this:
if (value >= threshold) { ... }
I ask one question: “What do we want if value is NaN?”
- If NaN should fail the check (most common), the plain comparison is fine.
- If NaN should be treated as unknown and logged, handle it first.
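The reason the question matters: NaN fails every ordered comparison, so a plain >= silently routes NaN down the “else” path. A tiny demo (class name mine):

```java
public class ThresholdNaNDemo {
    public static void main(String[] args) {
        double value = 0.0 / 0.0;            // NaN produced by a computation
        System.out.println(value >= 1.0);    // false
        System.out.println(value < 1.0);     // false too: NaN fails both directions
        if (Double.isNaN(value)) {
            System.out.println("unknown value, handled explicitly");
        }
    }
}
```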
I don’t always use Double.compare() here, but I do want consistency. The more your system treats floating-point as “data with edge cases,” the fewer surprises you get.
Binary Search and Sorted Data Structures
If you sort using one comparator but search using a different comparison logic, you can create off-by-one bugs that look like “binary search is broken.” It’s not. Your ordering is inconsistent.
Example: sort then binarySearch with the same ordering
For primitive arrays, Arrays.sort(double[]) sorts in natural ascending order and Arrays.binarySearch(double[], key) uses the same rules.
For boxed lists or custom ordering (like NaN last), keep the comparator consistent:
// Sort
list.sort(DoubleOrdering.nullsAndNaNsLast());
// Search
int idx = java.util.Collections.binarySearch(list, target, DoubleOrdering.nullsAndNaNsLast());
If you’ve ever searched for a value and got a negative insertion point that “makes no sense,” the mismatch between sorting and searching is a prime suspect.
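A minimal sanity check of “same ordering for sort and search” (class name mine), here with plain Double::compare used for both operations:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortSearchConsistency {
    public static void main(String[] args) {
        Comparator<Double> ordering = Double::compare;  // the one ordering, used twice
        List<Double> xs = new ArrayList<>(List.of(3.0, 1.0, 2.0, 0.5));
        xs.sort(ordering);
        int idx = Collections.binarySearch(xs, 2.0, ordering);
        System.out.println(xs + " -> index of 2.0 is " + idx); // index 2
    }
}
```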
When Double.compare() Is the Wrong Tool: Tolerances and Decimal Money
Double.compare() answers one question: “how do these two floating‑point values order under a total ordering?”
Sometimes that’s not what you mean.
Tolerance-based equality (epsilon comparisons)
If you’re comparing measurements, geospatial coordinates, or computed values, you often care about “close enough.”
For example, if I’m checking whether a CPU usage estimate is stable, I might treat differences under 0.0001 as equal. Double.compare() will not do that—it treats every representable value as distinct.
Here’s a pattern I actually use:
public class DoubleTolerance {
public static int compareWithTolerance(double a, double b, double tolerance) {
double diff = a - b;
if (Math.abs(diff) <= tolerance) return 0;
return diff < 0 ? -1 : 1;
}
public static void main(String[] args) {
double expectedLatency = 12.0000;
double observedLatency = 12.00007;
System.out.println(compareWithTolerance(observedLatency, expectedLatency, 0.0001));
System.out.println(Double.compare(observedLatency, expectedLatency));
}
}
If tolerance is part of the business meaning, encode it directly. Don’t pretend strict ordering equals “same.”
A better “close enough” check: relative tolerance
Absolute tolerance is great when values have a known scale. If values vary widely (say 0.001 to 1,000,000), relative tolerance is often safer.
static boolean almostEqual(double a, double b, double relTol, double absTol) {
if (Double.isNaN(a) || Double.isNaN(b)) return false;
if (Double.isInfinite(a) || Double.isInfinite(b)) return a == b;
double diff = Math.abs(a - b);
if (diff <= absTol) return true;
double largest = Math.max(Math.abs(a), Math.abs(b));
return diff <= largest * relTol;
}
I keep both absTol and relTol because real data often has both small-value noise and large-value scale.
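To see why the relative term matters, run the same absolute gap at two scales. The almostEqual method is inlined from above so this compiles standalone (wrapper class name mine):

```java
public class RelativeToleranceDemo {
    static boolean almostEqual(double a, double b, double relTol, double absTol) {
        if (Double.isNaN(a) || Double.isNaN(b)) return false;
        if (Double.isInfinite(a) || Double.isInfinite(b)) return a == b;
        double diff = Math.abs(a - b);
        if (diff <= absTol) return true;
        double largest = Math.max(Math.abs(a), Math.abs(b));
        return diff <= largest * relTol;
    }

    public static void main(String[] args) {
        // A gap of ~0.1 means very different things at each scale:
        System.out.println(almostEqual(1_000_000.0, 1_000_000.1, 1e-6, 1e-9)); // true
        System.out.println(almostEqual(0.001, 0.101, 1e-6, 1e-9));             // false
    }
}
```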
Money and exact decimals
I’m blunt about this: for currency math, avoid double unless the requirements explicitly accept rounding errors.
- Use BigDecimal for exact decimal amounts.
- Or store minor units as long (cents) when that fits.
In pricing and billing systems, I’ve watched double comparisons create phantom pennies that break reconciliation. Double.compare() will be correct for the bit patterns you have, but it won’t fix the underlying representation choice.
Common Mistakes I See (and the Fix I Apply)
1) Using == for computed doubles
If values are computed through multiple steps, == often fails because binary floating point can’t represent many decimal fractions exactly.
What I do instead:
- For ordering: use Double.compare().
- For “close enough”: use a tolerance comparison.
- For money: use BigDecimal or integer minor units.
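Here is the mistake in action (class name mine): the accumulated value is not the decimal you wrote, so == fails while ordering and tolerance checks behave predictably.

```java
public class ComputedDoubleDemo {
    public static void main(String[] args) {
        double sum = 0.1 + 0.2;
        System.out.println(sum == 0.3);                   // false
        System.out.println(sum);                          // 0.30000000000000004
        System.out.println(Double.compare(sum, 0.3) > 0); // true: sum sorts after 0.3
        System.out.println(Math.abs(sum - 0.3) <= 1e-9);  // true: "close enough"
    }
}
```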
2) Forgetting NaN exists
If your data can include missing values, division by zero, invalid parsing, or placeholder math, NaN will show up.
My fix:
- Decide a rule: should unknown values sort first or last?
- Encode it in one comparator helper.
- Reuse that helper everywhere.
3) Mixing boxed Double with primitives and ignoring nulls
Double.compare(double, double) takes primitives. Your domain models might use Double for “optional.”
If null is possible, I never write:
// This can throw NullPointerException
list.sort((a, b) -> Double.compare(a, b));
Instead, I make null-handling explicit:
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
public class NullableDoubleSort {
public static void main(String[] args) {
// List.of rejects null elements, so use Arrays.asList here
List<Double> riskScores = Arrays.asList(0.7, null, 0.2, Double.NaN, 0.9);
// Nulls last, NaN last among non-null
Comparator<Double> comparator =
Comparator.nullsLast((a, b) -> {
boolean aNaN = a.isNaN();
boolean bNaN = b.isNaN();
if (aNaN && bNaN) return 0;
if (aNaN) return 1;
if (bNaN) return -1;
return Double.compare(a, b);
});
riskScores.stream().sorted(comparator).forEach(System.out::println);
}
}
4) Writing a comparator that isn’t consistent with equals
If you use a comparator inside TreeSet/TreeMap, equality is defined by compare(a,b)==0, not by .equals().
That’s not automatically wrong, but you should choose intentionally.
Example: tolerance-based comparators are dangerous for TreeSet because you may “merge” distinct values into one bucket. I keep tolerance comparisons for checks and alerts, not for ordered sets.
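Here is that failure mode made concrete (class name mine). With a 0.5-tolerance comparator, whether a value survives depends on what happens to be in the tree when it arrives:

```java
import java.util.TreeSet;

public class ToleranceTreeSetDemo {
    public static void main(String[] args) {
        // DANGEROUS: tolerance-based "equality" inside an ordered set
        TreeSet<Double> set = new TreeSet<>((a, b) -> {
            if (Math.abs(a - b) <= 0.5) return 0;  // "close enough" => same bucket
            return Double.compare(a, b);
        });
        set.add(1.0);
        set.add(1.4); // "equal" to 1.0 under tolerance -> silently dropped
        set.add(1.8); // within 0.5 of 1.4, but 1.4 isn't stored; kept vs 1.0
        System.out.println(set); // [1.0, 1.8] -- contents depend on insertion order
    }
}
```

The comparator also breaks transitivity: 1.0 ~ 1.4 and 1.4 ~ 1.8, yet 1.0 and 1.8 compare as different.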
5) Sorting by a derived floating-point value that can change
This isn’t a Double.compare() bug, but it often looks like one:
- You compute a score.
- You sort objects by score.
- Later you mutate the underlying inputs.
- Suddenly your “sorted” list doesn’t appear sorted.
My fix is always the same: treat the sort key as immutable during the sort window (or compute the key once and store it).
Debugging Checklist: When Ordering Looks “Wrong”
When I’m on-call and someone says “sorting is broken,” I run a short mental checklist.
1) Print the raw values including edge-case markers
I don’t trust pretty formatting that hides -0.0 or coerces NaN to a string that gets lost in logs. I print with explicit checks:
static String describe(double d) {
if (Double.isNaN(d)) return "NaN";
if (d == Double.POSITIVE_INFINITY) return "+Infinity";
if (d == Double.NEGATIVE_INFINITY) return "-Infinity";
if (d == 0.0 && Double.doubleToRawLongBits(d) == Double.doubleToRawLongBits(-0.0)) {
return "-0.0";
}
return Double.toString(d);
}
2) Verify comparator anti-symmetry
If I suspect comparator bugs, I test random pairs and assert:
- compare(a, b) and compare(b, a) have opposite signs (or are both zero).
3) Confirm you sort and search using the same ordering
If binary search or a TreeMap lookup fails, I check that the comparator is identical across operations.
4) Watch for NaN introduction points
Where do NaNs come from?
0.0 / 0.0Math.sqrt(-1)- parsing invalid input and defaulting to
NaN - subtracting infinities (
+Inf - +InfisNaN)
The fix is often upstream: validate input, guard operations, or mark missing values explicitly.
Testing Double.compare() Behavior (Fast, Focused, and Worth It)
Comparators are on the short list of “tiny code that can break huge systems.” I like to write small tests that lock down behavior—especially when I wrap Double.compare() with business rules like “NaN last.”
Here’s what I typically test:
- finite ordering
- signed zero ordering (if it matters to the business)
- NaN placement
- consistency when used in TreeSet/TreeMap
Minimal JUnit-style tests (conceptual but runnable with JUnit)
import static org.junit.jupiter.api.Assertions.*;
import java.util.Comparator;
import org.junit.jupiter.api.Test;
public class DoubleOrderingTest {
@Test
void compareOrdersFiniteValues() {
assertTrue(Double.compare(1.0, 2.0) < 0);
assertTrue(Double.compare(2.0, 1.0) > 0);
assertEquals(0, Double.compare(2.0, 2.0));
}
@Test
void compareIsAntiSymmetric() {
double a = -3.25;
double b = 9.5;
assertEquals(-Integer.signum(Double.compare(a, b)), Integer.signum(Double.compare(b, a)));
}
@Test
void customNullNaNLastComparatorBehaves() {
Comparator<Double> c = DoubleOrdering.nullsAndNaNsLast();
assertTrue(c.compare(1.0, Double.NaN) < 0);
assertTrue(c.compare(Double.NaN, 1.0) > 0);
assertTrue(c.compare(null, 1.0) > 0);
}
}
If your system uses an ordered structure for caching or dedupe (like a TreeMap keyed by score), these tests pay back quickly.
Performance Notes and How I Validate Behavior in 2026-Style Workflows
Double.compare() itself is tiny. In most apps, the cost is lost in the noise compared to allocations, I/O, database calls, JSON parsing, and so on.
Where performance can matter:
- Sorting millions of boxed Double values: comparator calls and boxing overhead can add up.
- Hot loops in analytics: repeated comparisons plus branch behavior can matter.
If I’m sorting a big numeric dataset, I prefer primitive arrays (double[]) and Arrays.sort(double[]) when possible. When I’m sorting objects, I focus on avoiding extra allocations and keeping comparator logic simple.
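For primitives, the library already encodes these rules: Arrays.sort(double[]) specifies -0.0 before 0.0 and all NaNs at the end, with no boxing or comparator allocation. A quick check (class name mine):

```java
import java.util.Arrays;

public class PrimitiveSortDemo {
    public static void main(String[] args) {
        double[] data = {3.0, Double.NaN, 1.0, -0.0, 0.0};
        Arrays.sort(data); // ascending; -0.0 < 0.0, NaN sorts last
        System.out.println(Arrays.toString(data)); // [-0.0, 0.0, 1.0, 3.0, NaN]
    }
}
```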
What I optimize first (before obsessing over the comparator)
1) Avoid boxing: If you can keep data as primitives, do it.
2) Avoid re-computing sort keys: Precompute derived scores once.
3) Avoid sorting when you only need top K: Use a heap.
4) Avoid needless resorting: If data changes incrementally, consider maintaining a heap or using partial updates.
A micro-benchmark mindset (without pretending to be exact)
In my experience, comparator and boxing overhead show up when you do large sorts repeatedly—think “many times per second” or “huge lists.” If you suspect it’s slow, measure it with a proper harness. Modern teams typically use a micro-benchmark tool (and run it on a stable machine profile) to compare:
- Double::compare vs custom branching comparator
- boxed Double[] vs primitive double[]
- stream sorting vs in-place list sorting
I also validate correctness with small, focused tests that include:
- NaN, Infinity, -Infinity
- -0.0 and 0.0
- a mix of normal values
And yes, I often ask an AI assistant to generate edge-case sets or to review comparator logic, but I still run the tests myself. Comparators are on the short list of “smallest code, biggest blast radius.”
Alternatives and Related Tools: When I Reach for Something Else
Double.compare() is a foundation, but not the only tool.
Comparator.comparingDouble(...) (my default for objects)
It reads well and removes boilerplate:
list.sort(Comparator.comparingDouble(Product::rating));
DoubleSummaryStatistics for aggregates
If I need min/max/avg and counts, I use built-in aggregators. Then I handle NaN explicitly if needed.
BigDecimal and compareTo for exact decimals
For money and many “human decimals,” BigDecimal.compareTo is the correct comparison tool:
amountA.compareTo(amountB)
It has its own set of gotchas (scale, construction from strings vs doubles), but it aligns better with decimal business rules.
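The scale gotcha in one snippet (class name mine): equals() on BigDecimal compares scale as well as value, so numeric comparisons should go through compareTo, and construction should go through strings.

```java
import java.math.BigDecimal;

public class BigDecimalCompareDemo {
    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("1.10");
        BigDecimal b = new BigDecimal("1.1");
        System.out.println(a.equals(b));         // false: scales differ (2 vs 1)
        System.out.println(a.compareTo(b) == 0); // true: numerically equal
        // Construct from strings; new BigDecimal(1.1) drags in binary noise
        System.out.println(new BigDecimal(1.1)); // starts 1.1000000000000000888...
    }
}
```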
ULP-based comparisons for numeric algorithms
If I’m doing numeric methods where I care about representable spacing, I sometimes use ULP-based logic (Math.ulp) or bit-level comparisons. That’s specialized, but it’s worth mentioning because it’s the kind of “deep fix” that saves scientific and financial modeling code.
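A hedged sketch of what ULP-based logic can look like. The withinUlps helper is a name I’m introducing for illustration; the only real API here is Math.ulp, which returns the spacing from a value to the next representable double:

```java
public class UlpCompareDemo {
    // Hypothetical helper: true when a and b are at most n representable steps apart
    static boolean withinUlps(double a, double b, int n) {
        if (Double.isNaN(a) || Double.isNaN(b)) return false;
        double tol = n * Math.max(Math.ulp(a), Math.ulp(b));
        return Math.abs(a - b) <= tol;
    }

    public static void main(String[] args) {
        double x = 1.0;
        double next = x + Math.ulp(x);                // the very next double above 1.0
        System.out.println(withinUlps(x, next, 1));   // true
        System.out.println(withinUlps(x, 1.0001, 1)); // false: many ULPs apart
    }
}
```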
A Practical “Decision Guide” I Use
When someone asks me “how should we compare doubles here?”, I answer with a small decision tree:
1) Do you need ordering (sorting, TreeMap, PriorityQueue)?
– Yes → start with Double.compare().
2) Do you need business-specific rules for unknown/missing?
– Yes → wrap Double.compare() with explicit null/NaN placement.
3) Do you mean equality with tolerance (“close enough”)?
– Yes → write a tolerance function; don’t use it inside TreeSet/TreeMap unless you really understand the consequences.
4) Is it money or exact decimal meaning?
– Yes → use BigDecimal or integer minor units.
That’s it. Most comparison bugs come from skipping step 2.
Quick Reference: Copy/Paste Patterns
Here are compact patterns I keep around.
Sort doubles safely
list.sort(Double::compare);
Sort objects by a double field
list.sort(Comparator.comparingDouble(MyType::score));
Sort nulls last, NaNs last
list.sort(DoubleOrdering.nullsAndNaNsLast());
Compare with tolerance (for checks, not ordered sets)
int c = DoubleTolerance.compareWithTolerance(a, b, 1e-6);
Max ignoring NaN
double m = maxIgnoringNaN(a, b);
Closing Thoughts
Double.compare() isn’t exciting, but it’s one of those “quiet correctness” APIs that makes real systems reliable. The moment your data includes NaN, signed zero, or infinities—and it will—hand-rolled comparators built on </> start to leak surprises into sorting, searching, and data structures.
The practical payoff is simple: pick a total ordering (Double.compare()), make your business rules explicit (where to place NaN and null), and keep tolerance-based logic separate from ordering logic. Do that, and a whole class of numeric “ghost bugs” just stops happening.