When a finance team asked me why their monthly revenue totals drifted by a few cents, the issue wasn’t the math—it was the column type. They were summing values stored in FLOAT. That tiny mismatch showed up only after thousands of rows, which made it hard to spot in tests and painful to explain in production. I’ve seen the same problem with sensor data that “looked fine” but aggregated into odd spikes, and with pricing APIs that failed equality checks because 0.1 + 0.2 didn’t exactly equal 0.3. Those experiences forced me to be very precise about NUMERIC, DECIMAL, and FLOAT in SQL Server.
If you’re choosing types for money, measurements, or analytics, these differences matter. I’ll walk you through how SQL Server represents each type, what precision actually means, where performance shifts, and how I decide in practice. I’ll also show complete, runnable T‑SQL examples and the kinds of bugs I see most often so you can avoid them.
The mental model I use: exact vs approximate
I start with a simple question: do you need exact decimal values or are small rounding differences acceptable? NUMERIC and DECIMAL are exact. FLOAT is approximate. That single distinction drives almost every real‑world decision.
- Exact types (NUMERIC, DECIMAL) store decimal digits precisely as you specify. That makes them the default choice for money, quantities, counts, and any value that must be exact across inserts, updates, and aggregates.
- Approximate type (FLOAT) stores a binary floating‑point approximation. It’s great for scientific data, signals, and values that are naturally approximate to begin with. It’s a poor choice for currency or values where equality comparisons must be reliable.
If you remember only one rule: when you need exact decimals, use DECIMAL/NUMERIC. When you need speed and wide range with acceptable rounding noise, use FLOAT.
NUMERIC vs DECIMAL: they’re the same in SQL Server
A lot of people assume NUMERIC and DECIMAL are different. In SQL Server, they are synonyms. Same storage, same behavior, same precision/scale rules.
Both types use the syntax:
DECIMAL(p, s)
NUMERIC(p, s)
- p (precision) = total digits you can store (1–38)
- s (scale) = digits to the right of the decimal point (0–p)
So DECIMAL(10,2) stores up to 8 digits left of the decimal and 2 to the right. NUMERIC(10,2) behaves identically. I typically choose DECIMAL because more people recognize it, but either is fine.
Why exact types matter
Exact types store values as scaled integers internally. That makes them reliable for money and accounting. Adding 0.10 and 0.20 results in 0.30 exactly, every time.
Here’s a short, runnable example:
DECLARE @a DECIMAL(10,2) = 0.10;
DECLARE @b DECIMAL(10,2) = 0.20;
SELECT @a + @b AS SumExact; -- returns 0.30
If your application uses cents, milliliters, or unit quantities that must stay precise, DECIMAL/NUMERIC is the safe path.
FLOAT: binary approximation with big range
FLOAT is SQL Server’s approximate numeric type. Internally it uses binary floating‑point (IEEE‑754). That gives you a huge range and fast math, but it also means some decimals cannot be represented exactly.
SQL Server allows FLOAT(n), where n is the number of bits used to store the mantissa. In practice there are only two variants: n from 1 to 24 is treated as FLOAT(24) (4 bytes), and n from 25 to 53 as FLOAT(53) (8 bytes). If you omit n, you get FLOAT(53), equivalent to an IEEE‑754 double‑precision value.
DECLARE @x FLOAT = 0.1;
DECLARE @y FLOAT = 0.2;
SELECT @x + @y AS SumApprox; -- might display 0.30000000000000004
That output depends on formatting, but the internal value is approximate. That’s fine for statistical or scientific data, not fine for exact comparisons or currency.
A quick analogy
Think of DECIMAL as storing the exact digits you write down on paper. FLOAT is more like storing a precise measurement on a ruler that isn’t marked for every possible decimal. It’s still highly accurate, but not exact for every decimal value.
Precision, scale, and storage: what changes under the hood
Understanding how SQL Server stores these types helps you avoid surprises.
DECIMAL/NUMERIC storage
SQL Server stores DECIMAL/NUMERIC values in a fixed‑length binary format sized to the precision. The more digits you allow, the more bytes it uses.
Rough storage rules you can use mentally:
- precision 1–9: 5 bytes
- precision 10–19: 9 bytes
- precision 20–28: 13 bytes
- precision 29–38: 17 bytes
Example:
CREATE TABLE dbo.Payments
(
PaymentId INT IDENTITY(1,1) PRIMARY KEY,
Amount DECIMAL(19,4) NOT NULL
);
DECIMAL(19,4) uses 9 bytes, which is a good standard for currency with four decimal places.
FLOAT storage
FLOAT uses either 4 or 8 bytes depending on precision. FLOAT(24) or less uses 4 bytes. FLOAT(53) uses 8 bytes. REAL is a synonym for FLOAT(24).
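A quick way to see the precision gap between the two variants (a small sketch; 16777217 is 2^24 + 1, the first integer a 24‑bit mantissa cannot represent):

```sql
DECLARE @r REAL  = 16777217;  -- 4-byte float: 24-bit mantissa
DECLARE @f FLOAT = 16777217;  -- 8-byte float: 53-bit mantissa
-- REAL silently stores 16777216; FLOAT(53) keeps the exact value
SELECT CAST(@r AS BIGINT) AS AsReal, CAST(@f AS BIGINT) AS AsFloat53;
```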
Why this matters
- Exact types can use more space at high precision, but they support reliable comparisons and exact sums.
- FLOAT uses fixed 4/8 bytes and is very fast for large analytic workloads, but you must accept small rounding differences.
In practice, I make the storage tradeoff in favor of correctness for finance and identifiers. For pure numeric analytics or scientific data, I accept float’s rounding behavior.
How comparisons behave (and why bugs appear)
Equality checks with DECIMAL are reliable. With FLOAT they can be surprising. Here’s the classic example:
DECLARE @expected FLOAT = 0.3;
DECLARE @actual FLOAT = 0.1 + 0.2;
SELECT CASE WHEN @expected = @actual THEN 'Equal' ELSE 'Not Equal' END AS CompareResult; -- returns Not Equal
I regularly see tests fail because developers assume float values are exact. They aren’t. If you must compare FLOAT values, compare within a tolerance:
DECLARE @expected FLOAT = 0.3;
DECLARE @actual FLOAT = 0.1 + 0.2;
DECLARE @epsilon FLOAT = 0.0000001;
SELECT CASE WHEN ABS(@expected - @actual) < @epsilon THEN 'Close' ELSE 'Far' END AS CompareResult;
That pattern is normal in analytics or physics data. It is unacceptable for money. That’s why I steer finance columns to DECIMAL.
The decision checklist I use
When I’m designing a schema, I ask myself four questions. The answers point straight to the right type.
- Is exact decimal representation required?
– Yes → DECIMAL/NUMERIC
– No → FLOAT
- Will you aggregate or compare values for equality?
– Frequent equality checks → DECIMAL/NUMERIC
– Aggregate within tolerance → FLOAT OK
- What is the expected range and scale?
– Known range, exact cents or units → DECIMAL(p, s)
– Unknown or huge range → FLOAT
- Is this value user‑facing or transactional?
– User sees it or money is involved → DECIMAL/NUMERIC
– Background calculation or scientific signal → FLOAT
Here’s a real example: if I’m storing temperature readings from an IoT device at 1‑second intervals, I’ll likely use FLOAT to save space and handle wide ranges quickly. If I’m storing the invoice amounts for those devices, it’s DECIMAL(19,4) all day.
Practical T‑SQL examples you can run
Let’s create a table and compare behavior in a few inserts and aggregates.
DROP TABLE IF EXISTS dbo.Measurements;
GO
CREATE TABLE dbo.Measurements
(
Id INT IDENTITY(1,1) PRIMARY KEY,
ExactValue DECIMAL(10,4) NOT NULL,
ApproxValue FLOAT NOT NULL
);
GO
INSERT INTO dbo.Measurements (ExactValue, ApproxValue)
VALUES
(0.1, 0.1),
(0.2, 0.2),
(0.3, 0.3);
GO
SELECT
SUM(ExactValue) AS ExactSum,
SUM(ApproxValue) AS ApproxSum
FROM dbo.Measurements;
If you run that, you’ll typically see ExactSum = 0.6000 while ApproxSum might show a tiny artifact, depending on output formatting. That’s the key difference in practice.
Equality test with a realistic price
DECLARE @price DECIMAL(10,2) = 19.99;
DECLARE @total DECIMAL(10,2) = 9.99 + 10.00;
SELECT CASE WHEN @price = @total THEN 'Match' ELSE 'Mismatch' END AS PriceCheck; -- returns Match
No surprises here. The same logic on FLOAT can give you a surprise.
Performance considerations in 2026 terms
I still see the misconception that FLOAT is always faster. It often is for large, analytic workloads, but the performance difference rarely matters for OLTP workloads unless you’re pushing extreme scale.
Here’s how I frame it:
- Transactional systems (payments, orders, inventory): DECIMAL is worth the tiny overhead. The correctness and auditability pay for themselves.
- Analytics and data science: FLOAT is often faster and smaller, which matters in wide fact tables and big scans. You still need to handle tolerance in comparisons.
- Indexing: DECIMAL columns can be indexed efficiently and support deterministic range queries. FLOAT indexes can be used too, but be careful with equality comparisons because you might miss values due to rounding.
In modern SQL Server workloads with AI‑assisted monitoring, I look at query plans and actual execution times. For typical OLTP queries, the difference is usually within a few milliseconds per operation. For large analytic queries, float can reduce memory and tempdb pressure meaningfully.
Common mistakes I see (and how you should avoid them)
These mistakes show up in code reviews every year. If you avoid them, you’ll save yourself a lot of rework.
1) Storing money in FLOAT
I still see it in legacy systems. It’s the quickest way to get reconciliation errors and lost cents. Store money in DECIMAL with a defined scale.
2) Defaulting to DECIMAL(18,2) without thinking
DECIMAL(18,2) is a common default, but it can be wrong for currencies with more than 2 decimal places or for high‑precision rates. If you store FX rates or crypto prices, you might need DECIMAL(28,10) or higher.
3) Using NUMERIC for “bigger ints”
Some developers use NUMERIC as a stand‑in for large integers. That’s fine, but BIGINT is usually more efficient for integer counts. Save DECIMAL for true fractional numbers.
4) Mixing types in arithmetic
SQL Server promotes types in expressions. If you mix FLOAT and DECIMAL, the result can become FLOAT, which reintroduces approximation. I always cast explicitly when precision matters.
DECLARE @exact DECIMAL(10,4) = 12.3456;
DECLARE @approx FLOAT = 0.1;
SELECT CAST(@exact AS DECIMAL(10,4)) + CAST(@approx AS DECIMAL(10,4)) AS SafeSum;
5) Forgetting scale when rounding is business‑critical
If you store DECIMAL(10,2) and then apply taxes or discounts, you can accidentally round too early. I often store values with higher precision and round only at display time.
When I choose each type
Here are patterns from real systems I’ve designed or reviewed.
Choose DECIMAL/NUMERIC when:
- You store prices, money, or invoices
- You do strict equality checks on values
- You require deterministic rounding
- You need stable aggregates for compliance or audit
Example:
CREATE TABLE dbo.Invoices
(
InvoiceId INT IDENTITY(1,1) PRIMARY KEY,
Subtotal DECIMAL(19,4) NOT NULL,
Tax DECIMAL(19,4) NOT NULL,
Total DECIMAL(19,4) NOT NULL,
CurrencyCode CHAR(3) NOT NULL
);
Choose FLOAT when:
- Values are derived from sensors or scientific calculations
- You handle huge ranges or extremely small values
- Slight rounding noise is acceptable
- You run large analytic scans where storage and compute are tight
Example:
CREATE TABLE dbo.SensorReadings
(
ReadingId BIGINT IDENTITY(1,1) PRIMARY KEY,
SensorId INT NOT NULL,
RecordedAt DATETIME2 NOT NULL,
Temperature FLOAT NOT NULL,
Humidity FLOAT NOT NULL
);
That table works great for time‑series analytics, but I wouldn’t use it for billing calculations.
Edge cases and real‑world scenarios
Currency with more than 2 decimals
Some currencies and many digital assets require 4–8 decimal places. I store those in DECIMAL with sufficient scale and keep rounding rules in the application layer.
Example for an exchange rate:
CREATE TABLE dbo.ExchangeRates
(
RateDate DATE NOT NULL,
BaseCurrency CHAR(3) NOT NULL,
QuoteCurrency CHAR(3) NOT NULL,
Rate DECIMAL(28,10) NOT NULL,
PRIMARY KEY (RateDate, BaseCurrency, QuoteCurrency)
);
Large scientific ranges
Astrophysics or chemistry data may require huge ranges where DECIMAL would be too large or too slow. FLOAT or even FLOAT(24) for space efficiency makes sense there.
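To make the range point concrete, here’s a sketch with illustrative physical constants. No single DECIMAL(p, s) can hold both magnitudes, because 38 total digits cannot cover ~10^23 and ~10^-34 in one column:

```sql
DECLARE @avogadro FLOAT = 6.022e23;  -- roughly 6.022 x 10^23
DECLARE @planck   FLOAT = 6.626e-34; -- roughly 6.626 x 10^-34
-- FLOAT spans both extremes and their product without overflow
SELECT @avogadro AS BigValue, @planck AS TinyValue, @avogadro * @planck AS Product;
```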
Aggregates over billions of rows
Approximate types can help reduce memory usage in hash aggregations or reduce tempdb spill. I measure and choose based on workload, but I still avoid FLOAT if the output is user‑facing or financially significant.
Casting and conversion rules you should know
SQL Server converts between numeric types according to precedence. FLOAT has higher precedence than DECIMAL, which means mixed expressions can become FLOAT unexpectedly.
A subtlety: numeric literals with a decimal point, like 100.00 and 0.15, are typed as DECIMAL in T‑SQL, so SELECT 100.00 * 0.15 stays exact. The surprise comes when a FLOAT variable or column enters the expression:
DECLARE @rate FLOAT = 0.15;
SELECT 100.00 * @rate AS Discount; -- result is FLOAT, because FLOAT outranks DECIMAL
When precision matters, I cast explicitly. I prefer this pattern:
SELECT CAST(100.00 AS DECIMAL(10,2)) * CAST(0.15 AS DECIMAL(10,2)) AS Discount;
It’s explicit, predictable, and safe for business calculations.
A quick comparison table
Here’s the comparison I keep in my notes for quick decisions:
- DECIMAL/NUMERIC: exact, fixed scale; up to 38 total digits; 5–17 bytes of storage; reliable equality comparisons; best for money and exact units. NUMERIC = DECIMAL.
- FLOAT: approximate binary (IEEE‑754); huge range; 4 or 8 bytes; compare within a tolerance; best for science, analytics, and sensors. REAL = FLOAT(24).
Traditional vs modern practices
I still see older patterns in database schemas that don’t align with current best practices. Here’s how I frame it for teams modernizing systems in 2026.
- Traditional: FLOAT for “faster math”; DECIMAL everywhere by default; implicit conversions in mixed expressions; direct = comparisons on float.
- Modern: pick the type from the exactness requirement; size precision and scale per domain; cast explicitly at calculation boundaries; compare floats within a tolerance.
In modern systems, I also rely on AI‑assisted query review tools to spot implicit conversions and precision loss. That doesn’t replace thoughtful design, but it catches issues early.
A short diagnostic checklist for existing schemas
If you’re auditing an existing database, I run through these questions:
- Are any money or price columns stored as FLOAT?
- Are there equality comparisons against FLOAT in queries or stored procedures?
- Are aggregates over FLOAT used for financial reporting?
- Are DECIMAL columns sized larger than needed, causing unnecessary storage?
- Are DECIMAL columns sized too small, causing rounding or truncation?
That quick pass helps me identify the worst risks before deeper tuning.
Rounding, truncation, and why scale choices matter
Most production bugs around DECIMAL aren’t caused by the type itself; they’re caused by premature rounding or undersized scale. I think about two moments: when values are computed, and when they’re stored. If either step trims digits too early, you lock in error that can compound later.
Example: tax calculation with early rounding
Imagine your system calculates tax per unit and rounds to two decimals immediately. Each rounded value looks fine on its own, but the line total can differ from tax computed once on the line subtotal.
DECLARE @unitPrice DECIMAL(10,4) = 17.5100;
DECLARE @qty INT = 3;
DECLARE @taxRate DECIMAL(6,4) = 0.0825;
-- Early rounding: round the tax per unit, then multiply by quantity
DECLARE @taxEarly DECIMAL(12,2) = ROUND(@unitPrice * @taxRate, 2) * @qty; -- ROUND(1.444575, 2) * 3 = 4.32
-- Late rounding: compute tax on the whole line, round once at the end
DECLARE @taxLate DECIMAL(12,2) = ROUND(@unitPrice * @qty * @taxRate, 2); -- ROUND(4.333725, 2) = 4.33
SELECT @taxEarly AS TaxRoundedEarly, @taxLate AS TaxRoundedLate; -- 4.32 vs 4.33
The math looks simple, but the rounding moment changes the final result. I store the intermediate results in higher scale (like DECIMAL(19,6) or more), then apply rounding at the final step required by the business rule.
Example: currency with 4 decimals but totals in 2
Some businesses store unit prices in 4 decimals but settle in 2. If your schema stores only 2 decimal places, you lose precision immediately. I prefer:
CREATE TABLE dbo.LineItems
(
LineItemId INT IDENTITY(1,1) PRIMARY KEY,
UnitPrice DECIMAL(19,4) NOT NULL,
Quantity INT NOT NULL,
LineTotal AS (ROUND(UnitPrice * Quantity, 2)) PERSISTED
);
I store at 4 decimals, but I compute the customer‑visible total as a persisted computed column. It’s accurate, deterministic, and easy to index if needed.
Display vs storage: avoid confusing formatting with precision
A common confusion is “but SQL Server showed me 0.3.” That display is a formatting choice, not a promise of exactness. When you select a FLOAT value, the client might round for display. Internally, the value is still approximate.
If you want to see the underlying behavior, try this:
DECLARE @f FLOAT = 0.1 + 0.2;
SELECT @f AS AsFloat, CONVERT(DECIMAL(38, 20), @f) AS AsDecimal;
That conversion often reveals tiny differences you never saw in your UI. It’s a good diagnostic step when someone insists “it’s the same.”
Choosing precision and scale: the practical method I use
People ask me, “How do you pick DECIMAL(p, s) without overthinking?” I use a three‑step method:
- Define the largest possible value you expect.
– Example: maximum invoice is $10,000,000.00.
- Decide how many decimals you truly need.
– Example: cents = 2.
- Add a safety margin.
– Example: make room for growth or currency variation.
In practice, DECIMAL(19,4) is a solid default for money because it gives you up to 15 digits left of the decimal and 4 on the right, which covers large totals and supports fractional cents for intermediate calculations. For rates, I often use DECIMAL(28,10) or DECIMAL(38,12) depending on how many multiplications are involved.
SQL Server arithmetic rules that influence type choice
SQL Server has deterministic rules for how it computes precision and scale in expressions, and those rules can surprise you. For example:
- Adding two DECIMAL values results in a precision and scale based on both operands. The output scale can be larger than either input scale.
- Multiplication can increase the scale by adding the scales of the operands.
- Division can expand scale more than you expect and may require explicit casting to avoid overflow.
Here’s a quick illustration:
DECLARE @a DECIMAL(10,4) = 123.4567;
DECLARE @b DECIMAL(10,4) = 0.0003;
-- Without explicit casting, SQL Server decides the precision/scale
SELECT @a * @b AS AutoScale;
-- With explicit casting, you control the result
SELECT CAST(@a * @b AS DECIMAL(20,10)) AS ControlledScale;
If you’re doing chained calculations in a stored procedure, I recommend explicitly casting the result at key points. That avoids silent truncation or overflow errors later.
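Division is the rule that bites hardest. When the computed precision would exceed 38, SQL Server caps it at 38 and reduces the scale, which can silently cost you fractional digits. A sketch of the classic surprise:

```sql
-- Both operands are maximum-precision DECIMALs
DECLARE @n DECIMAL(38,10) = 1;
DECLARE @d DECIMAL(38,10) = 3;
-- The computed result type would exceed precision 38, so SQL Server
-- caps precision and truncates the scale down to 6: 0.333333
SELECT @n / @d AS CappedScale;
-- Narrower operands leave room for more fractional digits
SELECT CAST(1 AS DECIMAL(10,4)) / CAST(3 AS DECIMAL(10,4)) AS WiderScale;
```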
Working with FLOAT safely: tolerance and rounding strategies
Sometimes FLOAT is the right choice, but you still need to handle it carefully. Here’s how I do it in practice.
1) Use tolerance for comparisons
Never use direct equality for floats. Instead:
DECLARE @a FLOAT = 0.1 + 0.2;
DECLARE @b FLOAT = 0.3;
DECLARE @eps FLOAT = 1e-12;
SELECT CASE WHEN ABS(@a - @b) < @eps THEN 'Match' ELSE 'Mismatch' END AS FloatCompare;
Pick an epsilon that makes sense for your domain. If you measure distance in meters and your sensor accuracy is ±0.01, then eps = 0.01 might be appropriate.
2) Round to a sensible scale for reporting
For reports and dashboards, you can round floats to a fixed scale for display consistency. Don’t confuse this with making them exact. It just smooths presentation:
SELECT ROUND(Temperature, 2) AS TempRounded
FROM dbo.SensorReadings;
3) Avoid FLOAT for identity-like values
I have seen people store IDs or codes as floats for “space efficiency.” It’s a bad idea. Float can’t reliably represent all integers beyond a certain size. Use INT, BIGINT, or DECIMAL with scale 0 if you truly need a numeric identifier.
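Here’s a sketch of why: 2^53 + 1 is the first integer an 8‑byte float cannot represent, so identifier values above that threshold silently collide:

```sql
-- 9007199254740993 is 2^53 + 1; FLOAT(53) has a 53-bit mantissa
DECLARE @idFloat  FLOAT  = 9007199254740993;
DECLARE @idBigint BIGINT = 9007199254740993;
-- The float round-trips as 9007199254740992: two distinct IDs become one
SELECT CAST(@idFloat AS BIGINT) AS StoredAsFloat, @idBigint AS StoredAsBigint;
```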
Migration advice: moving from FLOAT to DECIMAL
If you inherit a system that used FLOAT for money, don’t just swap the type and hope for the best. I do it in three steps:
- Measure the scope of error.
– Compare sums and key aggregates using ROUND in staging.
- Add new DECIMAL columns.
– Populate them using controlled rounding rules.
- Update code and queries.
– Move calculations and comparisons to the new columns.
Here’s a sample migration approach:
ALTER TABLE dbo.Payments ADD AmountExact DECIMAL(19,4) NULL;
UPDATE dbo.Payments
SET AmountExact = ROUND(AmountFloat, 4);
-- Validate totals
SELECT
SUM(AmountFloat) AS SumFloat,
SUM(AmountExact) AS SumExact
FROM dbo.Payments;
I use a staging environment to reconcile differences and agree on the rounding policy with business stakeholders. Once that’s done, I swap columns or update the application to use the new exact values.
Indexing and query behavior: subtle differences
Most people focus on storage and precision, but indexing and query behavior can also differ.
- Range queries: Both DECIMAL and FLOAT perform well with range filters, but DECIMAL provides more predictable boundaries (e.g., Amount >= 100.00).
- Equality lookups: DECIMAL is deterministic. FLOAT can miss rows because the stored value might not be exactly the same as the literal in your query.
- Computed columns: If you use persisted computed columns and need deterministic behavior, DECIMAL is safer.
If you must index a FLOAT and need to query for “exact” values, I often recommend storing an additional rounded DECIMAL column for indexing and equality checks.
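A sketch of that pattern, assuming the dbo.SensorReadings table from earlier. The persisted computed column gives you a deterministic DECIMAL key you can index and query with plain equality:

```sql
-- Add a rounded, exact shadow column for indexing and equality checks
ALTER TABLE dbo.SensorReadings
    ADD TemperatureKey AS (CAST(ROUND(Temperature, 2) AS DECIMAL(9,2))) PERSISTED;

CREATE INDEX IX_SensorReadings_TemperatureKey
    ON dbo.SensorReadings (TemperatureKey);

-- Deterministic lookup instead of a fragile float equality
SELECT ReadingId, Temperature
FROM dbo.SensorReadings
WHERE TemperatureKey = 21.50;
```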
Real‑world scenarios with concrete choices
Here’s how I make decisions in familiar domains.
E‑commerce pricing
- Product price: DECIMAL(19,4)
- Discount rate: DECIMAL(8,6)
- Computed line total: DECIMAL(19,4) or a computed column
Why? Customer‑facing prices must be exact. Discount calculations need enough scale to avoid drift. I store with high precision and round at checkout.
Telemetry and IoT
- Sensor reading: FLOAT
- Battery voltage: FLOAT
- Calibration constants: DECIMAL(18,10)
Why? Readings are inherently approximate. Calibration constants often come from controlled measurements and need exact storage for reproducibility.
Banking and accounting
- Ledger amount: DECIMAL(19,4)
- Exchange rate: DECIMAL(28,10)
- Interest calculations: DECIMAL(28,12) or higher
Why? Auditable precision matters, and most calculations require intermediate scales that exceed what the final ledger stores.
Manufacturing and inventory
- Quantity on hand: DECIMAL(19,6) if fractional units are possible
- Weight: DECIMAL(19,6)
- Machine calibration: DECIMAL(28,10)
Why? You need exact tracking of partial units and precise calibration values that are stable across runs.
Why SQL Server FLOAT causes “0.1 + 0.2” issues
This is a classic example because it reveals how floats work. The value 0.1 in decimal is a repeating fraction in binary. That means the float representation is already a tiny approximation. When you add 0.1 and 0.2, you add two approximations and the result is another approximation. That’s not a bug. It’s how binary floating point works.
In financial systems, even a tiny repeated error can compound. In scientific systems, that tiny error is often acceptable because measurement error is already present. The mistake is using FLOAT where the domain expects exactness.
Alternatives and related types (and why they matter)
When people ask about NUMERIC, DECIMAL, and FLOAT, they sometimes forget other types that influence the choice.
INT and BIGINT
If the value is always a whole number, use INT or BIGINT. They’re smaller, faster, and simpler than DECIMAL with scale 0.
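You can see the size difference directly with a quick DATALENGTH check:

```sql
-- BIGINT is fixed at 8 bytes; DECIMAL(19,0) needs 9 for the same value
SELECT
    DATALENGTH(CAST(1234567890 AS BIGINT))        AS BigintBytes,  -- 8
    DATALENGTH(CAST(1234567890 AS DECIMAL(19,0))) AS DecimalBytes; -- 9
```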
MONEY and SMALLMONEY
SQL Server has money types, but I avoid them for most new designs. They have fixed scale (4 decimal places) and can be less explicit than DECIMAL. DECIMAL gives you more control and clearer intent.
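One concrete reason I avoid MONEY: its scale is fixed at 4 decimals, so intermediate results get truncated earlier than you might want. A sketch:

```sql
DECLARE @m MONEY = 100.00;
-- MONEY keeps only 4 decimal places in the result
SELECT @m / 3 AS MoneyThird,                                               -- 33.3333
       CAST(CAST(100.00 AS DECIMAL(19,6)) / 3 AS DECIMAL(19,6)) AS DecimalThird; -- 33.333333
```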
REAL
REAL is just FLOAT(24). It uses 4 bytes and has less precision. I rarely use it unless I truly need to minimize storage for massive datasets.
Debugging precision issues in production
If you suspect a numeric type issue in production, here’s the order I use:
- Find columns that use FLOAT in money‑like domains.
- Compare raw values with formatted values.
- Aggregate with higher precision and compare totals.
- Inspect the app code for implicit casts and string conversions.
A quick diagnostic query to find FLOAT columns:
SELECT
t.name AS TableName,
c.name AS ColumnName,
ty.name AS DataType
FROM sys.columns c
JOIN sys.types ty ON c.user_type_id = ty.user_type_id
JOIN sys.tables t ON c.object_id = t.object_id
WHERE ty.name IN ('float', 'real');
This often reveals hidden landmines in older schemas.
How I teach teams to pick the right type
When I’m onboarding a team, I teach a simple decision map:
- If the value is money, quantity, or user‑facing, default to DECIMAL.
- If the value is scientific measurement or derived analytics, consider FLOAT.
- If the value is always whole, use INT/BIGINT.
Then I add two rules:
- Never mix FLOAT with DECIMAL in financial calculations.
- Never rely on equality checks with FLOAT.
That’s usually enough to avoid the big mistakes.
A safer pattern for mixed calculations
Sometimes you can’t avoid mixing types, such as when reading from an external sensor that sends floats but you need to bill based on aggregated totals. In that case, I convert early to DECIMAL with a chosen scale and keep it that way:
DECLARE @raw FLOAT = 12.3456789;
DECLARE @safe DECIMAL(19,6) = ROUND(@raw, 6);
SELECT @safe AS SafeForBilling;
That creates a controlled boundary: the float stays at the edge, and your business logic stays exact.
Testing strategies that catch precision bugs
Precision bugs often slip into production because tests use small datasets. I use two patterns to catch them:
1) Large row count tests
Create a synthetic dataset with thousands or millions of rows and compare sums across types. Float drift shows up quickly.
DROP TABLE IF EXISTS dbo.TestPrecision;
GO
CREATE TABLE dbo.TestPrecision
(
Id INT IDENTITY(1,1) PRIMARY KEY,
ExactValue DECIMAL(10,2) NOT NULL,
ApproxValue FLOAT NOT NULL
);
INSERT INTO dbo.TestPrecision (ExactValue, ApproxValue)
SELECT TOP (100000)
0.01,
0.01
FROM sys.all_objects a CROSS JOIN sys.all_objects b; -- cross join just generates enough rows
SELECT
SUM(ExactValue) AS ExactSum,
SUM(ApproxValue) AS ApproxSum
FROM dbo.TestPrecision;
2) Round‑trip tests
If your application reads and writes values through JSON or external APIs, check for precision loss on round‑trip. Floats can lose digits when serialized.
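You can demonstrate round‑trip loss inside SQL Server itself: the default float‑to‑string conversion keeps only six significant digits, while style 3 (available since SQL Server 2016) emits a fully round‑trippable form. A sketch:

```sql
DECLARE @f FLOAT = 1e0 / 3e0;  -- float division: 0.3333333333333333...
SELECT
    CONVERT(VARCHAR(30), @f)    AS DefaultStyle, -- only 6 significant digits: 0.333333
    CONVERT(VARCHAR(30), @f, 3) AS Style3;       -- 17-digit, round-trippable form
```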
The story behind “floating” errors in reporting
I’ve seen executive dashboards show off‑by‑pennies issues that took weeks to trace. The root cause is usually a combination of:
- floats used in a staging table,
- implicit conversions in a view,
- and a final SUM that gets rounded on the report side.
The fix is almost always to promote the type to DECIMAL earlier and be explicit about rounding and scale. Once that change is in place, the report reconciles with the ledger and the variance disappears.
Type choice in data warehousing and analytics
In a data warehouse, you often see floats used for speed and space. That’s valid for raw metrics (like temperature, distance, or click‑through rates). But for financial aggregates, I still use DECIMAL. This pattern works well:
- Raw fact table: FLOAT for non‑financial metrics
- Financial fact table: DECIMAL for money
- Aggregate tables: DECIMAL for money, FLOAT for statistical measures
This hybrid approach keeps analytics fast without sacrificing correctness where it matters.
A deeper comparison: precision vs range
It helps to separate the ideas of precision and range:
- Precision is how many digits of accuracy you get.
- Range is how big or small the number can be.
DECIMAL gives you great precision at a defined scale but a limited range (up to 38 digits total). FLOAT gives you a huge range but less guaranteed precision for certain decimals. If your values can be massive or tiny (like astronomical distance or microseconds), FLOAT is often the only practical choice.
Practical rules of thumb I actually use
Here are the rules I teach and follow in everyday work:
- Money → DECIMAL, always.
- Rates → DECIMAL with higher scale.
- Sensor data → FLOAT, unless it feeds billing.
- Identifiers and counts → INT/BIGINT.
- Mixed calculations → cast early to the exact type you want.
These five rules solve 90% of the real problems.
A focused troubleshooting checklist for float drift
If you see unexpected decimals in a report, here’s the checklist I run:
- Identify if any columns in the pipeline are FLOAT.
- Inspect intermediate views or computed columns for implicit casts.
- Check whether aggregations happen before or after rounding.
- Confirm how the client tool formats floats.
- Compare results with DECIMAL conversions at key steps.
This is fast, practical, and usually points to the culprit within an hour.
When to deliberately avoid DECIMAL
DECIMAL isn’t always the best choice. I avoid it when:
- I’m storing high‑frequency telemetry where storage cost is critical.
- The values are inherently approximate or noisy.
- The calculations are statistical and don’t require exact equality.
- The numbers span a range that would require excessive precision to cover.
In those cases, FLOAT is not just acceptable—it’s optimal.
A note on client applications and ORMs
Even if your database types are correct, ORMs and client libraries can reintroduce issues if they map types poorly.
- Some ORMs map SQL DECIMAL to floating types in code. That’s dangerous for money.
- Some JSON serializers convert DECIMAL to string, which preserves precision but needs careful parsing.
- Some languages have no built‑in decimal type (or treat it as a library), which can lead to accidental float usage.
When I see precision bugs, I check the entire pipeline: DB column type, ORM mapping, application type, and serialization format.
A concise “what should I pick?” summary
If you only read one section, make it this:
- DECIMAL/NUMERIC when exactness matters (money, quantities, legal or audit requirements).
- FLOAT when approximation is acceptable and the range is huge (science, analytics, sensors).
- INT/BIGINT when you don’t need fractions at all.
My recommendation in plain terms
If you’re building a system that handles money, invoices, totals, or anything people can dispute, choose DECIMAL (or NUMERIC). It’s stable, exact, and defensible. If you’re capturing measurements or doing analytics where tiny rounding differences don’t change decisions, use FLOAT. It’s fast and flexible. And if you’re not sure, err on the side of DECIMAL—the cost of a few extra bytes is smaller than the cost of explaining a rounding bug to finance.
Final takeaway
NUMERIC and DECIMAL are two names for the same exact numeric type in SQL Server. FLOAT is an approximate type with huge range and fast computation but inevitable rounding behavior. The choice isn’t about which is “better.” It’s about whether your domain demands exactness. If it does, choose DECIMAL/NUMERIC. If it doesn’t, FLOAT can be a performance win. That one decision can save you weeks of debugging and a lot of uncomfortable conversations later.