BIGINT is a numeric data type in PostgreSQL used for storing large whole numbers, both positive and negative. With 8 bytes (64 bits) of storage, it can represent values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 (roughly ±9.2 quintillion), enabling calculations at enormous scales. This guide explores the BIGINT data type in depth – when to use it, its maximum capacity, use cases, performance considerations, and examples of BIGINT in action.
Numeric Data Types Overview
PostgreSQL provides a full suite of numeric data types capable of representing integers, decimals, floating point values, and more. The table below summarizes key details:
| Data Type | Storage | Range | Use Cases |
|---|---|---|---|
| SMALLINT | 2 bytes | -32,768 to 32,767 | Small whole numbers |
| INTEGER | 4 bytes | About -2.1 billion to 2.1 billion | Typical whole numbers |
| BIGINT | 8 bytes | About -9.2 quintillion to 9.2 quintillion | Very large whole numbers |
| DECIMAL/NUMERIC | Variable | Up to 131072 digits before the decimal point; up to 16383 digits after the decimal point | Exact, high-precision values |
| REAL/DOUBLE PRECISION | 4 or 8 bytes | 6 or 15 decimal digits of precision | Inexact, floating point values |
BIGINT sits between the smaller INTEGER type and the high-precision DECIMAL/NUMERIC type. It occupies double the storage of INTEGER but can represent much larger integers without precision loss.
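The difference shows up directly in arithmetic. A quick sketch in psql: an expression that overflows INTEGER succeeds once one operand is widened to BIGINT.

```sql
-- INTEGER arithmetic overflows past 2,147,483,647:
SELECT 2147483647 + 1;            -- ERROR: integer out of range

-- Casting one operand to BIGINT widens the whole expression:
SELECT 2147483647::BIGINT + 1;    -- returns 2147483648
```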
When to Use BIGINT
The expanded capacity of BIGINT comes at the cost of additional storage needs. It should only be used when values are reasonably expected to exceed the limits of smaller integer types like INTEGER. Common use cases include:
Primary Keys
Tables requiring a very high number of rows in extreme scale applications may exhaust the storage of a regular INTEGER auto-incrementing primary key. Changing the column to BIGINT extends the maximum capacity.
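For an existing table, the migration itself is a single statement. A minimal sketch, using a hypothetical events table whose INTEGER key is nearing exhaustion:

```sql
-- Widen the key column; PostgreSQL rewrites the whole table under an
-- ACCESS EXCLUSIVE lock, so plan this during a maintenance window.
ALTER TABLE events ALTER COLUMN event_id TYPE BIGINT;
```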
Analytics
Metrics like page views, ad impressions, and other high-volume event data can quickly scale beyond INTEGER's range, making BIGINT a safe choice for these analytics columns.
Financial Data
While DECIMAL is preferred for most financial figures, BIGINT can be useful for monetary amounts stored in minor units (such as cents), and for large aggregates like total company assets and ledger balances.
Scientific Data
Astronomical, physics, and other scientific data often involves extremely large numbers that push INTEGER limits. BIGINT enables exact storage and calculations on these numbers.
In general, BIGINT should be used where the business logic indicates values will become large over time or require substantial arithmetic ranges. Otherwise INTEGER or SMALLINT may be sufficient.
BIGINT vs INTEGER Performance
BIGINT requires double the storage per value compared to INTEGER (8 bytes vs 4 bytes). This can have performance implications:
- Larger database files on disk
- More memory needed to cache data
- Slower queries due to increased I/O and data volume
For maximum database efficiency, identify tables and columns where BIGINT is strictly required rather than electing to use it universally.
Testing with realistic data volumes can quantify whether BIGINT's overhead is acceptable. In many big data pipelines, storage and compute scale linearly and can accommodate some BIGINT usage smoothly.
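The per-value difference can be observed directly with PostgreSQL's pg_column_size function:

```sql
-- BIGINT occupies twice the bytes of INTEGER per value:
SELECT pg_column_size(1::INTEGER) AS int_bytes,    -- 4
       pg_column_size(1::BIGINT)  AS bigint_bytes; -- 8
```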
Using BIGINT in PostgreSQL
Working with the BIGINT data type is straightforward in PostgreSQL. It can be used anywhere INTEGER is accepted, with at most a simple cast.
Creating a BIGINT Column
CREATE TABLE orders (
order_id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
quantity BIGINT NOT NULL
);
This demonstrates creating an auto-incrementing BIGINT primary key column order_id and a BIGINT quantity column that can store counts far beyond INTEGER's range, up to BIGINT's 64-bit maximum.
Inserting BIGINT Values
INSERT INTO sensors (sensor_id, value) VALUES
(101, 9007199254740991),
(102, -9007199254740991);
This shows inserting some very large BIGINT values that would overflow the maximum INTEGER range.
Operating on BIGINT Columns
SELECT (start_reading + end_reading) AS total
FROM sensor_logs;
Basic arithmetic works seamlessly on BIGINT columns, and PostgreSQL returns a BIGINT result. Note that BIGINT arithmetic can still overflow: if a result exceeds the 64-bit range, PostgreSQL raises an error rather than silently wrapping.
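A small sketch of that overflow behavior, and the NUMERIC escape hatch when a calculation may exceed 64 bits:

```sql
-- Overflow past the 64-bit maximum raises an error instead of wrapping:
SELECT 9223372036854775807::BIGINT + 1;   -- ERROR: bigint out of range

-- Casting to NUMERIC first removes the fixed-width limit:
SELECT 9223372036854775807::NUMERIC + 1;  -- returns 9223372036854775808
```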
Type Casting to BIGINT
SELECT CAST(column_a AS BIGINT)
FROM data_table;
The standard CAST expression, or PostgreSQL's :: shorthand, converts other numeric types to BIGINT when needed.
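Both cast forms behave identically; note that fractional values are rounded, not truncated, when converted to an integer type:

```sql
SELECT CAST(3.7 AS BIGINT);  -- returns 4 (rounds to nearest)
SELECT 3.7::BIGINT;          -- returns 4 (equivalent shorthand)
```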
As you can see, BIGINT integrates smoothly into SQL code without significant syntax changes. Developers used to working with INTEGER can easily leverage BIGINT.
Guidelines for Using BIGINT
Here are some key guidelines around effective BIGINT usage in PostgreSQL:
- Only use BIGINT on columns where values are genuinely expected to exceed INTEGER's range as business data grows. Using it universally risks substantial storage overhead.
- Perform testing with realistic future data volumes before deploying BIGINT, and monitor actual usage after launch.
- Consider using DECIMAL instead for high-precision financial or scientific data rather than always defaulting to BIGINT.
- Be selective about using BIGINT on columns that are frequently referenced in joins, sorts, grouping, and other queries; the increased data volume can noticeably impact performance at scale.
Evaluating where INT vs BIGINT vs NUMERIC types fit best is an important aspect of PostgreSQL data modeling. Storage size, data intent and expected query usage help inform the ideal types to apply.
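As part of that evaluation, one way to spot INTEGER-backed keys approaching exhaustion (on PostgreSQL 10 and later) is to inspect their sequences via the pg_sequences view; this is a monitoring sketch, so adapt the threshold to your environment:

```sql
-- Find INTEGER-backed sequences and how much of their range is used:
SELECT schemaname,
       sequencename,
       last_value,
       round(100.0 * last_value / max_value, 1) AS pct_used
FROM pg_sequences
WHERE data_type = 'integer'
  AND last_value IS NOT NULL
ORDER BY pct_used DESC;
```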
Example: Calculating Sigma Values
Here is an advanced example demonstrating BIGINT's power at enormous numeric scales. The sigma summation formula in mathematics has applications in physics, computer science, and other fields for calculating series totals:
σ(n) = Σ k (for k = 1, 2, 3, ..., n) = n(n + 1) / 2
This calculates the sum of all values of k from 1 to n. As n grows, the totals grow quadratically (on the order of n²/2), quickly exceeding smaller integer ranges.
Here is PostgreSQL sample code applying the sigma formula at scale:
CREATE TABLE sigmas (
n BIGINT,
sigma BIGINT
);
DO $$
BEGIN
-- The FOR loop variable is implicitly INTEGER in PL/pgSQL, so cast
-- to BIGINT before multiplying to avoid "integer out of range".
FOR x IN 1..1000000000 LOOP
INSERT INTO sigmas (n, sigma)
VALUES (x, (x::BIGINT * (x + 1) / 2));
END LOOP;
END;
$$ LANGUAGE plpgsql;
SELECT n, sigma
FROM sigmas
WHERE n = 100000000;
This uses an anonymous PL/pgSQL DO block to iterate and calculate sigma for every n from 1 to 1 billion (in practice you would use a far smaller bound, since this writes a billion rows one at a time). It stores the results in the sigmas table, then queries back the sum for n = 100 million.
The sigma value returned is 5,000,000,050,000,000 – roughly 5 × 10^15, far beyond INTEGER's range and requiring BIGINT to store!
While basic in concept, this example highlights exact BIGINT arithmetic at scales from 1 to 1 billion without error or approximation. DECIMAL/NUMERIC could also hold these values exactly, but BIGINT is the only fixed-width integer type in PostgreSQL that can.
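A set-based alternative to the row-by-row loop computes the same sums in a single statement with generate_series (a sketch, shown here with a smaller bound):

```sql
-- Same closed-form sum, computed set-at-a-time. generate_series with a
-- BIGINT argument yields BIGINT values, so no explicit cast is needed:
INSERT INTO sigmas (n, sigma)
SELECT n, n * (n + 1) / 2
FROM generate_series(1::BIGINT, 1000000) AS n;
```

On most systems this runs orders of magnitude faster than the loop, since it avoids a billion separate INSERT statements.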
Future BIGINT Needs
Looking ahead, increasing data volumes will continue requiring wider numeric types in databases like PostgreSQL. While BIGINT covers quintillions of discrete values today, its 64-bit width may one day become insufficient as well.
Potential futuristic use cases like advanced AI analytics, quantum computing or interplanetary network data could push PostgreSQL integer limits further. Expanding the maximum width of future BIGINT implementations may eventually become necessary.
128-bit and even 256-bit integer types already exist in some hardware extensions and compilers, hinting at possibilities ahead for supporting exponentially bigger data. PostgreSQL's flexible numeric options position it well for any future BIGINT evolutions that such next-generation applications may require.
Conclusion
For working with very large integers, PostgreSQL's BIGINT data type delivers a vastly expanded range with no loss of precision. It enables calculations and analytics at enormous numeric scales not possible with smaller integer types like INTEGER.
Careful use of BIGINT balanced with storage and computational overhead considerations allows building high-volume integer data pipelines that can scale smoothly into the future.
As information generation continues growing exponentially across scientific and industrial domains, PostgreSQL's strong numeric data type support makes it an ideal platform for managing extremely large integer data.