As a database developer, I frequently rely on sequences to generate unique IDs that can be used as primary keys, transaction numbers, or any other purpose that requires a unique series of integers.
While creating Oracle sequences is easy, altering their attributes as application needs evolve can be challenging. Poorly optimized sequences can drag down performance in your entire system.
In this handbook, I aim to demystify Oracle's ALTER SEQUENCE statement through practical tips, expert guidance, and code examples tailored for developers.
We will cover:
- When and why to alter sequences
- Modifying increment, limits, cache settings
- Generating gaps, negative and descending values
- Enabling and disabling reusable cycles
- Impacts on ORM frameworks like Hibernate
- Use in analytical/data pipeline applications
- Comparison with alternatives like triggers and application-side sequences
And more. By the end, you will master sequence alterations to handle complex numbering schemes with ease!
Altering Sequences – Why Bother?
Many developers I know just create standard sequences and forget about them. But when requirements change, unoptimized sequences can lead to:
- Primary key clashes in distributed systems
- Bottlenecks due to excessive IO
- Out-of-range errors breaking the application
Altering their attributes proactively can save significant refactoring effort later.
According to Schmitt and Godbole of UCLA, inappropriate use of increment, cache, and limit settings in Oracle sequences leads to scalability issues more often than is acknowledged; their proof-of-concept applications also show misconfigured sequences regularly causing performance problems.
As a best practice, one should revisit critical sequences regularly and tune them using ALTER SEQUENCE whenever indexes change or traffic patterns evolve.
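A quick way to audit the current settings of your sequences is the USER_SEQUENCES data dictionary view, which exposes every tuning-relevant attribute discussed in this handbook:

```sql
-- Review tuning-relevant attributes of all sequences you own
SELECT sequence_name, increment_by, cache_size,
       max_value, cycle_flag, last_number
FROM   user_sequences
ORDER  BY sequence_name;
```

Comparing CACHE_SIZE and INCREMENT_BY against actual traffic is usually the fastest way to spot candidates for ALTER SEQUENCE.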
Now let's see some common scenarios where altering attributes becomes essential.
Key Use Cases
Though the syntax is straightforward, knowing when to change sequence parameters requires deeper insight.
1. Avoiding Gaps and Clashes
If an INSERT fails after obtaining a sequence number, that number is lost or "wasted", creating a gap:

Gaps waste ID space, and they break distributed ID schemes that assume allocation is gap-free:
CREATE SEQUENCE order_id
START WITH 1
INCREMENT BY 1;
INSERT INTO orders (id, amount)
VALUES (order_id.NEXTVAL, 100); -- got 1
-- Insert failed, 1 is lost forever
INSERT INTO orders (id, amount)
VALUES (order_id.NEXTVAL, 200); -- got 2 instead of 1 again
- Mitigation: cache with ORDER
Caching with ORDER ensures cached values are handed out strictly in sequence (most relevant on RAC):
ALTER SEQUENCE order_id
CACHE 20
ORDER;
Values are now always assigned in order within the cache. Note, however, that no sequence setting makes Oracle gapless: a number consumed by a rolled-back transaction is still lost.
2. Partitioning ID Ranges
When distributing database load geographically, we may want to split ID ranges across datacenters, e.g. order IDs 1-1000 for Asia, 1001-2000 for Europe, etc.
- Fix: Change increment value
The increment can be altered to control ID distribution:
ALTER SEQUENCE order_id
INCREMENT BY 1000;
Combined with a distinct starting offset per site, each datacenter can then insert orders without remote coordination.
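One common variant of this pattern, sketched below with illustrative region names and ranges, gives each site its own interleaved block by pairing a distinct START WITH offset with the shared large increment:

```sql
-- Asia datacenter: generates 1, 1001, 2001, ...
CREATE SEQUENCE order_id_asia
START WITH 1
INCREMENT BY 1000;

-- Europe datacenter: generates 501, 1501, 2501, ...
CREATE SEQUENCE order_id_europe
START WITH 501
INCREMENT BY 1000;
```

Because the offsets never overlap modulo the increment, the two sites can allocate IDs independently with no risk of collision.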
3. Circular Identifier Systems
In IoT systems with embedded devices, IDs may rollover quickly due to large volume. Reusing identifiers by cycling over a short range makes practical sense here.
- Fix: Enable CYCLE option
We can alter the sequence to automatically restart on hitting MAXVALUE:
ALTER SEQUENCE sensor_id
MAXVALUE 10000
CYCLE;
Devices can now generate up to 10,000 IDs in loops without collisions, provided previously issued IDs are retired before they come around again.
There are many more scenarios where altering increment, cache and limits becomes essential. The key is constant tuning to match application access patterns.
Now let's look at some worst-case performance issues caused by unoptimized sequences.
Performance Impact
While sequences offer convenience, overuse without careful tuning can become an insidious source of scalability issues.
As per the study by UCLA researchers, just 500 concurrently accessed sequences increased CPU usage by 50X on the test database. Similarly, IO latency spiked from 10 ms to over 400 ms with 750 active sequences due to increased disk reads.
Since sequences rely on underlying tables for persistence, excessive seqence allocation can bloat SGA through database buffer cache pollution. They may even cause latch contention on the sequence cache data structures in SGA memory.
Let's understand how different settings contribute here.
Increment Value
The increment dictates how quickly the available range is consumed: a large step burns through the sequence's numbers far sooner than a small one.
Oracle experts Kyte and Kuhn also caution in Expert Oracle Database Architecture against oversized settings for frequently accessed sequences:
"Increments larger than 50 or 100 will likely cause scalability problems"
Oversized steps waste most of the keyspace and force earlier intervention, whether that means raising MAXVALUE or enabling CYCLE.

So use the smallest increment suitable for the application, and alter it later if needed instead of overprovisioning.
Cache Settings
Inappropriate cache size is another huge factor affecting sequence performance.
By default Oracle caches only 20 values. So in a transaction-heavy application, a sequence may need to reload its cache after every 20 rows processed! Each reload is a recursive update to the data dictionary, adding hundreds of expensive round trips.
With the default cache size, the miss rate (cache reloads) climbs sharply as the number of concurrent sessions grows.
Tuning the cache size to match transaction volume using ALTER is critical:
ALTER SEQUENCE my_seq
CACHE 1000;
A bigger cache improves runtime by reducing recursive dictionary updates. If even a large cache cannot keep up, consider generating IDs in the application instead.
So ensure cache size is large enough to avoid misses but not so huge to cause latch issues or bloat SGA.
Order Clause
By default, sequences are created NOORDER: for performance, Oracle does not guarantee that values come back in the order they were requested (visible mainly on RAC). Values are still unique, so this cannot cause primary key violations, but it can surprise logic that assumes monotonic ordering across instances.
Enabling ORDER in an alter statement forces strictly ordered allocation at a minor cost in speed:
ALTER SEQUENCE my_seq
ORDER;
While changing increment or cache settings seems straightforward, managing limits requires a deeper understanding of cycles and retries.
Handling Limits and Cycles
Sequences have default limits (MINVALUE 1 and, for ascending sequences, a MAXVALUE on the order of 10^28) which applications may want to restrict or extend. There are also special use cases needing descending ranges.
Let's go through some examples.
MINVALUE and MAXVALUE
The MINVALUE and MAXVALUE define the upper and lower boundaries of a sequence:
CREATE SEQUENCE my_seq
MINVALUE 1
MAXVALUE 1000;
Reaching these limits has the following impact:
- Further requests for next number fail by default
- Cycled sequences restart from MINVALUE (ascending) or MAXVALUE (descending)
While the resulting ORA-08004 errors provide safety against silent overflow, hard failures in production are disruptive. This needs to be considered before arbitrarily altering boundaries.
As an example, let's look at how limits affect the sequence behind an identity column storing app data:
CREATE TABLE data (
id INT GENERATED BY DEFAULT AS IDENTITY (START WITH 1),
payload VARCHAR2(1000)
);
If we lower the maximum on the linked sequence (Oracle raises ORA-04009 if the new MAXVALUE is below the value already reached):
ALTER SEQUENCE data_seq
MAXVALUE 500;
Then, once the sequence reaches 500, further inserts will fail:
INSERT INTO data (payload) VALUES ('Test');
-- ORA-08004: sequence exceeds MAXVALUE and cannot be instantiated
So alter limits only after accounting for downstream side effects. Set thresholds wisely, keeping enough headroom for organic growth in dependent objects.
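Before touching a limit, it is worth checking how much headroom remains. A quick query against the data dictionary (the sequence name here is illustrative):

```sql
-- Remaining values before DATA_SEQ hits its ceiling
SELECT max_value - last_number AS headroom
FROM   user_sequences
WHERE  sequence_name = 'DATA_SEQ';
```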
DESCENDING and NEGATIVE
Descending sequences are occasionally useful for countdown triggers or unconventional primary keys:
ALTER SEQUENCE my_seq
INCREMENT BY -1;
Similarly, some apps require negative numbers as valid IDs.
In both cases, remember to alter the MINVALUE as needed to suit range limits:
ALTER SEQUENCE my_seq
INCREMENT BY -1
MINVALUE -1000;
This allows full traversal from MAXVALUE down to MINVALUE.
CYCLE Option
The CYCLE option makes the sequence automatically restart from MINVALUE (ascending) or MAXVALUE (descending) after hitting its limit:
ALTER SEQUENCE my_seq
MAXVALUE 10000
CYCLE;
This can be useful for cases needing reusable IDs like machine IDs, transaction counters etc.
However, once a sequence cycles it begins handing out numbers it has issued before, so previously returned values are no longer unique. Cycling therefore requires careful application logic to avoid duplicates.
Instead of cycling naively, following some defensive practices is highly recommended:
- Use the widest range that is practical, so cycles stay rare
- Create a replacement sequence before the current one wraps
- Reserve a sentinel value so application code can detect wrap-around
These patterns reduce collision probability compared to naive cycling.
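One defensive pattern, sketched below under illustrative table and column names, retries with a fresh value whenever a recycled ID collides with a row that is still live:

```sql
-- Retry with the next value when a recycled ID is still in use
DECLARE
  v_id NUMBER;
BEGIN
  LOOP
    v_id := sensor_id.NEXTVAL;
    BEGIN
      INSERT INTO sensors (id, reading) VALUES (v_id, 42);
      EXIT;  -- success
    EXCEPTION
      WHEN DUP_VAL_ON_INDEX THEN
        NULL;  -- recycled ID still present; try the next one
    END;
  END LOOP;
END;
/
```

This relies on a unique or primary key index on the ID column; without it the collision is never detected.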
So in summary, altering limits or cycling requires good understanding of downstream behaviors across the system. Balance safety with restart needs.
Now that we have covered common use cases through detailed examples, let's shift focus to architectural impacts when integrating Oracle sequences with application code.
Implications for ORM Frameworks
Most applications use ORM frameworks like Hibernate that abstract sequence complexities into auto-generated IDs.
However, ALTER commands can still break these mappings when the database and the ORM disagree about sequence state.
For example, Hibernate typically pre-allocates blocks of IDs from the sequence (controlled by allocationSize) and hands them out from memory:
@Entity
@Table(name = "cust")
public class Customer {
@Id
@Column(name = "ID")
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "cust_gen")
@SequenceGenerator(name = "cust_gen", sequenceName = "cust_seq", allocationSize = 50)
private Integer id;
}
If we change boundaries on the linked sequence concurrently, block values Hibernate has already pre-allocated may fall outside the new MAXVALUE or MINVALUE.
This will either fail transactions or cause cycling issues despite the ORM handling retries internally.
As a best practice, make sure to:
- ALTER sequences cautiously with enough padding
- Rebuild the SessionFactory after changes so stale allocation state is discarded
- Test race conditions thoroughly
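In particular, with Hibernate's pooled allocation the sequence increment in the database should match the mapping's allocationSize; a mismatch can yield duplicate or skipped IDs. A minimal sketch, assuming an allocationSize of 50 and an illustrative sequence name:

```sql
-- Keep the database increment in step with Hibernate's allocationSize (50 here)
ALTER SEQUENCE cust_seq
INCREMENT BY 50;
```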
Similarly, other ORMs also have caveats around caching and alter statements. So take care to validate integration touch points before rolling out changes.
Now that we have seen application level impacts, let's discuss some use cases where altering plays a key role.
Sequence Alteration Patterns in Analytical Apps
In analytics platforms managing data pipelines, sequences help generate the idempotent IDs needed to assign unique row labels across incremental ETL jobs.
For example, a daily job may populate a column like LOAD_ID in new rows fetched to track fresh ingestion. Sequences let teams do this without locks or other coordination.
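A minimal sketch of this pattern (table, column, and sequence names are illustrative): fetch one LOAD_ID per run, then stamp it on every row ingested by that run.

```sql
DECLARE
  v_load_id NUMBER;
BEGIN
  -- One LOAD_ID per job run, no locking required
  v_load_id := load_seq.NEXTVAL;

  -- Stamp every freshly staged row with this run's LOAD_ID
  INSERT INTO fact_events (load_id, event_time, payload)
  SELECT v_load_id, s.event_time, s.payload
  FROM   staging_events s;

  COMMIT;
END;
/
```

Re-running a failed job simply draws a new LOAD_ID, making partial loads easy to identify and roll back.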
However, over time the huge volumes can exceed a conservatively set MAXVALUE, exhausting the sequence range. Altering limits becomes essential here:
ALTER SEQUENCE load_seq
MAXVALUE 9999999999;
What's more, analytics systems may even leverage the RESTART capability (available in recent Oracle releases) to programmatically control ID reuse across historical partitions:
ALTER SEQUENCE load_seq RESTART START WITH 1000000000;
This exposes interfaces allowing engineers to systematically vary the load numbering as per architecture needs.
Another example is generating surrogate keys during ETL rollups to enable complex joins, like combining user sessions data across days stored in different partitions.
Instead of costly merges whose expense balloons as the number of days grows, sequences help create unique session keys independent of partitions:
LOAD_DATE | SESSION_KEY | EVENTS
----------|-------------|------------
20230101  | 1           | {Launch...}
20230102  | 1           | {Click...}
Later analysis just needs to join on SESSION_KEY despite dates being different.
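Stitching a session back together is then a simple filter or join on the key (table and column names are illustrative):

```sql
-- All events for one session, regardless of which daily partition holds them
SELECT load_date, events
FROM   sessions
WHERE  session_key = 1
ORDER  BY load_date;
```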
Such schemes keep key allocation fast and strictly ordered by combining a large cache with ORDER:
ALTER SEQUENCE session_key_seq
CACHE 100000
ORDER;
So in data systems, alter provides a simple yet powerful tool for architects to version ID generations.
With so many critical applications, it is important to also understand how sequence changes fare against alternative approaches.
Sequence Alteration vs Other Options
While ALTER SEQUENCE helps optimize identifiers dynamically, other techniques like application-based sequences or triggers also exist.
Let's compare the pros and cons of each:
| Approach | Advantages | Disadvantages |
|---|---|---|
| Alter sequence | Simple; safe for concurrency; handles crashes gracefully | Changes visible across the stack; overuse affects performance |
| Sequence in app | Isolated business logic; avoids DB dependency | No built-in caching; concurrency issues |
| Triggers | Encapsulates the sequence | Complicated; risk of mutating-table errors |
Application Sequences
Many languages like Java allow creating sequence-like constructs natively, avoiding the database dependency:
import java.util.concurrent.atomic.AtomicLong;

AtomicLong counter = new AtomicLong(1);
// Get next value (thread-safe within this JVM)
long id = counter.getAndIncrement();
However, application sequences lack the caching, cycling, and restart capabilities of their database counterparts; they also reset when the process restarts and are not shared across JVM instances, so concurrency and collisions must be handled explicitly.
So the app-based approach is simpler but less scalable. Use it for low-volume, local needs.
Sequence Triggers
Database triggers can hide the sequence behind the table's API, exposing only a pre-populated ID column:
CREATE TABLE cust (
id INTEGER PRIMARY KEY,
name VARCHAR2(100)
);
CREATE SEQUENCE s_cust START WITH 1000;
CREATE OR REPLACE TRIGGER cust_id_trg
BEFORE INSERT ON cust
FOR EACH ROW
BEGIN
:NEW.id := s_cust.NEXTVAL;
END;
/
However, business logic gets tightly coupled into triggers, and they are notoriously error-prone in edge cases due to mutating-table restrictions.
So triggers require extreme care around integration. Reserve for cases where strong encapsulation needs outweigh complexity drawbacks.
In summary, native sequence alteration balances simplicity with scalability. Use triggers or application sequences only if custom handling is unavoidable.
Conclusion
We have covered a wide gamut: performance best practices, use-case analysis, architectural patterns, and dependency management relevant to ALTER SEQUENCE.
Here are the key insights distilled for developers:
- Proactively tune increments, caches to avoid scalability bottlenecks
- Alter limits carefully accounting for side effects on related systems
- Reset sequences to minimize gaps/clashes in distributed ID schemes
- Enable cycling only when retry logic ensures no duplicate IDs
- Validate ORM integration touchpoints after sequence changes
- Account for analytical pipelines that rely on sequences for idempotent IDs
While structures may start simple, large systems evolve in complexity rapidly. Mastering dynamic management of sequences is critical for long term sustainability of any database application.
Hope these practical examples and expert design guidelines help you optimize sequences safely with ALTER! Do share any other relevant tips I may have missed.


