SQLite UNIQUE Constraint in Practice (2026)

Why UNIQUE matters in 1-file databases

I work with SQLite because it ships as 1 C library and stores data in 1 file, which means 1 deploy artifact and 0 server processes for many apps. You should treat UNIQUE as your first line of defense against duplicate data, because 1 bad duplicate can cascade into 10 downstream bugs. I like to say UNIQUE is the “bouncer at the door”: it checks the guest list and blocks duplicates before they enter, and it does that in 1 step per write attempt.

SQLite is a lightweight RDBMS with a serverless, file-based engine, and that makes constraint enforcement happen right inside the same process that runs your app. In practice, that means the UNIQUE check is local, deterministic, and consistent across 1 codebase and 1 data file. You should assume every UNIQUE constraint you add becomes part of the contract between your app and your data, and I recommend writing that contract explicitly in DDL with numbers in mind, not just in application code.

UNIQUE in 1 sentence, with 1 number

UNIQUE says: “For the set of columns I name, the count of rows with the same values must be 1 or 0, never 2 or more.” That’s the promise, and you should enforce it at the database layer, not only in your app logic.

Syntax cheat sheet (SQLite)

I like to keep a 5‑line mental model:

CREATE TABLE t(
  col1 TEXT UNIQUE,
  col2 INTEGER,
  UNIQUE(col2, col1)
);

That snippet shows 2 styles: column-level and table-level. You can use either style, and you can define 1, 2, or more UNIQUE constraints per table.

Single-column UNIQUE, the smallest useful example

Here is the smallest example I use when I teach this concept to a 5th‑grader: imagine 1 class roster, 1 kid per roll number, and 1 roll number per kid.

CREATE TABLE student(
  name TEXT,
  roll INTEGER UNIQUE,
  rank INTEGER
);

The rule here is: roll must be unique. If you try to insert 2 rows with roll=7, the second insert fails. That single rule prevents exactly 1 class from having 2 kids with the same roll number.

Now insert 3 rows:

INSERT INTO student(name, roll, rank) VALUES
  ('Ava', 1, 3),
  ('Ben', 2, 1),
  ('Cam', 3, 2);

Result: 3 rows exist, 0 conflicts, and the UNIQUE rule is satisfied.

Try a duplicate roll:

INSERT INTO student(name, roll, rank) VALUES ('Dee', 2, 4);

Result: 1 constraint error, 0 new rows.
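
You can reproduce this end to end with Python's built-in sqlite3 module. A minimal sketch using an in-memory database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE student(name TEXT, roll INTEGER UNIQUE, rank INTEGER)")
conn.executemany(
    "INSERT INTO student(name, roll, rank) VALUES (?, ?, ?)",
    [("Ava", 1, 3), ("Ben", 2, 1), ("Cam", 3, 2)],
)

# Duplicate roll=2: SQLite raises IntegrityError and adds 0 rows.
try:
    conn.execute("INSERT INTO student(name, roll, rank) VALUES ('Dee', 2, 4)")
    outcome = "inserted"
except sqlite3.IntegrityError as e:
    outcome = str(e)  # e.g. "UNIQUE constraint failed: student.roll"

count = conn.execute("SELECT COUNT(*) FROM student").fetchone()[0]
print(count)  # 3
```

The table still holds exactly 3 rows after the failed insert, which is the whole point of the constraint.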

UNIQUE allows NULL in SQLite, and that matters

SQLite treats NULL as “unknown,” and unknown values do not match each other. That means a UNIQUE column can hold multiple NULLs. You should remember this rule because it surprises people who are new to SQL.

Example:

CREATE TABLE badge(
  id INTEGER PRIMARY KEY,
  code TEXT UNIQUE
);

Now insert 3 rows with NULL code:

INSERT INTO badge(code) VALUES (NULL), (NULL), (NULL);

Result: 3 rows inserted, 0 conflicts. This is a feature, not a bug. If you want “only 1 NULL,” you need a different rule, such as a partial UNIQUE index over the NULL rows; a plain CHECK constraint can’t do it, because a CHECK only sees 1 row at a time.

Analogy: think of NULL as “blank.” If 3 kids leave the locker number blank, you still have 3 kids, and you have 0 duplicates because “blank” is not the same as a concrete number like 7.

Composite UNIQUE: two columns become one key

When I want uniqueness across 2 or more columns, I define a composite UNIQUE constraint. You should do this for cases like (team_id, jersey_number) or (tenant_id, email).

Example:

CREATE TABLE team_member(
  team_id INTEGER,
  jersey_number INTEGER,
  name TEXT,
  UNIQUE(team_id, jersey_number)
);

Now these rows are valid:

INSERT INTO team_member(team_id, jersey_number, name) VALUES
  (1, 9, 'Rin'),
  (1, 10, 'Sol'),
  (2, 9, 'Tao');

You have 3 rows and 0 conflicts because the pair (team_id, jersey_number) is unique. You can reuse jersey_number=9 as long as team_id is different.

If you insert (1, 9) again, the second insert fails. That’s 1 rule and 1 table-level constraint doing the job of 2 columns of app logic.

Multiple UNIQUE constraints in 1 table

SQLite lets you define more than 1 UNIQUE constraint per table. I use this when a table has several “must be unique” facts.

CREATE TABLE account(
  id INTEGER PRIMARY KEY,
  email TEXT UNIQUE,
  username TEXT UNIQUE,
  phone TEXT
);

You now have 2 separate unique rules: email must be unique and username must be unique. The count of duplicates for each is 0. This is a better contract than “I promise in the code” because it is enforced 24/7, even if 1 import script forgets to check.

UNIQUE vs PRIMARY KEY (2 clear differences)

I tell teams to remember 2 numbers:

1) PRIMARY KEY: exactly 1 per table.

2) UNIQUE: 1 or more per table.

Also, PRIMARY KEY normally implies NOT NULL, so it allows 0 NULL values. One SQLite quirk to know: for historical reasons, a non-INTEGER PRIMARY KEY on an ordinary rowid table can still hold NULLs unless you add NOT NULL yourself; INTEGER PRIMARY KEY columns and WITHOUT ROWID tables enforce NOT NULL as expected. UNIQUE allows NULL values, and SQLite allows more than 1 NULL. That’s a real difference in behavior, not just wording.

Example:

CREATE TABLE demo(
  pk INTEGER PRIMARY KEY,
  u INTEGER UNIQUE
);

The pk column can never be NULL, not even once. The u column can be NULL 1 time, 2 times, or 20 times, and SQLite still treats it as unique because NULL is not equal to NULL.

UNIQUE plus indexes: what SQLite builds for you

SQLite creates a unique index under the hood for each UNIQUE constraint. That index is how the database checks for duplicates quickly. I avoid claims like “fast” without numbers, so here is a simple numeric way to reason about it: a B‑tree lookup costs roughly log2(n) comparisons. For n=1,000,000 rows, log2(n) is about 20 steps. That’s a small number compared to scanning 1,000,000 rows.

You should expect 1 extra index per UNIQUE constraint. If you define 3 UNIQUE constraints, you get 3 indexes. That is the tradeoff: 3 extra index maintenance operations per insert, and 3 extra data structures to keep consistent.
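
You can see these implicit indexes yourself. A sketch using PRAGMA index_list, where origin 'u' marks an index SQLite created for a UNIQUE constraint:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE account(
        id INTEGER PRIMARY KEY,
        email TEXT UNIQUE,
        username TEXT UNIQUE,
        phone TEXT
    )
""")

# PRAGMA index_list returns (seq, name, unique, origin, partial) per index.
# 'u' = created by a UNIQUE constraint; INTEGER PRIMARY KEY needs no index.
rows = conn.execute("PRAGMA index_list('account')").fetchall()
unique_indexes = [r for r in rows if r[3] == "u"]
print(len(unique_indexes))  # 2
```

Two UNIQUE constraints, two automatic indexes, exactly as the rule above predicts.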

Conflict handling: choose 1 strategy on purpose

SQLite supports conflict clauses that control what happens when a UNIQUE rule is violated. You should pick 1 behavior per use case instead of relying on defaults.

Example: abort on conflict (default):

INSERT INTO account(email, username) VALUES ('[email protected]', 'ava');
INSERT INTO account(email, username) VALUES ('[email protected]', 'ben');

Result: second insert fails, 0 rows added.

Example: ignore on conflict:

INSERT OR IGNORE INTO account(email, username) VALUES ('[email protected]', 'cam');

Result: 0 rows inserted if the email already exists, and 1 row inserted if it doesn’t.

Example: replace on conflict:

INSERT OR REPLACE INTO account(email, username) VALUES ('[email protected]', 'dee');

Result: SQLite deletes the old row and inserts the new row. That is 1 delete and 1 insert, not 1 update, so you should be careful with foreign keys.
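
You can observe the delete-plus-insert behavior by watching the auto-assigned id change. A sketch; the example addresses are mine, and the second seeded row exists only so new rowids stay distinct:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account(id INTEGER PRIMARY KEY, email TEXT UNIQUE, username TEXT)")
conn.executemany(
    "INSERT INTO account(email, username) VALUES (?, ?)",
    [("dee@example.com", "dee"), ("eve@example.com", "eve")],
)

old_id = conn.execute(
    "SELECT id FROM account WHERE email = 'dee@example.com'"
).fetchone()[0]

# OR REPLACE deletes the conflicting row, then inserts a brand-new one,
# so the surviving row gets a fresh auto-assigned id: this is not an UPDATE.
conn.execute(
    "INSERT OR REPLACE INTO account(email, username) VALUES ('dee@example.com', 'dee2')"
)
new_id = conn.execute(
    "SELECT id FROM account WHERE email = 'dee@example.com'"
).fetchone()[0]

print(old_id != new_id)  # True
```

Any foreign key pointing at the old id now dangles (or cascades), which is why the warning above matters.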

Partial UNIQUE index: allow duplicates except for 1 condition

SQLite supports partial indexes, and that lets you make UNIQUE apply to only part of a table. You should use this when you want “unique among active records, but duplicates allowed among archived records.”

Example:

CREATE TABLE email_address(
  id INTEGER PRIMARY KEY,
  user_id INTEGER,
  email TEXT,
  active INTEGER
);

CREATE UNIQUE INDEX email_unique_active
  ON email_address(email)
  WHERE active = 1;

Now you can have the same email 2 times as long as only 1 row has active=1. This gives you 1 active email and 1 or more archived emails without breaking the contract.
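
A runnable sketch of that contract, using a hypothetical address of mine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE email_address(
        id INTEGER PRIMARY KEY,
        user_id INTEGER,
        email TEXT,
        active INTEGER
    );
    CREATE UNIQUE INDEX email_unique_active ON email_address(email) WHERE active = 1;
""")

# 2 archived copies plus 1 active copy of the same address: all allowed.
conn.executemany(
    "INSERT INTO email_address(user_id, email, active) VALUES (?, ?, ?)",
    [(1, "x@example.com", 0), (1, "x@example.com", 0), (1, "x@example.com", 1)],
)

# A 2nd active copy violates the partial unique index.
try:
    conn.execute(
        "INSERT INTO email_address(user_id, email, active) VALUES (2, 'x@example.com', 1)"
    )
    second_active_ok = True
except sqlite3.IntegrityError:
    second_active_ok = False

total = conn.execute("SELECT COUNT(*) FROM email_address").fetchone()[0]
print(total)  # 3
```

The archived duplicates never touch the index, so only the active rows compete for uniqueness.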

Collation and case: make uniqueness match your rules

SQLite compares TEXT values using collation. If you want “Case-insensitive uniqueness,” specify a collation that matches your rule.

Example:

CREATE TABLE handle(
  id INTEGER PRIMARY KEY,
  name TEXT COLLATE NOCASE UNIQUE
);

With NOCASE, “Ava” and “ava” are treated as the same value, so the second insert fails. Without NOCASE, those 2 strings are distinct and both inserts succeed.
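
A quick sketch that proves the collision; note that NOCASE folds only the 26 ASCII letters, not accented characters:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE handle(id INTEGER PRIMARY KEY, name TEXT COLLATE NOCASE UNIQUE)")

conn.execute("INSERT INTO handle(name) VALUES ('Ava')")
try:
    # Under NOCASE, 'ava' compares equal to 'Ava', so this is a duplicate.
    conn.execute("INSERT INTO handle(name) VALUES ('ava')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print(duplicate_allowed)  # False
```

Drop COLLATE NOCASE from the column and the same two inserts both succeed.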

I recommend writing the collation rule explicitly because you can’t assume 1 reader knows what SQLite is doing in a given column.

UNIQUE with WITHOUT ROWID tables

SQLite supports WITHOUT ROWID tables, which store data directly in a primary key index. That means the primary key is the table, and UNIQUE constraints still work as secondary indexes.

Example:

CREATE TABLE logline(

ts INTEGER,

source TEXT,

message TEXT,

PRIMARY KEY (ts, source)

) WITHOUT ROWID;

You can still add UNIQUE constraints on other columns if you need them, and SQLite will enforce them with unique indexes just like normal tables.

ALTER TABLE and migrations: plan around constraints

SQLite has limited ALTER TABLE support. If you need to add a UNIQUE constraint to an existing column, you often need to rebuild the table. I handle this with a 3‑step migration pattern:

1) Create a new table with the constraint.

2) Copy data from old to new, checking for duplicates.

3) Drop the old table and rename the new table.

Here is a pattern I use, with 3 explicit steps:

BEGIN;

CREATE TABLE account_new(
  id INTEGER PRIMARY KEY,
  email TEXT UNIQUE,
  username TEXT UNIQUE
);

INSERT INTO account_new(id, email, username)
  SELECT id, email, username FROM account;

DROP TABLE account;
ALTER TABLE account_new RENAME TO account;

COMMIT;

You should run a duplicate detection query before step 2 if you expect bad data. For example:

SELECT email, COUNT(*) AS c
FROM account
GROUP BY email
HAVING c > 1;

That query gives you 1 row per duplicate value and a count c you can use to fix data before the migration.
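
The same pre-migration check, sketched in Python against a table seeded with 1 deliberate duplicate (the addresses are my own placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The legacy table has no UNIQUE constraint yet, so duplicates can exist.
conn.execute("CREATE TABLE account(id INTEGER PRIMARY KEY, email TEXT, username TEXT)")
conn.executemany(
    "INSERT INTO account(email, username) VALUES (?, ?)",
    [("a@example.com", "a1"), ("a@example.com", "a2"), ("b@example.com", "b")],
)

# 1 row per duplicated email, with the count c you need to fix before migrating.
dupes = conn.execute("""
    SELECT email, COUNT(*) AS c
    FROM account
    GROUP BY email
    HAVING c > 1
""").fetchall()
print(dupes)  # [('a@example.com', 2)]
```

If this query returns 0 rows, the copy in step 2 will succeed; if it returns any rows, the new UNIQUE constraint would reject the copy.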

Traditional vs modern workflows (with numbers)

I still teach the traditional flow so teams know what’s going on, but I also “vibe” with modern tooling because it cuts 2 kinds of friction: typing and feedback time.

  • Schema changes: traditional: write SQL by hand in 1 editor; modern: generate SQL with 1 AI tool, then verify it in 1 minute.
  • Feedback loop: traditional: run 1 migration, then open 1 DB viewer; modern: hot reload plus 1 SQL client with watch queries.
  • Error checking: traditional: read 1 error after the run; modern: get 3 inline hints before the run.
  • Type safety: traditional: dynamic, with 0 types; modern: TypeScript-first with 1 schema source.

I recommend using AI assistants like Copilot, Claude, or Cursor to draft the DDL, but you should still read every line before it hits production. My rule is 1 human review for 1 generated change set.

How I use AI assistants in 3 concrete steps

1) I ask the assistant to generate the CREATE TABLE with UNIQUE constraints for a specific domain, like “tenant_id + email.”

2) I ask for 3 test inserts: 2 valid, 1 invalid, so I can verify constraint behavior quickly.

3) I run those inserts in a scratch DB and confirm that 1 invalid insert fails.

This gives me 3 results in 3 minutes, and it keeps the feedback loop tight.

TypeScript-first schema thinking, even with SQLite

If you build with TypeScript, you should model the unique rules in your types and your DB. I keep the rule in both places because 2 layers catch 2 classes of bugs.

Example with a simple TypeScript interface that matches the UNIQUE rules:

type Account = {
  id: number;
  email: string; // unique
  username: string; // unique
};

Then I mirror the rules in SQL, so the DB enforces them even if 1 service bypasses the type checker.

Modern frameworks and SQLite in 2026‑style stacks

I often pair SQLite with fast local dev tools like Vite, Bun, and modern Next.js setups because they cut time-to-feedback. You should expect hot reload to update UI in under 1 second and SQL changes to apply in 1 migration step.

For deployments, I see 3 common paths:

1) Ship SQLite on a server instance with Docker.

2) Use serverless containers that mount a volume for 1 persistent file.

3) For edge or worker environments, move writes to a central service and keep SQLite for local dev.

That last point matters: SQLite is a single-file database, so you should consider how 1 file travels across 2 or more deployment nodes.

Container-first workflow (1 example)

Here is a simple Docker flow I use for demos:

FROM alpine:3.20
RUN apk add --no-cache sqlite
WORKDIR /app
CMD ["sqlite3", "/app/app.db"]

This is 4 lines, 1 container, and 1 DB file. You can mount /app as a volume and keep data persistent across restarts. The number to remember is 1: 1 file, 1 process, 1 database.

Comparing UNIQUE checks in app code vs DB

I like to explain this with a 5th‑grade analogy: checking uniqueness in code only is like asking 1 friend to remember 100 birthdays. It works until you add a second friend. The DB is the shared calendar that all friends use.

Here’s a 2‑line comparison:

  • App-only check: higher risk; 2 concurrent writes can pass 1 check and still collide.
  • DB constraint: lower risk; 1 constraint error and 0 duplicates.

If 2 requests arrive at the same time, your app may read “no duplicate” twice and then insert 2 rows. The DB constraint prevents the second insert, so you get 1 error and 0 duplicates. That’s the difference that saves you from 1 late-night incident.
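
Here is a single-process simulation of that race, sketched so both app-level checks run before either write, the way 2 concurrent request handlers would interleave:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account(id INTEGER PRIMARY KEY, email TEXT UNIQUE)")

def app_level_check(email):
    # What each "request" sees before either has written: no duplicate yet.
    row = conn.execute("SELECT COUNT(*) FROM account WHERE email = ?", (email,)).fetchone()
    return row[0] == 0

# Both requests read first: both conclude the email is free.
check_1 = app_level_check("x@example.com")
check_2 = app_level_check("x@example.com")

# Then both try to write; the DB constraint stops the second one.
results = []
for _ in range(2):
    try:
        conn.execute("INSERT INTO account(email) VALUES ('x@example.com')")
        results.append("inserted")
    except sqlite3.IntegrityError:
        results.append("blocked")

print(results)  # ['inserted', 'blocked']
```

Both app checks pass, yet only 1 row lands: the constraint is the layer that actually holds under concurrency.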

Performance thinking with concrete numbers

I avoid fuzzy claims, so here’s a numeric way to reason about costs:

  • Every UNIQUE constraint adds 1 index.
  • Every insert touches 1 base table plus N unique indexes.
  • For N=2 unique constraints, that is 1 table write + 2 index writes.

If you insert 100,000 rows and each insert touches 3 structures, you have 300,000 total structure updates. That’s a measurable cost you can time in your own system.

A simple test loop you can run:

BEGIN;
-- Repeat this insert 100000 times in a script, substituting the loop counter for X
INSERT INTO account(email, username) VALUES ('e' || X, 'u' || X);
COMMIT;

Then measure total seconds and compute rows/sec = 100000 / seconds. If seconds = 2.5, rows/sec = 40,000. That’s a concrete metric you can use across 2 different schema designs.
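
The same benchmark, sketched in Python with parameter binding standing in for X; your absolute numbers will differ by machine and by on-disk vs in-memory storage:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE account(id INTEGER PRIMARY KEY, email TEXT UNIQUE, username TEXT UNIQUE)"
)

n = 100_000
start = time.perf_counter()
with conn:  # 1 transaction around all inserts, like the BEGIN/COMMIT pattern above
    conn.executemany(
        "INSERT INTO account(email, username) VALUES ('e' || ?, 'u' || ?)",
        ((i, i) for i in range(n)),
    )
seconds = time.perf_counter() - start

rows_per_sec = n / seconds
count = conn.execute("SELECT COUNT(*) FROM account").fetchone()[0]
print(count)  # 100000
```

Run it once with 2 UNIQUE constraints and once with 0, and the rows_per_sec gap is the measured cost of the 2 index writes per insert.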

Testing UNIQUE rules with 3 explicit checks

I recommend 3 tests per UNIQUE rule:

1) Insert 2 unique values → expect 2 rows.

2) Insert 1 duplicate value → expect 1 error.

3) Insert 1 NULL (if allowed) → expect 1 row.

Example test SQL:

INSERT INTO badge(code) VALUES ('A1');
INSERT INTO badge(code) VALUES ('A2');
INSERT INTO badge(code) VALUES ('A1'); -- should fail
INSERT INTO badge(code) VALUES (NULL); -- should succeed

That gives you 4 statements, 3 passes, 1 fail, and a clear picture of behavior.
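
The same 4 statements, wrapped in a loop you can drop into a test suite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE badge(id INTEGER PRIMARY KEY, code TEXT UNIQUE)")

# 4 inserts: 2 unique values, 1 duplicate, 1 NULL.
values = ["('A1')", "('A2')", "('A1')", "(NULL)"]
outcomes = []
for vals in values:
    try:
        conn.execute(f"INSERT INTO badge(code) VALUES {vals}")
        outcomes.append("pass")
    except sqlite3.IntegrityError:
        outcomes.append("fail")

print(outcomes)  # ['pass', 'pass', 'fail', 'pass']
```

The expected pattern is 3 passes and 1 fail; any other result means the constraint or the collation is not what you think it is.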

Debugging a UNIQUE error fast

When I hit a UNIQUE error, I ask 3 quick questions:

1) Which constraint fired? I check the error message and the schema.

2) Which value collided? I query for the value directly.

3) Is the duplicate real or is it a case/collation issue?

A direct query example:

SELECT * FROM account WHERE email = '[email protected]';

That gives you the conflicting row in 1 query. If the row exists, you decide whether to update or reject. If the row doesn’t exist, you may be in a transaction or using a different connection.

UNIQUE with upserts: a modern pattern

SQLite supports upserts with ON CONFLICT. I use this when I want 1 statement to insert or update based on uniqueness.

INSERT INTO account(email, username)
VALUES ('[email protected]', 'ava')
ON CONFLICT(email) DO UPDATE SET
  username = excluded.username;

Here’s the numeric behavior:

  • If email is new: 1 insert, 0 updates.
  • If email exists: 0 inserts, 1 update.

You should prefer this over “read then write,” because it reduces 2 round trips to 1.
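
A sketch of both branches of the upsert; ON CONFLICT requires SQLite 3.24 or newer, and the address is my own placeholder:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account(id INTEGER PRIMARY KEY, email TEXT UNIQUE, username TEXT)")

upsert = """
    INSERT INTO account(email, username) VALUES (?, ?)
    ON CONFLICT(email) DO UPDATE SET username = excluded.username
"""
conn.execute(upsert, ("ava@example.com", "ava"))   # email is new: 1 insert, 0 updates
conn.execute(upsert, ("ava@example.com", "ava2"))  # email exists: 0 inserts, 1 update

row_count = conn.execute("SELECT COUNT(*) FROM account").fetchone()[0]
username = conn.execute(
    "SELECT username FROM account WHERE email = ?", ("ava@example.com",)
).fetchone()[0]
print(row_count, username)  # 1 ava2
```

One statement covers both cases, and unlike INSERT OR REPLACE, the existing row keeps its id.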

UNIQUE and data imports: 1 safe pattern

For bulk imports, I often use a staging table without UNIQUE, then merge into a target table that has UNIQUE. The numbers are simple:

1) Load 1,000 rows into staging.

2) Insert into target with conflict rules.

3) Count conflicts with 1 query.

Example merge:

INSERT OR IGNORE INTO account(email, username)
SELECT email, username FROM account_stage;

Then check how many were inserted:

SELECT COUNT(*) FROM account;

If you started with 500 rows and ended with 900, you inserted 400 new rows and ignored 600 duplicates. Those are concrete numbers you can report to a pipeline.
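
A small sketch of the staging pattern with before/after counts; the table contents are my own toy data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE account(id INTEGER PRIMARY KEY, email TEXT UNIQUE, username TEXT);
    CREATE TABLE account_stage(email TEXT, username TEXT);  -- no UNIQUE: raw load
""")

conn.execute("INSERT INTO account(email, username) VALUES ('old@example.com', 'old')")
conn.executemany(
    "INSERT INTO account_stage(email, username) VALUES (?, ?)",
    [("old@example.com", "dup"), ("new1@example.com", "n1"), ("new2@example.com", "n2")],
)

before = conn.execute("SELECT COUNT(*) FROM account").fetchone()[0]
conn.execute("INSERT OR IGNORE INTO account(email, username) "
             "SELECT email, username FROM account_stage")
after = conn.execute("SELECT COUNT(*) FROM account").fetchone()[0]

inserted = after - before  # new rows that passed the UNIQUE check
staged = conn.execute("SELECT COUNT(*) FROM account_stage").fetchone()[0]
ignored = staged - inserted  # staged rows skipped as duplicates
print(inserted, ignored)  # 2 1
```

Those two derived numbers, inserted and ignored, are exactly what you report back to the pipeline.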

SQLite UNIQUE in modern app stacks

Here’s a real‑world flow I see in 2026‑style stacks, with numbers attached:

1) You scaffold a project with 1 command in Vite or Bun.

2) You define 1 SQLite schema file and 3 UNIQUE rules.

3) You run 1 migration and 1 seed script.

4) You run 5 test inserts to confirm 2 pass, 1 fails, 2 pass with NULL.

You then plug that DB into Next.js or another server framework and deploy with 1 container or 1 serverless job. The count of steps matters because you can automate them in CI.

A simple analogy you can reuse

Think of UNIQUE as “only 1 seat per ticket number.” If 2 people show up with ticket #12, the second person can’t sit, and the system blocks the duplicate. This is easier to explain to non‑technical teammates, and it helps them accept why 1 duplicate causes 1 error.

A focused checklist I use

  • Define 1 UNIQUE constraint for every real‑world identifier.
  • Decide if NULL is allowed; if not, add NOT NULL.
  • Pick 1 conflict strategy: ABORT, IGNORE, REPLACE, or UPSERT.
  • Add 1 test case that proves the constraint fires.
  • Measure 1 insert benchmark after you add 1 or more constraints.

Final thoughts, with 3 concrete takeaways

I recommend these 3 takeaways to anyone building with SQLite:

1) Put UNIQUE in the schema, not just in code; it prevents 1 duplicate even under 2 concurrent writes.

2) Remember SQLite allows multiple NULLs in UNIQUE columns; decide if that matches your rules.

3) Use modern tooling to shorten feedback cycles: 1 AI draft, 1 review, 1 test run.

If you follow those 3 steps, you’ll ship 1 stronger schema, 0 silent duplicates, and a data model that stays clean even as your app grows.
